Response-ability.Tech


The annual Response-Ability Summit, formerly the Anthropology + Technology conference, brings together leading experts from the social sciences and technology to champion socially-responsible tech, and to foster dialogue and collaboration across the disciplines. The summit has been curated to help today’s leading technology companies understand the significant value of combining teams of technologists with social scientists. Together we can build a future in which socially-responsible tech is the norm.

Dawn Walter


    • Nov 28, 2022 LATEST EPISODE
    • infrequent NEW EPISODES
    • 36m AVG DURATION
    • 43 EPISODES



    Latest episodes from Response-ability.Tech

    What data scientists can learn from feminist social scientists in India. With Radhika Radhakrishnan.

    Nov 28, 2022 · 36:18 · Transcription available


    In this episode, we're in conversation with feminist scholar and activist Radhika Radhakrishnan. Radhika is a PhD student at the Massachusetts Institute of Technology (MIT) in the HASTS (History, Anthropology, Science, Technology & Society) programme, which uses methods from history and anthropology to study how science and technology shape – and are shaped by – the world we live in. Trained in Gender Studies and Computer Science engineering in India, Radhika has worked for over five years with civil society organisations, using feminist, qualitative research methodologies to study the intersections of gender justice and digital technologies. Her research focuses on understanding the challenges faced by gender-minoritized communities in India with emerging digital technologies, and on finding entry points to intervene meaningfully. Her scholarship spans Artificial Intelligence, data governance pertaining to surveillance technologies and health data, and feminist Internets, among other domains.

    Radhika shares with us what she'll be researching for her PhD and why she moved away from computer science to social science. In 2021 her paper, “Experiments with Social Good: Feminist Critiques of Artificial Intelligence in Healthcare in India”, was published in the journal Catalyst; we explore her findings, as well as why she was drawn to artificial intelligence in healthcare. We also discuss her experiences of studying up (see Nader 1972) as a female researcher and some of the strategies she used to overcome those challenges.

    Lastly, Radhika recommends Annihilation of Caste by B. R. Ambedkar and explains why it's important that we openly discuss caste. (Check out this article in WIRED about caste in Silicon Valley.)

    Follow Radhika on Twitter @so_radhikal, and connect with her on LinkedIn. Check out her website, and read her blog on Medium.

    Why Human Rights Law is AI Ethics With Teeth. With Susie Alegre.

    May 23, 2022 · 30:33 · Transcription available


    Our guest today is Susie Alegre. Susie is an international human rights lawyer and author. We're in conversation about her book, Freedom To Think: The Long Struggle to Liberate Our Minds (Atlantic Books, 2022). Susie talks about freedom of thought in the context of our digital age, human rights, surveillance capitalism, emotional AI, and AI ethics.

    Susie explains why she wrote the book and why she thinks our freedom of thought is important in terms of our human rights in the digital age. We explore what freedom of thought is ("some people talk about it as mental privacy"), the difference between an absolute right and a qualified right, and why absolute rights are protected differently.

    Susie shares some historical examples, including the witch trials and the work of Ewen Cameron, a Scottish psychiatrist in Canada who experimented on ordinary people without their consent to explore ways of controlling the human mind. Facial recognition technology is a modern attempt to get inside our heads and predict such things as our sexual orientation. Susie explains why researchers shouldn't be experimenting with facial recognition or emotional AI: you're “effectively opening Pandora's box”.

    Susie also explains the difference between targeted advertising and surveillance advertising, which uses data captured about our inner lives, sold and auctioned on an open market, to manipulate us as individuals.

    Over the past few years there's been a great deal of focus on ethics, and Susie suggests we need to move away from the discussion of ethics “back to the law, specifically human rights law”. She explains that human rights law is being constantly eroded, and says “one way of reducing the currency of human rights law is refocusing on ethics”. Ethics are simply a “good marketing tool” used by companies.

    The inferences being made about us, the data profiling, and the manipulation mean it's practically impossible to avoid leaving traces of ourselves; it's beyond our personal control, and privacy settings don't help. In her book Susie suggests that by looking at digital rights (data and privacy protection) in terms of freedom of thought, "the solutions become simpler and more radical". It's a point that Mary Fitzgerald, in her review of Susie's book in the Financial Times, suggested was a "unique contribution" to the debates about freedoms in the digital age, adding that "reframing data privacy as our right to inner freedom of thought" might capture "the popular imagination" in a way that other initiatives like GDPR have failed to do. Susie explains for us how this approach would work.

    Follow Susie on Twitter @susie_alegre, and check out her website susiealegre.com. Read the full transcript, read the conversation as a web article, or watch the interview on our YouTube channel.

    Anthropology and Artificial Intelligence. With Veronica Barassi

    Apr 20, 2022 · 29:52 · Transcription available


    Our guest today is Professor Veronica Barassi. Veronica is an anthropologist and author of Child Data Citizen (MIT Press, 2020). She campaigns and writes about the impact of data technologies and artificial intelligence on human rights and democracy. As a mother, Veronica became increasingly concerned about the data being collected on her two children by digital platforms. Her research resulted in the book as well as a TED talk, What tech companies know about your kids, which has had over 2 million views. Since the publication of her book, she says, there's been a huge acceleration in the datafication of children, partly due to the pandemic, and an increase in the ways in which AI technologies are being used to profile people.

    Veronica explores what she believes anthropology uniquely brings to the study of data technologies and AI. She asks (and answers), “why would an anthropological approach be different from say, for instance, Virginia Eubanks, who uses ethnographic methodologies and has a real context-specific understanding of what's happening on the ground.” Turning to anthropology's (late) engagement with AI, data, and algorithms, she says it used to be a niche area of research. But “we've actually seen a reality check for anthropologists because these technologies are…involved in deeply problematic and hierarchical processes of meaning-construction and power-making that there's no way that anthropologists could shy away from this”.

    One of the best books “that really makes us see things for what they are” in this current time we're living in, she says, is David Graeber's The Utopia of Rules: On Technology, Stupidity, and the Secret Joys of Bureaucracy. Graeber “talks about how bureaucracy is actually there to construct social truth, but this type of bureaucratic work has been now replaced by algorithms and artificial intelligence”, a connection she tries to make in her article, David Graeber, Bureaucratic Violence and the Critique of Surveillance Capitalism.

    We discuss how anthropologists can make their work both academically rigorous and accessible to the public. She talks about her own experience of giving the TED talk and the responsibility she felt to bring the topic of child datafication to a wider audience, campaigning, and raising awareness.

    Veronica provokes anthropology scholars with a call to action, given that one of her “major critiques of anthropology…is the fact that anthropologists often shy away from engaging theoretically with disciplines that do not share their approach". And what does it mean when we say research is “not anthropological enough”?

    Lastly, Veronica suggests that, given machines must be taught basic concepts, like what a child is (“as anthropologists, we know that these concepts are so complex, so culturally specific, so biased”), what anthropology can do is “highlight the way in which these technologies are always inevitably going to get and be biased”. She ends on a note of excitement: “We're going to see such great research emerging in the next few years. I'm actually looking forward to that”.

    Follow Veronica on Twitter @veronicabarassi. Read an edited version of our conversation, together with a reading list.

    Understanding Data and Privacy as a UX Researcher. With Laura Musgrave

    Feb 9, 2022 · 27:44 · Transcription available


    Our guest today is Laura Musgrave. Laura was named one of the 100 Brilliant Women in AI Ethics™ for 2022. She is a digital anthropology and user experience (UX) researcher whose research specialism is artificial intelligence, particularly data and privacy. Laura gave a short talk at the inaugural conference in 2019 on privacy and convenience in the use of AI smart speakers, and at the 2021 event she chaired the panel, Data: Privacy and Responsibility.

    We start our conversation by exploring Laura's interest in data and privacy, and smart assistants in particular. During her research on smart speaker use in homes, she's noticed a shift in people's attitudes and a growing public awareness around privacy, technology, and the use of AI. This shift, she feels, has been aided by documentaries like The Social Dilemma (despite well-founded criticisms, such as this article by Ivana Bartoletti in the Huffington Post) and Coded Bias. Laura talks about where the responsibility for privacy lies — with the technology companies, with the users, with the regulators — and how, as a user researcher, she has a part to play in helping people understand what's happening with their data.

    I ask Laura what drew her to anthropology and how she thinks the research methods and lens of anthropology can be used to design responsible AI. She says, "The user researchers that really stood out to me very early on in my career were the anthropologists and ethnographers" because "the way that they looked at things…really showed a deep understanding of human behaviour". It "set the bar" for her, she explains, and she wanted to know: “How do I do what they do?”

    Laura shares the book she'd recommend to user researchers who, like her, are starting out on their ethnographic journey, a book which helped her “make sense of how ethnography fitted into my everyday work”.

    Because Laura's been named one of the 100 Brilliant Women in AI Ethics™ for 2022, I ask her what the AI ethics landscape, with respect to data and privacy, looks like for 2022. As she explains, “in some senses it is much the same as last year but it's also a constantly developing space and there are constantly new initiatives”, before sharing some of the key themes she thinks we are likely to see in 2022.

    Lastly, Laura recommends two books, both published by Meatspace Press: Fake AI, and Data Justice and Covid-19: Global Perspectives. (The former we picked for our 2021 Recommended Reads and the latter for our 2020 Recommended Reads.)

    You can connect with Laura on LinkedIn and on Twitter @lmusgrave. Read an edited version of our conversation online, or download it as a PDF.

    Social Science-Led User Research in Tech. With Rosie Webster

    Jan 12, 2022 · 26:30 · Transcription available


    Our guest today is Dr Rosie Webster. Rosie has a PhD and an MSc in health psychology. She's currently Science Lead for Zinc's venture builder programme. Prior to Zinc, Rosie worked as a UX researcher at the digital health company Zava, and was Lead User Researcher at Babylon Health. While at Babylon, Rosie established the foundations of an effective behavioural science practice, which is partly what we're here to talk about today.

    Rosie explains that if businesses are interested in delivering impact and making a difference, then social science can be really key. She says that research, in similar ways to design, is often underestimated and under-utilised in tech. Our power, she says, lies in understanding the problem and what the right thing to build is. This is a truly user-centred approach that requires trusting the process and being willing to scrap an idea when the research points in a different direction.

    Often people don't know what social science is, says Rosie, and equate it with academic research, with the corresponding but erroneous perception that it's slow, when in actual fact it provides answers much more quickly.

    Rosie explains how she established the beginnings of a behavioural science practice at Babylon Health, with the support of two managers who understood its value and importance. She shares why she wanted to ‘democratise' behavioural research, the benefits of that approach, and how she ‘marketed and sold' behavioural science within the company.

    User research should make more use of the existing academic literature, “building on the shoulders of giants”, as Rosie calls it, “supercharging” primary research and using evidence to understand what the solution might be. It's an approach she says results in understanding people deeply, while increasing impact and reducing risk, without slowing down the fast-paced product development environment.

    As our conversation draws to an end, Rosie has a final piece of advice for businesses that are genuinely open to achieving impactful outcomes, and recommends two books for people looking to bring behavioural science into their work: Engaged by Amy Bucher, and Designing for Behavior Change by Stephen Wendel.

    Follow Rosie on Twitter @DrRosieW, and connect with her on LinkedIn. Read an edited version of our conversation online, or download it as a PDF.

    Engineering Cultures and Internet Infrastructure Politics. With Corinne Cath-Speth

    Dec 8, 2021 · 47:31 · Transcription available


    My guest today is Dr Corinne Cath-Speth. Corinne is a cultural anthropologist whose research focuses on Internet infrastructure politics, engineering cultures, and technology policy and governance. Corinne recently completed their PhD at the Oxford Internet Institute (OII) with a dissertation titled Changing Minds & Machines, an ethnographic study of internet governance and the culture(s) and politics of internet infrastructure, standardization, and civil society.

    Drawing on their research, Corinne gave a talk as part of an event series hosted by the OII which explored the opaque companies and technologists who exercise significant but rarely questioned power over the Internet. As Corinne said during their talk, this mostly unknown aspect of the Internet is “as important as platform accountability". I invited Corinne onto the show to tell us more.

    Using the Fastly incident in June, Corinne explains who and what these largely invisible, powerful Internet infrastructure companies are and how an outage can have a “large impact on the entirety of our online ecosystem”. The incident shows “how power is enacted through the functioning and maintenance of Internet infrastructure design.” Corinne goes on to say that “just because the Internet infrastructure is largely invisible to users doesn't mean that it's apolitical [in the case of Cloudflare and 8chan in particular] and it doesn't mean that these companies can claim neutrality”.

    Corinne talks about their PhD dissertation: “I was really interested in understanding how the engineering cultures of infrastructure organizations influence what but also whose values end up steering technical discussions”. Their fieldwork was conducted in an organisation called the Internet Engineering Task Force (IETF). (Corinne brilliantly summarised their PhD in a series of tweets.)

    Corinne explains what drew them to research this particular topic and notes that “it is so important to get at the personal drivers of our research and being really upfront and explicit about how those are key part of our research practice and the kind of decisions that we end up making.”

    Corinne shares why they believe cultural anthropology is relevant “to questions of Internet infrastructure of politics and power”, saying “I believe that anthropology really can provide new, novel perspectives on current Internet infrastructure dilemmas, including those related to the connections between cultures and code.”

    While there's rightly concern about platform accountability and the power of tech companies, what many people don't realise is that companies like Meta and Amazon are also infrastructure companies. We need to ask ourselves, says Corinne, “how comfortable we are with the fact that a handful of companies are starting to influence huge parts of the entire Internet”. Corinne “really wants to encourage people” to study aspects of the Internet, “because the last thing we want” is for a small number of companies to have “a say over many parts of our lives….And us not understanding how it happened”. Lastly, Corinne says, “what we need is a balanced and well-resourced counter-power to the influence of corporate actors that are steering the future of the Internet”.

    Further reading: Corinne has kindly supplied a list of the resources and reading mentioned in the podcast.

    Recommender Systems and Inequality in the Creator Economy. With Matt Artz

    Nov 10, 2021 · 50:06 · Transcription available


    Our guest today is Matt Artz. Matt is a business and design anthropologist, consultant, author, speaker, and creator; as a creator, he makes podcasts, music, and visual art. Many people will know Matt through his Anthropology in Business and Anthro to UX podcasts. We talk about his interdisciplinary educational background — he has degrees in Computer Information Systems, Biotechnology, Finance and Management Information Systems, and Applied Anthropology — and Matt explains what drew him along this path.

    He shares his recent realisation that he identifies primarily as a technologist ("I am still at heart a technologist. I love technology. I love playing with technology") and his conflict around the "harm that comes out of some AI, but I'm also really interested in it and to some degree kind of helping to fuel the rise of it."

    This leads us to discuss — in the context of recommender systems and Google more broadly — how we are forced to identify on the internet as one thing or another: an anthropologist, a technologist, or a creator, but not all three. As Matt explains, "finding an ideal way to brand yourself on the Internet is actually very critical...it's a real challenge".

    We turn next to recommender systems and his interest in how capital and algorithmic bias contribute to inequality in the creator economy, which draws on his art market research as Head of Product & Experience for Artmatcher, a mobile app that aims to address access and inclusion issues in the art market. The work being done on Artmatcher may lead to innovations in how the approximately 50 million people worldwide in the creator economy get noticed in our "technologically-mediated world", as well as in other multi-sided markets (e.g. Uber, Airbnb) where there are multiple players. It's a model he hopes will ensure that people's "hard work really contributes to their own success".

    Design anthropology is one approach to solving this challenge, Matt suggests, because it is "very interventionist, very much focused on what are we going to do to enact some kind of positive change". As Matt says, "even if this [model] doesn't work, I do feel there's some value in just having the conversation about how can we value human behaviour and reward people for productive effort and how can we factor that back into the broader conversation of responsible tech or responsible AI?"

    He recommends two books: Design Anthropology: Theory and Practice, edited by Wendy Gunn, Ton Otto, and Rachel Charlotte Smith, and Media, Anthropology and Public Engagement, edited by Sarah Pink and Simone Abram. Lastly, Matt leaves us with a hopeful note about what we can do in the face of "really hard challenges" such as climate change.

    You can find Matt on his website, follow him on Twitter @MattArtzAnthro, and connect with him on LinkedIn.

    Communicating the Social Impacts of AI. With Nat Kendall-Taylor

    Oct 6, 2021 · 44:20 · Transcription available


    Our guest today is Dr Nat Kendall-Taylor. Nat received his PhD in Anthropology from UCLA and in 2008 joined the FrameWorks Institute, a non-profit research organisation in Washington, D.C., where he is now CEO. FrameWorks uses rigorous social science methods to study how people understand complex social issues such as climate change, justice reform, and the impact of poverty on early childhood development, and develops evidence-based techniques that help researchers, advocates, and practitioners explain them more effectively.

    Nat explains what drew him from pre-med to anthropology. He did his PhD at UCLA because of the Anthropology department's "unapologetic focus on applied anthropology". His fieldwork in Kenya on children with seizure disorders explored the question of why so few sought biomedical treatment. His experience there, working with public health officials and others, demonstrated the value of understanding culture, the importance of multi-modal transdisciplinary perspectives, and the often "counterintuitive and frequently frustrating nature of communications when you're trying to do this kind of cross-cultural work".

    For the past 18 months, FrameWorks has worked on how to frame and communicate the social impacts of artificial intelligence. The project came to FrameWorks through their long-term collaboration with the MacArthur Foundation, when it became clear that some of the Foundation's grantees "had been having a lot of difficulty advancing their ideas" about algorithmic justice to the general public. The project has explored "the cultural models, the deep patterns of reasoning that either make it hard for people to appreciate the social implications" of AI, as well as how to allow people to "engage with the issue in helpful and meaningful ways". The report will be publicly available on the FrameWorks website.

    As Nat explains, if the public "doesn't understand what the thing is [artificial intelligence] that you are claiming has pernicious negative impacts on certain groups of people, then it becomes very hard to have a meaningful conversation about what those are, who is affected". This is compounded when "people don't really have a sense what structural or systemic racism means outside of a few issues, how that might work and what the outcomes of that might be."

    Nat says their work "suggests that it is a responsibility, it's an obligation, for those who understand how these things work to bring the public along, and to deepen people's understanding of how [for example] using algorithms to make resourcing decisions...can be seriously problematic".

    Nat recommends three books (Metaphors We Live By, Finding Culture in Talk, and Cultural Models in Language and Thought) and ends with a call for more anthropologists to work outside the academy, where they can also do impactful work. Read an edited excerpt [PDF] of this interview.

    You can follow Nat on Twitter at @natkendallt and connect with him on LinkedIn. FrameWorks are on Twitter @FrameWorksInst. Update: FrameWorks published “Communicating About the Social Implications of AI: A FrameWorks Strategic B

    The Ethics of Venture Capital Investors. With Johannes Lenhard

    Sep 7, 2021 · 45:38


    Our guest today is Dr Johannes Lenhard. Johannes received his PhD in Anthropology from Cambridge University and in 2017 started a post-doctoral research project on the ethics of venture capital investors at the Max Planck Centre Cambridge for the Study of Ethics, the Economy and Social Change.

    Johannes spoke at the 2021 Response-ability Summit. He shares what drew him to studying venture capitalists and how he does ethnography in this very closed, elite world across various field sites, including Silicon Valley and London. Johannes explains that "not a single book" has been written about venture capitalists by someone who isn't one. As he says, "only an engaged anthropology" can enable someone to be both insider and outsider in this rarefied world.

    Johannes explains the impact of the lack of diversity in venture capital: not only are VCs hiring people who look like them (white, male), but they "also reproduce themselves into who runs these tech companies". The issue of venture funding is explored by Johannes and Erika Brodnock in their book, Better Venture, which will be published later in 2021. Johannes also briefly discusses the Environmental, Social and Corporate Governance (ESG) metrics that are starting to affect VCs, and the “aggregate confusion” identified by an MIT paper.

    Johannes believes more scrutiny of venture capital investors is needed, saying "they are the ones deciding the big tech companies in the next 10-15 years....scrutinizing them now has an impact on everything in the future. They are the kingmakers, and we've been solely focussing on the kings, the Mark Zuckerbergs and the Jeff Bezos of this world". Scrutiny, he explains, will benefit both society and the VCs themselves.

    Drawing on his Medium post, "The Ultimate Primer on Venture Capital and Silicon Valley", Johannes shares his top reading picks for anyone eager to learn more: Doing Capitalism in the Innovation Economy by William Janeway; The Code by Margaret O'Mara; VC: An American History by Tom Nicholas; and the paper, "How Do Venture Capitalists Make Decisions?". Lastly, Johannes explains why more academics "of any kind" are needed to study the world of venture capital investors.

    You can follow Johannes on Twitter at @JFLenhard and connect with him on LinkedIn. Academics and articles also mentioned in our conversation: Saskia Sassen; James Laidlaw; and Johannes Lenhard, Can Tech Ever Be Good?, Public Books, September 2020.

    Humanising Cybersecurity Through Anthropology. With Lianne Potter

    Jun 30, 2021 · 38:45


    Our guest today is Lianne Potter. Lianne is an anthropologist, self-taught software developer, cyber security evangelist, and entrepreneur. She works at Covea Insurance as their Information Security Transformation Manager, where she advocates for innovation in the cyber security field. Lianne's talk at the 2021 Response-ability Summit was titled "Reciprocity: Why The Cyber Security Industry Needs to Hire More Anthropologists".

    In this episode Lianne is in conversation with Isabelle Cotton, a digital anthropologist and social researcher, who was curious to interview Lianne for us. As Isabelle explains, "I was interested to talk to Lianne, who uses anthropology to humanise cybercrime. I find her acute awareness of the digital divide in all of the work she does particularly powerful. She has managed to carve out a space for anthropology in an industry that favours faceless data and numbers".

    During their conversation Lianne explains why she's so passionate about the digital divide and why she believes a people-based, behavioural approach to cybersecurity is so important. She also explains why the technical terms used in the industry can be off-putting to many general users, and why she believes storytelling is a way to raise awareness and increase engagement. Isabelle and Lianne also explore biometric security, two-factor authentication, and the 'culture' of hacking. Lastly, Lianne shares some advice for anthropologists looking to get into cybersecurity and tech more generally.

    Follow Lianne on Twitter at @Tech_Soapbox and connect with her on LinkedIn. Connect with Isabelle on LinkedIn and check out her website.

    Bringing an Anthropological Lens to Covid-19. With Gitika Saksena

    Jun 16, 2021 · 36:09


    My guest today is Gitika Saksena. Gitika is a Director at LagomWorks, a research and innovation consulting firm she founded in 2018. Before that she was a Vice President at Accenture Technology in India, where she led the strategy and design for various talent initiatives. Gitika gave a talk at the 2021 Response-ability Summit in May.

    Gitika has degrees in Economics and Business Management, as well as a second Master's degree in Social Anthropology from SOAS University of London. During our conversation, she explains what drew her to anthropology and to full-time study at SOAS. She reflects on how her experience at Accenture helped her on her new path, and shares some advice for other anthropologists looking to set up their own consultancies. She explains, for instance, that "clients won't engage with you for anthropology in and of itself. They demand, and will demand, very tangible outcomes and to be challenged and offered fresh perspectives."

    We explore the Covid-19 research that she and her colleague, Abhishek Mohanty, have variously conducted in the UK and India on masks, well-being, and privacy with respect to contact-tracing apps. They presented this research at the RAI Film Festival 2021, ASA 2021, and the 2021 Response-ability Summit respectively.

    Lastly, Gitika explains the value she believes anthropology brings to understanding the unprecedented shifts the world is undergoing, saying it's essential that anthropologists "bring our conceptual rigour to understand these shifts".

    You can find Gitika on Twitter at @GitikaSaksena. Follow LagomWorks on LinkedIn and check out their website, where you can sign up to their newsletter.

    Dignity-Centred Technology: Enabling Human Flourishing. With Lorenn Ruster and Thea Snow

    May 17, 2021 · 35:34


    My guests today are Lorenn Ruster and Thea Snow. Lorenn has recently completed her Masters at the School of Cybernetics at the Australian National University, and Thea is the Director of the Centre for Public Impact for Australia and New Zealand. Lorenn and Thea are speaking at the 2021 Response-ability Summit on May 20-21; their talk is titled "Dignity-centred technology — moving beyond protecting harms to enabling human flourishing". Thea and Lorenn explain how they came to work together, and their respective backgrounds.

    Lorenn shares her experience as a Masters student at the 3A Institute, established by the anthropologist Genevieve Bell, whose aim is to "build the skills and knowledge needed to help shape the future safely, sustainably and responsibly". Thea describes the Centre for Public Impact's work, which is to re-imagine government.

    Together, Lorenn and Thea share their dignity ecosystem model, a different way into a conversation about ethics in artificial intelligence. We discuss how their model might be used by technology companies as well as governments, given that it focuses on proactively enabling human flourishing rather than simply minimising harm. You can download their report, Exploring the role of dignity in government AI ethics instruments, from the CPI website.

    Find Lorenn on Twitter at @LorennRuster and on Medium. Find Thea on Twitter at @theasnow and on Medium. Follow the CPI at @CPI_foundation.

    Creating Emergent Socio-Digital Futures. With Susan Halford

    Play Episode Listen Later May 11, 2021 30:32


    My guest today is Professor Susan Halford, who is the co-Director of the Bristol Digital Futures Institute at the University of Bristol. Susan is our academic keynote at the 2021 Summit. The Bristol Digital Futures Institute (BDFI) is a University Research Institute that pioneers transformative approaches to digital innovation. It brings together researchers from across the disciplines and works with partners in industry, government and civil society. The BDFI is developing an in-depth, systematic understanding of sociotechnical futures to drive the creation of digital technologies for inclusive, prosperous and sustainable societies.

    During our conversation Susan explores the word ‘futures' in the sense of recognising that futures are not fixed, rather than in the sense of prediction, and explains the work and research that the Institute does. She also discusses the difference between the terms ‘sociotechnical' and ‘socio-digital', and why it's important that social scientists and technologists know enough about each other's fields so we can collaborate effectively.

    Lastly, Susan talks about 'response-ability' and briefly explores some of the ideas from Donna Haraway's book, Staying With The Trouble, that she finds incredibly provocative.

    You can find Susan on Twitter at @susanjhalford and the BDFI at @DigiFutures.

    How Spotify and Google are Using Social Science to Innovate. With Tom Hoy

    Play Episode Listen Later May 5, 2021 35:07


    My guest today is Tom Hoy. Tom is one of the founding Partners at Stripe Partners, the London-based innovation consultancy. Alongside co-founders Tom Rowley and Simon Roberts, Tom has built Stripe Partners from a kitchen table to a thriving business, advising clients including Spotify, Facebook, Google, and Intel. Tom's particular interests lie in designing new ways to work collaboratively with clients to maximise the impact of Stripe Partners's work, and helping them to see the value social science has in unlocking their most complex business challenges. His work has been featured in publications including the Financial Times and the Guardian.

    Stripe Partners are our 2021 Silver Partner, and Senior Research Consultant Anna Leggett will be sharing a research project at this year's Summit that explored opportunities to connect with marginalised communities during the pandemic.

    During our conversation Tom shares how Stripe Partners began and some of the reasons for their success as an innovation consultancy. He explains their main areas of practice and tells us about two of their projects, working with Spotify and Google. We also explore how Stripe Partners has adapted its methodologies during the pandemic, changing the way it does ethnographic research, and what has been gained and lost by doing research solely online.

    Lastly Tom recommends three worthwhile reads: Recommendation Engines by Michael Schrage, Valuing the Unique: The Economics of Singularities by Lucien Karpik, and Leave the World Behind by Rumaan Alam.

    You can find Tom on Twitter at @thoy and Stripe Partners at @stripepartners.

    Building Trust with Algorithmic Audits. With Gemma Galdon-Clavell

    Play Episode Listen Later Apr 21, 2021 39:27


    Our guest today is Dr Gemma Galdon-Clavell. Gemma is the Founder and CEO of Eticas Consulting. Her multidisciplinary background in the social, ethical and legal impact of data-intensive technology has enabled her and her team to design and implement practical solutions to data protection, ethics, explainability, and bias challenges in AI. Gemma, together with her colleague Emma Lopez, is talking at the 2021 Response-ability Summit, where they will be sharing their bottom-up approach to algorithmic auditing.

    During our conversation Gemma shares how she moved from an interest in public spaces to a PhD on surveillance, security and urban policy in 2012, to then founding Eticas. Gemma explains why Eticas is focused on digital ethics and trustworthy AI, and why she thinks that enforceable regulation is a good thing, not least because it means people can trust the tech.

    Gemma explains the three main phases that comprise their Algorithmic Audit Framework technology. She also talks about why we need more people who understand society working in this space — as she says, the future of humanity depends on it. And she has suggestions for young women, particularly those who are studying the social sciences, who want to make a positive contribution to emerging technologies.

    Lastly, Gemma shares some recommended reads and further resources. Follow Gemma on Twitter @gemmagaldon. To find out more about their algorithmic auditing work, visit Eticas Consulting, and Eticas Foundation for information about their public impact work.

    The Future of Privacy Tech. With Gilbert Hill

    Play Episode Listen Later Apr 7, 2021 50:36


    In this episode we're in conversation with Gilbert Hill. Gilbert is a privacy technologist and he's talking at the 2021 Summit in May. Most recently Gilbert was CEO and Advisor to Tapmydata, a start-up building consumer-grade tools for people to exercise data rights, with blockchain keeping score. Before becoming CEO of Tapmydata, Gilbert founded Optanon and, as the MD, grew it to become the market leader in the provision of website auditing and cookie compliance solutions in the UK and EU. Gilbert is a Fellow and Senior Tutor on Privacy and Ethics at the Institute of Data and Marketing.

    During our conversation Gilbert explains how, after graduating from Cambridge University with a degree in anthropology and archaeology, he became a privacy technologist. We discuss how he conceives of privacy, and we talk about Tapmydata and how it enables consumers to exercise their data rights — contrary to popular opinion at the time that people didn't care about their data — and the advantage for companies who hold it.

    Gilbert talks about the growing movement to re-emancipate citizens in terms of their data and its value, and the concept of data unions, which is enshrined in the EU's Digital Markets Act. We also discuss the role that blockchain and crypto have to play in data privacy. Lastly, Gilbert shares some of his recommended reads and why he's looking forward to the summit.

    Follow Gilbert on Twitter @GilbertHill and read his writing at gilberthill.medium.com.

    Mentioned in our conversation:
    Covid-19 and the cult of privacy by Daniel Miller
    An Artificial Revolution: On Power, Politics and AI by Ivana Bartoletti
    The End of Trust (McSweeney's 54) - features an interview with Ed Snowden explaining blockchain to his lawyer
    Privacy is Power: Why and How You Should Take Back Control of Your Data by Carissa Véliz
    The Cryptocurrency Revolution: Finance in the Age of Bitcoin, Blockchains and Tokens by Rhian Lewis
    And lastly, enjoy comedian Stevie Martin's funny video, which is a biting commentary on the “accept all” cookie option.

    The Power and Politics of Algorithmic Life. With Taina Bucher

    Play Episode Listen Later Mar 24, 2021 43:16


    In this episode we talk with Taina Bucher, who is an associate professor in screen cultures at the Department of Media and Communication, University of Oslo. Taina is the author of IF...THEN: Algorithmic Power and Politics, published by Oxford University Press in 2018.

    Taina explains why, as a media scholar, she became interested in algorithms and software, and we discuss her book and her proposal that we must approach algorithms not by asking what is an algorithm but instead when and how are algorithms. We discuss black boxes, a metaphor Taina finds problematic, and she uses the Facebook 'trending topics' controversy in 2016 as an example of the lack of nuance in discussions around attributing agency to either algorithms or humans. Taina explores how algorithms materialise in the institutional setting of news media in the context of the recent law passed by the Australian government aimed at making Google and Facebook pay for news content on their platforms.

    We also briefly talk about Taina's book, Facebook, published in May by Polity Press.

    Lastly, Taina recommends three books with respect to the questions she addressed in our conversation: Cloud Ethics by Louise Amoore, You Are Here by Whitney Phillips and Ryan M Milner, and Metrics at Work by Angèle Christin.

    Follow Taina on Twitter (@tainab) and find out more about her at tainabucher.com.

    An Engineering Anthropologist. With Astrid Countee

    Play Episode Listen Later Mar 10, 2021 56:44


    My guest today is Astrid Countee. Astrid is an anthropologist and technologist based in Houston, Texas. She is co-founder of Missing Link Studios. In 2016, Astrid wrote an article for Ethnography Matters on why tech companies need to hire software developers with ethnographic skills, and it's this article I explore with her during our conversation.

    Astrid shares her journey from dreaming of being a surgeon to studying forensic science and then medical anthropology before becoming a software engineer. Astrid explains what she thinks are the skills that anthropologists and ethnographers bring to a development team, and we talk about the benefits of technologists who have social understanding. Astrid shares some advice and tips for social scientists and researchers working alongside technologists and how we can work together.

    We also discuss the topic of whether anthropologists need to learn to code, and she likens it to learning a language, which anthropologists doing fieldwork often have to do.

    Lastly, Astrid discusses how anthropologists might be seen as more than the people who 'make technology usable', and that anthropologists should play a bigger part in tackling the wicked problems of this century.

    You can find Astrid on Twitter or LinkedIn.

    Making Data and AI Work for People and Society. With Reema Patel

    Play Episode Listen Later Feb 24, 2021 28:20


    In this episode we talk to Reema Patel, Head of Public Engagement at the Ada Lovelace Institute. The Ada Lovelace Institute is an independent research institute that was established in 2018. Its mission is to ensure data and AI work for people and society. Reema leads the organisation's public attitudes and public deliberation research.

    During our conversation, Reema shares her journey from Cambridge University, where she studied philosophy, to becoming one of the founding team members of the Ada Lovelace Institute. Reema explains why the Ada Lovelace Institute was established, its mission and purpose, and takes us through the four main themes on which the Institute is focused: algorithm accountability; data for the public good; justice and equalities; and Covid-19 technologies. Reema also touches on the Institute's recent work on the risks and benefits of digital COVID-19 vaccine certification schemes.

    We discuss the impact the Ada Lovelace Institute's work is having, determined as it is not to be a "talking shop". Lastly, Reema tells us about JUST AI, a humanities-led network established in 2020 that is committed to understanding the social and ethical value of data and AI.

    Follow Reema on Twitter and connect with her on LinkedIn. Follow the Ada Lovelace Institute on Twitter, check out their blog, and sign up to their informative fortnightly newsletter.

    The Office, Media, and Embodied Computing. With Simon Roberts

    Play Episode Listen Later Feb 10, 2021 43:07


    In this episode we talk to Dr Simon Roberts, business anthropologist and Partner at Stripe Partners, a strategy and innovation consultancy based in London. He's also the author of The Power of Not Thinking. Simon was a keynote at our inaugural summit, and Stripe Partners sponsored both the 2019 and 2020 events.

    During our conversation, Simon shares how he started out as a business anthropologist. We talk about his 2018 article, The UX-ification of Research, in which he decried the fact that research is being squeezed into a new temporal rhythm — being thoughtful is out, speed is in — and how optimistic he feels now, at a time of economic crisis, when budgets are being squeezed. We also talk about the office, now that working from home is a reality for most of us for the foreseeable future, and how we can utilise offices to do what they do best.

    We discuss his article, The Age of the Ear, in which he calls for a deep understanding of how people experience the aural dimensions of life. Which leads us to think about embodiment, the subject of his book, and embodied computing more generally.

    Lastly, Simon shares a couple of his favourite reads from 2020. We hope you enjoy the show.

    Mentioned in our conversation:
    ‘The Big Shift': Internal Facebook Memo Tells Employees to Do Better on Privacy
    We will miss the office if it dies. Lucy Kellaway, Financial Times, May 15 2020.
    The rise and fall of the office. Henry Mance, Financial Times, May 15 2020.
    James Rebanks: nature is my office, come rain or shine. Financial Times, December 29 2020.
    If Then: How the Simulmatics Corporation Invented the Future by Jill Lepore; Caste by Isabel Wilkerson; These Truths: A History of the United States by Jill Lepore; India After Gandhi: The History of the World's Largest Democracy by Ramachandra Guha; Uncharted: How to Map the Future by Margaret Heffernan; and Shuggie Bain by Douglas Stuart.

    Building Evidence-Based & Problem-Led Commercial Ventures. With Rachel Carey

    Play Episode Listen Later Jan 27, 2021 40:52


    In this episode we talk to Dr Rachel Carey, who is Chief Scientist at Zinc. Backed by the London School of Economics, Zinc was created in 2017 to test different ways of tackling important societal issues. Rachel is a behavioural scientist with a PhD in Psychology, and a passion for research translation and innovation.

    During our conversation, Rachel explains what her role involves and we talk about Zinc's mission and purpose, their Venture Builder Programme, and what it really looks like when you ground new commercial ventures in evidence-based research with a problem-led approach from the get-go. Rachel also shares why she thinks start-ups are really interesting environments for social scientists and what's different about social science-based innovation.

    Rachel shares some success stories from the Zinc Venture Builder programmes, Bellevie and Tonus, as well as two social-scientist-led ventures, Ferly and Studio X. Lastly, we talk about Zinc's work combatting the impacts of automation on the workforce, such as Tandem and Sook, and the importance of peer groups for social scientists working outside academia.

    You can follow Rachel on Twitter and connect with her on LinkedIn. You can find out more about Zinc at www.zinc.vc and twitter.com/zincvc. We loved talking to Rachel and we hope you enjoy the episode.

    Making Tech Accountable: Reflecting on 2020. With Martha Dark

    Play Episode Listen Later Dec 16, 2020 26:00


    In this episode we catch up with Martha Dark. Martha is the co-founder of Foxglove, a new NGO that exists to make tech fair for everyone. Made up of lawyers, technology experts and communications specialists, Foxglove believe that governments and big tech companies are misusing digital technology, and that this is harming the rest of us. Their aim is to fix this situation.

    We interviewed Martha in Episode 2 ahead of her talk at the 2020 conference. What better way to end the year than by celebrating the ways in which Foxglove has held tech accountable in 2020. During our conversation, Martha shares Foxglove's successes:

    Calling for transparency around the U.K. Government's secretive pandemic deals transferring the personal health information of millions of NHS users to private tech firms such as Amazon and Google, and controversial AI firms Faculty and Palantir.
    Calling out Google for profiling children who are watching YouTube videos in order to deliver targeted ads to them.
    Calling out the U.K. Home Office's “racist” visa streaming algorithm, which reviewed applications against a “suspect nationalities” list.
    Fighting for fair treatment for Facebook's social media content moderators, and for safe working conditions during the pandemic.
    Challenging the U.K. Government's devastating use of algorithms to grade A-level exams.

    Martha also shares with us what Foxglove will likely be working on in 2021. Lastly, Martha explains why Foxglove needs our help and how we can all support Foxglove's work in holding tech accountable.

    We loved talking to Martha and we hope you enjoy the episode. You can follow Foxglove on Twitter and find out more about them at foxglove.org.uk. Visit us online at anthtechconf.co.uk and sign up for our newsletter, or follow us on LinkedIn and Twitter.

    Mentioned in our conversation:
    Behind the Screen: Content Moderation in the Shadows of Social Media by Sarah T. Roberts.
    Ghost Work: How to Stop Silicon Valley From Building a New Global Underclass by Mary L. Gray and Siddharth Suri.
    Open letter to Facebook's leaders signed by over 200 Facebook content moderators from across the world.
    Facebook moderators forced to work in Dublin office despite high-tier lockdown, 23 October 2020.
    Leo Varadkar to press Facebook on working conditions for its content moderators, 15 November 2020.

    Addressing Ethical Challenges in a FinTech Startup. With Jeffrey Greger

    Play Episode Listen Later Dec 2, 2020 50:21


    Jeffrey Greger is a UX Researcher at Varo Bank. His work focuses on the ethical and organisational challenges that design professionals face as they develop financial services for and with low- to moderate-income communities. He holds a Master's degree in Applied Anthropology from San José State University.

    During our conversation, Jeff explains how he came to anthropology from industrial design and what sparked his interest in financial inclusion. We discuss his Master's thesis, which explored “the application of ethnographic research and commercially-derived design approaches in support of financial inclusion”, and we touch on why “well-intentioned humanitarian projects often fail to achieve their goals, at times further retrenching social and economic inequalities".

    We move on to talk about the working group Jeff and his colleagues created at Varo called PeopleFirst, which aims to avoid the industry logics of insularity, decontextualisation, and tech hubris, and how this community of practice is a way for staff to address ethics in a practical and substantive way.

    We loved talking to Jeff and we hope you enjoy the episode. You can follow Jeff on Twitter and find out more about him at www.jeffreygreger.com. Visit us online at anthtechconf.co.uk and sign up for our newsletter, or follow us on LinkedIn and Twitter.

    Mentioned in our conversation:
    Divining a Digital Future: Mess and Mythology in Ubiquitous Computing by Paul Dourish and Genevieve Bell.
    Jeff's paper, presented at EPIC 2020: There's No Playbook for Praxis. Translating Scholarship into Action to Build a More Ethical Bank.
    FairMoney: https://fairnetwork.org
    Seeing Like a State by James C. Scott.
    Metcalf, Moss and boyd, Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics.

    Why Our Cities Aren't Just Another Technology Problem. With Ben Green

    Play Episode Listen Later Nov 18, 2020 41:11


    Dr Ben Green is our guest in this week's episode. Ben spoke in the Smart Cities stream at the conference on 9 October 2020. Ben is the author of The Smart Enough City: Putting Technology In Its Place to Reclaim Our Urban Future, and he is a Postdoctoral Fellow at the Gerald R. Ford School of Public Policy, University of Michigan.

    During our conversation, Ben shares what prompted his move from a physics major to his research on the social and political impacts of government algorithms. We move on to talk about his book, The Smart Enough City, which was published by MIT Press in 2019, and why it's essential reading for both city officials and urban dwellers in the “battle for the future of cities”. The core message of the book is that “cities are not technology problems and technology cannot solve many of today's most pressing urban challenges”.

    We also talk about how and why data scientists must recognise themselves as political rather than neutral actors, in the context of a paper Ben wrote titled, "Data Science as Political Action: Grounding Data Science in a Politics of Justice". Lastly, we talk about his FAT* 2020 paper, Algorithmic Realism, co-authored with Salomé Viljoen, which proposes a new mode of algorithmic thinking to “better equip computer scientists to reduce algorithmic harms and to reason well about doing good.”

    We loved talking to Ben and we hope you enjoy the episode. Get your copy of The Smart Enough City from MIT or read an open access version. Connect with Ben on Twitter and find out more about his work and current thinking at https://www.benzevgreen.com. Visit us online at anthtechconf.co.uk and sign up for our newsletter, or follow us on LinkedIn and Twitter.

    DNA Testing Sites: The Promises and Pitfalls of Precision Medicine. Part II

    Play Episode Listen Later Nov 4, 2020 26:36


    In this special two-part episode, Zoe Weaver, a Psychology student at the University of the West of England and our 2020 Summer intern, unravels some of the privacy concerns raised by the Blackstone group's recent acquisition of genealogy provider Ancestry.com. Zoe's guests are Phil Booth, co-ordinator of MedConfidential, and Dr Laura Sobola, Senior Consultant at Unai and one of the Health Tech stream leads at the 2020 conference.

    In this second and final episode, Zoe delves more deeply into the laws surrounding genetic data, and the promises and pitfalls of precision medicine. She moves on to discuss the ethical issues that stand in the way of Matt Hancock, who is the U.K.'s Secretary of State for Health and Social Care, and his plans for the NHS to sequence the DNA of every baby born in the UK.

    We hope you enjoy the show. Music credit: James Thompson.

    DNA Testing Sites: Are They Safe? Part I

    Play Episode Listen Later Oct 21, 2020 26:08


    In this special two-part episode, Zoe Weaver, a Psychology student at the University of the West of England and our 2020 Summer intern, unravels some of the privacy concerns raised by the Blackstone group's recent acquisition of genealogy provider Ancestry.com. Zoe's guests are Phil Booth, co-ordinator of MedConfidential, and Dr Laura Sobola, Senior Consultant at Unai and one of the Health Tech stream leads at the 2020 conference.

    In this first episode, Zoe focuses on the privacy concerns raised by genetic testing sites, and other ethical issues that they raise, such as the virtual erosion of family secrets. Together with her guests, she explores issues of informed consent in genetic research, and the flaws of familial DNA matching. In the second episode, Zoe will delve more deeply into the laws surrounding genetic data, and the promises and pitfalls of precision medicine.

    We hope you enjoy the show. Music credit: James Thompson.

    Why Advocacy Is Underappreciated and Outsized in Research. With Alex Freeman

    Play Episode Listen Later Oct 6, 2020 40:41


    Our guest in this week's episode is Alex Freeman, who is a Senior User Researcher at Spotify. We're delighted that Spotify are our Gold Partner for the 2020 edition of the Anthropology + Technology Conference. Originally from California, Alex is currently living in Stockholm, Sweden.

    Alex describes himself as a Professional People Watcher. We chatted about how he got into user research, and he explains how advocacy was important in his own career. Alex explains how companies like Spotify benefit from a diverse range of employees, what companies can do to ensure that people from underrepresented backgrounds are hired into research roles, and why he's so excited to work at Spotify.

    We move on to discuss the ways in which Spotify is contributing to the important conversations happening right now, and Alex shares his passionate belief and thoughts around ‘humane tech'. Lastly Alex shares why he's excited to attend the Anthropology + Technology Conference.

    We loved talking to Alex and we hope you enjoy the episode. To find out more about Spotify, you can visit https://www.spotify.com/us/. To join us at the Anthropology + Technology Conference on 9th October, visit us online at anthtechconf.co.uk and sign up for our newsletter, or follow us on LinkedIn and Twitter.

    Why Representation Really, Really Matters in Tech. With Aisha Thomas

    Play Episode Listen Later Sep 30, 2020 31:03


    My guest today is Aisha Thomas, who is an Assistant Principal at an inner-city secondary school in Bristol, England, and an educational activist. Aisha originally trained as a lawyer, but an encounter with a young prisoner drew her into teaching. In her new profession she was shocked to discover how few Black teachers there were in Bristol, and was approached by the BBC to present a documentary on the issue.

    We talk about the conversation she wanted to ignite with her powerful, honest, and ultimately hopeful TEDx talk on why representation really matters, and the importance of feeling you belong, particularly for young people. We also touch on the U.K.'s A-level grading fiasco, which is a text-book example of everything we talk about in the episode.

    Lastly Aisha shares her thoughts on the must-read book, Race After Technology, by Ruha Benjamin, and gives us a sneak peek into her talk at the 2020 Anthropology + Technology Conference.

    We loved talking to Aisha and we hope you enjoy the episode. To find out more about Aisha, you can visit her website https://www.repmatters.co.uk/ and connect with her on Twitter, Instagram, and LinkedIn. To hear Aisha's talk at the Anthropology + Technology Conference on 9th October, visit us online at anthtechconf.co.uk and sign up for our newsletter, or follow us on LinkedIn and Twitter.

    Harnessing the Power of AI, Responsibly. With Richard Potter

    Play Episode Listen Later Sep 28, 2020 16:26


    In this episode, my guest is Richard Potter, Chief Technology Officer for Microsoft Consulting Services, UK. We're delighted that Microsoft are our 2020 Gold Partner. Richard is also Microsoft UK's Ethics Lead for AI, so he's passionate about organisations making the most of this powerful technology, but in a responsible way.

    We chatted about why Microsoft has such a strong position on responsible AI, and why it's imperative for business leaders to adopt a responsible approach to using the technology in their organisations. Richard discusses the recommendations he gives organisations, what he calls the 3i's: intelligence, ingenuity, and inclusivity. We move on to discuss the future of responsible AI and how this powerful technology, if harnessed responsibly, can deliver immense value to both society and businesses.

    Lastly Richard shares what he's looking forward to at the 2020 edition of the Anthropology + Technology Conference, and why he thinks the conference is so topical.

    We loved talking to Richard and we hope you enjoy the episode. You can connect with Richard on LinkedIn or Twitter. To join Richard on 9th October, visit us online at anthtechconf.co.uk and sign up for our newsletter, or follow us on LinkedIn and Twitter.

    AI, Power, and Politics. With Ivana Bartoletti

    Play Episode Listen Later Sep 23, 2020 54:51


    In this episode, we talk to Ivana Bartoletti, who is speaking on the panel at the conference on 9 October. Ivana is a Privacy and Data Protection professional, a media commentator and a public speaker. She is Technical Director, Privacy at Deloitte, and advises businesses and organisations on best practice and how to comply with privacy legislation at both a national and global level. She is also co-Founder of Women Leading in AI, and author of An Artificial Revolution – On Power, Politics and AI.

    During our conversation, we talk about Ivana's journey to her current role and discuss some of the key challenges of power and justice in the field of AI right now. Ivana explains why AI is an increasingly political issue, and one that requires a common language across all disciplines. Lastly, we talk about the problems of using AI in advertising, and Ivana explains why it's vital that we have more women contributing to decision-making in our organisations.

    We loved talking to Ivana and we hope you enjoy the episode. To find out more about Ivana, you can visit www.ivanabartoletti.co.uk. To catch her talk at the Anthropology + Technology Conference on 9th October, visit us online at anthtechconf.co.uk, sign up for our newsletter, and follow us on LinkedIn and Twitter.

    Freedom of Thought in the Digital Age. With Susie Alegre

    Play Episode Listen Later Sep 16, 2020 41:55


    Our guest in this week's episode is Susie Alegre. Susie is an international human rights barrister and consultant with extensive experience working in international development and human rights policy and practice. She is talking in the FinTech stream at the conference on 9 October.

    Susie describes international human rights law as “ethics with teeth”. She shares some of her career highlights and what sparked her interest in digital technology. We move on to discuss her interest in freedom of thought, and why it is so relevant in our 21st-century digital society, in which behavioural micro-targeting is a growing enterprise. Susie also talks about why algorithmic decisions need to be explicable in the context of financial services, and gives us a sneak peek into her talk at the conference.

    We loved talking to Susie and we hope you enjoy the episode. To find out more about Susie, visit https://susiealegre.com.

    Susie's article, Rethinking Freedom of Thought for the 21st Century, is discussed in the podcast. Susie is also interviewed on the Better Human show and on Forum Internum: Freedom of Thought (BBC Sounds).

    To catch Susie's talk at the Anthropology + Technology Conference on 9th October, visit us online at anthtechconf.co.uk and sign up for our newsletter, or follow us on LinkedIn and Twitter.

    Diversity, Design, and Digital Health. With Ijeoma Azodo and Rafiah Badat

    Play Episode Listen Later Sep 9, 2020 53:19


    In this episode, we talk to Ijeoma Azodo and Rafiah Badat, who are speaking in the health tech stream at the conference on 9 October 2020. Ijeoma is a surgeon who is now using her clinical expertise in clinical service design. Rafiah is a speech and language therapist, doing a clinical doctoral fellowship on a digital therapy tool for children with a language disorder.

    During our conversation, Ijeoma and Rafiah share their career paths and backgrounds. Ijeoma explains what drew her to medicine and surgery, and Rafiah describes how she came to specialise in speech and language therapy. We move on to discuss Ijeoma's work in clinical service design and Rafiah's clinical doctoral fellowship, and the work they are both doing in digital health.

    Lastly Ijeoma and Rafiah share why the Shuri Network is important to them and how the Network supports women of colour, and allies, who are working in digital health, particularly in terms of challenging the system to take action and supporting women of colour to succeed in their careers.

    We loved talking to Ijeoma and Rafiah and we hope you enjoy the episode. To find out more about the Shuri Network, you can visit https://shurinetwork.com. To catch their joint talk at the Anthropology + Technology Conference on 9th October, visit us online at anthtechconf.co.uk and sign up for our newsletter, or follow us on LinkedIn and Twitter.

    Using Tech To Transform Financial Wellbeing, Responsibly. With Ben Breen

    Play Episode Listen Later Sep 2, 2020 27:41


    Ben Breen is our guest in this week's episode. Ben is the co-Founder of NestEgg. He's talking in the FinTech stream at the conference on 9 October.

    During our conversation, Ben describes his career as a technologist, and explains why he and his co-Founder, Adrian Davies, set up NestEgg and what they were trying to change about the financial consumer market. We move on to discuss the research NestEgg did to understand its target market better, something so few start-ups do, and the startling finding it revealed. Ben shares his perspective, both as a technologist and an entrepreneur, on why beginning with the consumer is essential. We also talk about how NestEgg is innovating responsibly in the fintech space and why that's important to both him and Adrian.

    Lastly Ben discusses the opportunities in the world of fintech over the next five years or so, and gives us a sneak peek into his talk at the conference.

    We loved talking to Ben and we hope you enjoy the episode. To find out more about NestEgg, you can visit https://nestegg.ai. To catch his talk at the Anthropology + Technology Conference on 9th October, visit us online at anthtechconf.co.uk and sign up for our newsletter, or follow us on LinkedIn and Twitter.

    The Future of Safe and Transparent Access to Health Data. With Eerke Boiten

    Play Episode Listen Later Aug 26, 2020 25:26


    Professor Eerke Boiten is our guest in this week's episode. Eerke is Professor of Cyber Security and Director of the Cyber Technology Institute at De Montfort University. He's talking in the Health Tech stream at the conference on 9 October.

    During our conversation, Eerke shares what prompted his move from computer science into cybersecurity and explains his interest in health data. He talks about data sharing in health, why anonymisation isn't really safe, and suggests that we need to think about health data in a new way, which might offer a way forward.

    We move on to talk about ethics in AI and where the responsibility lies, and why proceeding thoughtfully rather than in haste pays dividends. We also touch on why, as an academic, Eerke actively engages in policy and public debate on issues such as data privacy and cybersecurity.

    Lastly, we explore the ‘privacy versus saving lives' discourse that contact-tracing apps have re-ignited, and why data protection impact assessments are crucial.

    We loved talking to Eerke and we hope you enjoy the episode. Read Eerke's opinion piece on health data in The Guardian, and his article in The Conversation on the UK government's COVID-19 data project.

    To catch Eerke's talk at the Anthropology + Technology Conference on 9th October, visit us online at anthtechconf.co.uk and sign up for our newsletter, or follow us on LinkedIn and Twitter.

    Using Deep Learning to Detect Retinal Disease. With Pearse Keane

    Play Episode Listen Later Aug 19, 2020 45:55


    Pearse Keane is our guest in this week's episode. Dr Keane is speaking in the Health Tech stream at the conference in October.

    Pearse explains what drew him to ophthalmology, and shares the story of how he came to collaborate with DeepMind on using deep learning to identify age-related macular degeneration (AMD), a devastating but common retinal disease that, if not treated quickly enough, leads to blindness.

    Pearse talks about the two papers published in Nature about this research, the realities and challenges of moving from research to real-world implementation, and the importance of working with industry to bring benefits to patients.

    Pearse explains what the UKRI Future Leaders Fellowship means for his work, which explores the development and application of AI in the NHS, and shares his learnings around ethics, data protection, and transparency, so that the NHS and the UK can be world leaders in this space.

    We loved talking to Pearse and we hope you enjoy the episode. To catch his talk at the Anthropology + Technology Conference on 9th October, get your ticket at anthtechconf.co.uk.

    Why Diversity Really Matters in the Health Service. With Sam Shah

    Play Episode Listen Later Aug 12, 2020 38:45


    Sam Shah is our guest in this week's episode. He's on the panel at the conference in October. Dr Shah is a Global Clinical and Digital Adviser, NHS clinician, former Director of Digital Development for NHSX, and the Financial Times' 4th most influential UK BAME tech leader in 2019.

    Sam talks about one of the public health projects he worked on that's still very close to his heart, Healthy Places, Heavy Lives, which aimed to reduce health inequalities. He discusses the fact that health issues are complex, and why we don't always need more clinicians; instead, we need a better way of tackling the wider social determinants.

    Sam shares his belief that people should always be at the heart of digital development, and discusses the balancing act in healthcare between the clinical, emotional, and practical needs of patients.

    We move on to discuss telehealth, its rapid adoption during the pandemic, and the risk of excluding people who can't access these services and driving a new inequality.

    Sam also talks about the lack of diversity in leadership roles within the health service, why inclusivity means better outcomes for everyone, and shares his ideas on how the public health sector can be made more diverse and representative of wider society.

    We loved talking to Sam and we hope you enjoy the episode. To get in touch with Sam, go to his LinkedIn profile: https://www.linkedin.com/in/sam-shah-nhs/

    To catch his contribution on the panel at the Anthropology + Technology Conference on 9th October, get your ticket at anthtechconf.co.uk. And check out Sam's conversation with Dr James Somauroo on the Health-Tech podcast.

    Delivering Social Value in the Built Environment. With Gemma John

    Play Episode Listen Later Aug 5, 2020 23:12


    Gemma John is our guest in this week's episode. She's talking in the Smart Cities stream at the conference on 9 October. Dr John is an urban anthropologist and Managing Director at Human City.

    Gemma explains how her PhD research into the Freedom of Information (Scotland) Act 2002 led to her focus on spatial transformation in the context of the knowledge economy. She discusses the work Human City does with asset management companies and local authorities, and shares a success story from a small town in England where a retail space delivered both social and financial value in a time of crisis.

    Gemma shares her experiences of interdisciplinarity and collaboration, and the ways in which anthropologists add value to the design of buildings and the built environment.

    We move on to discuss the lack of affordable housing at a time when most people are working from home, how seriously we should take this ability to work at a distance, and why the systemic equation between housing, employment, and health urgently needs to be made more explicit.

    Lastly, Gemma explores the concept of ‘speaking for the social', why it's important, and what it means in practice, particularly when collaborating with people from different disciplines.

    We loved talking to Gemma and we hope you enjoy the episode. To get in touch with Gemma and to find out more about Human City, visit https://humancity.co.uk.

    To catch Gemma's talk at the Anthropology + Technology Conference on 9th October, get your ticket at anthtechconf.co.uk.

    Digital Money, Mobility, and Long Tails with Erin B. Taylor

    Play Episode Listen Later Jul 28, 2020 28:25


    In this episode we are in conversation with Dr Erin B. Taylor, who is talking in the FinTech stream at the conference in October. Erin is an economic anthropologist specialising in research into financial behaviour, and the co-founder and research lead at Canela Consulting.

    Erin explains what an economic anthropologist does, what got her hooked on exploring how finances affect people's lives, and why we don't understand our own financial behaviours despite money affecting so much of what we do.

    Erin shares some stories from her consulting life, including anecdotes that show why understanding cultural context is so important, as well as the value that anthropologists like Erin bring to fintech companies.

    We move on to discuss the Consumer Finance Research Toolkit Erin has developed, which researchers can use to understand specific aspects of financial behaviour. She also discusses mobility and digital money, and explains how fintechs can create niche solutions that address specific financial needs untapped by banks.

    Lastly, Erin talks about the Female Finance report she co-wrote with Dr Anette Broløs, and why it makes commercial sense to deliver financial services specifically for women.

    We loved talking to Erin and we hope you enjoy the episode. To get in touch with Erin and to find out more about Canela Consulting, visit https://canela-group.com.

    To catch her talk at the Anthropology + Technology Conference on 9th October, visit us online at anthtechconf.co.uk and buy your ticket.

    Why Building AI Responsibly is Like Building a House. With Anders Kofod-Petersen

    Play Episode Listen Later Jul 21, 2020 36:34


    In this episode we talk to Dr Anders Kofod-Petersen. Anders is the Deputy Director of the Alexandra Institute in Copenhagen and also a Professor of Artificial Intelligence at the Norwegian University of Science and Technology. He's talking in the FinTech stream at the conference in October.

    During our conversation, Anders shares his favourite success story, the Danish Natural Language Processing repository, and why collaboration is essential, both across companies and disciplines. He shares his beliefs about why we need social scientists to help build technology, and why an AI project should be seen as an organisational one rather than an IT project.

    We move on to discuss responsible AI and the ways in which companies can be encouraged to build AI responsibly without the need for regulation. We also talk about the importance of understanding that we often pay for digital technologies with our data.

    Lastly, Anders offers a challenge to anthropologists and other social scientists.

    We loved talking to Anders and we hope you enjoy the episode. To get in touch with Anders and to find out more about the Alexandra Institute, visit https://alexandra.dk/uk.

    To catch his talk at the Anthropology + Technology Conference on 9th October, visit us online at anthtechconf.co.uk and sign up for our newsletter. We'll be in touch as soon as tickets go on sale.

    Why We Need to Understand Humans as Well as Data. With Phil Harvey

    Play Episode Listen Later Jul 14, 2020 26:57


    What opportunities lie ahead in the world of AI? How can businesses prepare? And what must leaders do to avoid putting their business, and wider society, at risk? These are just some of the topics we cover with Phil Harvey, our guest in this week's episode. Phil is a Senior Cloud Solution Architect for Data & AI in One Commercial Partner at Microsoft UK. He's the co-author of Data: A Guide to Humans, which will be published in January 2021. And he's talking in the Smart Cities stream at the conference in October.

    During our chat, we talk about the ways that data in smart cities relates specifically to people's lives and why it's essential that it's handled ethically. Phil also explains why leaders who aren't using AI responsibly risk damaging their business as well as the people they serve.

    We move on to discuss the need to educate wider teams to help identify business risks, and Phil shares some key advice on implementing AI responsibly, including some of the steps already taken at Microsoft.

    Last, but certainly not least, we explore the importance of soft skills like empathy. Phil explains why we must have a deeper understanding of people, as well as of the data we're working with.

    We loved talking to Phil and we hope you enjoy the episode. To get in touch with Phil or to find out more about his work and book, visit his LinkedIn profile: https://www.linkedin.com/in/philipdavidharvey/

    To catch his talk at the Anthropology + Technology Conference on 9th October, visit us online at anthtechconf.co.uk and sign up for our newsletter. We'll be in touch as soon as tickets go on sale.

    Small nudges towards more responsible AI with Dr Allison Gardner

    Play Episode Listen Later Jul 6, 2020 50:34


    In this episode we're thrilled to be interviewing Dr Allison Gardner, one of our 2020 keynote speakers in the health tech stream. Allison is a Teaching Fellow at Keele University and Programme Director for the Science Foundation Year. Her research focuses on gender and computing, AI ethics, the governance of AI, and the use of machine learning to predict disease.

    She works on the IEEE P7000 series within the Global Initiative on the Ethics of Autonomous and Intelligent Systems, specifically P7003 on algorithmic bias, which provides a framework for Algorithmic Impact Assessments. Allison is also one of the co-founders of Women Leading in AI.

    In this episode, Allison discusses some of the key issues we need to address when using AI in the health sector. She explains:

    - why investors in health tech need to be asking questions about teams as well as tech;
    - why we must hold onto our senior clinicians and prevent deskilling;
    - why it's absolutely critical that AI systems are designed, developed, and deployed correctly.

    Alongside the challenges AI brings to the health sector, we hear about future opportunities, including more personalised medicine and effective disease prediction.

    Lastly, Allison talks in detail about the journey towards regulation, and why small nudges and building on existing legal frameworks are the way forward.

    We found this a humbling, insightful, and inspiring conversation. We hope you enjoy it.

    To find out more about Allison Gardner, visit Keele University at https://www.keele.ac.uk/scm/staff/, or head to the Women Leading in AI website: https://womenleadinginai.org/

    The two books referenced in this episode are Invisible Women: Data Bias in a World Designed for Men by Caroline Criado Perez and What Works: Gender Equality by Design by Iris Bohnet.

    Allison Gardner will be speaking at the Anthropology + Technology Conference on 9th October. Visit us online at anthtechconf.co.uk and sign up for our newsletter. We'll be in touch as soon as tickets go on sale.

    How Responsible AI Benefits Everyone

    Play Episode Listen Later Jul 1, 2020 5:15


    Welcome to the Anthropology + Technology Conference Podcast. Thanks for tuning in; we're delighted to have you with us.

    In this first episode of our series, your host is Dawn Walter, the conference founder. Dawn set up the Anthropology + Technology Conference in 2019 with the aim of bringing together technologists and social scientists to ignite much-needed conversations and to help facilitate the adoption of more socially-responsible AI.

    As we'll discover during this podcast series, AI can provide unparalleled opportunities, but without the right processes it can also have a detrimental impact on people's futures and a business's reputation. The conference aims to create dialogue around these very issues: to celebrate technology's capabilities, while also making us aware of the serious challenges we must overcome.

    During this first episode, Dawn explains the meaning of responsible AI and why it's critically important right now. She also uncovers some of the key challenges that business leaders are facing and why responsible AI is an issue for all, not just the few.

    Last but not least, we'll give you a sneak peek into the podcast series, highlighting the core topics and discussing how you can benefit from joining us on our journey.

    We hope you enjoy the series as much as we've enjoyed making it. And we're delighted to have you on our side as we move towards more socially-responsible AI. Together.

    To find out more about the conference, please visit anthtechconf.co.uk. Sign up to our newsletter and we'll be in touch as soon as tickets go on sale.

    Making Tech Accountable with Martha Dark

    Play Episode Listen Later Jul 1, 2020 22:46


    In this episode we're delighted to be talking to Martha Dark, who is speaking in the health stream at the conference.

    Martha is the co-founder of Foxglove, a new NGO that exists to make tech fair for everyone. Foxglove is a team of lawyers, technology experts, and communications specialists who believe that governments and big tech companies are misusing digital technology, and that this is harming the rest of us. Their aim is to fix this.

    During this episode, Martha shares some of the recent investigations Foxglove has undertaken, including putting pressure on the government to be transparent about the contracts it makes with private companies that want access to NHS data.

    We talk about the challenges of algorithmic decision-making in the health sector today and how these might evolve over the next five years. Plus, Martha shares her thoughts on legislation and why companies need to consider responsible design from the outset, rather than as an afterthought.

    For more information on Foxglove, please visit Foxglove.org.uk. To see Martha's talk at the Anthropology + Technology Conference on 9th October, visit us online at anthtechconf.co.uk and sign up for our newsletter. We'll be in touch as soon as tickets go on sale.
