Untangled is a podcast about technology, people, and power.
I'm Charley Johnson, and this is Untangled, a newsletter and podcast about our sociotechnical world, and how to change it. Today, I'm bringing you the audio version of my latest essay, “There's no such thing as ‘fully autonomous agents.'” Before getting into it, two quick things:

1. I have a two-part essay out in Tech Policy Press with Michelle Shevin that offers a roadmap for how philanthropy can use the current “AI Moment” to build more just futures.

2. There is still room available in my upcoming course. In it, I weave together frameworks — from science and technology studies, complex adaptive systems, futures thinking, etc. — to offer you strategies and practical approaches to address the twin questions confronting all mission-driven leaders, strategists, and change-makers right now: what is your 'AI strategy' and how will you change the system you're in?

Now, on to the show!
Today, I'm sharing my conversation with Greg Epstein, humanist chaplain at Harvard University and the Massachusetts Institute of Technology, and author of the great new book Tech Agnostic: How Technology Became the World's Most Powerful Religion, and Why It Desperately Needs a Reformation. We discuss:

* How tech is becoming a religion, and why it's connected to our belief that we're never enough.
* How Elon Musk, Mark Zuckerberg, Jeff Bezos, and Bill Gates are hungry ghosts.
* What ‘tech-as-religion' allows us to see and understand that ‘capitalism-as-religion' doesn't.
* My concerns with the metaphor and Greg's thoughtful response.
* How we might usher in a tech reformation, and the tech humanists leading the way.
* The value of agnosticism and not-knowing when it comes to tech.

Okay, that's it for now,
Charley
Today, I'm sharing my conversation with Divya Siddarth, Co-Founder and Executive Director of the Collective Intelligence Project (CIP), about how we might democratize the development and governance of AI. We discuss:

* The CIP's work on alignment assemblies with Anthropic and OpenAI — what they've learned, and why in the world a company would agree to increasing public participation.
* The #1 risk of AI as ranked by the public. (Sneak peek: it has nothing to do with rogue robots.)
* Are participatory processes good enough to bind companies to the decisions they generate?
* How we need to fundamentally change our conception of ‘AI expertise.'
* How worker and public participation can shift the short-term thinking and incentives driving corporate America.
* Should AI companies become direct democracies or representative ones?
* How Divya would structure public participation if she had a blank sheet of paper and if AI companies had to adopt the recommendations.

That's it for now,
Charley
Today, I'm sharing my conversation with Deepti Doshi, Co-Director of New_Public, about what they've learned building healthy local communities, online and off. We discuss:

* The problem New_Public is trying to address with their initiative, Local Lab. (Which I highlighted in my recent essay, “Fragment the media! Embrace the shards!”)
* What Deepti has learned about what makes for pro-social conversations that build community on messaging boards and private groups.
* Why it's an oxymoron to call Twitter a ‘global town square,' and the relationship between scale and trustworthy information ecosystems.
* The importance of ‘digital stewards' in facilitating online community.
* How the social capital people build online is translating into IRL actions and civic engagement.
* What a future might look like if New_Public realizes the vision of Local Lab.

That's it for now,
Charley
This week, I'm sharing my conversation with Anya Kamenetz, the creator of The Golden Hour, a newsletter about “thriving and caring for others on a rapidly changing planet.” Anya and I announced a new partnership recently — now, when you sign up for an annual paid subscription to Untangled, you'll get free access to the paid version of The Golden Hour — and we wanted to talk about it, and the work ahead.

Along the way, we also discuss:

* How we're adapting our newsletters in response to the election.
* Why mitigating harms isn't sufficient, and a framework that can help us all orient to the present moment: block, build, be.
* How we consume information — our mindsets, habits, and practices — and also, why ‘consume' isn't the right frame.
* The difference between social media connections and email-based relationships.
* How to talk to your kids about the election.
* The fragmentation of the news media environment and why it's a good thing.

I couldn't be more excited to partner with Anya and introduce you to her work. Enjoy!

More soon,
Charley
Hi, I'm Charley, and this is Untangled, a newsletter about our sociotechnical world, and how to change it.

* Come work with me! The initiative I lead at Data & Society is hiring for a Community Manager. Learn more here.
* Check out my new course, Sociotechnical Systems Change in Practice. The first cohort will take place on January 11 and 12, and you can sign up here.
* Last week I interviewed Mozilla's Jasmine Sun and Nik Marda on the potential of public AI, and the week prior I shared my conversation with AI reporter Karen Hao on OpenAI's mythology, Meta's secret, and Microsoft's hypocrisy.
Hi, I'm Charley, and this is Untangled, a newsletter about our sociotechnical world, and how to change it.

* Untangled crossed the 8,000 subscriber mark this week. Woot!
* Come work with me! The initiative I lead at Data & Society is hiring for a Community Manager. Learn more here.
* Last week, I shared my conversation with award-winning AI reporter Karen Hao on OpenAI's mythology, Meta's secret, and Microsoft's hypocrisy.
* I launched my new course, Sociotechnical Systems Change in Practice. The first cohort will take place on January 11 and 12, and you can sign up here. (As you'll see, I've decided to offer a free 1:1 coaching session to all participants following the course.)
Hi, I'm Charley, and this is Untangled, a newsletter about our sociotechnical world, and how to change it.

* Last week, I argued that the shared reality the U.S. has long glorified was predominantly white and male, and that, historically, fragmentation has proven to be a good thing.
* I launched my new course, Sociotechnical Systems Change in Practice. The first cohort will take place on January 11 and 12, and you can sign up here. (As you'll see, I've decided to offer a free 1:1 coaching session to all participants following the course.)
* Untangled is 40 percent off at the moment, and I partnered with Anya Kamenetz to offer you her great newsletter The Golden Hour for free! Check out her latest on how to talk to your kids about the election. Signing up for Untangled right now means you'll get $140 in value for $54.

This week, I'm sharing my conversation with Karen Hao, an award-winning writer covering artificial intelligence for The Atlantic. We discuss:

* Karen's investigation into Microsoft's hypocrisy on AI and climate change.
* How OpenAI's mythology reminds Karen of Dune. (I can't stop thinking about the connection after Karen made it.)
* How Meta uses shell companies to hide from community scrutiny when building new data centers.
* How AI discourse should change, and what Karen is doing to train journalists on how to report on AI.
* How to shift power within tech companies. Employee organizing? Community advocacy? Reporting that rejects narratives premised on future promises and innovation for its own sake? Yes.

Reflections on the last week

I interviewed Karen on the morning of the election. I hesitated to share the episode this Sunday but ultimately decided to release it because it's a conversation about big, structural problems, and what we can do about them. The election results affirm for me the pivot I announced a few weeks ago. Namely, we can't solve existing problems or fix broken institutions such that they return us to the status quo. We're (still!) not going back. We have to transform existing sociotechnical systems as we address the rot that lies beneath. We must imagine alternative futures and align our individual and collective actions to them. We have to live these futures today, and then tomorrow.

One day at a time,
Charley
Hi, it's Charley, and this is Untangled, a newsletter about technology, people, and power.

Today, I'm sharing my conversation with Evan Ratliff, journalist and host of Shell Game, a funny and provocative new podcast about “things that are not what they seem.” Evan cloned his voice, hitched it to an AI agent, and then put it in conversation with scammers and spammers, a therapist, work colleagues, and even his friends and family. Shell Game helps listeners see a li'l farther into a future overrun with AI agents, and I wanted to speak with Evan about his experience of this future.

In our conversation, we discuss:

* The hilarity that ensues when Evan's AI agent engages with scammers and spammers, and the quirks and limitations of these tools.
* The harrowing experience of listening to your AI agent make stuff up about you in therapy.
* How those building these tools view the problem(s) they're solving.
* What it's like to send your AI agent to work meetings in your place.
* The work required to maintain these tools and make their outputs useful — does it actually help you save time and be more productive??
* The lingering uncertainty these tools cultivated through their interactions with Evan's family and friends.

If you find the conversation interesting, share it with a friend.

Okay, that's it for now,
Charley
Hi, it's Charley, and this is Untangled, a newsletter about technology, people, and power.

Can't afford a subscription and value Untangled's offerings? Let me know! You only need to reply to this email, and I'll add you to the paid subscription list, no questions asked.

I turned 40 this week and I spent the weekend in nature, surrounded by my favorite people. While my cup is running over with friendship, love, and support, I'll always take more.
Hi, it's Charley, and this is Untangled, a newsletter about technology, people, and power.

This week I'm sharing my conversation with Shannon Vallor, the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute (EFI) at the University of Edinburgh. Vallor and I talk about her great new book, The AI Mirror: Reclaiming Our Humanity in an Age of Machine Thinking, and how to chart a new path from the one we're on. We discuss:

* The metaphor of an ‘AI mirror' — what it is, and how it helps us better understand what AI is (and isn't!).
* What AI mirrors reveal about ourselves and our past.
* How AI mirrors distort what we see — whose voices and values they amplify, and who is left out of the picture altogether.
* How Vallor would change AI discourse.
* How we might chart a new path toward a fundamentally different future — as a sneak peek, it requires starting with outcomes and values and thinking backward.
* How we can become so much more than the limits subtly shaping our teenage selves (e.g., conceptions of what we're good at, what we're not, etc.) — and how that growth and evolution doesn't have to stop as we age.

It's not hyperbole when I say Vallor's book is the best thing I've read this year. If you send me a picture holding it in one hand, and my new book in the other, I might just explode with joy.

More soon,
Charley
Hi, it's Charley, and this is Untangled, a newsletter about technology, people, and power.

Last week, I published an essay about what humans can do that AI systems cannot. One such thing? Meaning-making! So what would it look like to put meaning-making at the center of AI product development? That's the conversation I had with Vaughn Tan, professor at University College London's School of Management, who writes the great newsletter The Uncertainty Mindset. We discuss:

* What's at stake in shifting the question from “Can AI systems produce outputs that look like outputs from humans?” to “What can humans do that AI systems cannot?”
* Meaning-making: what it is, the different types, and why it matters.
* How to center meaning-making in AI product development.
* What makes for a good question — and how to start asking better ones in the context of AI.

Okay, that's it for now.
Charley
Hi, it's Charley, and this is Untangled, a newsletter and podcast about technology, people, and power.

Can't afford a subscription and value Untangled's offerings? Let me know! You only need to reply to this email, and I'll add you to the paid subscription list, no questions asked.
Hi, it's Charley, and this is Untangled, a newsletter and podcast about technology, people, and power.
Last week, I analyzed a new lawsuit brought by University of Massachusetts Amherst professor Ethan Zuckerman and the Knight First Amendment Institute at Columbia University. If successful, the lawsuit would loosen Big Tech's grip on our internet experience. In this conversation, I'm joined by Louis Barclay, the creator of the tool Unfollow Everything, which is at the center of the lawsuit. Louis and I discuss:

* What it's like to be bullied by a massive company;
* Why this lawsuit would be so consequential for consumer choice and control over our online experience;
* The tools Louis would build to democratize power online.

That's it for this edition of Untangled.
Charley
Y'all are in for a treat — this week I interviewed Elise Hu for the podcast edition of Untangled. Elise is a journalist, podcaster, and entrepreneur. She is the host of TED Talks Daily, a host-at-large for NPR, and co-founder of the podcast production company Reasonable Volume, and she reports for VICE News. Her book, Flawless: Lessons in Looks and Culture from the K-Beauty Capital, is fantastic, and it inspired my recent essay “The Artificial Gaze.”

Guess what? Elise is offering a free, signed copy of her book to the first three people who sign up for an annual paid subscription to Untangled. So, sign up for the annual paid subscription, and if you're one of the first three, I'll ping you for your address, and Elise will generously send you a signed copy. Huzzah!

Now, on to the show!

In the episode, Elise and I talk about a lot of things:

* The idea of “pretty privilege” and the ways in which beauty culture is entangled with social norms, power, and technology;
* How Korean beauty culture is one of the most extreme in the world, and how its hypermodernity offers the rest of us a glimpse into our future;
* How Elise's book might have been different if she had started writing it now, amidst all the craze over AI;
* Beauty filters, and how they remind us of the internet of the early 1990s that felt weirder and more exploratory — like play!

Listen to the end to hear Elise and me talk about our shared love of Brené Brown and how worthiness, embodiment, and mutuality can help us collectively create an alternative future.

I'm grateful to Elise for joining me on Untangled. Do yourself a favor and subscribe to her newsletter — it is a fun jaunt through current events and pop culture, and I just added it to my recommendations list!

In the show, Elise and I mentioned a few great pieces. You can find them here:

* “We Have Built a Giant Treadmill That We Can't Get Off”: Sci-Fi Prophet Ted Chiang on How to Best Think About AI
* Face Forward: The unpredictable magic of TikTok and Instagram beauty filters is that they make you feel more like you.

If you like the podcast, subscribe to it on Apple or Spotify, review it, rate it, and share it. It really does make a difference.

As always, if you have ideas about how to make the newsletter or podcast better, tell me. If you're curious about how it's all going, let's talk. If you think one of the posts misses the mark, let me know.*

Until next time,
Charley

*Also, definitely tell me if you like it.
Hi, and welcome back to the podcast edition of Untangled. This was a big week at Untangled HQ. Substack featured Untangled on its homepage (!) and sent me a cute lil' graphic to commemorate the moment. As a result, hundreds of you have subscribed in the last few days. That just warms my wonky heart. Welcome to the Untangled community!
Hi, and welcome back to the podcast edition of Untangled. Want to make my day? Just subscribe to the newsletter on Substack and the podcast on Apple or Spotify. Clicking a couple of links to make someone's day just a lil' more joyful sounds like a deal to me.
Hi, and welcome back to the podcast edition of Untangled. If you're reading this but haven't yet subscribed, what are you even doing with your life?! I kid. (I hear chastising one's readers is a top-notch strategy!)
Hi, welcome back to the podcast edition of Untangled. You must be thinking “whoa, two podcasts in two weeks, you're really working hard to produce that sweet, sweet content.” You're right, I am! But, like any relationship, this is a two-way street, so please do your part by subscribing to Untangled on Apple or Spotify, and sharing this episode with a few friends.

This month I wrote about pseudonymity, harassment, and what they reveal about our relationship to technology. In the newsletter, I drew upon Alice Marwick's model of “morally motivated networked harassment” to help contextualize the backlash to Katie Notopoulos's story that revealed the real identities of the pseudonymous founders of the Bored Ape Yacht Club.

Marwick's model is the best explanation for why harassment happens online that I've come across, so I was thrilled to host her on Untangled to dive into it. Marwick is an Associate Professor at the University of North Carolina at Chapel Hill, where she researches the social, political, and cultural implications of popular social media technologies.

In this episode, we discuss:

* The “morally motivated networked harassment” (MMNH) model and what it helps explain that we didn't understand before.
* The impact of networked harassment at an individual, group, and societal level.
* Why social media companies aren't designed or incentivized to address networked harassment.
* How networked harassment relates to the process of online radicalization.

Listen to the end to hear what advice Marwick would offer her teenage self. You can find more from Alice on Twitter.

As always, if you like the podcast, please review it, rate it, and share it.

Until next time,
Charley

p.s. What's the point of having a newsletter if you can't wish your Dad a Happy Father's Day? So - Happy Father's Day, Dad!

Credits:
Track: The Perpetual Ticking of Time — Artificial.Music [Audio Library Release]
Music provided by Audio Library Plus
Hi, welcome back to the podcast edition of Untangled. Not long ago, I had the terrifying thought, “Is the Internet out of content? Have I reached the end of the Internet??” Luckily, I didn't have to contemplate that question for too long before a new season of Love on the Spectrum came out. Thank you, Netflix!
Hi, welcome back to the podcast edition of Untangled. I've got a few great interviews lined up, including one with Ian Bogost from The Atlantic. If you haven't yet subscribed, do your future self a favor and sign up on Apple, Spotify, or Substack.

In February, I offered a sociological deep dive into a topic you didn't know you needed to understand: Decentralized Autonomous Organizations, or DAOs. As I was writing it, I came across Kyle Chayka's piece for The New Yorker, “The Promise of DAOs, The Latest Craze in Crypto.” It's great, which won't be surprising to anyone who follows Chayka (if you don't, do it!). He is an incisive writer on technology and culture. Kyle and his colleague Daisy Alioto also recently launched their own DAO to govern Dirt, their newsletter about entertainment online. It's an interesting experiment in editorial democracy, and I wanted to host Kyle on Untangled to talk about it, and DAOs more broadly.

In our conversation, Kyle and I discuss:

* Whether DAOs will evolve from arcane financial instruments to function more like modern-day companies.
* Why governance in DAOs is everything, and how most people don't participate in it.
* How his experiment with DirtDAO is going, and what he's learned.

Oh, and we speculate wildly on what would happen if Twitter launched a token and distributed it to users proportional to one's accumulated retweets.
Hello, welcome back to the podcast edition of Untangled. If someone forwarded you this link, it was probably my sister. Give it a listen — she knows what she's talking about. Then, if you're so inclined, become a subscriber.
You came back! That warms my writerly heart. If someone forwarded you this email, definitely thank them - they just get your wonky sensibility. Then, if you're so inclined, become a subscriber.