Big Tech is a bi-weekly podcast that considers how emerging technologies are reshaping democracy, the economy and society. Co-hosts Taylor Owen and David Skok sit down with leading scholars, policymakers and entrepreneurs to have in-depth and thought-provoking discussions about the good and bad as…
CIGI / The Logic, Taylor Owen, David Skok, The Centre for International Governance Innovation
In the chaotic early months of his second term, Donald Trump has attacked the Canadian economy and mused about turning Canada into the “51st state.” Now, after decades of close allyship with the U.S., our relationship with America has suddenly become fraught. That means Canadians are starting to ask what a more sovereign Canada might look like – a question Jim Balsillie has been thinking about for 30 years.

Balsillie is the former co-CEO of Research in Motion, the company that developed the BlackBerry, and is one of the most successful businesspeople in Canada. He's also one of the most patriotic, which makes his recent criticism of our country that much more meaningful. As Balsillie has pointed out, our GDP per capita is currently about 70% of what it is in the U.S., our productivity growth has been abysmal for years, and our high cost of living means that 1 in 4 Canadians are now food insecure.

But, according to Balsillie, none of this can be blamed on Trump. He thinks that over the last thirty years we've clung to an outdated economic model and have allowed our politics to be captured by corporate interests.

So, with less than a week to go before the federal election, I thought it was the perfect time to sit down with Jim and ask him how we might build a stronger, more sovereign Canada.

Mentioned:
“Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS),” The World Trade Organization
“Reinforcing Canada's security and sovereignty in the Arctic,” Prime Minister of Canada
“Ontario Welcomes Siemens' $150 Million Investment to Establish New Technology Centre in Oakville,” news release from the Government of Ontario

Further Reading:
“We are all economic nationalists now,” by Jim Balsillie (National Post)
We have a really exciting episode coming out on Tuesday: an interview with former RIM co-CEO Jim Balsillie about the fight for Canada's economic sovereignty. In the meantime, we wanted to share a conversation between Taylor and political journalist Paul Wells. Every week, Paul sits down with the people trying to solve the biggest problems in Canada and around the world. And this week, that person is Taylor. He joins Paul to discuss his work on election interference and share his wish list for the next government's digital policy.
We're a few weeks into a federal election that is currently too close to call. And while most Canadians are wondering who our next Prime Minister will be, my guests today are preoccupied with a different question: will this election be free and fair?

In her recent report on foreign interference, Justice Marie-Josée Hogue wrote that “information manipulation poses the single biggest risk to our democracy”. Meanwhile, senior Canadian intelligence officials are predicting that India, China, Pakistan and Russia will all attempt to influence the outcome of this election.

To try and get a sense of what we're up against, I wanted to get two different perspectives on this. My colleague Aengus Bridgman is the Director of the Media Ecosystem Observatory, a project that we run together at McGill University, and Nina Jankowicz is the co-founder and CEO of the American Sunlight Project. Together, they are two of the leading authorities on the problem of information manipulation.

Mentioned:
“Public Inquiry Into Foreign Interference in Federal Electoral Processes and Democratic Institutions,” by the Honourable Marie-Josée Hogue
“A Pro-Russia Content Network Foreshadows the Automated Future of Info Ops,” by the American Sunlight Project

Further Reading:
“Report ties Romanian liberals to TikTok campaign that fueled pro-Russia candidate,” by Victor Goury-Laffont (Politico)
“2025 Federal Election Monitoring and Response,” by the Canadian Digital Media Research Network
“Election threats watchdog detects Beijing effort to influence Chinese Canadians on Carney,” by Steven Chase (Globe & Mail)
“The revelations and events that led to the foreign-interference inquiry,” by Steven Chase and Robert Fife (Globe & Mail)
“Foreign interference inquiry finds ‘problematic’ conduct,” by The Decibel
If you're having a conversation about the state of journalism, it's bound to get a little depressing. Since 2008, more than 250 local news outlets have closed down in Canada. The U.S. has lost a third of the newspapers it had in 2005. But this is about more than a failing business model. Only 31 percent of Americans say they trust the media. In Canada, that number is a little bit better – but only a little.

The problem is not just that people are losing their faith in journalism. It's that they're starting to place their trust in other, often more dubious sources of information: TikTok influencers, Elon Musk's X feed, and The Joe Rogan Experience.

The impact of this shift can be seen almost everywhere you look. 15 percent of Americans believe climate change is a hoax. 30 percent believe the 2020 election was stolen. 10 percent believe the earth is flat.

A lot of this can be blamed on social media, which crippled journalism's business model and led to a flourishing of false information online. But not all of it. People like Jay Rosen have long argued that journalists themselves are at least partly responsible for the post-truth moment we now find ourselves in.

Rosen is a professor of journalism at NYU who's been studying, critiquing, and really shaping, the press for nearly 40 years. He joined me a couple of weeks ago at the Attention conference in Montreal to explain how we got to this place – and where we might go from here.

A note: we recorded this interview before the Canadian election was called, so we don't touch on it here. But over the course of the next month, the integrity of our information ecosystem will face an inordinate amount of stress, and conversations like this one will be more important than ever.

Mentioned:
“Digital News Report Canada 2024 Data: An Overview,” by Colette Brin, Sébastien Charlton, Rémi Palisser, Florence Marquis
“America's News Influencers,” by Galen Stocking, Luxuan Wang, Michael Lipka, Katerina Eva Matsa, Regina Widjaya, Emily Tomasik and Jacob Liedke

Further Reading:
“Challenges of Journalist Verification in the Digital Age on Society: A Thematic Review,” by Melinda Baharom, Akmar Hayati Ahmad Ghazali, Abdul Muati, Zamri Ahmad
“Making Newsworthy News: The Integral Role of Creativity and Verification in the Human Information Behavior that Drives News Story Creation,” by Marisela Gutierrez Lopez, Stephann Makri, Andrew MacFarlane, Colin Porlezza, Glenda Cooper, Sondess Missaoui
“The Trump Administration and the Media (2020),” by Leonard Downie Jr. for the Committee to Protect Journalists
When the American company OpenAI released ChatGPT, it was the first time that a lot of people had ever interacted with generative AI. ChatGPT has become so popular that, for many, it's now synonymous with artificial intelligence.

But that may be changing. Earlier this year a Chinese startup called DeepSeek launched its own AI chatbot, sending shockwaves across Silicon Valley. According to DeepSeek, their model – DeepSeek-R1 – is just as powerful as ChatGPT but was developed at a fraction of the cost. In other words, this isn't just a new company, it could be an entirely different approach to building artificial intelligence.

To try and understand what DeepSeek means for the future of AI, and for American innovation, I wanted to speak with Karen Hao. Hao was the first reporter to ever write a profile of OpenAI and has covered AI for MIT Technology Review, The Atlantic and the Wall Street Journal. So she's better positioned than almost anyone to try and make sense of this seemingly monumental shift in the landscape of artificial intelligence.

Mentioned:
“The messy, secretive reality behind OpenAI's bid to save the world,” by Karen Hao

Further Reading:
“DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning,” by DeepSeek-AI and others
“A Comparison of DeepSeek and Other LLMs,” by Tianchen Gao, Jiashun Jin, Zheng Tracy Ke, Gabriel Moryoussef
“Technical Report: Analyzing DeepSeek-R1's Impact on AI Development,” by Azizi Othman
Do I have your attention right now? I'm guessing probably not. Or, at least, not all of it. In all likelihood, you're listening to this on your morning commute, or while you wash the dishes or check your e-mail.

We are living in a world of perpetual distraction. There are more things to read, watch and listen to than ever before – but our brains, it turns out, can only absorb so much. Politicians like Donald Trump have figured out how to exploit this dynamic. If you're constantly saying outrageous things, it becomes almost impossible to focus on the things that really matter. Trump's former strategist Steve Bannon called this strategy “flooding the zone.”

As the host of the MSNBC show All In, Chris Hayes has had a front-row seat to the war for our attention – and, now, he's decided to sound the alarm with a new book called The Sirens' Call: How Attention Became the World's Most Endangered Resource.

Hayes joined me to explain how our attention became so scarce, and what happens to us when we lose the ability to focus on the things that matter most.

Mentioned:
“Twitter and Tear Gas: The Power and Fragility of Networked Protest,” by Zeynep Tufekci

Further Reading:
“Ethics of the Attention Economy: The Problem of Social Media Addiction,” by Vikram R. Bhargava and Manuel Velasquez
“The Attention Economy: Labour, Time and Power in Cognitive Capitalism,” by Claudio Celis Bueno
“The business of news in the attention economy: Audience labor and MediaNews Group's efforts to capitalize on news consumption,” by Brice Nixon
It's become pretty easy to spot phishing scams: UPS orders you never made, banking alerts from companies you don't bank with, phone calls from unfamiliar area codes. But over the past decade, these scams – and the technology behind them – have become more sophisticated, invasive and sinister, largely due to the rise of something called ‘mercenary spyware.’

The most potent version of this tech is Pegasus, a surveillance tool developed by an Israeli company called NSO Group. Once Pegasus infects your phone, it can see your texts, track your movements, and download your passwords – all without you realizing you've been hacked.

We know a lot of this because of Ron Deibert. Twenty years ago, he founded Citizen Lab, a research group at the University of Toronto that has helped expose some of the most high-profile cases of cyber espionage around the world.

Ron has a new book out called Chasing Shadows: Cyber Espionage, Subversion, and the Global Fight for Democracy, and he sat down with me to explain how spyware works, and what it means for our privacy – and our democracy.

Note: We reached out to NSO Group about the claims made in this episode and they did not reply to our request for comment.

Mentioned:
“Chasing Shadows: Cyber Espionage, Subversion, and the Global Fight for Democracy,” by Ron Deibert
“Meta's WhatsApp says spyware company Paragon targeted users in two dozen countries,” by Raphael Satter (Reuters)

Further Reading:
“The Autocrat in Your iPhone,” by Ron Deibert
“A Comprehensive Analysis of Pegasus Spyware and Its Implications for Digital Privacy and Security,” by Karwan Kareem
“Stopping the Press: New York Times Journalist Targeted by Saudi-linked Pegasus Spyware Operator,” by Bill Marczak, Siena Anstis, Masashi Crete-Nishihata, John Scott-Railton, and Ron Deibert
We've spent a lot of time on this show talking about AI: how it's changing war, how your doctor might be using it, and whether or not chatbots are curing, or exacerbating, loneliness.

But what we haven't done is try to explain how AI actually works. So this seemed like as good a time as any to ask our listeners if they had any burning questions about AI. And it turns out you did.

Where do our queries go once they've been fed into ChatGPT? What are the justifications for using a chatbot that may have been trained on plagiarized material? And why do we even need AI in the first place?

To help answer your questions, we are joined by Derek Ruths, a Professor of Computer Science at McGill University, and the best person I know at helping people (including myself) understand artificial intelligence.

Further Reading:
“Yoshua Bengio Doesn't Think We're Ready for Superhuman AI. We're Building It Anyway,” Machines Like Us podcast
“ChatGPT is blurring the lines between what it means to communicate with a machine and a human,” by Derek Ruths
“A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going,” by Michael Wooldridge
“Artificial Intelligence: A Guide for Thinking Humans,” by Melanie Mitchell
“Anatomy of an AI System,” by Kate Crawford and Vladan Joler
“Two years after the launch of ChatGPT, how has generative AI helped businesses?,” by Joe Castaldo
We spend a lot of time talking about AI on this show: how we should govern it, the ideologies of the people making it, and the ways it's reshaping our lives.

But before we barrel into a year where I think AI will be everywhere, we thought this might be a good moment to step back and ask an important question: what exactly is AI?

On our next episode, we'll be joined by Derek Ruths, a Professor of Computer Science at McGill University. And he's given me permission to ask him anything and everything about AI.

If you have questions about AI, or how it's impacting your life, we want to hear them. Send an email or a voice recording to: machineslikeus@paradigms.tech

Thanks – and we'll see you next Tuesday!
In February 2024, Megan Garcia's 14-year-old son Sewell took his own life.

As she tried to make sense of what happened, Megan discovered that Sewell had fallen in love with a chatbot on Character.AI – an app where you can talk to chatbots designed to sound like historical figures or fictional characters. Now Megan is suing Character.AI, alleging that Sewell developed a “harmful dependency” on the chatbot that, coupled with a lack of safeguards, ultimately led to her son's death. They've also named Google in the suit, alleging that the technology that underlies Character.AI was developed while the founders were working at Google.

I sat down with Megan Garcia and her lawyer, Meetali Jain, to talk about what happened to Sewell. And to try to understand the broader implications of a world where chatbots are becoming a part of our lives – and the lives of our children.

We reached out to Character.AI and Google about this story. Google did not respond to our request for comment by publication time. A spokesperson for Character.AI made the following statement:

“We do not comment on pending litigation. Our goal is to provide a space that is both engaging and safe for our community. We are always working toward achieving that balance, as are many companies using AI across the industry. As part of this, we have launched a separate model for our teen users – with specific safety features that place more conservative limits on responses from the model.

The Character.AI experience begins with the Large Language Model that powers so many of our user and Character interactions. Conversations with Characters are driven by a proprietary model we continuously update and refine. For users under 18, we serve a version of the model that is designed to further reduce the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content. This initiative – combined with the other techniques described below – combine to produce two distinct user experiences on the Character.AI platform: one for teens and one for adults.

Additional ways we have integrated safety across our platform include:

Model Outputs: A “classifier” is a method of distilling a content policy into a form used to identify potential policy violations. We employ classifiers to help us enforce our content policies and filter out sensitive content from the model's responses. The under-18 model has additional and more conservative classifiers than the model for our adult users.

User Inputs: While much of our focus is on the model's output, we also have controls to user inputs that seek to apply our content policies to conversations on Character.AI. This is critical because inappropriate user inputs are often what leads a language model to generate inappropriate outputs. For example, if we detect that a user has submitted content that violates our Terms of Service or Community Guidelines, that content will be blocked from the user's conversation with the Character. We also have a process in place to suspend teens from accessing Character.AI if they repeatedly try to input prompts into the platform that violate our content policies.

Additionally, under-18 users are now only able to access a narrower set of searchable Characters on the platform. Filters have been applied to this set to remove Characters related to sensitive or mature topics.

We have also added a time spent notification and prominent disclaimers to make it clear that the Character is not a real person and should not be relied on as fact or advice.
As we continue to invest in the platform, we will be rolling out several new features, including parental controls. For more information on these new features, please refer to the Character.AI blog HERE.

There is no ongoing relationship between Google and Character.AI. In August, 2024, Character.AI completed a one-time licensing of its technology and Noam went back to Google.”

If you or someone you know is thinking about suicide, support is available 24-7 by calling or texting 988, Canada's national suicide prevention helpline.

Mentioned:
Megan Garcia v. Character Technologies, et al.
“Google Paid $2.7 Billion to Bring Back an AI Genius Who Quit in Frustration,” by Miles Kruppa and Lauren Thomas
“Belgian man dies by suicide following exchanges with chatbot,” by Lauren Walker
“Can AI Companions Cure Loneliness?,” Machines Like Us
“An AI companion suggested he kill his parents. Now his mom is suing,” by Nitasha Tiku

Further Reading:
“Can A.I. Be Blamed for a Teen's Suicide?,” by Kevin Roose
“Margrethe Vestager Fought Big Tech and Won. Her Next Target is AI,” Machines Like Us
In July, there was a recall of two brands of plant-based milks, Silk and Great Value, after a listeria outbreak that led to at least 20 illnesses and three deaths. Public health officials determined the same strain of listeria had been making people sick for almost a year. When Globe reporters began looking into what happened, they found a surprising fact: the facility that the bacteria was traced to had not been inspected for listeria in years.

The reporters learned that in 2019 the Canadian Food Inspection Agency introduced a new system that relies on an algorithm to prioritize sites for inspectors to visit. Investigative reporters Grant Robertson and Kathryn Blaze Baum talk about why this new system of tracking was created, and what went wrong.
The board game Go has more possible board configurations than there are atoms in the universe. Because of that seemingly infinite complexity, developing software that could master Go has long been a goal of the AI community.

In 2016, researchers at Google's DeepMind appeared to meet the challenge. Their Go-playing AI defeated one of the best Go players in the world, Lee Sedol. After the match, Lee Sedol retired, saying that losing to an AI felt like his entire world was collapsing. He wasn't alone. For a lot of people, the game represented a turning point – the moment where humans had been overtaken by machines.

But Frank Lantz saw that game and was invigorated. Lantz is a game designer (his game “Hey Robot” is a recurring feature on The Tonight Show Starring Jimmy Fallon), the director of the NYU Game Center, and the author of The Beauty of Games. He's spent his career thinking about how technology is changing the nature of games – and what we can learn about ourselves when we sit down to play them.

Mentioned:
“AlphaGo”
“The Beauty of Games,” by Frank Lantz
“Adversarial Policies Beat Superhuman Go AIs,” by Tony Wang et al.
“Theory of Games and Economic Behavior,” by John von Neumann and Oskar Morgenstern
“Heads-up limit hold'em poker is solved,” by Michael Bowling et al.

Further Reading:
“How to Play a Game,” by Frank Lantz
“The Afterlife of Go,” by Frank Lantz
“How A.I. Conquered Poker,” by Keith Romer
“In Two Moves, AlphaGo and Lee Sedol Redefined the Future,” by Cade Metz
Hey Robot, by Frank Lantz
Universal Paperclips, by Frank Lantz
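The scale of that opening claim is easy to sanity-check. Here's a quick back-of-the-envelope calculation of our own (the figures are standard for a 19x19 board, not taken from the episode):

```python
# Each of the 361 points on a 19x19 Go board can be empty, black or white,
# so 3^361 is an upper bound on board configurations. John Tromp's count of
# strictly legal positions is about 2.1 * 10^170 -- either way, vastly more
# than the ~10^80 atoms usually estimated for the observable universe.
upper_bound = 3 ** 361
digits = len(str(upper_bound))
print(f"3^361 ~ 10^{digits - 1}")  # ~10^172
```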
The past few months have seen a series of bold proclamations from the most powerful people in tech.

In September, Mark Zuckerberg announced that Meta had developed “the most advanced glasses the world had ever seen.” That same day, OpenAI CEO Sam Altman predicted we could have artificial superintelligence within a couple of years. Elon Musk has said he'll land rockets on Mars by 2026.

We appear to be living through the kinds of technological leaps we used to only dream about. But whose dreams were those, exactly?

In her latest book, Imagination: A Manifesto, Ruha Benjamin argues that our collective imagination has been monopolized by the Zuckerbergs and Musks of the world. But, she says, it doesn't need to be that way.

Mentioned:
“Imagination: A Manifesto,” by Ruha Benjamin
Summer of Soul (...Or, When the Revolution Could Not Be Televised), directed by Questlove
“The Black Woman: An Anthology,” by Toni Cade Bambara
“The New Artificial Intelligentsia,” by Ruha Benjamin
“Race After Technology,” by Ruha Benjamin
Breonna's Garden, with Ju'Niyah Palmer
“Viral Justice,” by Ruha Benjamin
The Parable Series, by Octavia Butler

Further Reading:
“AI could make health care fairer—by helping us believe what patients say,” by Karen Hao
“How an Attempt at Correcting Bias in Tech Goes Wrong,” by Sidney Fussell
“Unmasking AI: My Mission to Protect What Is Human in a World of Machines,” by Joy Buolamwini
“The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence,” by Timnit Gebru and Émile P. Torres
Margrethe Vestager has spent the past decade standing up to Silicon Valley. As the EU's Competition Commissioner, she's waged landmark legal battles against tech giants like Meta, Microsoft and Amazon. Her two latest wins will cost Apple and Google billions of dollars.

With her decade-long tenure as one of the world's most powerful antitrust watchdogs coming to an end, Vestager has turned her attention to AI. She spearheaded the EU's AI Act, which will be the first and, so far, most ambitious piece of AI legislation in the world.

But the clock is ticking – both on her term and on the global race to govern AI, which Vestager says we have “very little time” to get right.

Mentioned:
The EU Artificial Intelligence Act
“Dutch scandal serves as a warning for Europe over risks of using algorithms,” by Melissa Heikkilä
“Belgian man dies by suicide following exchanges with chatbot,” by Lauren Walker
The Digital Services Act
The Digital Markets Act
General Data Protection Regulation (GDPR)
“The future of European competitiveness,” by Mario Draghi
“Governing AI for Humanity: Final Report,” by the United Nations Secretary-General's High-level Advisory Body
The Artificial Intelligence and Data Act (AIDA)

Further Reading:
“Apple, Google must pay billions in back taxes and fines, E.U. court rules,” by Ellen Francis and Cat Zakrzewski
“OpenAI Lobbied the E.U. to Water Down AI Regulation,” by Billy Perrigo
“The total eclipse of Margrethe Vestager,” by Samuel Stolton
“Digital Empires: The Global Battle to Regulate Technology,” by Anu Bradford
“The Brussels Effect: How the European Union Rules the World,” by Anu Bradford
We're off this week, so we're bringing you an episode from our Globe and Mail sister show, Lately.

That creeping feeling that everything online is getting worse has a name: “enshittification,” a term for the slow degradation of our experience on digital platforms. The enshittification cycle is why you now have to wade through slop to find anything useful on Google, and why your charger is different from your BFF's. According to Cory Doctorow, the man who coined the memorable moniker, this digital decay isn't inevitable. It's a symptom of corporate under-regulation and monopoly – practices being challenged in courts around the world, like the US Department of Justice's antitrust suit against Google.

Cory Doctorow is a British-Canadian journalist, blogger and author of Chokepoint Capitalism, as well as speculative fiction works like The Lost Cause and the new novella Spill.

Every Friday, Lately takes a deep dive into the big, defining trends in business and tech that are reshaping our every day. It's hosted by Vass Bednar. Machines Like Us will be back in two weeks.
The tech lobby has quietly turned Silicon Valley into the most powerful political operation in America. Pro-crypto donors are now responsible for almost half of all corporate donations this election. Elon Musk has gone from an occasional online troll to, as one of our guests calls him, “MAGA's Minister of Propaganda.” And for the first time, the once reliably blue Silicon Valley seems to be shifting to the right.

What does all this mean for the upcoming election? To help us better understand this moment, we spoke with three of the most prominent tech writers in the U.S. Charles Duhigg (author of the bestseller Supercommunicators) has a recent piece in the New Yorker called “Silicon Valley, the New Lobbying Monster.” Charlie Warzel is a staff writer at the Atlantic, and Nitasha Tiku is a tech culture reporter at the Washington Post.

Mentioned:
“Silicon Valley, the New Lobbying Monster,” by Charles Duhigg
“Big Crypto, Big Spending: Crypto Corporations Spend an Unprecedented $119 Million Influencing Elections,” by Rick Claypool via Public Citizen
“I'm Running Out of Ways to Explain How Bad This Is,” by Charlie Warzel
“Elon Musk Has Reached a New Low,” by Charlie Warzel
“The movement to diversify Silicon Valley is crumbling amid attacks on DEI,” by Naomi Nix, Cat Zakrzewski and Nitasha Tiku
“The Techno-Optimist Manifesto,” by Marc Andreessen
“Trump Vs. Biden: Tech Policy,” The Ben & Marc Show
“The MAGA Aesthetic Is AI Slop,” by Charlie Warzel

Further Reading:
“Biden's FTC took on big tech, big pharma and more. What antitrust legacy will Biden leave behind?,” by Paige Sutherland and Meghna Chakrabarti
“Inside the Harris campaign's blitz to win back Silicon Valley,” by Cat Zakrzewski, Nitasha Tiku and Elizabeth Dwoskin
“The Little Tech Agenda,” by Marc Andreessen and Ben Horowitz
“Silicon Valley had Harris's back for decades. Will she return the favor?,” by Cristiano Lima-Strong and Cat Zakrzewski
“SEC's Gensler turns tide against crypto in courts,” by Declan Harty
“Trump vs. Harris is dividing Silicon Valley into feuding political camps,” by Trisha Thadani, Elizabeth Dwoskin, Nitasha Tiku and Gerrit De Vynck
“Inside the powerful Peter Thiel network that anointed JD Vance,” by Elizabeth Dwoskin, Cat Zakrzewski, Nitasha Tiku and Josh Dawsey
What kind of future are we building for ourselves? In some ways, that's the central question of this show.

It's also a central question of speculative fiction. And one that few people have tried to answer as thoughtfully – and as poetically – as Emily St. John Mandel.

Mandel is one of Canada's great writers. She's the author of six award-winning novels, the most recent of which is Sea of Tranquility – a story about a future where we have moon colonies and time-travelling detectives. But Mandel might be best known for Station Eleven, which was made into a big HBO miniseries in 2021. In Station Eleven, Mandel envisioned a very different future: one where a pandemic has wiped out nearly everyone on the planet, and the world has returned to a pre-industrial state. In other words, a world without technology.

I think speculative fiction carries tremendous power. In fact, I think that AI is ultimately an act of speculation. The AI we have chosen to build, and our visions of what AI could become, have been shaped by acts of imagination.

So I wanted to speak to someone who has made a career imagining other worlds, and thinking about how humans will fit into them.

Mentioned:
“Last Night in Montreal,” by Emily St. John Mandel
“Station Eleven,” by Emily St. John Mandel
The Nobel Prize in Literature 2014 – Lecture by Patrick Modiano
“The Glass Hotel,” by Emily St. John Mandel
“Sea of Tranquility,” by Emily St. John Mandel
Summary of the 2023 WGA MBA, Writers Guild of America
Her (2013)
“The Handmaid's Tale,” by Margaret Atwood
“Shell Game,” by Evan Ratliff
Replika

Further Reading:
“Can AI Companions Cure Loneliness?,” Machines Like Us
“Yoshua Bengio Doesn't Think We're Ready for Superhuman AI. We're Building It Anyway,” Machines Like Us
“The Road,” by Cormac McCarthy
A couple of weeks ago, I was at this splashy AI conference in Montreal called All In. It was – how should I say this – a bit over the top. There were smoke machines, thumping dance music, food trucks. It was a far cry from the quiet research labs where AI was developed. While I remain skeptical of the promise of artificial intelligence, this conference made it clear that the industry is, well, all in. The stage was filled with startup founders promising that AI was going to revolutionize the way we work, and government officials saying AI was going to supercharge the economy.

And then there was Yoshua Bengio. Bengio is one of AI's pioneering figures. In 2018, he and two colleagues won the Turing Award – the closest thing computer science has to a Nobel Prize – for their work on deep learning. In 2022, he was the most cited computer scientist in the world. It wouldn't be hyperbolic to suggest that AI as we know it today might not exist without Yoshua Bengio.

But in the last couple of years, Bengio has had an epiphany of sorts. And he now believes that, left unchecked, AI has the potential to wipe out humanity. So these days, he's dedicated himself to AI safety. He's a professor at the University of Montreal and the founder of Mila – the Quebec Artificial Intelligence Institute. And he was at this big AI conference too, amidst all these Silicon Valley types, pleading with the industry to slow down before it's too late.

Mentioned:
“Personal and Psychological Dimensions of AI Researchers Confronting AI Catastrophic Risks,” by Yoshua Bengio
“Deep Learning,” by Yann LeCun, Yoshua Bengio, Geoffrey Hinton
“Computing Machinery and Intelligence,” by Alan Turing
“International Scientific Report on the Safety of Advanced AI”
“Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?,” by R. Ren et al.
“SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act”

Further Reading:
“‘Deep Learning’ Guru Reveals the Future of AI,” by Cade Metz
“Montréal Declaration for a Responsible Development of Artificial Intelligence”
“This A.I. Subculture's Motto: Go, Go, Go,” by Kevin Roose
“Reasoning through arguments against taking AI safety seriously,” by Yoshua Bengio
In 2015, 195 countries gathered in Paris to discuss how to address the climate crisis. Although there was plenty they couldn't agree on, there was one point of near-absolute consensus: if the planet becomes 2°C hotter than it was before industrialization, the effects will be catastrophic.

Despite that consensus, we have continued barrelling toward that 2°C threshold. And while the world is finally paying attention to climate change, the pace of our action is radically out of step with the severity of the problem. What is becoming increasingly clear is that just cutting our emissions – by switching to clean energy or driving electric cars – will not be sufficient. We will also need some bold technological solutions if we want to maintain some semblance of life as we know it.

Luckily, everything is on the table. Grinding entire mountains into powder and dumping them into oceans. Sucking carbon directly out of the air and burying it underground. Spraying millions of tons of sulphur dioxide directly into the atmosphere.

Gwynne Dyer has spent the past four years interviewing the world's leading climate scientists about the moonshots that could save the planet. Dyer is a journalist and historian who has written a dozen books over his career, and has become one of Canada's most trusted commentators on war and geopolitics. But his latest book, Intervention Earth, is about the battle to save the planet.

Like any reporting on the climate, it's inevitably a little depressing. But with this book Dyer has also given us a different way of thinking about the climate crisis – and maybe even a road map for how technology could help us avoid our own destruction.

Mentioned:
“Intervention Earth: Life-Saving Ideas from the World's Climate Engineers,” by Gwynne Dyer
“Scientists warn Earth warming faster than expected – due to reduction in ship pollution,” by Nicole Mortillaro
“Global warming in the pipeline,” by James Hansen, et al.
“Albedo Enhancement by Stratospheric Sulfur Injections: A Contribution to Resolve a Policy Dilemma?,” by Paul Crutzen

Further Reading:
Interview with Hans Joachim Schellnhuber and Gwynne Dyer
For nearly a year now, the world has been transfixed – and horrified – by what's happening in the Gaza Strip. Yet for all the media coverage, there seems to be far less known about how this war is actually being fought. And the how of this conflict, and its enormous human toll, might end up being its most enduring legacy.

In April, the Israeli magazine +972 published a story describing how Israel was using an AI system called Lavender to target potential enemies for air strikes, sometimes with a margin of error as high as 10 per cent.

I remember reading that story back in the spring and being shocked, not that such tools existed, but that they were already being used at this scale on the battlefield. P.W. Singer was less surprised. Singer is one of the world's foremost experts on the future of warfare. He's a strategist at the think tank New America, a professor of practice at Arizona State University, and a consultant for everyone from the US military to the FBI.

So if anyone can help us understand the black box of autonomous weaponry and AI warfare, it's P.W. Singer.

Mentioned:
“‘The Gospel’: how Israel uses AI to select bombing targets in Gaza,” by Harry Davies, Bethan McKernan, and Dan Sabbagh
“‘Lavender’: The AI machine directing Israel's bombing spree in Gaza,” by Yuval Abraham
“Ghost Fleet: A Novel of the Next World War,” by P. W. Singer and August Cole

Further Reading:
“Burn-In: A Novel of the Real Robotic Revolution,” by P. W. Singer and August Cole
“The AI revolution is already here,” by P. W. Singer
“Humans must be held responsible for decisions AI weapons make,” in The Asahi Shimbun
“Useful Fiction”
Things do not look good for journalism right now. This year, Bell Media, VICE, and the CBC all announced significant layoffs. In the US, there were cuts at the Washington Post, the LA Times, Vox and NPR – to name just a few. A recent study from Northwestern University found that an average of two and a half American newspapers closed down every single week in 2023 (up from two a week the year before).

One of the central reasons for this is that the advertising model that has supported journalism for more than a century has collapsed. Simply put, Google and Meta have built a better advertising machine, and they've crippled journalism's business model in the process.

It wasn't always obvious this was going to happen. Fifteen or twenty years ago, a lot of publishers were actually making deals with social media companies, thinking they were going to lead to bigger audiences and more clicks. But these turned out to be Faustian bargains. The journalism industry took a nosedive, while Google and Meta became two of the most profitable companies in the world.

And now we might be doing it all over again with a new wave of tech companies like OpenAI. Over the past several years, OpenAI, operating in a kind of legal grey area, has trained its models on news content it hasn't paid for. While some news outlets, like the New York Times, have chosen to sue OpenAI for copyright infringement, many publishers (including The Atlantic, the Financial Times, and NewsCorp) have elected to sign deals with OpenAI instead.

Julia Angwin has been worried about the thorny relationship between big tech and journalism for years. She's written a book about MySpace, documented the rise of big tech, and won a Pulitzer for her tech reporting with the Wall Street Journal. She was also one of the few people warning publishers the first time around that making deals with social media companies maybe wasn't the best idea.

Now, she's ringing the alarm again, this time as a New York Times contributing opinion writer and the CEO of a journalism startup called Proof News that is preoccupied with the question of how to get people reliable information in the age of AI.

Mentioned:
“Stealing MySpace: The Battle to Control the Most Popular Website in America,” by Julia Angwin
“What They Know,” WSJ series by Julia Angwin
“The Bad News About the News,” by Robert G. Kaiser
“The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work,” by Michael M. Grynbaum and Ryan Mac
“Seeking Reliable Election Information? Don't Trust AI,” by Julia Angwin, Alondra Nelson, Rina Palta

Further Reading:
“Dragnet Nation: A Quest for Privacy, Security, and Freedom in a World of Relentless Surveillance,” by Julia Angwin
“A Letter From Our Founder,” by Julia Angwin
Last year, the venture capitalist Marc Andreessen published a document he called “The Techno-Optimist Manifesto.” In it, he argued that “everything good is downstream of growth,” government regulation is bad, and that the only way to achieve real progress is through technology.

Of course, Silicon Valley has always been driven by libertarian sensibilities and an optimistic view of technology. But the radical techno-optimism of people like Andreessen, and billionaire entrepreneurs like Peter Thiel and Elon Musk, has morphed into something more extreme. In their view, technology and government are always at odds with one another.

But if that's true, then how do you explain someone like Audrey Tang?

Tang, who, until May of this year, was Taiwan's first Minister of Digital Affairs, is unabashedly optimistic about technology. But she's also a fervent believer in the power of democratic government. To many in Silicon Valley, this is an oxymoron. But Tang doesn't see it that way. To her, technology and government are – and have always been – symbiotic.

So I wanted to ask her what a technologically enabled democracy might look like – and she has plenty of ideas. At times, our conversation got a little bit wonky. But ultimately, this is a conversation about a better, more inclusive form of democracy. And why she thinks technology will get us there.

Just a quick note: we recorded this interview a couple of months ago, while Tang was still the Minister of Digital Affairs.

Mentioned:
“vTaiwan”
“Polis”
“Plurality: The Future of Collaborative Technology and Democracy,” by E. Glen Weyl, Audrey Tang and ⿻ Community
“Collective Constitutional AI: Aligning a Language Model with Public Input,” Anthropic

Further Reading:
“The simple but ingenious system Taiwan uses to crowdsource its laws,” by Chris Horton
“How Taiwan's Unlikely Digital Minister Hacked the Pandemic,” by Andrew Leonard
If you listened to our last couple of episodes, you'll have heard some pretty skeptical takes on AI. But if you look at the stock market right now, you won't see any trace of that skepticism.

Since the launch of ChatGPT in late 2022, the chipmaker NVIDIA, whose chips are used in the majority of AI systems, has seen its stock shoot up by 700%. A month ago, that briefly made it the most valuable company in the world, with a market cap of more than $3.3 trillion. And it's not just chip companies. The S&P 500 (the index that tracks the 500 largest companies in the U.S.) is at an all-time high this year, in no small part because of the sheen of AI. And here in Canada, a new report from Microsoft claims that generative AI will add $187 billion to the domestic economy by 2030.

As wild as these numbers are, they may just be the tip of the iceberg. Some researchers argue that AI will completely revolutionize our economy, leading to per capita growth rates of 30%. In case those numbers mean absolutely nothing to you, 25 years of 30% growth means we'd be roughly a thousand times richer than we are now. It's hard to imagine what that world would look like – or how the average person fits into it.

Luckily, Rana Foroohar has given this some thought. Foroohar is a global business columnist and an associate editor at the Financial Times. I wanted to have her on the show to help me work through what these wild predictions really mean and, most importantly, whether or not she thinks they'll come to fruition.

Mentioned:
“Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity,” by Daron Acemoglu and Simon Johnson (2023)
“Manias, Panics, and Crashes: A History of Financial Crises,” by Charles P. Kindleberger (1978)
“Irrational Exuberance,” by Robert J. Shiller (2016)
“Gen AI: Too much spend, too little benefit?,” by Goldman Sachs Research (2024)
“Workers could be the ones to regulate AI,” by Rana Foroohar (Financial Times, 2023)
“The Financial Times and OpenAI strike content licensing deal” (Financial Times, 2024)
“Is AI about to kill what's left of journalism?,” by Rana Foroohar (Financial Times, 2024)
“Deaths of Despair and the Future of Capitalism,” by Anne Case and Angus Deaton (2020)
“The China Shock: Learning from Labor Market Adjustment to Large Changes in Trade,” by David H. Autor, David Dorn & Gordon H. Hanson (2016)

Further Reading:
“Beware AI euphoria,” by Rana Foroohar (Financial Times, 2024)
“AlphaGo,” by Google DeepMind (2020)
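That “thousand times richer” figure is just compound growth at work. A quick sanity check of our own (not from the episode):

```python
# 25 years of 30% annual per-capita growth compounds multiplicatively:
# (1 + 0.30)^25 is roughly 706 -- the same order of magnitude as the
# "thousand times richer" figure quoted above.
years, rate = 25, 0.30
factor = (1 + rate) ** years
print(f"~{factor:.0f}x richer after {years} years at {rate:.0%} growth")
```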
Douglas Rushkoff has spent the last thirty years studying how digital technologies have shaped our world. The renowned media theorist is the author of twenty books, the host of the Team Human podcast, and a professor of Media Theory and Digital Economics at City University of New York. But when I sat down with him, he didn't seem all that excited to be talking about AI. Instead, he suggested – I think only half jokingly – that he'd rather be talking about the new reboot of Dexter.

Rushkoff's lack of enthusiasm around AI may stem from the fact that he doesn't see it as the ground-shifting technology that some do. Rather, he sees generative artificial intelligence as just the latest in a long line of communication technologies – more akin to radio or television than fire or electricity.

But while he may not believe that artificial intelligence is going to bring about some kind of techno-utopia, he does think its impact will be significant. So eventually we did talk about AI. And we ended up having an incredibly lively conversation about whether computers can create real art, how the “California ideology” has shaped artificial intelligence, and why it's not too late to ensure that technology is enabling human flourishing – not eroding it.

Mentioned:
“Cyberia,” by Douglas Rushkoff
“The Original WIRED Manifesto,” by Louis Rossetto
“The Long Boom: A History of the Future, 1980–2020,” by Peter Schwartz and Peter Leyden
“Survival of the Richest: Escape Fantasies of the Tech Billionaires,” by Douglas Rushkoff
“Artificial Creativity: How AI teaches us to distinguish between humans, art, and industry,” by Douglas Rushkoff
“Empirical Science Began as a Domination Fantasy,” by Douglas Rushkoff
“A Declaration of the Independence of Cyberspace,” by John Perry Barlow
“The Californian Ideology,” by Richard Barbrook and Andy Cameron
“Can AI Bring Humanity Back to Health Care?,” Machines Like Us Episode 5

Further Reading:
“The Medium is the Massage: An Inventory of Effects,” by Marshall McLuhan
“Technopoly: The Surrender of Culture to Technology,” by Neil Postman
“Amusing Ourselves to Death,” by Neil Postman
It seems like the loudest voices in AI often fall into one of two groups. There are the boomers – the techno-optimists – who think that AI is going to bring us into an era of untold prosperity. And then there are the doomers, who think there's a good chance AI is going to lead to the end of humanity as we know it.

While these two camps are, in many ways, completely at odds with one another, they do share one thing in common: they both buy into the hype of artificial intelligence. But when you dig deeper into these systems, it becomes apparent that both of these visions – the utopian one and the doomy one – are based on some pretty tenuous assumptions.

Kate Crawford has been trying to understand how AI systems are built for more than a decade. She's the co-founder of the AI Now Institute, a leading AI researcher at Microsoft, and the author of Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.

Crawford was studying AI long before this most recent hype cycle. So I wanted to have her on the show to explain how AI really works. Because even though it can seem like magic, AI actually requires huge amounts of data, cheap labour and energy in order to function. So even if AI doesn't lead to utopia, or take over the world, it is transforming the planet – by depleting its natural resources, exploiting workers, and sucking up our personal data. And that's something we need to be paying attention to.

Mentioned:
“ELIZA—A Computer Program For the Study of Natural Language Communication Between Man And Machine,” by Joseph Weizenbaum
“Microsoft, OpenAI plan $100 billion data-center project, media report says,” Reuters
“Meta ‘discussed buying publisher Simon & Schuster to train AI’,” by Ella Creamer
“Google pauses Gemini AI image generation of people after racial ‘inaccuracies’,” by Kelvin Chan and Matt O'Brien
“OpenAI and Apple announce partnership,” OpenAI
Fairwork
“New Oxford Report Sheds Light on Labour Malpractices in the Remote Work and AI Booms,” by Fairwork
“The Work of Copyright Law in the Age of Generative AI,” by Kate Crawford, Jason Schultz
“Generative AI's environmental costs are soaring – and mostly secret,” by Kate Crawford
“Artificial intelligence guzzles billions of liters of water,” by Manuel G. Pascual
“S.3732 – Artificial Intelligence Environmental Impacts Act of 2024”
“Assessment of lithium criticality in the global energy transition and addressing policy gaps in transportation,” by Peter Greim, A. A. Solomon, Christian Breyer
“Calculating Empires,” by Kate Crawford and Vladan Joler

Further Reading:
“Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence,” by Kate Crawford
“Excavating AI,” by Kate Crawford and Trevor Paglen
“Understanding the work of dataset creators,” from Knowing Machines
“Should We Treat Data as Labor? Moving beyond ‘Free’,” by I. Arrieta-Ibarra et al.
Think about the last time you felt let down by the health care system. You probably don't have to go back far. In wealthy countries around the world, medical systems that were once robust are now crumbling. Doctors and nurses, tasked with an ever-expanding range of responsibilities, are busier than ever, which means they have less and less time for patients. In the United States, the average doctor's appointment lasts seven minutes. In South Korea, it's only two.

Without sufficient time and attention, patients are suffering. There are 12 million significant misdiagnoses in the US every year, and 800,000 of those result in death or disability. (While the same kind of data isn't available in Canada, similar trends are almost certainly happening here as well.)

Eric Topol says medicine has become decidedly inhuman – and the consequences have been disastrous. Topol is a cardiologist and one of the most widely cited medical researchers in the world. In his latest book, Deep Medicine, he argues that the best way to make health care human again is to embrace the inhuman, in the form of artificial intelligence.

Mentioned:
“Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again,” by Eric Topol
“The frequency of diagnostic errors in outpatient care: estimations from three large observational studies involving US adult populations,” by H. Singh, A. Meyer, E. Thomas
“Burden of serious harms from diagnostic error in the USA,” by David Newman-Toker, et al.
“How Expert Clinicians Intuitively Recognize a Medical Diagnosis,” by J. Brush Jr, J. Sherbino, G. Norman
“A Randomized Controlled Study of Art Observation Training to Improve Medical Student Ophthalmology Skills,” by Jaclyn Gurwin, et al.
“Abridge becomes Epic's First Pal, bringing generative AI to more providers and patients, including those at Emory Healthcare”
“Why Doctors Should Organize,” by Eric Topol
“How This Rural Health System Is Outdoing Silicon Valley,” by Erika Fry

Further Reading:
“The Importance Of Being,” by Abraham Verghese
Earlier this year, Elon Musk's company Neuralink successfully installed one of its brain implants in a 29-year-old quadriplegic man named Noland Arbaugh. The device changed Arbaugh's life. He no longer needs a mouth stylus to control his computer or play video games. Instead, he can use his mind.

The brain-computer interface that Arbaugh uses is part of an emerging field known as neurotechnology that promises to reshape the way we live. A wide range of AI-empowered neurotechnologies may allow disabled people like Arbaugh to regain independence, or give us the ability to erase traumatic memories in patients suffering from PTSD.

But it doesn't take great leaps to envision how these technologies could be abused as well. Law enforcement agencies in the United Arab Emirates have used neurotechnology to read the minds of criminal suspects, and convict them based on what they've found. And corporations are developing ways to advertise to potential customers in their dreams. Remarkably, both of these things appear to be legal, as there are virtually no laws explicitly governing neurotechnology.

All of which makes Nita Farahany's work incredibly timely. Farahany is a professor of law and philosophy at Duke University and the author of The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology.

Farahany isn't fatalistic about neurotech – in fact, she uses some of it herself. But she is adamant that we need to start developing laws and guardrails as soon as possible, because it may not be long before governments, employers and corporations have access to our brains.

Mentioned:
“PRIME Study Progress Update – User Experience,” Neuralink
“Paralysed man walks using device that reconnects brain with muscles,” The Guardian
Cognitive Warfare – NATO's ACT
The Ethics of Neurotechnology: UNESCO appoints international expert group to prepare a new global standard
When Eugenia Kuyda saw Her for the first time – the 2013 film about a man who falls in love with his virtual assistant – it didn't read as science fiction. That's because she was developing a remarkably similar technology: an AI chatbot that could function as a close friend, or even a romantic partner.

That idea would eventually become the basis for Replika, Kuyda's AI startup. Today, Replika has millions of active users – that's millions of people who have AI friends, AI siblings and AI partners.

When I first heard about the idea behind Replika, I thought it sounded kind of dystopian. I envisioned a world where we'd rather spend time with our AI friends than our real ones. But that's not the world Kuyda is trying to build. In fact, she thinks chatbots will actually make people more social, not less, and that the cure for our technologically exacerbated loneliness might just be more technology.

Mentioned:
“ELIZA—A Computer Program For the Study of Natural Language Communication Between Man And Machine,” by Joseph Weizenbaum
“elizabot.js,” implemented by Norbert Landsteiner
“Speak, Memory,” by Casey Newton (The Verge)
“Creating a safe Replika experience,” by Replika
“The Year of Magical Thinking,” by Joan Didion

Further Reading:
“They fell in love with the Replika AI chatbot. A policy update left them heartbroken,” The Globe & Mail
“Loneliness and suicide mitigation for students using GPT3-enabled chatbots,” by Maples, Cerit, Vishwanath, & Pea
“Learning from intelligent social agents as social and intellectual mirrors,” by Maples, Pea, Markowitz
In the last few years, artificial intelligence has gone from a novelty to perhaps the most influential technology we've ever seen. The people building AI are convinced that it will eradicate disease, turbocharge productivity, and solve climate change. It feels like we're on the cusp of a profound societal transformation. And yet, I can't shake the feeling we've been here before.

Fifteen years ago, there was a similar wave of optimism around social media: it was going to connect the world, catalyze social movements and spur innovation. It may have done some of these things. But it also made us lonelier, angrier, and occasionally detached from reality.

Few people understand this trajectory better than Maria Ressa. Ressa is a Filipino journalist, and the CEO of a news organization called Rappler. Like many people, she was once a fervent believer in the power of social media. Then she saw how it could be abused. In 2016, she reported on how Rodrigo Duterte, then president of the Philippines, had weaponized Facebook in the election he'd just won. After publishing those stories, Ressa became a target herself, and her inbox was flooded with death threats. In 2021, she won the Nobel Peace Prize.

I wanted this to be our first episode because I think, as novel as AI is, it has undoubtedly been shaped by the technologies, the business models, and the CEOs that came before it. And Ressa thinks we're about to repeat the mistakes we made with social media all over again.

Mentioned:
“How to Stand Up to a Dictator,” by Maria Ressa
“A Shocking Amount of the Web is Machine Translated: Insights from Multi-Way Parallelism,” by Thompson et al.
Rappler's Matrix Protocol Chat App: Rappler Communities
“Democracy Report 2023: Defiance in the Face of Autocratization,” by V-Dem
“The Foundation Model Transparency Index,” by Stanford HAI (Human-Centered Artificial Intelligence)
“All the ways Trump's campaign was aided by Facebook, ranked by importance,” by Philip Bump (The Washington Post)
“Our Epidemic of Loneliness and Isolation,” by U.S. Surgeon General Dr. Vivek H. Murthy
We are living in an age of breakthroughs propelled by advances in artificial intelligence. Technologies that were once the realm of science fiction will become our reality: robot best friends, bespoke gene editing, brain implants that make us smarter. Every other Tuesday, Taylor Owen sits down with someone shaping this rapidly approaching future. The first two episodes will be released on May 7th. Subscribe now so you don't miss an episode.
On the season finale of Big Tech, host Taylor Owen discusses the future of tech governance with Azeem Azhar, author of The Exponential Age: How Accelerating Technology is Transforming Business, Politics, and Society. In addition to his writing, Azeem hosts the Exponential View podcast, which, much like this podcast, looks at how technology is transforming business and society.Taylor and Azeem reflect on some of the broad themes that have concerned them this season, from platform governance, antitrust and competition, to polarization, deliberative democracy and Web3. As listeners have come to know, Taylor often views technology's future through a cautionary lens, while Azeem has a more optimistic outlook. They begin with the recent news of Elon Musk's attempt to purchase Twitter and what that might mean for the platform. As the episode unfolds, Taylor and Azeem touch on the varied approaches to tech regulation around the world, and how polarization and its amplification via social media are impacting democracy. They discuss Web3's potential to foster more transparency and trust building on the internet, as well as the need for states to be involved in shaping our future online. Ultimately, there are opportunities to make positive changes at many levels of these complex, multilayered issues. As a concluding thought, Azeem points to the coal industry as an example of how, regardless of political winds, many factors in a system can bring about change.
In this episode of Big Tech, host Taylor Owen speaks with Ephrat Livni, a lawyer and journalist who reports from Washington on the intersection of business and policy for DealBook at The New York Times. One of Livni's focuses has been how cryptocurrencies have moved from the periphery of the financial world into the mainstream.

The cryptocurrency movement originated with a commitment to the decentralization of money and the removal of intermediaries and government to enable person-to-person financial transactions. Early on, governments viewed cryptocurrency as a tool for illicit activity and a threat to institutional power. In the last two years, cryptocurrency has moved into the mainstream, with sporting arenas named after crypto companies, flashy celebrity endorsements and Super Bowl ads. Crypto markets are extremely volatile, yet they've drawn great interest from retail investors and venture capitalists. There's a lot of enthusiasm about crypto, but not a lot of information.

With crypto moving into the mainstream, companies that wish to create trust with their customers must be more transparent, accept regulations and act more like the institutions they initially sought to disrupt. As Livni and Owen discuss, this is not a sector that regulators can ignore: it is complicated, fast-changing and multinational, and it demands a great deal of thought about how best to proceed.
The internet is an ever-evolving thing, with new features and services popping up daily. But these innovations are happening in the current internet space, known as Web 2.0. The potential next big leap is to what is being called Web3 or Web 3.0. You have likely heard some of the terms associated with this next age — the token economy, blockchain, NFTs. Our guest this week walks us through what all this “future stuff” means, and how it could impact our daily lives. In this episode of Big Tech, host Taylor Owen speaks with Shermin Voshmgir, founder of Token Kitchen and BlockchainHub Berlin and the author of Token Economy: How the Web3 reinvents the Internet. Her work focuses on making technology accessible to a non-tech audience to ensure everyone can be part of the decision-making process. Early adopters in the Web3 space see this new iteration of the Web as liberating: an innovation that will decentralize power, facilitate peer-to-peer transactions, enable individual data ownership and challenge the dominance of tech giants. There are many questions about the governance of Web3 and its impacts on society that regulators, still stuck on platform content moderation, are not yet examining. The conversation between Taylor and Shermin provides a foundational understanding of Web3 and a look ahead at areas where regulators should focus their attention.
Humanity has long imagined a future where humans could live for hundreds of years, if not forever. But those ideas have been the stuff of science fiction, up until now. There's growing interest and investment in the realm of biohacking and de-aging, and leading scientists such as Harvard's David A. Sinclair are bringing the idea of extended lifespans out of fantasy into a reality we may see within our generation. But a world where more people are living a lot longer than ever thought possible will have sweeping economic and social consequences. In this episode of Big Tech, host Taylor Owen speaks with journalist Matthew D. LaPlante, co-author, with David A. Sinclair, of Lifespan: Why We Age — And Why We Don't Have To. LaPlante's focus is on the impacts longer lifespans will have, rather than on the technology involved in achieving de-aging. For example: When people live longer, where do we set the retirement age? Can the planet support more humans? And how will we deal with our past choices when we live long enough to see their impacts on our great-great-grandchildren? In this wide-ranging conversation, Taylor and Matthew discuss further implications that longer lives would have for our society. In the justice system, appointing a 50-year-old to the Supreme Court looks very different when that person could live to 110 rather than 80. What about geopolitical stability, if autocrats and dictators can extend their lives to maintain power for much longer periods? And what are the implications for medical privacy when technology companies are using monitoring devices, such as the ubiquitous smart watch, in conjunction with artificial intelligence to predict when someone may develop an illness or have a heart attack?
A fundamental feature of the internet is its ability to transcend borders, connecting people to one another and to all forms of information. The World Wide Web was heralded as a global village that would remove the traditional gatekeepers and allow anyone a platform to be heard. But the reality is that access to the internet and online services is very much bound to geography. A benign example is the location lockout on online streaming platforms, which restricts what you can watch depending on the country you access it from. More extreme examples of how location is inherently tied to internet access occur in authoritarian regimes that limit access during uprisings, filter and block content, and surveil online conversations and then make real-world arrests. In this episode of Big Tech, host Taylor Owen speaks with Nanjala Nyabola, a CIGI fellow, political analyst and author of Digital Democracy, Analogue Politics: How the Internet Era is Transforming Politics in Kenya and Travelling While Black: Essays Inspired by a Life on the Move. Governments have been working on platform governance and content moderation reforms for a few years now, and the need to find solutions and set rules becomes increasingly important – just look at how misinformation and censorship have been playing out in Russia and other authoritarian states over the last few weeks during the war in Ukraine. In Nyabola's work on internet governance, she proposes that rather than look for global consensus on regulation, we need to think of the internet as a public good. “Water isn't administered the same way in Kenya as it is in Uganda, as it is in Ethiopia, as it is in the United States; different municipalities will have different codes. But there is a fundamental agreement that water is necessary for life and should, as far as possible, be administered as a public utility.” Nyabola explains that governing the internet requires first setting out the fundamental aspects of it that humanity wants to safeguard, and then protecting those common principles while allowing jurisdictions to deliver this public good in their own unique ways.
The speed at which the Russia-Ukraine war has played out across the internet has led to some interesting insights about how different groups have been experiencing and responding to information and misinformation about it. The West found unity across political divides, and the big tech platforms, breaking their long-held stance, have quickly acted to limit the spread of disinformation by making changes to their algorithms. However, across much of the non-English-language internet, the information ecosystem is very different. Many Russians aren't even aware that there is a war going on. And technology companies that are discontinuing their operations in Russia as a well-meaning sign of solidarity with Ukraine may be making the problem worse. In this episode of Big Tech, host Taylor Owen speaks with Ben Scott and Frederike Kaltheuner about various aspects of communications technology and the social media platforms that are being used by all sides in the Russia-Ukraine war. We begin with a conversation between Taylor and Ben, the executive director of Reset, on the state of the information ecosystem both inside Russia and around the world. In the second half, Taylor speaks with Frederike, the director of the technology and rights division at Human Rights Watch, about the importance of access to information during wartime in the monitoring and documenting of human rights abuses, as well as the critical role that communications systems play in helping citizens inside conflict zones.
In this episode of Big Tech, host Taylor Owen speaks with Margaret O'Mara, a historian of modern America and author of The Code: Silicon Valley and the Remaking of America. Silicon Valley and the massive wealth it has generated have long symbolized the wonders of free market capitalism, viewed as proof of how innovation can thrive when it is not burdened by government oversight. Silicon Valley is infused with this libertarian ethos, centred on the idea that it was guys in their garages, setting out to create something new and make the world a better place, who built the Valley. But O'Mara looks back into history and says that's all just a myth. During the Cold War, the United States was looking for ways to bolster its technological advantage over the Soviets. Knowing that state-led projects would appear “Communist” to the American people, the government funnelled federal funding for research and development through universities, research institutions and defence companies. This influx of funds enabled private companies to expand and innovate and universities to subsidize tuition. The Apollo space program offers one such example, where federal funds supported tech companies working in electronic miniaturization and semiconductors. The upshot is that the entire Silicon Valley tech sector was built on government intervention and support, and even the guys in their garages benefited from the access to affordable university education. “To pull yourself up by your bootstraps is an American myth that's very corrosive — there are very, very few truly self-made people,” explains O'Mara. By demystifying Silicon Valley's origins we can better approach regulation and oversight of the tech industry.
Do you feel as if you can't get through a single task without distractions? Perhaps you are watching a movie and stop it to check social media or respond to a message. You aren't alone; studies show that collectively our attention spans have been shrinking for decades. Many factors contribute to our fractured focus, including the processed foods we eat, which cause energy highs and lows, but the greatest culprit of all is technology. In this episode of Big Tech, host Taylor Owen speaks with Johann Hari, the author of three New York Times bestsellers: Stolen Focus, Lost Connections and Chasing the Scream. Hari has been writing about depression, addiction and drugs for many years. Using that as background, Hari seeks to understand how social media has been changing our ability to deeply focus on important tasks. Hari argues that we must not think of this as a personal failing and charge the individual with finding a way out of this crisis, as we have done with obesity and drug addictions. Instead, society must change its relationship with technology so that we can regain our human ability to focus. Technology has increased the speed at which we work and live; as we try to consume so much information, we begin to focus less and less on the details. Hari compares it to speed reading: “It's surprisingly effective, but it always comes with a cost, even for professional speed readers, which is the faster you read, the less you understand, the less you remember, and the more you're drawn to shallow and simplistic documents.” Couple that with the way platforms prioritize certain types of content and you have a recipe for disaster. “Everyone has experienced it. Human beings will stare longer at something that makes them angry and upset than they will at something that makes them feel good,” says Hari. Hari worries that, rather than take collective action, society will put the onus on individuals, much as the response to obesity ignored the wider food supply network and instead sold fad diets and supplements to individuals. “And if you come to the attention crisis the same way [we responded] to the obesity crisis, we'll get the same outcome, which is an absolute disaster.”
In the history of computers and the internet, a few names likely come to mind: Alan Turing, Tim Berners-Lee, Bill Gates and Steve Jobs. Undoubtedly, these men's contributions to computer science have shaped much of our modern life. In the case of Jobs and Gates, their financial success shifted the landscape of software development and the metrics of success in Silicon Valley. Some sectors of the industry, such as programming, hypertext and databases, had been dominated by women in the early days, but once those areas became economic drivers, men flooded in, pushing aside the women. In the process, many of their contributions have been overlooked. In this episode of Big Tech, host Taylor Owen speaks with Claire L. Evans, a musician, internet historian and author of Broad Band: The Untold Story of the Women Who Made the Internet. Evans's book chronicles the work of women involved in creating the internet but left out of its history. Owen and Evans reflect on several important milestones of the early internet where women were innovating in community building and the moderation of message boards, revealing a little-known history of the early web. One aspect that stands out is how the projects women led focused on building trust with users and the production of knowledge, rather than on the technical specifications of microprocessors or memory storage. Today, in the face of online harms, misinformation, failing institutional trust and content moderation challenges, there is a great deal we can learn from the work women were already doing in this space decades ago.
Nicholas Carr has been a prolific blogger, author and critic of technology since the early days of the social web. Carr began his blog Rough Type in 2005, at a time when some of today's biggest companies were still start-ups operating out of college dorms. In 2010, he wrote The Shallows, a finalist for the Pulitzer Prize in General Nonfiction, in which he discussed how technology was changing the human brain. At the time, many were skeptical about Carr's argument, but in just over a decade many of his predictions have come true. In this episode of Big Tech, host Taylor Owen and guest Nicholas Carr reflect on how Carr was able to identify these societal shifts long before others. The social web, known as Web 2.0, was billed as a democratizing tool for breaking down barriers so that anyone could share information and have their voices heard. Carr had concerns; while others saw college kids making toys, he saw the potential for major shifts in society. “As someone who had studied the history of media, I knew that when you get these kinds of big systems, particularly big communication systems, the unexpected, unanticipated consequences are often bigger than what everybody thinks is going to happen,” Carr explains. We are again on the verge of the next online shift, called Web3, and as new online technologies like non-fungible tokens, cryptocurrencies and the metaverse are being built, we can learn from Web 2.0 in hopes of mitigating future unanticipated consequences. As Carr sees it, we missed the opportunity to become involved early on with social platforms, before they became entrenched in our lives. “Twitter was seen as a place where people, you know, describe what they had for breakfast, and so society didn't get involved in thinking about what are the long-term consequences here and how it's going to play out. So I think if we take a lesson from that, even if you're skeptical about virtual reality and augmented reality, now is the time that society has to engage with these visions of the future.”
People are divided: you are either pro-vaccination or against it, and there seems to be no middle ground. Whether around the dinner table or on social media, people are entrenched in their positions. A deep-seated mistrust of science, despite its contributions to the flourishing of human life, is being fuelled by online misinformation. For the first time in history, humanity is in the midst of a pandemic with communication tools of almost unlimited reach and potential benefit, yet social media and the information economy appear structured to promote polarization. Take the case of The Joe Rogan Experience podcast on Spotify: Rogan, a comedian, is able to engage millions of listeners and spread, unchecked, misinformation about COVID-19 “cures” and “treatments” that have no basis in evidence. What responsibility does Spotify have as the platform enabling Rogan to spread this misinformation, and is it possible for the scientific community to break through to skeptics? In this episode of Big Tech, host Taylor Owen speaks with Timothy Caulfield, the author of bestselling books such as Is Gwyneth Paltrow Wrong About Everything? and The Vaccination Picture. He is also the Canada Research Chair in Health Law and Policy at the University of Alberta. Throughout the COVID-19 pandemic, Caulfield has been outspoken on Twitter about medical misinformation with the #ScienceUpFirst campaign. What we have learned through the pandemic is how critical clear public health communication is, and how remarkably difficult it is to share information with the public. As everyone rushed to provide medical advice, people were looking for absolutes. But in science, one needs to remain open to new discoveries, so, as the pandemic evolved, guidelines were updated. As Caulfield explains, “I think it's also a recognition of how important it is to bring the public along on that sort of scientific ride, saying, Look, this is the best advice we can give right now based on the science available.” When health guidelines are presented in a dogmatic way, it becomes difficult to share new emerging research; misunderstood or outdated facts are weaponized by those trying to discredit the public health sector, who point to what was previously known in an attempt to muddy the discourse and sow doubt. That doubt leads to mistrust in institutions, the rise of “alternative facts,” the sharing of untested therapeutics on popular podcasts — and a convoy of truckers camped out in the Canadian capital to protest COVID lockdowns and vaccine mandates.
Time and time again, we see the billionaire tech founder or CEO take the stage to present the latest innovation meant to make people's lives better, revolutionize industries and glorify the power of technology to save the world. While these promises are dressed up in fancy new clothes, in reality the tech sector is no different from other expansionist enterprises of the past. Its core model of growth and expansion is deeply rooted in European and American doctrines of colonization and Manifest Destiny. And just as in the past, the tech sector is engaging in extraction, exploitation and expansion. In this episode of Big Tech, host Taylor Owen speaks with Jeff Doctor, who is Cayuga from Six Nations of the Grand River Territory. He is an impact strategist for Animikii, an Indigenous-owned technology company. Doctor isn't surprised that technology is continuing to evolve in the same colonial way he saw growing up, a pattern built into television shows, movies and video games, such as the popular Civilization franchise, which applies the same European expand-and-conquer strategy to winning regardless of the society a player represents. “You see this manifested in the tech billionaire class, like all of them are literally trying to colonize space right now. It's not even a joke any more. They grew up watching the same crap,” Doctor says. Colonialism and technology have always been entwined. European expansionism depended on modern technology to dominate, whether through deadlier weapons, faster ships or the laying of telegraph and railway lines across the West. Colonization continues today through, for example, English-only development tools and country-selection dropdowns limited to options such as “Canada” or the “United States,” which ignore Indigenous peoples' communities and nations. And, as governments grapple with how to protect people's personal data from the tech sector, little attention is paid to Indigenous data sovereignty, which would ensure that every nation and community has the ability to govern and benefit from its own data.
Governments around the world are looking at their legal frameworks and how they apply to the digital technologies and platforms that have brought widespread disruptive change to their economies, societies and politics. Most governments are aware that their regulations are inadequate to address the challenges of an industry that crosses borders and pervades all aspects of daily life. Three regulatory approaches are emerging: the restrictive regime of the Chinese state; the lax, free-market approach of the United States; and the regulatory frameworks of the European Union, which are miles ahead of those of any other Western democratic country. In this episode of Big Tech, host Taylor Owen speaks with Mark Scott, the chief technology correspondent at Politico, about the state of digital technology and platform regulation in Europe. Following the successful implementation of the General Data Protection Regulation, which went into effect in 2018, the European Parliament currently has three big policy proposals in the works: the Digital Services Act, the Digital Markets Act and the Artificial Intelligence Act. Taylor and Mark discuss how each of these proposals will impact the tech sector, their potential for adoption across Europe, and how many other nations, including Canada, are modelling similar regulations within their own countries.
Many unsolved mysteries remain about the workings of the human brain. Neuroscientists are making discoveries that are helping us to better understand the brain and correct preconceived notions about how it works. With the dawn of the information age, the brain's processing was often compared to that of a computer. But the problem with this analogy is that it suggested the human brain was hard-wired, able to work in one particular way only, much as if it were a computer chip that, if damaged, could not reroute itself or restore function to a damaged pathway. Taylor Owen's guest this week on the Big Tech podcast is a leading scholar of neuroplasticity, the ability of the brain to change its neural networks through growth and reorganization. Dr. Norman Doidge is a psychiatrist and author of The Brain That Changes Itself and The Brain's Way of Healing. His work points to just how malleable the brain can be. Dr. Doidge talks about the brain's potential to heal but also warns of the darker side of neuroplasticity: our brains adapt to negative influences just as they do to positive ones. Today, our time spent in front of a screen and how we interact with technology are having significant impacts on our brains, and those of our children, affecting attention span, memory and recall, and behaviour. And all of these changes have societal implications.
Democracy is in decline globally. It's one year since the Capitol Hill insurrection, and many worry that the United States' democratic system is continuing to crumble. Freedom House, an American think tank, says that nearly three-quarters of the world's population lives in a country that experienced democratic deterioration last year. The rise of illiberalism is one reason for this, but another may be that democratic governments simply haven't been performing all that well in recent years. In this episode of Big Tech, host Taylor Owen speaks with Hélène Landemore, author of Open Democracy and Debating Democracy and professor of political science at Yale University. Landemore's work explores the limitations of casting a vote every few years for a candidate or political party, and how in practice that isn't a very democratic process. “Electoral democracy is a closed democracy where power is restricted to people who can win elections,” she says. Positions on issues become entrenched within party lines; powerful lobbyists exert influence; and representatives, looking ahead to the next election, lack the political will to lead in the here and now. In an open democracy, citizens would be called on to debate issues and create policy solutions for problems. “If you include more people in the conversation, in the deliberation, you get the benefits of cognitive diversity, the difficulties of looking at problems and coming up with solutions, which benefits the group ultimately,” Landemore explains. In response to the yellow vest movement in France, the government asked 150 citizens to come up with climate policies. Over seven weekend meetings, that group came up with 149 proposals on how to reduce France's greenhouse gas emissions. In Ireland, a group of citizens was tasked with deliberating on abortion, a sensitive issue that was deadlocked in the political arena. The group included pro-life and pro-choice individuals and, rather than descending into partisan mud-slinging, was able to arrive, after much civil deliberation, at the recommendation that abortion be decriminalized. Landemore sees the French and Irish examples as precedents for further exploration and experimentation: “it means potentially going through constitutional reforms to create a fourth or so chamber called the House of the People or something else, where it would be like a parliament but just made up of randomly selected citizens.”
On the first anniversary of the January 6 insurrection at the United States Capitol, Big Tech host Taylor Owen sits down with Craig Silverman to discuss how the rise of false facts led us to that moment. Silverman is a journalist for ProPublica who previously worked at BuzzFeed News, and is the editor of the Verification Handbook series. Before Donald Trump popularized “fake news” as a blanket term to attack mainstream news outlets, Silverman had been using it to mean something different and very specific. Fake news, also known as misinformation, disinformation or false facts, is fabricated online content intentionally created to be shared on social media platforms. Before it was weaponized as a tool for election interference, fake news was simply a lucrative clickbait market that saw higher engagement than traditional media. And social media platforms' algorithms amplified it, because that higher engagement meant people spent more time on the platforms and boosted their ad revenue. After establishing the origins of misinformation and how it was used to manipulate the 2016 US presidential election, Owen and Silverman discuss how Facebook, in particular, responded to the 2020 US presidential election. Starting in September 2020, the company established a civic integrity team focusing on, among other issues, its role in elections globally, and removed posts, groups and users that were promoting misinformation. Silverman describes what happened next. “After the election, what does Facebook do? Well, it gets rid of the whole civic integrity team, including the groups task force. And so, as things get worse and worse leading up to January 6, nobody is on the job in a very focused way.” Before long, Facebook groups had “become an absolute hotbed and cesspool of delegitimization, death threats, all this kind of stuff,” explains Silverman. The lie that the election had been rigged was spreading unchecked via organized efforts on Facebook. Within a few weeks of the civic integrity team's dismantling, Trump's supporters arrived on Capitol Hill to “stop the steal.” It was then, as Silverman puts it, that “the real world consequences came home to roost.”
In this episode of Big Tech, Taylor Owen speaks with Nicole Perlroth, New York Times cybersecurity journalist and author of This Is How They Tell Me the World Ends: The Cyberweapons Arms Race. Nicole and Taylor discuss how the way nation-states acquire cyber weapons through underground online markets creates an incentive structure that enables the entire cyberwarfare complex to thrive while discouraging these exploits from being patched. “So they don't want to tell anyone about their zero-day exploits, or how they're using them, because the minute they do, that $3 million investment they just made turns to mud,” Perlroth explains. As Perlroth investigated the world of cyberwarfare, she noticed how each offensive action was met with a response in kind; the United States is under constant attack. The challenge with countering cyber-based attacks is the many forms they can take and their many targets, from attacks on infrastructure such as the power grid, to corporate and academic espionage, such as stealing intellectual property or COVID-19 vaccine research, to ransomware. “The core thesis of your book,” Taylor reflects, “is for whatever gain the US government might get from using these vulnerabilities, the blowback is both unknowable and uncontrollable.” Early on, Perlroth was concerned about the infrastructure attacks, the ones that could lead to a nuclear power plant meltdown. However, the main focus of cyberattacks is on intelligence and surveillance of mobile phones and internet-connected devices. There is a tension between Silicon Valley's efforts to encrypt and secure user data and law enforcement's search for tools to break that encryption. Several jurisdictions are looking to force tech companies to build back doors into their products. Certainly, providing access to devices to aid in stopping terrorist attacks and human trafficking would be beneficial. But back doors, like other vulnerabilities found in code, can be weaponized and used by authoritarian regimes to attack dissidents or ethnic minorities. Cybersecurity is a multi-faceted issue that needs to be addressed at all levels, because the nature of cyberwarfare is that we can no longer protect just our physical borders. “We have no choice but to ask ourselves the hard questions about what is in our network and who's securing it — and where is this code being built and maintained and tested, and are they investing enough in security?” says Perlroth.
In the early days of the internet, information technology could be viewed as morally neutral: it was simply a means of passing data from one point to another. But as communications technology has advanced, using algorithms, tracking and identifiers to shape the flow of information, we are confronted with moral and ethical questions about how the internet is being used, and how it may even be reshaping what it means to be human. In this episode of Big Tech, Taylor Owen speaks with the Right Reverend Dr. Steven Croft, the Bishop of Oxford, Church of England. Bishop Steven, as he is known to his own podcast audience, is a board member of the Centre for Data Ethics and Innovation and has served on other committees such as the House of Lords' Select Committee on Artificial Intelligence. Bishop Steven approaches the discussions around tech from a very different viewpoint, not as an academic or technologist but as a theologian in the Anglican church: “I think technology changes the way we relate to one another, and that relationship is at the heart of our humanity.” He compares what is happening now in society with the internet to the advent of the printing press in the fifteenth century, which democratized knowledge and changed the world in profound ways. The full impacts of this current technological shift on our society are yet to be known. But, he cautions, we must not lose sight of our core human principles when developing technology, and must ensure that we deploy it for “the common good of humankind.” “I don't think morals and ethics can be manufactured out of nothing or rediscovered. And if we don't have morality and ethics as the heart of the algorithms, when they're being crafted, then the unfairness will be even greater than they otherwise have been.”
Social media has become an essential tool for sharing information and reaching audiences. In the political realm, it provides access to constituents in a way that going door to door can't, offering a direct line to citizens without paying for advertising or relying on news coverage. We've seen how Donald Trump used social media to his advantage, but what happens when social media turns on the politician? In this episode of Big Tech, Taylor Owen speaks with Catherine McKenna, Canada's minister of environment and climate change from 2015 to 2019. McKenna's experience with online hate is not unique; many people and groups face online harassment and, in some cases, real-world actions against them. What makes McKenna's case interesting is the convergence of online harassment on social media and the climate change file. In her role as minister, McKenna was responsible for implementing the federal government's environmental policy, including the Paris Agreement commitments, carbon pricing and pipeline divestment. No matter what she said in her social posts, they were immediately met with negative comments from climate change deniers. Attacks against her escalated to the point where her constituency office was vandalized and a personal security detail was assigned to her. Finding solutions to climate change is complicated, cross-cutting work that involves many stakeholders and relies on dialogue and engagement with government, industry and citizens. McKenna found that the online expression of extremism, amplified by social media algorithms, made meaningful dialogue all but impossible. Now out of politics, McKenna is concerned that the hostility of the online social space will deter young people who want to take part in finding climate solutions. “I've left public life not because of the haters, but because I just want to focus on climate change. But…I want more women to get into politics. I want broader diversity. Whether you're Indigenous, part of the LGBTQ+ community, or a new immigrant, whatever it is, I want you to be there, but it needs to be safe.” Which raises the question: To find climate solutions, must we first address misinformation and online hate?
Humans need privacy — the United Nations long ago declared it an inalienable and universal human right. Yet technology is making privacy increasingly difficult to preserve, as we spend fewer and fewer moments disconnected from our computers, smartphones and wearable tech. Edward Snowden's revelations about the scope of surveillance by the National Security Agency and journalists' investigations into Cambridge Analytica showed us how the tech products and platforms we use daily make incursions on our privacy. But we continue to use these services and allow our personal data to be collected, sold and, essentially, used against us — through advertising, political advertising and other forms of targeting, sometimes even surveillance or censorship — all because many feel that the benefits these services provide outweigh their negative impacts on our privacy. This week's guest, Carissa Véliz, believes that our current relationship with online privacy needs to change, and that there are ways to change it. Véliz is the author of Privacy Is Power and an associate professor in the Faculty of Philosophy at the University of Oxford. Véliz speaks with host Taylor Owen about how sharing private information is rarely an individual act. “Whenever I share my personal data, I'm generally sharing personal data about others as well. So, if I share my genetic data, I'm sharing data about my parents, about my siblings, about my cousins,” which, she explains, can lead to unintended consequences for others, such as being denied medical insurance or being deported. As she sees it, users have the power to demand better controls over their personal data, because that data is so valuable to the big tech companies that collect, sell and use it for advertising. “The most valuable kind of data is the most recent one, because personal data expires pretty quickly. People change their tastes. They move houses. They lose weight or gain weight. And so companies always want the most updated data.” Véliz wants people to know that even if they believe their data is already out there on the internet, it's not too late to improve their privacy practices or demand change from technology companies. “Because you're creating new data all the time, you can make a really big difference by protecting your data as of today,” she says. The battle is not lost — there is always an opportunity to change the way our data is used. But Véliz warns that we must act now to establish those guardrails, because technology will continue to invade ever more of our private spaces if left unchecked.
Tech billionaire Peter Thiel is an enigmatic, controversial and hugely influential power broker in both Silicon Valley and the political arena. He is often seen as a libertarian, who at one point was exploring the idea of building floating stateless cities in international waters. But at the same time Thiel is very much an insider. He is actively involved in American politics, through funding political candidates, and in tech, through co-founding PayPal and Palantir, as well as supporting other venture capital projects, and is even funding an “anti-woke” university. In this episode of Big Tech, host Taylor Owen speaks with Max Chafkin, author of The Contrarian: Peter Thiel and Silicon Valley's Pursuit of Power. Chafkin's study of Thiel seeks to understand how he has built a dual persona as a heroic Ayn Randian libertarian entrepreneurial superhero, on the one hand, and a vampiric supervillain, on the other. What has confused many about Thiel is how he seems to play on both sides of the political divide. When Thiel spoke at the Republican National Convention in support of Donald Trump, many on the left couldn't square the contradiction of how, in Chafkin's words, “a futurist who happens to be gay, who happens to be an immigrant, who happens to have two Stanford degrees, you know, support this, like, reactionary, anti-tech, you know, crazy guy from New York?” By seeking to understand what one of the most influential men in both tech and politics is about, as well as his beliefs and goals, perhaps we can better understand how our societies are being reshaped. And perhaps that understanding will make us better prepared to counteract those shifts in ways that serve the best interests of society rather than those of the powerful few.