"A libertarian is someone who values freedom and usually political conceptions of freedom above most other values." Cameron interviews Will Duffield, a writer at the Cato Institute, discusses libertarianism, freedom, and the role of government in society. He shares his personal journey into politics and his belief in the value of individual liberty. Duffield explores the tension between wanting minimal government intervention and the desire to protect certain interests and prevent certain things from happening. He also discusses the importance of bottom-up governance, emergent solutions, and the concept of exit. Duffield emphasizes the need for more local variation and experimentation, allowing different communities to have their own rules and norms. Will Duffield discusses his college experience and the value of being able to customize his own education. He talks about the importance of education for the sake of learning and self-fulfillment, rather than just chasing a credential. Will also reflects on the challenges of transitioning from an activist milieu to a more professional setting and the tension between being a purist and making accommodations with state power. He emphasizes the need for continuous education in his field and the importance of staying informed about new laws and developments. The conversation explores the value of education, the importance of lifelong learning, and the role of the internet in knowledge acquisition. It delves into the significance of history in shaping our beliefs and understanding of the world. The discussion also touches on the challenges of free speech on the internet and the need for decentralized platforms. The conversation concludes with a reflection on pivotal moments in history and the potential for individuals to shape alternative futures. Follow Will on Twitter: @Will_Duffield Checkout his work with the Cato Institute.
Should Congress take steps to ban certain foreign-made drones that, despite being owned and used by Americans in a wide variety of helpful ways, could be sending sensitive data to antagonistic foreign governments? Will Duffield discusses the state of play. Hosted on Acast. See acast.com/privacy for more information.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: RTFB: On the New Proposed CAIP AI Bill, published by Zvi on April 10, 2024 on LessWrong. A New Bill Offer Has Arrived Center for AI Policy proposes a concrete actual model bill for us to look at. Here was their announcement: WASHINGTON - April 9, 2024 - To ensure a future where artificial intelligence (AI) is safe for society, the Center for AI Policy (CAIP) today announced its proposal for the "Responsible Advanced Artificial Intelligence Act of 2024." This sweeping model legislation establishes a comprehensive framework for regulating advanced AI systems, championing public safety, and fostering technological innovation with a strong sense of ethical responsibility. "This model legislation is creating a safety net for the digital age," said Jason Green-Lowe, Executive Director of CAIP, "to ensure that exciting advancements in AI are not overwhelmed by the risks they pose." The "Responsible Advanced Artificial Intelligence Act of 2024" is model legislation that contains provisions for requiring that AI be developed safely, as well as requirements on permitting, hardware monitoring, civil liability reform, the formation of a dedicated federal government office, and instructions for emergency powers. The key provisions of the model legislation include: 1. Establishment of the Frontier Artificial Intelligence Systems Administration to regulate AI systems posing potential risks. 2. Definitions of critical terms such as "frontier AI system," "general-purpose AI," and risk classification levels. 3. Provisions for hardware monitoring, analysis, and reporting of AI systems. 4. Civil + criminal liability measures for non-compliance or misuse of AI systems. 5. Emergency powers for the administration to address imminent AI threats. 6. Whistleblower protection measures for reporting concerns or violations. 
The model legislation intends to provide a regulatory framework for the responsible development and deployment of advanced AI systems, mitigating potential risks to public safety, national security, and ethical considerations. "As leading AI developers have acknowledged, private AI companies lack the right incentives to address this risk fully," said Jason Green-Lowe, Executive Director of CAIP. "Therefore, for advanced AI development to be safe, federal legislation must be passed to monitor and regulate the use of the modern capabilities of frontier AI and, where necessary, the government must be prepared to intervene rapidly in an AI-related emergency." Green-Lowe envisions a world where "AI is safe enough that we can enjoy its benefits without undermining humanity's future." The model legislation will mitigate potential risks while fostering an environment where technological innovation can flourish without compromising national security, public safety, or ethical standards. "CAIP is committed to collaborating with responsible stakeholders to develop effective legislation that governs the development and deployment of advanced AI systems. Our door is open." I discovered this via Cato's Will Duffield, whose statement was: Will Duffield: I know these AI folks are pretty new to policy, but this proposal is an outlandish, unprecedented, and abjectly unconstitutional system of prior restraint. To which my response was essentially: I bet he's from Cato or Reason. Yep, Cato. Sir, this is a Wendy's. Wolf. We need people who will warn us when bills are unconstitutional, unworkable, unreasonable or simply deeply unwise, and who are well calibrated in their judgment and their speech on these questions. 
I want someone who will tell me 'Bill 1001 is unconstitutional and would get laughed out of court, Bill 1002 has questionable constitutional muster in practice and unconstitutional in theory, we would throw out Bill 1003 but it will stand up these days because SCOTUS thinks the commerc...
Yesterday, while Americans were celebrating Independence Day, a federal judge issued an injunction limiting the Biden administration's contacts with social media companies. Will Duffield from the Cato Institute says the court is taking strong action against jawboning, AKA bullying by government officials using informal pressure. See omnystudio.com/listener for privacy information.
What protections do/should platforms have to use algorithms to suggest content to viewers? Will Duffield and Jennifer Huddleston comment on recent and future cases at the Supreme Court. Hosted on Acast. See acast.com/privacy for more information.
Introduction: Caleb O. Brown
Jennifer Huddleston and Will Duffield on social media regulations and big tech policy
Senator Tim Kaine on repealing the 1991 and 2002 Authorizations for Use of Military Force against Iraq
Former U.S. Secretary of Defense Chris Miller and Justin Logan on cutting the defense budget
Mark Calabria, Dana Wade and Norbert Michel on challenges faced at the Federal Housing Finance Agency
Exclusive: Thomas Berry on executive power and the administrative state
Hosted on Acast. See acast.com/privacy for more information.
Congressional anger at the popular app TikTok could be better aimed at making Americans' data more secure from snoopers and hostile foreign governments. Cato's Jennifer Huddleston and Will Duffield discuss the recent Congressional hearing on TikTok. Hosted on Acast. See acast.com/privacy for more information.
Fears of artificial intelligence have been goosed recently with the emergence of services like ChatGPT that can deliver longform coherent text addressing fairly specific prompts. Cato's Will Duffield says many of the fears it has inspired are unfounded. Hosted on Acast. See acast.com/privacy for more information.
On February 21, the Supreme Court will hear oral arguments in Gonzalez v. Google, a case that risks reshaping the internet for the worse. In Gonzalez, plaintiffs have sued Google, the parent company of YouTube, alleging that YouTube's algorithms aided terrorist recruitment by helping would‐be terrorists find radicalizing videos. They argue that YouTube's video “recommendations” are distinct from publishing and thus unprotected by Section 230. If accepted, their argument would expose many websites' algorithmic matching features to litigation. This will be the first time the Supreme Court interprets Section 230, the bedrock intermediary liability shield that enables the modern internet, and whatever the court decides will echo throughout the web. Join our panelists Thomas Berry, Jess Miers, Nicole Saad Bembridge, and Gabrielle Shea for a discussion of the oral arguments in Gonzalez, moderated by Will Duffield. We will explain the implications of the case and attempt to read the tea leaves of justices' reactions and remarks. Hosted on Acast. See acast.com/privacy for more information.
Will Duffield provides additional context ahead of the Supreme Court's consideration of liability under Section 230 of the Communications Decency Act. Related Cato Daily Podcast: Do Algorithms Get a Pass Under Section 230? featuring Thomas A. Berry and Caleb O. Brown Hosted on Acast. See acast.com/privacy for more information.
Everyone knows that special interests lobby government for favors. The Twitter Files have revealed that lobbying is a two-way street, with government lobbying corporations too – often to silence critics of its policies. Will Duffield of the Cato Institute has become the go-to expert on so-called “jawboning,” i.e., informally pressuring private companies to censor disfavored speech. The jawboning-industrial-complex includes members of both parties. Major social media companies now have internal teams to handle “suggestions” from government officials. Ban this person. Silence that opinion. It's not exactly what Orwell pictured, but it's still concerning. I previously interviewed Jenin Younes of the National Civil Liberties Alliance on a related topic. NCLA has defended those who were censored for contradicting official CDC stances on COVID, like Martin Kulldorff and Dr. Jay Bhattacharya. Getting banned from a private social media platform isn't the same as getting thrown in jail for speech. But what if the social media company is acting under implicit pressure from Congress? It's a classic “Your Money or Your Life!” decision. Does this kind of censorship violate the Constitution, even if no laws are passed? Duffield joins me Sunday to discuss the nuances of free speech law in the social media era, and to lay out a libertarian vision of digital expression.
Will Duffield joins Chelsea Follett to discuss recent advances in artificial intelligence, how they can improve our lives, and the challenges they may pose. Will Duffield is a policy analyst in Cato's Center for Representative Government, where he studies speech and internet governance. He's also an expert on a host of related issues in technology policy. Learn more: https://www.cato.org/people/will-duffield
-Trump dips into the NFTs. The bottom is in
-The pendulum swings on Twitter journos
-Kevin O'Leary fails the FTX fraud smell test
-Reflections from Vienna
INTERVIEW: Professor Jean-Sébastien Fallu on modern and smart alcohol policy.
INTERVIEW: Cato Institute policy analyst Will Duffield gives us the breakdown on "jawboning": what it is, how it applies to tech policy, and why we should protect against overzealous actions of political actors.
Broadcast on Consumer Choice Radio on December 17, 2022. Syndicated on Sauga 960AM and Big Talker Network. Website: https://consumerchoiceradio.com ***PODCAST*** Podcast Index: https://bit.ly/3EJSIs3 Apple: http://apple.co/2G7avA8 Spotify: http://spoti.fi/3iXIKIS RSS: https://omny.fm/shows/consumerchoiceradio/playlists/podcast.rss Our podcast is now Podcasting 2.0 compliant! Listen to the show using a Bitcoin lightning wallet-enabled podcasting app (Podverse, Breeze, Fountain, etc.) to directly donate to the show using the Bitcoin lightning network (stream those sats!). More information on that here: https://podcastindex.org/apps Produced by the Consumer Choice Center. Support us: https://consumerchoicecenter.org/donate See omnystudio.com/listener for privacy information.
FIRE's new Director of Public Advocacy Aaron Terr and the Cato Institute's Will Duffield join the show to discuss a slew of recent free speech news. California gets it right on rap lyrics but wrong on coronavirus misinformation. One Texas school district repeatedly ventures into book banning. LeBron James spreads “hate speech” misinformation. Is government “jawboning” censorship? And, yes, Elon Musk . . . again. Show notes: Watch the video of the podcast conversation “VICTORY: After FIRE lawsuit, court halts enforcement of key provisions of the Stop WOKE Act limiting how Florida professors can teach about race, sex” “Jawboning against Speech: How Government Bullying Shapes the Rules of Social Media,” by Will Duffield “Fact Sheet: Texas School District Bans 'Gender Fluidity' from Library Shelves” “California Restricts Use of Rap Lyrics in Criminal Trials After Gov. Newsom Signs Bill,” “The ACLU Says California's Ban on COVID-19 ‘Misinformation' From Doctors Is Gratuitous and Unconstitutional,” LeBron James, via Twitter: “So many damn unfit people saying hate speech is free speech.” “Markey fires back after Musk mocks his Twitter complaint” “Biden asked whether Elon Musk is ‘threat' to national security, says relationships ‘worth being looked at'” www.sotospeakpodcast.com YouTube: https://www.youtube.com/thefireorg Twitter: https://www.twitter.com/freespeechtalk Facebook: https://www.facebook.com/sotospeakpodcast Instagram: https://www.instagram.com/freespeechtalk/ Email us: sotospeak@thefire.org
The public's reliance on social media platforms has created new opportunities for censorship by proxy, despite the First Amendment's prohibition on government speech regulation. Will Duffield's recent policy analysis “Jawboning against Speech: How Government Bullying Shapes the Rules of Social Media” details how government officials increasingly use informal pressure to compel the suppression of disfavored speakers on platforms like Facebook and Twitter. However, the specifics of this bullying, and what to do about it, remain contested. Does jawboning require a threat? When can coordination between platforms and government be voluntary? Solutions to jawboning must respect platforms' rights and cannot inhibit congressional debate. What, then, can be done? Please join Will Duffield, Adam Kovacevich, and Jenin Younes at the Cato Institute or online for a conversation about this novel threat to free speech. Hosted on Acast. See acast.com/privacy for more information.
INTERVIEW: Cato Institute policy analyst Will Duffield gives us the breakdown on "jawboning": what it is, how it applies to tech policy, and why we should protect against overzealous actions of political actors. https://www.cato.org/policy-analysis/jawboning-against-speech
-Yaël and David duke it out
-The energy wars continuing
-The necessity of oil and gas investment
-Has alternative energy effectively been a huge waste?
-Fallout from the Quebec election
Broadcast on Consumer Choice Radio on October 8, 2022. Syndicated on Sauga 960AM and Big Talker Network. Website: https://consumerchoiceradio.com ***PODCAST*** Podcast Index: https://bit.ly/3EJSIs3 Apple: http://apple.co/2G7avA8 Spotify: http://spoti.fi/3iXIKIS RSS: https://omny.fm/shows/consumerchoiceradio/playlists/podcast.rss Our podcast is now Podcasting 2.0 compliant! Listen to the show using a Bitcoin lightning wallet-enabled podcasting app (Podverse, Breeze, Fountain, etc.) to directly donate to the show using the Bitcoin lightning network (stream those sats!). More information on that here: https://podcastindex.org/apps Produced by the Consumer Choice Center. Support us: https://consumerchoicecenter.org/donate See omnystudio.com/listener for privacy information.
Over the last few years lawmakers have been threatening social media companies with antitrust legislation and the repeal of Section 230 if they refuse to moderate their content. A new report from the Cato Institute calls it "Jawboning." Will Duffield from the Cato Institute breaks down how government bullies social media companies into censoring content, why it's a problem, and what can be done. See omnystudio.com/listener for privacy information.
A service that keeps sites online despite attacks often protects sites whose bad reputations are well earned. Elizabeth Nolan Brown and Will Duffield discuss Cloudflare and its change of heart over providing service to the infamous troll haven known as Kiwi Farms. Hosted on Acast. See acast.com/privacy for more information.
The Hunter Biden laptop story was suppressed by Facebook and other social media over a general request regarding “election disinformation” from the FBI. It's the kind of compliance that government probably couldn't get through legislation. Will Duffield discusses the difficult situations that arise from Congressional jawboning over social media moderation. Our GDPR privacy policy was updated on August 8, 2022. Visit acast.com/privacy for more information.
What changes when people trying to make effective use of social media are active participants in a war? How advisable is it for large social media platforms to effectively pick sides in a conflict? Will Duffield comments. See acast.com/privacy for privacy and opt-out information.
Why are we talking about “Big Tech” now in a way we weren't 5 years ago? Cato's own Matthew Feeney and Will Duffield join Trevor to discuss how the 2016 election changed the political landscape, the value of moderation, and how digital infrastructure influences a platform's power. See acast.com/privacy for privacy and opt-out information.
Bonnie Pritchett reports on the lessons crisis pregnancy centers in Texas have learned since the heartbeat law went into effect; Mary Reichard talks to Will Duffield about the changes Elon Musk hopes to bring to Twitter; and Jenny Rough meets a man who believes less is more. Plus: commentary from Cal Thomas, a crash course in flying, and the Thursday morning news.Support The World and Everything in It today at wng.org/donate. Additional support comes from Ambassadors Impact Network, providing growth financing for companies led by CEOs and management teams who are disciple makers and evangelists. More at ambassadorsimpact.com. From Ridge Haven, The Camp, and Retreat Center of the Presbyterian Church in America. With campuses located in North Carolina and Iowa, Ridge Haven serves over 12,000 guests year-round in efforts to support the Church and train future generations in ministry. More at ridgehaven.org Listen to Beyond the Forum on Apple Podcasts here: bit.ly/BeyondTheForumApple And explore more about the Veritas Forum here: www.veritas.org
Content moderation poses a huge challenge for even the best-run social media platforms. Add to that challenge the vitriol and handwringing associated with Elon Musk's purchase of Twitter. Will Duffield comments. See acast.com/privacy for privacy and opt-out information.
The propaganda machine in Russia has been working overtime to sell its war in Ukraine as just and necessary. Will Duffield analyzes why this effort has failed so remarkably while other efforts have succeeded. See acast.com/privacy for privacy and opt-out information.
The metaverse offers an opportunity to replicate real-world human interaction, but it also presents some new and unique problems. Given the strength of current players in this market and the ever-present threat of regulation, how might the growth of this new simulated reality play out? Will Duffield comments. See acast.com/privacy for privacy and opt-out information.
Social media companies have differing ideas about allowing the Taliban on their platforms. Will Duffield explains what social media means for the people and (new) government of Afghanistan. See acast.com/privacy for privacy and opt-out information.
We've got another podcast cross-post for you this week! Mike recently joined the Cato Institute Daily Podcast to discuss the PACT Act — the more "serious" proposal for Section 230 reform that is still riddled with problems that will do damage to the entire internet. Listen to the full conversation with Mike and Cato's Will Duffield on this week's episode.
Section 230 of the Communications Decency Act has a new piece of would-be reform legislation, though the proposal highlights just how hard it is to do content moderation at scale. Mike Masnick of techdirt and Cato's Will Duffield comment. See acast.com/privacy for privacy and opt-out information.
Should online platforms get blamed for criminal behavior that occurs online, even when police fail to act? Will Duffield comments. See acast.com/privacy for privacy and opt-out information.
An amalgam of proposals from Democrats would strictly regulate online speech and make other forms of public communication on policy issues more costly. Will Duffield comments on the proposal. See acast.com/privacy for privacy and opt-out information.
Twitter banned President Trump after he used the platform to help spin up a crowd just before last week's deadly Capitol attack. That should seem like an easy call. But what about similar bans on some Trump supporters? The removal of accounts on various platforms appeared to be fairly widespread. Will Duffield and Matthew Feeney comment. See acast.com/privacy for privacy and opt-out information.
Was Facebook's purchase of Instagram and other properties evidence of monopolistic practices? Will Duffield and Ryan Bourne are skeptical. See acast.com/privacy for privacy and opt-out information.
There are some things that even a pandemic cannot stop. One of those things is political pressure to "do something" about Big Tech. Paul checks in with Matthew Feeney and Will Duffield to get an update on the state of the techlash. Furthermore, this year many of the major social media platforms have ramped up their fact-checking operations in an attempt to combat disinformation about the pandemic and partisan politics, but it is possible that they have opened a Pandora's Box of unintended consequences by doing so. See acast.com/privacy for privacy and opt-out information.
Taken as a whole, the internet is a resilient platform for free speech. However, individual platforms are increasingly being targeted by government and activist groups demanding censorship. Will and Neeraj discuss the recent history of this trend and how censorship might look at different levels in the internet infrastructure stack. Read: A History of Crowdfunding in the Wake of Violence by Will Duffield
Today I'm joined by my friend Will Duffield. We discuss our favorite science fiction novels.
Politicians want their constituents to feel a sense of personal connection to them. Mass media makes those perceptions of intimacy and authenticity possible on a large scale, like FDR’s radio fireside chats, Ronald Reagan’s TV appearances, and Donald Trump’s tweets. But we are on the cusp of the political adoption of a new media form; it’s the age of livestreaming as an exercise in political branding, whether it’s Elizabeth Warren awkwardly taking a swig of beer, Beto O’Rourke carving a steak, or Alexandria Ocasio-Cortez wandering wide-eyed through the corridors of Capitol Hill. Yet the adoption of livestreaming, as well as the rise of crowdfunded political campaigns, is drawing the attention of campaign finance regulators. Radio and television broadcasting by political candidates has long been regulated, but the internet traditionally has not been. John Samples joins Will Duffield and Paul Matzko to discuss the legal and political implications of these new trends in fundraising and advertising. Are the social media accounts of politicians a more intimate way for voters to view them? Are politicians authentic on social media, or do they try too hard to be seen as relatable? Do Americans have a right to view or hear Russian ads? Further Reading: Who Should Moderate Content on Facebook, written by John Samples; Google Is a Tricky Case but Conservatives Please Stay Strong — Reject the Temptation to Regulate the Internet, written by John Samples. Related Content: New Year, New Congress, New Tech, Building Tomorrow Podcast; Place Your Political Bets, Building Tomorrow Podcast. See acast.com/privacy for privacy and opt-out information.
Peter Van Doren and Will Duffield join us today to discuss a variety of topics, including designer babies, driverless cars, and “non-slaughter meat”. These topics may not seem obviously related, but any new or emerging technology provokes a blowback response from potential users or consumers. In these three very different fields, reactions have been mixed. More importantly, it may be impossible to predict the consequences of fully adopting any of these emerging technologies that seemingly make our lives better off. Could we get to a point where everyone will be designing their children? What will the future of car-sharing be? How are technology and innovation hindered (or helped) by the culture of adoption? Should we be hesitant to adopt new technologies that we perceive as making our lives better? Further Reading: Safety of Beef Processing Method Is Questioned, written by Michael Moss; ABC TV settles with beef product maker in ‘pink slime’ defamation case, written by Timothy McLaughlin; Regulation Magazine Fall 2018; Gene Editing Needs to Be Available to Everyone, written by Noah Smith; Iceland Close to Becoming the First Country Where No Down’s Syndrome Children are Born, written by Dave MacLean. Related Content: Welcome to the Sharing Economy, Free Thoughts Podcast; In the Economy of the Future, You Won’t Own Your Kitchen, written by Pamela J. Hobart; Ride-Sharing Services Aren’t a Problem, They’re a Solution, written by Aeon Skoble. See acast.com/privacy for privacy and opt-out information.
We've got a crossover episode this week, all about the EU's disastrous moves on the copyright front. Mike recently joined the Building Tomorrow podcast to discuss the subject with Paul Matzko and Will Duffield, and now you can listen to it here on this week's episode of the Techdirt Podcast.
Building Tomorrow explores the ways technology, innovation, and entrepreneurship are creating a freer, wealthier, and more peaceful world. In our first episode, we survey how major recent advances in tech have made it harder for the State to “read” citizens, deepened networks of trust between activists, expanded ownership of our bodies, and created new sharing economies. Further Readings/References: Yes, an augmented reality cocktail bar is absolutely the best use of this exciting new technology. Philosopher gnomes for the gardens of those with discerning taste. A Building Tomorrow review of Michael Munger’s book, Tomorrow 3.0. The first article of Will Duffield’s “Prototype” project on creating an uncensorable internet. See acast.com/privacy for privacy and opt-out information.
Will Duffield joins us to discuss Cambridge Analytica and the future of social media. What is Cambridge Analytica? What is Facebook doing with all this data? Should we expect more regulation of online advertising or nationalized social media platforms? Further Readings/References: Free Thoughts Podcast: Free Speech Online (with Will Duffield); Free Thoughts Podcast: Nothing Is Secure (with Julian Sanchez); Free Thoughts Podcast: The Internet Doesn’t Need to Be Saved (with Peter Van Doren). See acast.com/privacy for privacy and opt-out information.
Will Duffield joins us this week to talk about the freedom of speech in the internet era. How has the shift to digital communication changed interpretations of the First Amendment? We discuss the implications of lower barriers to entry for ownership of the mechanisms for distribution of speech, draw a distinction between speech gatekeepers and speech enablers, think about whether big web companies are beginning to act like states, and have a conversation about “fake news.” Show Notes and Further Reading: Here’s the Youtube video Aaron mentions about software that can manipulate mouth and lip movement in video. Our Free Thoughts episode with Brock Cusick on the blockchain. See acast.com/privacy for privacy and opt-out information.