Frequently on this podcast we come back to questions around information, misinformation, and disinformation. In this age of digital communications, the metaphorical flora and fauna of the information ecosystem are closely studied by scientists from a range of disciplines. We're joined in this episode by one such scientist who uses observation and ethnography as his method, bringing a particularly sharp eye to the study of propaganda, media manipulation, and how those in power—and those who seek power—use such tactics. Samuel Woolley is the author of Manufacturing Consensus: Understanding Propaganda in the Age of Automation and Anonymity, just out this week from Yale University Press. He's also the author of The Reality Game: How the Next Wave of Technology Will Break the Truth; co-author, with Nick Monaco, of a book on Bots; and co-editor, with Dr. Philip N. Howard, of a book on Computational Propaganda.
Technology is breaking politics: what can be done about it? Artificially intelligent "bot" accounts attack politicians and public figures on social media. Conspiracy theorists publish junk news sites to promote their outlandish beliefs. Campaigners create fake dating profiles to attract young voters. We live in a world of technologies that misdirect our attention, poison our political conversations, and jeopardize our democracies. With massive amounts of social media and public polling data, and in-depth interviews with political consultants, bot writers, and journalists, Philip N. Howard offers ways to take these "lie machines" apart. Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations, and Political Operatives (Yale UP, 2020) is full of riveting behind-the-scenes stories from the world's biggest and most damagingly successful misinformation initiatives, including those used in Brexit and U.S. elections. Howard not only shows how these campaigns evolved from older propaganda operations but also exposes their new powers, gives us insight into their effectiveness, and shows us how to shut them down. As dangerous as things are now, they will only get worse; the enormous flood of data coming from the so-called Internet of Things, along with the growing sophistication of artificial intelligence, will make disinformation easier to generate and disseminate and much harder to spot and remove. Howard tackles the tough task of suggesting the changes that are needed to create a radically redesigned social media ecosystem that would reinforce, rather than erode, democracy. Medha Prasanna is an MA candidate at the Elliott School of International Affairs, George Washington University. Her current research focuses on International Organizations and Human Rights Law. You can learn more about her here or email her at medp16@gwu.edu. Learn more about your ad choices. Visit megaphone.fm/adchoices
In a new study by researchers at the Oxford Internet Institute, ‘Global Attitudes towards Artificial Intelligence (AI) & Automated Decision Making’, analysis shows that public perceptions of the use of AI in public life are divided, with populations in the West generally more worried about AI than those in the East. The study, co-authored by doctoral researcher Lisa-Maria Neudert, Dr. Aleksi Knuutila, and Professor Philip N. Howard, is based on analysis of survey data from the 2019 World Risk Poll, published by Lloyd’s Register Foundation powered by Gallup, which examines people’s perceptions of global risks across 142 countries. It is the second in a series of reports from the Oxford Commission on AI and Good Governance (OxCAIGG), which seeks to advise world leaders on effective ways to use AI and machine learning in public administration and governance. The Oxford Internet Institute researchers examined the 2019 World Risk Poll data in relation to public attitudes towards the development of AI in the future, in particular whether people think AI will mostly help or mostly harm people in the next twenty years. The findings show significant regional differences: North Americans and Latin Americans are the most skeptical about the benefits of AI, with at least 40% of their populations believing AI will be harmful, whilst only 25% of those living in South East Asia and just 11% of those living in East Asia expressed similar concerns. Researcher and lead author of the study Lisa-Maria Neudert, Oxford Internet Institute, said: “Understanding public confidence in AI and machine learning is vital to the successful implementation of such systems in government. Our analysis suggests that putting AI to work for good governance will be a two-fold challenge. Involving AI and machine learning systems in public administration is going to require inclusive design, informed procurement, purposeful implementation, and persistent accountability. 
Additionally, it will require convincing citizens in many countries around the world that the benefits of using AI in public agencies outweigh the risks.”
Other findings include:
- Only 9% of Chinese people perceive AI as risky, significantly lower than in other regions.
- Business executives and government officials are the most optimistic about AI, with 47% of those professionals believing AI will mostly help.
- Construction and service workers (35%) are the least confident about the role of automated decision-making in society.
About the research: The Oxford study is based on analysis of data from the World Risk Poll 2019, published by Lloyd’s Register Foundation. The Poll comprises survey data from 154,195 participants living in 142 countries, with interviews carried out between May 2019 and January 2020.
About OxCAIGG: The Oxford Commission on AI and Good Governance launched in July 2020. Its mission is to investigate the procurement and implementation challenges surrounding the use of AI for good governance faced by democracies around the world. It identifies best practices for evaluating and managing risks and benefits, and recommends strategies to take full advantage of technological capacities while mitigating the potential harms of AI-enabled public policy.
About the Oxford Internet Institute: The Oxford Internet Institute (OII) is a multidisciplinary research and teaching department of the University of Oxford, dedicated to the social science of the Internet. Drawing on many different disciplines, the OII works to understand how individual and collective behaviour online shapes our social, economic, and political world. Since its founding in 2001, research from the OII has had a significant impact on policy debate, formulation, and implementation around the globe, as well as a secondary impact on people’s wellbeing, safety, and understanding.
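The regional comparison the study describes reduces to a simple aggregation: group respondents by region and compute the share who expect AI to mostly harm people. A minimal sketch in Python, using invented example rows (the actual World Risk Poll microdata has its own schema, sample weights, and answer codes):

```python
from collections import defaultdict

# Hypothetical survey rows for illustration only; field names and
# answer labels are invented, not the World Risk Poll's real schema.
responses = [
    {"region": "North America", "answer": "mostly harm"},
    {"region": "North America", "answer": "mostly help"},
    {"region": "East Asia", "answer": "mostly help"},
    {"region": "East Asia", "answer": "mostly help"},
    {"region": "East Asia", "answer": "mostly harm"},
]

def share_expecting_harm(rows):
    """For each region, the share of respondents answering 'mostly harm'."""
    totals = defaultdict(int)
    harm = defaultdict(int)
    for row in rows:
        totals[row["region"]] += 1
        if row["answer"] == "mostly harm":
            harm[row["region"]] += 1
    return {region: harm[region] / totals[region] for region in totals}

print(share_expecting_harm(responses))
```

A real replication would also apply the poll's sampling weights before comparing regions, which this unweighted sketch omits.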
In this week’s episode, Jeremy Shapiro stepped in as host and welcomes senior policy fellows Kadri Liik and Andrew Wilson as well as political scientist and editor of “Belarus-Analysen” Olga Dryndova to the podcast. Together, they shed light on the situation on the ground in Belarus: what are the goals of the opposition in Belarus, and what kind of strategy, if any, does it have for achieving them? How does long-time president Lukashenko see the situation, and what is the state’s strategy to try to remain in power? And finally, what roles should the EU and Russia play in a mediation process? Further reading: Why the EU now needs a deliberate Belarus policy, by Andrew Wilson: https://buff.ly/3gomwOl This podcast was recorded on 26 August 2020. Bookshelf:
- “Berlin 1936: 16 Days in August” by Oliver Hilmes
- Follow Tadeusz Giczan for analyses on Belarus
- “Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations, and Political Operatives” by Philip N. Howard
- Collections of essays by Haljand Udam
- “Einstein’s Dreams” by Alan Lightman
Picture: (c) picture alliance / AA | Marina Serebryakova
Bio
Philip N. Howard (@pnhoward) is the Director of the Oxford Internet Institute and a statutory Professor of Internet Studies at Balliol College, University of Oxford. Howard investigates the impact of digital media on political life around the world, and he is a frequent commentator on global media and political affairs. Howard’s research has demonstrated how new information technologies are used in both civic engagement and social control in countries around the world. His projects on digital activism, computational propaganda, and modern governance have been supported by the European Research Council, the National Science Foundation, the United States Institute of Peace, and Intel’s People and Practices Group. He has published nine books and over 140 academic articles, book chapters, conference papers, and commentary essays on information technology, international affairs, and public life. His articles examine the role of new information and communication technologies in politics and social development, and he has published in peer-reviewed journals such as the American Behavioral Scientist, the Annals of the American Academy of Political and Social Science, and the Journal of Communication. His first book on information technology and elections in the United States is called New Media Campaigns and the Managed Citizen (New York: Cambridge University Press, 2006). It is one of the few books ever to win simultaneous “best book” prizes from the professional associations of multiple disciplines, with awards from the American Political Science Association, the American Sociological Association, and the International Communication Association. His authored books include The Digital Origins of Dictatorship and Democracy (New York, NY: Oxford University Press, 2010), Castells and the Media (London, UK: Polity, 2011), Democracy’s Fourth Wave? 
Digital Media and the Arab Spring (New York, NY: Oxford University Press, 2012, with Muzammil Hussain), and Pax Technica: How the Internet of Things May Set Us Free or Lock Us Up (New Haven, CT: Yale University Press, 2015). He has edited Society Online: The Internet in Context (Thousand Oaks, CA: Sage, 2004, with Steve Jones), the Handbook of Internet Politics (London, UK: Routledge, 2008, with Andrew Chadwick), State Power 2.0: Authoritarian Entrenchment and Political Engagement Worldwide (Farnham, UK: Ashgate, 2013, with Muzammil Hussain), and Computational Propaganda: Political Parties, Politicians and Manipulation on Social Media (New York, NY: Oxford University Press, 2018, with Samuel Woolley). Howard has held senior teaching, research, and administrative appointments at universities around the world. He has been on the teaching faculty at the Central European University, Columbia University, Northwestern University, the University of Oslo, and the University of Washington. He has had fellowship appointments at the Pew Internet & American Life Project in Washington, D.C., the Stanhope Centre for Communications Policy Research at the London School of Economics, the Center for Advanced Study in the Behavioral Sciences at Stanford University, and the Center for Information Technology Policy at Princeton University. From 2013 to 2015 he helped design and launch a new School of Public Policy at Central European University in Budapest, where he was the school’s first Founding Professor and Director of the Center for Media, Data and Society. He currently serves as Director of the Oxford Internet Institute at Oxford University, the leading center of research and teaching on technology and society. Howard’s research and commentary writing have been featured in the New York Times, the Washington Post, and many international media outlets. 
He was awarded the National Democratic Institute’s 2018 “Democracy Prize,” and Foreign Policy magazine named him a “Global Thinker” for pioneering the social science of fake news production. His B.A. is in political science from Innis College at the University of Toronto, his M.Sc. is in economics from the London School of Economics, and his Ph.D. is in sociology from Northwestern University. His website is philhoward.org.
Resources
Philip Howard, Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations, and Political Operatives (2020)
In this conversation, Philip spends time with professor and writer Philip N. Howard. He is the Director of the Oxford Internet Institute at Oxford University and is the author, most recently, of Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations, and Political Operatives from Yale University Press. They discuss the new reality of junk news, the complexity of these newly constructed lie machines, and the corrosive effect they have on our civil discourse. The Drop – The segment of the show where Philip and his guest share tasty morsels of intellectual goodness and creative musings.
* Philip's Drop: Ramy (Hulu) (https://www.imdb.com/title/tt7649694/)
* Phil H's Drop: Babylon Berlin (Netflix) (https://www.imdb.com/title/tt4378376/)
Special Guest: Philip N. Howard.
As governments consider more funding for virtual cyber armies over real boots on the ground, Richard Kilgarriff talks to one of the world's leading experts on the impact of digital media across the political spectrum. Why and how do troll armies invade our democracies and what can we do to defend ourselves against them? Want to go deeper? Get a copy of Phil's book HERE
Artificially intelligent “bot” accounts attack politicians and public figures on social media. Conspiracy theorists publish junk news sites to promote their outlandish beliefs. Campaigners create fake dating profiles to attract young voters. We live in a world of technologies that misdirect our attention, poison our political conversations, and jeopardize our democracies. This is the plague that Oxford professor Philip Howard takes on in his new book, Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations, and Political Operatives. With massive amounts of social media and public polling data, and in-depth interviews with political consultants, bot writers, and journalists, Philip N. Howard offers ways to take these “lie machines” apart. Dr. Howard gets into how these lie machines are literally causing Covid deaths throughout our society through misinformation... and how they helped to put the authoritarians currently mishandling the pandemic in power in the first place... But it's not all doom and gloom: he has amazing treatments and cures prescribed for this plague, from big policy demands to personal behavior recommendations, to rid us of this disinformation infection. As part of the creative contribution I ask of my guest experts, he does an amazing dramatic rendition of the titles of some unlikely and weird videos that come up when you search for Covid news under today's algorithm. Benedict Cumberbatch, William Shatner, look out! Dr. Howard is overtaking you.
https://liemachines.org/
https://www.oii.ox.ac.uk/people/philip-howard/
Host and Editor: L.M. Bogad: www.lmbogad.com
Music: Jason Montero (https://m.soundcloud.com/jamoja), and by my other friend named Jay
Sound effects clips from soundbible.com
Clip art from nicepng.com
In the summer of 2017, a group of political activists in the UK figured out how to use Tinder to attract new supporters. They understood how the platform worked and how its users tended to use the app. Most importantly, they understood how Tinder’s algorithms distributed content, so they built a bot to automate flirty exchanges with real people. Over time, those flirty conversations turned to politics—and to the strengths of the U.K.’s Labour Party. The bot would take over a Tinder profile owned by a user sympathetic to the Labour Party who agreed to the temporary repurposing of the account. The bot then sent roughly 40,000 messages, targeting 18- to 25-year-olds in districts where Labour candidates were running in tight races. While it is impossible to know if any voters were actually swayed by this campaign, what cannot be denied are the results of the election: in several targeted districts, the Labour Party won in tight races. As part of their victory celebrations, some of the winners gave Twitter shoutouts to the Tinder election bot. (This information is courtesy of Philip N. Howard and his article “How Political Campaigns Weaponize Social Media Bots,” from IEEE Spectrum, October 2018.) Here’s the thing, though: not all bots are the same. In fact, not unlike most things in the world, the overwhelming majority of bots perform important, yet perhaps tedious, functions that allow people to focus on high-level assignments that truly support agency missions and outcomes. However, automation is not solely about offloading mundane tasks from humans. Instead, this type of technology creates an environment in which humans and technology not only collaborate to accelerate workflow processes but also speed up decision-making. In this episode of the InSecurity Podcast, Matt Stephenson sits down with Ron Jones, Head of Solutions Architecture at Blue Prism. Ron is a builder of Robotic Process Automation. A mouthful, right? 
You may know them as “bots,” and they are one of the most misunderstood pieces of technology around. Stick around and Ron will help you understand them a little better.

About Ron Jones
Ron Jones (@rgjSP) is an experienced leader specializing in enterprise technology strategy and consulting for the public sector. Ron currently serves North American public-sector organizations implementing Blue Prism, the world’s most scalable, secure, and proven intelligent automation platform.

About Blue Prism
Blue Prism (@blue_prism) pioneered Robotic Process Automation (RPA), emerging as the trusted and secure intelligent automation choice for the Fortune 500 and the public sector. They offer a connected-RPA supported by the Digital Exchange (DX) app store, marrying internal entrepreneurship with the power of crowdsourced innovation. Blue Prism’s connected-RPA can automate and perform mission-critical processes, allowing people the freedom to focus on creative, meaningful work. More than 1,500 global customers leverage Blue Prism’s Digital Workforce, deployed in the cloud or on premises as well as through the company’s Thoughtonomy SaaS offering, empowering organizations to automate billions of transactions while returning hundreds of millions of hours of work back to the business. Blue Prism was recently named to Fast Company’s inaugural list of the Best Workplaces for Innovators, an honor achieved by 50 companies. Blue Prism is the only RPA provider, and the only UK-based company, to be recognized.

About Matt Stephenson
InSecurity Podcast host Matt Stephenson (@packmatt73) leads the Security Technology team at Cylance, which puts him in front of crowds, cameras, and microphones all over the world. He is the regular host of the InSecurity podcast and host of CylanceTV. Twenty years of work with the world’s largest security, storage, and recovery companies has introduced Stephenson to some of the most fascinating people in the industry.
He wants to get those stories told so that others can learn from what has come before. Every week on the InSecurity Podcast, Matt interviews leading authorities in the security industry to gain an expert perspective on topics including risk management, security control friction, compliance issues, and building a culture of security. Each episode provides relevant insights for security practitioners and business leaders working to improve their organization’s security posture and bottom line.

Can’t get enough of InSecurity? You can find us at ThreatVector InSecurity Podcasts, Apple Podcasts, and Google Play, as well as Spotify, Stitcher, SoundCloud, iHeartRadio, and wherever you get your podcasts! Make sure you subscribe, rate, and review!
The George Washington University’s Marc Lynch, director of the Project on Middle East Political Science, speaks with Philip N. Howard and Muzammil M. Hussain. Howard is an associate professor in the Department of Communication at the University of Washington and director of the World Information Access Project (wiaproject.org) and the Project on Information Technology and Political Islam (pitpi.org). Hussain is a doctoral candidate at the University of Washington’s Department of Communication, and comparative international researcher at the Center for Communication and Civic Engagement (CCCE) focusing on information infrastructure and social organization, and digital media and political participation. Lynch, Howard, and Hussain discuss the use of digital media by civil society and their new book Democracy’s Fourth Wave: Digital Media and the Arab Spring.
Visiting scholar Philip N. Howard, a leading expert on the role of new information technologies in political systems, explains the crucial role communications technologies have played in advancing democracy in Muslim countries.