Gordon Pennycook is an Associate Professor at Cornell University. We talk about his upbringing in rural Northern Canada, how he got into academia, and his work on misinformation: why people share it and what can be done about it.

BJKS Podcast is a podcast about neuroscience, psychology, and anything vaguely related, hosted by Benjamin James Kuper-Smith.

Support the show: https://geni.us/bjks-patreon

Timestamps
0:00:00: Straight outta Carrot River: From Northern Canada to publishing in Nature
0:37:01: Exploration vs focusing on one topic: finding your research topic
0:48:57: A sense of having made it
0:54:17: Why apply reasoning research to religion?
0:59:45: Starting working on misinformation
1:08:20: Defining misinformation, disinformation, and fake news
1:15:52: Social media, the consumption of news, and Bayesian updating
1:24:48: Reasons for why people share misinformation
1:35:57: Are social media companies listening to Pennycook et al?
1:38:19: Using AI to change conspiracy beliefs
1:44:59: A book or paper more people should read
1:46:33: Something Gordon wishes he'd learnt sooner
1:48:12: Advice for PhD students/postdocs

Podcast links
Website: https://geni.us/bjks-pod
BlueSky: https://geni.us/pod-bsky

Gordon's links
Website: https://geni.us/pennycook_web
Google Scholar: https://geni.us/pennycook-scholar
BlueSky: https://geni.us/pennycook-bsky

Ben's links
Website: https://geni.us/bjks-web
Google Scholar: https://geni.us/bjks-scholar

References
Costello, Pennycook & Rand (2024). Durably reducing conspiracy beliefs through dialogues with AI. Science.
Dawkins (2006). The God Delusion.
MacLeod, ... & Ozubko (2010). The production effect: delineation of a phenomenon. Journal of Experimental Psychology: Learning, Memory, and Cognition.
Nowak & Highfield (2012). Supercooperators: Altruism, evolution, and why we need each other to succeed.
Pennycook, ... & Fugelsang (2012). Analytic cognitive style predicts religious and paranormal belief. Cognition.
Pennycook, Fugelsang & Koehler (2015). What makes us think? A three-stage dual-process model of analytic engagement. Cognitive Psychology.
Pennycook, Cheyne, Barr, Koehler & Fugelsang (2015). On the reception and detection of pseudo-profound bullshit. Judgment and Decision Making.
Pennycook & Rand (2019). Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition.
Pennycook & Rand (2021). The psychology of fake news. Trends in Cognitive Sciences.
Rand (2016). Cooperation, fast and slow: Meta-analytic evidence for a theory of social heuristics and self-interested deliberation. Psychological Science.
Stanovich (2005). The robot's rebellion: Finding meaning in the age of Darwin.
Tappin, Pennycook & Rand (2020). Thinking clearly about causal inferences of politically motivated reasoning: Why paradigmatic study designs often undermine causal inference. Current Opinion in Behavioral Sciences.
Thompson, Turner & Pennycook (2011). Intuition, reason, and metacognition. Cognitive Psychology.
Debunkbot and Other Tools Against Misinformation

In this follow-up episode of the Behavioral Design Podcast, hosts Aline Holzwarth and Samuel Salzer welcome back Gordon Pennycook, psychology professor at Cornell University, to continue their deep dive into the battle against misinformation. Building on their previous conversation about misinformation's impact on democratic participation and the role of AI in spreading and combating falsehoods, this episode focuses on actionable strategies and interventions to combat misinformation effectively. Gordon discusses evidence-based approaches, including nudges, accuracy prompts, and psychological inoculation (or prebunking) techniques, that empower individuals to better evaluate the information they encounter. The conversation highlights recent advancements in using AI to debunk conspiracy theories and examines how AI-generated evidence can influence belief systems. They also tackle the role of social media platforms in moderating content, the ethical balance between free speech and misinformation, and practical steps that can make platforms safer without stifling expression. This episode provides valuable insights for anyone interested in understanding how to counter misinformation through behavioral science and AI.

LINKS:
Gordon Pennycook: Google Scholar Profile | Twitter | Personal Website | Cornell University Faculty Page
Further Reading on Misinformation: Debunkbot - The AI That Reduces Belief in Conspiracy Theories | Interventions Toolbox - Strategies to Combat Misinformation

TIMESTAMPS:
01:27 Intro and Early Voting
06:45 Welcome back, Gordon!
07:52 Strategies to Combat Misinformation
11:10 Nudges and Behavioral Interventions
14:21 Comparing Intervention Strategies
19:08 Psychological Inoculation and Prebunking
32:21 Echo Chambers and Online Misinformation
34:13 Individual vs. Policy Interventions
36:21 If You Owned a Social Media Company
37:49 Algorithm Changes and Platform Quality
38:42 Community Notes and Fact-Checking
39:30 Reddit's Moderation System
42:07 Generative AI and Fact-Checking
43:16 AI Debunking Conspiracy Theories
45:26 Effectiveness of AI in Changing Beliefs
51:32 Potential Misuse of AI
55:13 Final Thoughts and Reflections

--

Interested in collaborating with Nuance? If you'd like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com. Support the podcast by joining Habit Weekly Pro
The Role of Misinformation and AI in the US Election with Gordon Pennycook

In this episode of the Behavioral Design Podcast, hosts Aline and Samuel explore the complex world of misinformation in the context of the U.S. elections with special guest Gordon Pennycook, a psychology professor at Cornell University. The episode covers the effects of misinformation on democratic participation and how behavioral science sheds light on the reasoning errors that drive belief in falsehoods. Gordon shares insights from his groundbreaking research on misinformation, exploring how falsehoods gain traction and the role AI can play in both spreading and mitigating misinformation. The conversation also tackles the evolution of misinformation, including the impact of social media and disinformation campaigns that blur the line between truth and fiction. Tune in to hear why certain falsehoods spread faster than truths, the psychological appeal of conspiracy theories, and how humor can amplify the reach of misinformation in surprising ways.

LINKS:
Gordon Pennycook: Google Scholar Profile | Twitter | Personal Website | Cornell University Faculty Page
Further Reading on Misinformation: Brandolini's Law and the Spread of Falsehoods | Role of AI in Misinformation | The Psychology of Conspiracy Theories

TIMESTAMPS:
00:00 Introduction
03:14 Behavioral Science and Misinformation
05:28 Introducing Gordon Pennycook
10:02 The Evolution of Misinformation
12:46 AI's Role in Misinformation
14:51 Impact of Misinformation on Elections
21:43 COVID-19 and Vaccine Misinformation
26:32 Technological Advancements in Misinformation
33:50 Conspiracy Theories
35:39 Misinformation and Social Media
42:35 The Role of Humor in Misinformation
48:08 Quickfire Round: To AI or Not to AI

--

Interested in collaborating with Nuance? If you'd like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com. Support the podcast by joining Habit Weekly Pro
Our guests in this episode are Thomas H. Costello at American University, Gordon Pennycook at Cornell University, and David G. Rand at MIT, who created Debunkbot, a GPT-powered large-language-model AI that debunks conspiracy theories and is highly effective at reducing conspiratorial beliefs. In the show you'll hear all about what happened when they placed Debunkbot inside the framework of a scientific study and recorded its interactions with thousands of participants.

Debunkbot
Kitted
How Minds Change
David McRaney's Twitter
YANSS Twitter
Show Notes
Newsletter
Patreon
This is a free preview of a paid episode. To hear more, visit rethinkingwellness.substack.com.

Cognitive psychologist Gordon Pennycook explains the psychological reasons we fall for misinformation, conspiracy theories, and general bullshit (a technical term!). We discuss why people with an analytical cognitive style tend to be more skeptical of alternative medicine and health misinformation, some of the pitfalls of intuitive thinking (and why intuitive eating may actually be more of an analytical or deliberative process), why being skeptical of out-there wellness practices is actually a sign of open-mindedness, why even very smart people can fall for wellness misinformation, and more. Behind the paywall, we get into the difficulty of trusting experts in matters of health and wellness, the importance of thinking critically about science, the attention economy and how it contributes to incentivizing misinformation, how conspiracy theories have touched Gordon's life, his surprising findings about what it takes for people to drop conspiracist beliefs, and the best ways to stop the spread of misinformation.

Paid subscribers can hear the full interview, and the first half is available to all listeners. To upgrade to paid, go to rethinkingwellness.substack.com.

Gordon Pennycook is a Himan Brown Faculty Fellow and Associate Professor in the Department of Psychology at Cornell University. He obtained his PhD in Cognitive Psychology at the University of Waterloo in 2016 and held a Social Sciences and Humanities Research Council of Canada Banting Postdoctoral Fellowship at Yale University. His expertise is human reasoning and decision-making, and he has published over 100 peer-reviewed articles, including in journals such as Nature and Science. He has published research on the spread of fake news and misinformation, as well as the first-ever paper on the psychology of bullshit. Gordon has received several awards, such as the Governor General's Gold Medal, the Poynter Institute's International Fact-Checking Network "Researcher of the Year," and early career awards from the Canadian Society for Brain, Behaviour, and Cognitive Science, the Psychonomic Society, and the Association for Psychological Science. He was elected to the Royal Society of Canada's College of New Scholars, Artists, and Scientists in 2020.

If you like this conversation, subscribe to hear lots more like it! Support the podcast by becoming a paid subscriber, and unlock great perks like extended interviews, subscriber-only Q&As, full access to our archives, commenting privileges and subscriber threads where you can connect with other listeners, and more. Learn more and sign up at rethinkingwellness.substack.com.

Christy's second book, The Wellness Trap, is available wherever books are sold! Order it here, or ask for it in your favorite local bookstore. If you're looking to make peace with food and break free from diet and wellness culture, come check out Christy's Intuitive Eating Fundamentals online course.
In May, Justin Hendrix moderated a discussion with David Rand, who is a professor of Management Science and Brain and Cognitive Sciences at MIT, the director of the Applied Cooperation Initiative, and an affiliate of the MIT Institute for Data, Systems, and Society and the Initiative on the Digital Economy. David's work cuts across fields such as cognitive science, behavioral economics, and social psychology, and with his collaborators he's done a substantial amount of work on the psychological underpinnings of belief in misinformation and conspiracy theories.

David is one of the authors, with Thomas Costello and Gordon Pennycook, of a paper published this spring titled "Durably reducing conspiracy beliefs through dialogues with AI." The paper considers the potential for people to enter into dialogues with LLMs and whether such exchanges can change the minds of conspiracy theory believers. According to the study, dialogues with GPT-4 Turbo reduced belief in various conspiracy theories, with effects lasting many months. Even more intriguingly, these dialogues seemed to have a spillover effect, reducing belief in unrelated conspiracies and influencing conspiracy-related behaviors.

While these findings are certainly promising, the experiment raises a variety of questions. Some are specific to the premise of the experiment, such as: how compelling and tailored does the counter-evidence need to be, and how well do the LLMs perform? What happens if and when they make mistakes or hallucinate? And some of the questions are bigger picture: are there ethical implications in using AI in this manner? Can these results be replicated and scaled in real-world applications, such as on social media platforms, and is that a good idea? Is an internet where various AI agents and systems are poking and prodding us and trying to shape or change our beliefs a good thing? This episode contains an edited recording of the discussion, which was hosted at Betaworks.
Jim interviews Gordon Pennycook about the phenomenon of fake news. What types of misinformation tend to spread? What role will generative AI play in the [...]
In this episode, we interview Jake Womick and Tom Costello about psychological similarities and differences between liberals and conservatives. Jake is a postdoctoral scholar working with Dr. Kurt Gray at UNC. Tom is working with David Rand at MIT and Gordon Pennycook at the University of Regina. We hope you enjoy this conversation.

Manny and Jake's article on this topic.

Evidence that conservatives think differently than liberals:
- Reducing uncertainty & ambiguity
- Wanting order/closure
- Emphasizing purity, sanctity & loyalty
- Rigid thinking
- Viewing threat & danger
- Upholding the status quo

Evidence that extremists on both sides:
- See their beliefs as superior
- Avoid exposure to counter-beliefs
- Have motivated disbelief
- Struggle to find flaws in their side's arguments
- View information more favorably when it supports their preferences
- Hate each other
- Align with their tribes more than their own beliefs

Other mentions:
- Feldman, 2013
- Malka, 2017
- Norris, 2020
- Pan & Xu, 2018
- Saucier, 2000
- Conservatives in the US compared to other countries
- GOP voters' change in the Trump era
- Pew data on Black Democrats
Social media companies may claim censorship is for our own good – to shield us from misinformation – but the process has no transparency. And Facebook and Twitter algorithms are set up to amplify sensational claims, to push people into polarized camps, and to delude users about the popularity and value of what are often fringe ideas. Social scientists David Rand and Gordon Pennycook have studied social media behavior and found that people care about sharing accurate news, but often give in to the temptation to share what's likely to be popular rather than what's accurate. But there are solutions. The researchers found a way to leverage crowdsourcing to improve the quality of shared information. And algorithms could, in principle, be reset to amplify high-quality information. "Follow the Science" is produced, written, and hosted by Faye Flam. Today's episode was edited by Seth Gliksman with music by Kyle Imperatore. If you'd like to hear more "Follow the Science," please like, follow, and subscribe!
Science doesn't lend itself to fact checking, since science isn't a set of facts but a process for finding things out. That's why Facebook got criticized for deleting posts suggesting the virus causing Covid-19 might have had something to do with a lab accident. The reality is we don't know where the virus came from. This week, social scientists David Rand of MIT and Gordon Pennycook of the University of Regina will discuss why there's so much misinformation on social media, and how to fix the problem without employing fact checkers. “Follow the Science" is produced, written, and hosted by Faye Flam, with funding by the Society for Professional Journalists. Today's episode was edited by Seth Gliksman with music by Kyle Imperatore. If you'd like to hear more "Follow the Science," please like, follow, and subscribe!
Contrary to the narrative that social media algorithms impair cognition, our research shows that people can and do override their intuitions, and that reasoning often facilitates accurate belief formation. Although social media may affect what is salient to us when we choose what to share with others, this is not intractable: simple prompts that remind people to think about accuracy are sufficient to increase the quality of the news content that people share. This indicates that unreasonable behavior on social media is more a function of lazy thinking than of an inability to overcome social media algorithms.
This week on The Science Pawdcast, we chat about a new study that unlocked the physics of the Northern Lights! In Pet Science, we chat about a study that dug into what makes dogs aggressive. Our expert guests are Dr. David Rand and Gordon Pennycook, who had their study about how misinformation spreads recently published in Nature. It's an amazing discussion about what can cause everyday people to share information that is false online. As this week was CRAZY, we ran out of time for Woo or Wow, but there is a fun and silly family section at the end!

For Science, Empathy, and Cuteness!

The Paper!: https://www.nature.com/articles/s41586-021-03344-2
Dr. David Rand: https://twitter.com/DG_Rand
Dr. Rand's TED talk: https://www.youtube.com/watch?v=uC4JZ7TKAmc
ROBOT GOES HERE: https://www.youtube.com/watch?v=2zB2EJTTijQ
MORE RAND!: http://davidrand-cooperation.com/music
Gordon Pennycook: https://twitter.com/GordPennycook
Gordon's website: https://gordonpennycook.net/

Support The Show AND Follow Buns and Beaks!
The Bunsen Website: www.bunsenbernerbmd.com
The Bunsen Website has adorable merch with hundreds of different combinations of designs and apparel, all with Printful, one of the highest-quality companies we could find!
Genius Lab Gear 10% link (10% off science dog bandanas, science stickers, and science pocket tools): https://t.co/UIxKJ1uX8J?amp=1
Bunsen and Beaker on Twitter: https://twitter.com/bunsenbernerbmd
Bunsen and Beaker on Facebook: https://www.facebook.com/bunsenberner.bmd/
InstaBunsandBeaks: https://www.instagram.com/bunsenberner.bmd/?hl=en

Support the show: https://www.patreon.com/bunsenberner
In this episode, April uses the definition of "bullshit" described by American philosopher Harry G. Frankfurt in his seminal 1985 essay, "On Bullshit," to discuss its impact on our lives and ways to see through it. As you can probably tell, she also says "bullshit" a lot.

Episode 9 Show Notes

I purchased the big red Bullshit Button that you hear in this episode from Amazon, if you want one. I use it to annoy my students, but it's also handy for annoying your kids, significant other, and pets.
The book "On Bullshit" by Harry G. Frankfurt is also available on Amazon, as well as in academic databases.
As of 2018, Trump was still insisting he was right about Sweden: https://www.nbcnews.com/politics/donald-trump/trump-claims-he-was-right-about-crimes-caused-immigrants-sweden-n854296
Beto O'Rourke and his "false" rating on Politifact: https://www.politifact.com/factchecks/2019/oct/21/beto-orourke/despite-his-claim-presidential-candidate-beto-orou/
Hey, it's not just me; the illustrious Harry G. Frankfurt himself has called out Donald Trump on his bullshit: https://time.com/4321036/donald-trump-bs/
Yes, "bullshit receptivity" is a thing. Here's discussion of the research by Gordon Pennycook et al.:
https://onlinelibrary.wiley.com/doi/full/10.1111/jopy.12476
https://psycnet.apa.org/record/2015-54494-003
https://www.psychologytoday.com/us/blog/one-among-many/201512/not-even-bullshit
https://www.niemanlab.org/2017/08/when-it-comes-to-the-academic-study-of-fake-news-bullshit-receptivity-is-a-thing/
Philosopher Victor Moberger's article in Theoria: https://onlinelibrary.wiley.com/doi/full/10.1111/theo.12271
A short explanation of how Brandolini's law got its name: http://ordrespontane.blogspot.com/2014/07/brandolinis-law.html
Conspiracy theories and pseudoscience both get science wrong: https://elephantinthelab.org/how-conspiracy-theorists-get-the-scientific-method-wrong/
This amazing little handbook teaches you how to approach people and (nicely) debunk their bullshit.
It's focused on climate change deniers, but it will work for other issues: https://www.climatechangecommunication.org/wp-content/uploads/2020/10/DebunkingHandbook2020.pdf Author Warren Berger shows you how to pick up on, and call out, bullshit: https://www.fastcompany.com/3068589/how-to-fine-tune-your-bullshit-detector “Legit Scientist” Paul M. Sutter explains the power behind the words “I don’t know.” https://www.forbes.com/sites/paulmsutter/2019/08/11/i-dont-know-is-one-of-the-most-powerful-things-you-can-say/?sh=8b5690e4e197
------------------Support the channel------------
Patreon: https://www.patreon.com/thedissenter
SubscribeStar: https://www.subscribestar.com/the-dissenter
PayPal: paypal.me/thedissenter
PayPal Subscription 1 Dollar: https://tinyurl.com/yb3acuuy
PayPal Subscription 3 Dollars: https://tinyurl.com/ybn6bg9l
PayPal Subscription 5 Dollars: https://tinyurl.com/ycmr9gpz
PayPal Subscription 10 Dollars: https://tinyurl.com/y9r3fc9m
PayPal Subscription 20 Dollars: https://tinyurl.com/y95uvkao

------------------Follow me on---------------------
Facebook: https://www.facebook.com/thedissenteryt/
Twitter: https://twitter.com/TheDissenterYT
Anchor (podcast): https://anchor.fm/thedissenter

Dr. Gordon Pennycook is Assistant Professor of Behavioural Science at the University of Regina's Hill/Levene Schools of Business. He's also an Associate Member of the Department of Psychology. He's a member of the editorial board for Thinking & Reasoning and a consulting editor for Judgment and Decision Making. His research focus is on reasoning and decision-making, broadly defined. He investigates the distinction between intuitive processes ("gut feelings") and more deliberative ("analytic") reasoning processes, and is principally interested in (a) the causes and (b) the consequences of analytic thinking. Dr. Pennycook has published on religious belief, sleep paralysis, morality, creativity, smartphone use, health beliefs (e.g., homeopathy), language use among climate change deniers, pseudo-profound bullshit, delusional ideation, fake news (and disinformation more broadly), political ideology, and science beliefs.

In this episode, we first talk about dual-process theory, reasoning and rationality, motivated reasoning, the Dunning-Kruger effect, and analytic cognitive style and ability. We then go through what characterizes pseudo-profound bullshit in terms of discourse, the language of climate change deniers, and fake news and misinformation (particularly on the internet).
-- A HUGE THANK YOU TO MY PATRONS/SUPPORTERS: KARIN LIETZCKE, ANN BLANCHETTE, PER HELGE LARSEN, LAU GUERREIRO, JERRY MULLER, HANS FREDRIK SUNDE, BERNARDO SEIXAS, HERBERT GINTIS, RUTGER VOS, RICARDO VLADIMIRO, BO WINEGARD, CRAIG HEALY, OLAF ALEX, PHILIP KURIAN, JONATHAN VISSER, DAVID DIAS, ANJAN KATTA, JAKOB KLINKBY, ADAM KESSEL, MATTHEW WHITINGBIRD, ARNAUD WOLFF, TIM HOLLOSY, HENRIK AHLENIUS, JOHN CONNORS, PAULINA BARREN, FILIP FORS CONNOLLY, DAN DEMETRIOU, ROBERT WINDHAGER, RUI INACIO, ARTHUR KOH, ZOOP, MARCO NEVES, MAX BEILBY, COLIN HOLBROOK, SUSAN PINKER, THOMAS TRUMBLE, PABLO SANTURBANO, SIMON COLUMBUS, PHIL KAVANAGH, JORGE ESPINHA, CORY CLARK, MARK BLYTH, ROBERTO INGUANZO, MIKKEL STORMYR, ERIC NEURMANN, SAMUEL ANDREEFF, FRANCIS FORDE, TIAGO NUNES, BERNARD HUGUENEY, ALEXANDER DANNBAUER, OMARI HICKSON, PHYLICIA STEVENS, FERGAL CUSSEN, YEVHEN BODRENKO, HAL HERZOG, NUNO MACHADO, DON ROSS, JOÃO ALVES DA SILVA, JONATHAN LEIBRANT, JOÃO LINHARES, OZLEM BULUT, NATHAN NGUYEN, STANTON T, SAMUEL CORREA, ERIK HAINES, MARK SMITH, J.W., JOÃO EIRA, TOM HUMMEL, SARDUS FRANCE, DAVID SLOAN WILSON, YACILA DEZA-ARAUJO, AND IDAN SOLON! A SPECIAL THANKS TO MY PRODUCERS, YZAR WEHBE, JIM FRANK, ŁUKASZ STAFINIAK, IAN GILLIGAN, SERGIU CODREANU, LUIS CAYETANO, MATTHEW LAVENDER, TOM VANEGDOM, CURTIS DIXON, BENEDIKT MUELLER, AND VEGA GIDEY! AND TO MY EXECUTIVE PRODUCERS, MICHAL RUSIECKI, ROSEY, AND JAMES PRATT!
Dr. Gordon Pennycook studies why people share misinformation. His research has used many techniques to understand people's ability to judge the accuracy of information, their willingness to share that information, and what we can do to encourage people to spread only true information.

Some of the things that come up in this episode:
- There's lots of coronavirus misinformation out there
- Seeing fake news repeatedly makes it feel more true (Pennycook, Cannon, & Rand, 2018)
- Believing fake news is more about not paying attention than partisanship (Pennycook & Rand, 2019)
- Encouraging people to think about accuracy reduces sharing of false and misleading news (Pennycook et al., preprint)
- Using Twitter bots to get people to think about accuracy
- Interventions to stop the spread of COVID-19 misinformation (Pennycook et al., in press)
- The problem with biased thinking or "motivated reasoning" (Tappin, Pennycook, & Rand, 2020; preprint)

For a transcript of this show, visit the episode's webpage: http://opinionsciencepodcast.com/episode/fake-news-with-gordon-pennycook/
Learn more about Opinion Science at http://opinionsciencepodcast.com/ and follow @OpinionSciPod on Twitter.
Additional music and sound effects obtained from https://www.zapsplat.com.
It's time for Science with Simi, and today we're taking a look at the issue of fake news. On his podcast, the Super Awesome Science Show, Jason Tetro spoke with Gordon Pennycook. He is an assistant professor at the University of Regina, and he has tried to understand why people tend to believe these falsified stories and has come up with a rather unexpected result.
In a 24/7 news environment, stories sometimes get the facts wrong, but normally these lapses are not intentional. Recently, however, there has been an explosion in false, inaccurate, and harmful stories made with the sole purpose of convincing the public that a different reality exists. It's known as fake news, and on this week's show we're going to explore its nature, how to diagnose it, and how not to be fooled by it. Our first guest is Amber Day, a professor at Bryant University. She reveals that fake news has a base in satire and parody, although it has devolved into something more troubling. We learn about how the goals have evolved from bringing humour to bringing trust. What makes fake news so difficult is that many of its tactics mimic tried-and-true modes of satire and parody, such that we may be unable to judge between what is and what is not real. Because fake news is hard to identify, our next guest has developed software that can detect different types of fake news. Her name is Victoria Rubin, and she is an associate professor at the University of Western Ontario. She has developed the LiT.RL news verification browser, which identifies fake news and highlights it so you are informed before you click. We discuss how this browser was developed and how accurate it is compared to the human eye. In our SASS Class, we learn about one of the main reasons people fall for fake news. Our guest teacher is Gordon Pennycook, an assistant professor at the University of Regina. He has tried to understand why people tend to believe these falsified stories and has come up with a rather unexpected result. While partisan beliefs do play a role, the most important factor is one we can all appreciate: it's laziness. If you enjoy The Super Awesome Science Show, please take a minute to rate it on Apple Podcasts and be sure to tell a friend about the show. Thanks to you, we won the Canadian Podcast Award for Outstanding Science and Medicine Series.
Let’s keep the awesome momentum going together! Twitter: @JATetro Email: thegermguy@gmail.com Guests: Amber Day Web: https://departments.bryant.edu/english-and-cultural-studies/faculty/day-amber Victoria Rubin Web: https://victoriarubin.fims.uwo.ca/ Twitter: @vVctoriaRubin LiT.RL Browser: https://victoriarubin.fims.uwo.ca/2018/12/19/release-for-the-lit-rl-news-verification-browser-detecting-clickbait-satire-and-falsified-news/ Gordon Pennycook Web: https://www.uregina.ca/arts/psychology/faculty-staff/faculty/pennycook%20gordon.html Twitter: @GordPennycook
‘Wholeness quiets infinite phenomena?’ Does it, really?! Why do some people fall for pseudo-profound bullshit and others don’t? When we share fake news stories, is this because we're motivated to think they're real, or because we don't bother to think at all? And why do scientists fight tooth-and-nail over the mechanisms involved, such as “System 1 vs. System 2”, “Fast vs. Slow” and other frameworks? Gordon Pennycook joins Igor and Charles to discuss the critical distinction between a liar and a bullshitter, the cognitive reflection test, the random Deepak Chopra quote generator, the Ig Nobel prize, motivated reasoning, climate change beliefs, academic turf wars among dual process theorists, and how to stop yourself from compulsively retweeting fake news. Igor suggests that Gord only thought of studying bullshit after disbelief at one of Igor’s early talks, Gord reminds us that even the most enlightened social media platforms are in no hurry to help people STOP sharing news, and Charles unexpectedly finds common ground with the Chinese government. Welcome to Episode 15. Special Guest: Gordon Pennycook.
With publications such as "Prior exposure increases perceived accuracy of fake news", "Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning", and "The science of fake news", Gordon Pennycook is asking and answering analytical questions about the nature of human intuition and fake news. Gordon appeared on Data Skeptic in 2016 to discuss people's ability to recognize pseudo-profound bullshit. This episode explores his work in fake news.
A recent paper in the journal Judgment and Decision Making titled "On the reception and detection of pseudo-profound bullshit" explores empirical questions around a reader's ability to detect statements that may sound profound but are actually a collection of buzzwords that fail to contain adequate meaning or truth. These statements are definitively different from lies and nonsense, as we discuss in the episode. The paper proposes the Bullshit Receptivity scale (BSR) and empirically demonstrates that it correlates with existing metrics like the Cognitive Reflection Test, building confidence that it can be a useful, repeatable, empirical measure of a person's ability to distinguish pseudo-profound statements from genuinely profound ones. Additionally, the correlative results provide some insight into possible root causes of why individuals might find great profundity in these statements, based on other beliefs or cognitive measures. The paper's lead author, Gordon Pennycook, joins me to discuss the study's results. If you'd like some examples of pseudo-profound bullshit, you can randomly generate some based on Deepak Chopra's Twitter feed. To read other work from Gordon, check out his Google Scholar page and find him on Twitter via @GordPennycook. And just for fun, if you think you've dreamed up a Data Skeptic-related pseudo-profound bullshit statement, tweet it with the hashtag #pseudoprofound. If I see an especially clever or humorous one, I might want to send you a free Data Skeptic sticker.
On The Gist, the phrase you should listen for during Tuesday night’s State of the Union address. Then, researcher Gordon Pennycook explains lessons from his study “On the Reception and Detection of Pseudo-Profound Bullshit.” For the Spiel, please remember The Gist when you make your billions. Today’s sponsors: Harry’s, the shaving company that offers German-engineered blades, well-designed handles, and shipping right to your door. Visit Harrys.com for $5 off your first purchase with the promo code THEGIST. Join Slate Plus! Members get bonus segments, exclusive member-only podcasts, and more. Sign up for a free trial today at slate.com/gistplus. Learn more about your ad choices. Visit megaphone.fm/adchoices
Doctoral student Gordon Pennycook says much research needs to be done on the effects of smartphones on thinking.