Podcasts about Brian Nosek

  • 57 podcasts
  • 76 episodes
  • 50m average episode duration
  • 1 new episode per month
  • Latest episode: Jan 2, 2025

POPULARITY

[Popularity chart, 2017–2024]


Latest podcast episodes about Brian Nosek

Freakonomics Radio
Can Academic Fraud Be Stopped? (Update)

Jan 2, 2025 · 68:57


Probably not — the incentives are too strong. But a few reformers are trying. We check in on their progress, in an update to an episode originally published last year. (Part 2 of 2)

SOURCES:
Max Bazerman, professor of business administration at Harvard Business School.
Leif Nelson, professor of business administration at the University of California, Berkeley Haas School of Business.
Brian Nosek, professor of psychology at the University of Virginia and executive director at the Center for Open Science.
Ivan Oransky, distinguished journalist-in-residence at New York University, editor-in-chief of The Transmitter, and co-founder of Retraction Watch.
Joseph Simmons, professor of applied statistics and operations, information, and decisions at the Wharton School at the University of Pennsylvania.
Uri Simonsohn, professor of behavioral science at Esade Business School.
Simine Vazire, professor of psychology at the University of Melbourne and editor-in-chief of Psychological Science.

RESOURCES:
"How a Scientific Dispute Spiralled Into a Defamation Lawsuit," by Gideon Lewis-Kraus (The New Yorker, 2024).
"The Harvard Professor and the Bloggers," by Noam Scheiber (The New York Times, 2023).
"They Studied Dishonesty. Was Their Work a Lie?" by Gideon Lewis-Kraus (The New Yorker, 2023).
"Evolving Patterns of Extremely Productive Publishing Behavior Across Science," by John P.A. Ioannidis, Thomas A. Collins, and Jeroen Baas (bioRxiv, 2023).
"Hindawi Reveals Process for Retracting More Than 8,000 Paper Mill Articles" (Retraction Watch, 2023).
"Exclusive: Russian Site Says It Has Brokered Authorships for More Than 10,000 Researchers" (Retraction Watch, 2019).
"How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data," by Daniele Fanelli (PLOS One, 2009).
Lifecycle Journal.

EXTRAS:
"Why Is There So Much Fraud in Academia? (Update)," by Freakonomics Radio (2024).
"Freakonomics Goes to College, Part 1," by Freakonomics Radio (2012).

Freakonomics Radio
Why Is There So Much Fraud in Academia? (Update)

Dec 26, 2024 · 75:08


Some of the biggest names in behavioral science stand accused of faking their results. Last year, an astonishing 10,000 research papers were retracted. In a series originally published in early 2024, we talk to whistleblowers, reformers, and a co-author who got caught up in the chaos. (Part 1 of 2)

SOURCES:
Max Bazerman, professor of business administration at Harvard Business School.
Leif Nelson, professor of business administration at the University of California, Berkeley Haas School of Business.
Brian Nosek, professor of psychology at the University of Virginia and executive director at the Center for Open Science.
Joseph Simmons, professor of applied statistics and operations, information, and decisions at the Wharton School at the University of Pennsylvania.
Uri Simonsohn, professor of behavioral science at Esade Business School.
Simine Vazire, professor of psychology at the University of Melbourne and editor-in-chief of Psychological Science.

RESOURCES:
"More Than 10,000 Research Papers Were Retracted in 2023 — a New Record," by Richard Van Noorden (Nature, 2023).
"Data Falsificada (Part 1): 'Clusterfake,'" by Joseph Simmons, Leif Nelson, and Uri Simonsohn (Data Colada, 2023).
"Fabricated Data in Research About Honesty. You Can't Make This Stuff Up. Or, Can You?" by Nick Fountain, Jeff Guo, Keith Romer, and Emma Peaslee (Planet Money, 2023).
Complicit: How We Enable the Unethical and How to Stop, by Max Bazerman (2022).
"Evidence of Fraud in an Influential Field Experiment About Dishonesty," by Joseph Simmons, Leif Nelson, and Uri Simonsohn (Data Colada, 2021).
"False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant," by Joseph Simmons, Leif Nelson, and Uri Simonsohn (Psychological Science, 2011).

EXTRAS:
"Why Do We Cheat, and Why Shouldn't We?" by No Stupid Questions (2023).
"Is Everybody Cheating These Days?" by No Stupid Questions (2021).

a16z
Prediction Markets and Beyond

Dec 2, 2024 · 109:49


This episode was originally published on our sister podcast, web3 with a16z. If you're excited about the next generation of the internet, check out the show: https://link.chtbl.com/hrr_h-XC

We've heard about the premise and the promise of prediction markets for a long time, but they finally hit the main stage with the most recent election. So what worked (and didn't) this time? Are they really better than pollsters? Is polling dead? In this conversation, we tease apart the hype from the reality of prediction markets, from the recent election to market foundations, going more deeply into how, why, and where these markets work. We also discuss the design challenges and opportunities (including implications for builders throughout). And we cover other information aggregation mechanisms -- from peer prediction onward -- given that prediction markets are part of a broader category of information-elicitation and information-aggregation mechanisms. Where do domain experts, superforecasters, pollsters, and journalists come in (and out)? Where do (and don't) blockchain and crypto technologies come in -- and which specific features (decentralization, transparency, real-time, open source, etc.) matter most, and in what contexts? Finally, we discuss applications for prediction and decision markets -- from things we could do right away, to the near future, to sci-fi -- touching on trends like futarchy, AI entering the market, DeSci, and more.

Our special expert guests are Alex Tabarrok, professor of economics at George Mason University and Chair in Economics at the Mercatus Center, and Scott Duke Kominers, research partner at a16z crypto and professor at Harvard Business School -- both in conversation with Sonal Chokshi. As a reminder: none of the following should be taken as business, investment, legal, or tax advice; please see a16z.com/disclosures for more important information.

Resources (from links to research mentioned to more on the topics discussed):
The Use of Knowledge in Society, by Friedrich Hayek (American Economic Review, 1945)
Everything is priced in, by rsd99 (r/wallstreetbets, 2019)
Idea Futures (aka prediction markets, information markets), by Robin Hanson (1996)
Auctions: The Social Construction of Value, by Charles Smith
Social value of public information, by Stephen Morris and Hyun Song Shin (American Economic Review, December 2002)
Using prediction markets to estimate the reproducibility of scientific research, by Anna Dreber, Thomas Pfeiffer, Johan Almenberg, Siri Isaksson, Brad Wilson, Yiling Chen, Brian Nosek, and Magnus Johannesson (Proceedings of the National Academy of Sciences, November 2015)
A solution to the single-question crowd wisdom problem, by Dražen Prelec, Sebastian Seung, and John McCoy (Nature, January 2017)
Targeting high ability entrepreneurs using community information: Mechanism design in the field, by Reshmaan Hussam, Natalia Rigol, and Benjamin Roth (American Economic Review, March 2022)
Information aggregation mechanisms: concept, design, and implementation for a sales forecasting problem, by Charles Plott and Kay-Yut Chen (Hewlett Packard Laboratories, March 2002)
If I had a million [on deciding to dump the CEO or not], by Robin Hanson (2008)
Futarchy: Vote values, but bet beliefs, by Robin Hanson (2013)
From prediction markets to info finance, by Vitalik Buterin (November 2024)
Composability is innovation, by Linda Xie (June 2021)
Composability is to software as compounding interest is to finance, by Chris Dixon (October 2021)
Resources & research on DAOs, a16z crypto

Stay updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
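A note for the technically curious: Robin Hanson's logarithmic market scoring rule (LMSR), which several of the resources above build on, is the standard automated-market-maker pricing rule for prediction markets, and it fits in a few lines. The sketch below is an illustration under assumed parameters (the liquidity constant b and a two-outcome market), not anything presented in the episode.

```python
import math

def lmsr_cost(quantities, b=100.0):
    """Hanson's LMSR cost function; quantities[i] is outstanding shares of outcome i."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b=100.0):
    """Instantaneous prices: they sum to 1, so they read as probabilities."""
    weights = [math.exp(q / b) for q in quantities]
    total = sum(weights)
    return [w / total for w in weights]

def trade_cost(quantities, outcome, shares, b=100.0):
    """What a trader pays the market maker to buy `shares` of `outcome`."""
    after = list(quantities)
    after[outcome] += shares
    return lmsr_cost(after, b) - lmsr_cost(quantities, b)

q = [0.0, 0.0]                                # a fresh two-outcome ("yes"/"no") market
print(lmsr_prices(q))                         # [0.5, 0.5]: no information yet
print(round(trade_cost(q, 0, 50), 2))         # ~28.1: cost of buying 50 "yes" shares
q[0] += 50
print([round(p, 3) for p in lmsr_prices(q)])  # "yes" price rises to ~0.622
```

The information-aggregation point the guests discuss is visible here: every trade moves the posted probability, and the market maker's worst-case loss is bounded (b times the log of the number of outcomes), which is the subsidy paid for eliciting that information.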

web3 with a16z
Prediction Markets and Beyond

Nov 22, 2024 · 108:05


with @atabarrok @skominers @smc90

We've heard about the premise and the promise of prediction markets for a long time, but they finally hit the main stage with the most recent election. So what worked (and didn't) this time? Are they better than pollsters, journalists, domain experts, superforecasters? In this conversation, we tease apart the hype from the reality of prediction markets, from the recent election to market foundations, going more deeply into how, why, and where these markets work. We also discuss the design challenges and opportunities, including implications for builders throughout. And we cover other information aggregation mechanisms -- from peer prediction onward -- given that prediction markets are part of a broader category of information-elicitation and information-aggregation mechanisms. Where do (and don't) blockchain and crypto technologies come in -- and which specific features (decentralization, transparency, real-time, open source, etc.) matter most, and in what contexts? Finally, we discuss applications for prediction and decision markets -- from things we could do right away to the near and distant future -- touching on everything from corporate decisions and scientific replication to trends like AI, DeSci, futarchy/governance, and more.

Our special expert guests are Alex Tabarrok, professor of economics at George Mason University and Chair in Economics at the Mercatus Center, and Scott Duke Kominers, research partner at a16z crypto and professor at Harvard Business School -- both in conversation with Sonal Chokshi.

RESOURCES (from links to research mentioned to more on the topics discussed):
The Use of Knowledge in Society, by Friedrich Hayek (American Economic Review, 1945)
Everything is priced in, by rsd99 (r/wallstreetbets, 2019)
Idea Futures (aka prediction markets, information markets), by Robin Hanson (1996)
Auctions: The Social Construction of Value, by Charles Smith
Social value of public information, by Stephen Morris and Hyun Song Shin (American Economic Review, December 2002)
Using prediction markets to estimate the reproducibility of scientific research, by Anna Dreber, Thomas Pfeiffer, Johan Almenberg, Siri Isaksson, Brad Wilson, Yiling Chen, Brian Nosek, and Magnus Johannesson (Proceedings of the National Academy of Sciences, November 2015)
A solution to the single-question crowd wisdom problem, by Dražen Prelec, Sebastian Seung, and John McCoy (Nature, January 2017)
Targeting high ability entrepreneurs using community information: Mechanism design in the field, by Reshmaan Hussam, Natalia Rigol, and Benjamin Roth (American Economic Review, March 2022)
Information aggregation mechanisms: concept, design, and implementation for a sales forecasting problem, by Charles Plott and Kay-Yut Chen (Hewlett Packard Laboratories, March 2002)
If I had a million [on deciding to dump the CEO or not], by Robin Hanson (2008)
Futarchy: Vote values, but bet beliefs, by Robin Hanson (2013)
From prediction markets to info finance, by Vitalik Buterin (November 2024)
Composability is innovation, by Linda Xie (June 2021)
Composability is to software as compounding interest is to finance, by Chris Dixon (October 2021)
Resources & research on DAOs, a16z crypto

Clearer Thinking with Spencer Greenberg
Highs and lows on the road out of the replication crisis (with Brian Nosek)

Nov 8, 2024 · 98:18


Read the full transcript here.

How much more robust have the social sciences become since the beginnings of the replication crisis? What fraction of replication failures indicate that the original result was a false positive? What do we know with relative certainty about human nature? How much of a difference is there between how people behave in a lab setting and how they behave out in the world? Why has there been such a breakdown of trust in the sciences over the past few decades? How can scientists better communicate uncertainty in their findings to the public? To what extent are replication failures a problem in the other sciences? How useful is the Implicit Association Test (IAT)? What does it mean if someone can predict how they'll score on the IAT? How do biases differ from associations? What should (and shouldn't) the IAT be used for? Why do replications often show smaller effect sizes than the original research showed? What is the Lifecycle Journals project?

Brian Nosek co-developed the Implicit Association Test, a method that advanced research and public interest in implicit bias. Nosek co-founded three non-profit organizations: Project Implicit, to advance research and education about implicit bias; the Society for the Improvement of Psychological Science, to improve the research culture in his home discipline; and the Center for Open Science (COS), to improve rigor, transparency, integrity, and reproducibility across research disciplines. Nosek is Executive Director of COS and a professor at the University of Virginia. Nosek's research and applied interests aim to understand why people and systems produce behaviors that are contrary to intentions and values; to develop, implement, and evaluate solutions to align practices with values; and to improve research credibility and cultures to accelerate progress. Connect with him on Bluesky or LinkedIn, or learn more about him on the COS website.

Staff:
Spencer Greenberg — Host / Director
Josh Castle — Producer
Ryan Kessler — Audio Engineer
Uri Bram — Factotum
WeAmplify — Transcriptionists

Music:
Broke for Free
Josh Woodward
Lee Rosevere
Quiet Music for Tiny Robots
wowamusic
zapsplat.com

Affiliates:
Clearer Thinking
GuidedTrack
Mind Ease
Positly
UpLift
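One of the questions above, why replications often show smaller effect sizes than the originals, has a purely statistical component that is easy to see in simulation: when only significant results get published, the published estimates are selected for being overestimates. Here is a minimal sketch (an illustration, not material from the episode; the true effect size, group size, and thresholds are assumed for the example):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d, n, runs = 0.2, 30, 20_000   # assumed: small true effect, 30 subjects per group

published = []
for _ in range(runs):
    treat = rng.normal(true_d, 1, n)
    control = rng.normal(0.0, 1, n)
    t, p = stats.ttest_ind(treat, control)
    if p < 0.05 and t > 0:          # the significance filter: only "wins" get published
        pooled_sd = np.sqrt((treat.var(ddof=1) + control.var(ddof=1)) / 2)
        published.append((treat.mean() - control.mean()) / pooled_sd)

print(f"true effect size:        d = {true_d}")
print(f"mean published estimate: d = {np.mean(published):.2f}")  # roughly 3x too large
```

An exact replication of such a study estimates the true d of about 0.2, so the replication looks "smaller" even when nothing was done wrong; selective publication alone produces the pattern.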

Clearer Thinking with Spencer Greenberg
What do we know for sure about human psychology? (with Simine Vazire)

Sep 11, 2024 · 81:10


Read the full transcript here.

How much progress has psychology made on the things that matter most to us? What are some psychological findings we feel pretty confident are true? How much consensus is there about the Big 5 personality traits? What are the points of disagreement about the Big 5? Are traits the best way of thinking about personality? How consistent are the Big 5 traits across cultures? How accurately do people self-report their own personality? When are psychophysical measures more or less useful than self-report measures? How much credence should we lend to the concept of cognitive dissonance? What's the next phase of improvement in the social sciences? Has replicability improved among the social sciences in, say, the last decade? What percent of papers in top journals contain fraud? What percent of papers in top journals are likely unreplicable? Is it possible to set the bar for publishing too high? How can universities maintain a high level of quality in their professors and researchers without pressuring them so hard to publish constantly? What is the simplest valid analysis for a given study?

Simine Vazire's research examines whether and how science self-corrects, focusing on psychology. She studies the research methods and practices used in psychology, as well as structural systems in science, such as peer review. She also examines whether we know ourselves, and where our blind spots are in our self-knowledge. She teaches research methods. She is editor-in-chief of Psychological Science (as of 1 Jan 2024) and co-founder (with Brian Nosek) of the Society for the Improvement of Psychological Science. Learn more about her and her work at simine.com.

Further reading:
"How Replicable Are Links Between Personality Traits and Consequential Life Outcomes? The Life Outcomes of Personality Replication Project," by Christopher J. Soto
"Machine learning uncovers the most robust self-report predictors of relationship quality across 43 longitudinal couples studies," by Joel, Eastwick, Allison, and Wolf
Note from Spencer: I misremembered this study as trying to predict breakups when actually the variable they found they couldn't predict is change in relationship quality over time. The authors said that "relationship-quality change (i.e., increases or decreases in relationship quality over the course of a study) was largely unpredictable from any combination of self-report variables."

Staff:
Spencer Greenberg — Host / Director
Josh Castle — Producer
Ryan Kessler — Audio Engineer
Uri Bram — Factotum
WeAmplify — Transcriptionists

Music:
Broke for Free
Josh Woodward
Lee Rosevere
Quiet Music for Tiny Robots
wowamusic
zapsplat.com

Affiliates:
Clearer Thinking
GuidedTrack
Mind Ease
Positly
UpLift

The Studies Show
Episode 49: Scientific publishing

Sep 10, 2024 · 76:13


It's in a peer-reviewed paper, so it must be true. Right? Alas, you can only really hold this belief if you don't know about the peer-review system, and scientific publishing more generally. That's why, in this episode of The Studies Show, Tom and Stuart break down the traditional scientific publishing process, discuss how it leads science astray, and talk about the ways in which, if we really cared, we could make it better.

The Studies Show is brought to you by Works in Progress magazine. Their new September 2024 issue is out now, and is brimming with fascinating articles, including one on lab-grown diamonds, one on genetically engineered mosquitoes, and one on the evolution of drip coffee. Check it out at worksinprogress.co.

Show Notes
* A history of Philosophical Transactions, the oldest scientific journal
* Hooke (1665) on “A Spot in One of the Belts of Jupiter”
* The original paper proposing the h-index
* Useful 2017 paper on perverse incentives and hypercompetition in science
* Goodhart's Law
* Bad behaviour by scientists:
* What is a “predatory journal”?
* Science investigates paper mills and their bribery tactics
* The best example yet seen of salami slicing
* Brief discussion of citation manipulation
* Elisabeth Bik on citation rings
* The recent discovery of sneaked citations, hidden in the metadata of a paper
* The Spanish scientist who claims to publish a scientific paper every two days
* Science report on the fake anemone paper that the journal didn't want to retract
* Transcript of Ronald Fisher's 1938 lecture in which he said his famous line about statisticians only being able to offer a post-mortem
* 2017 Guardian article about the strange and highly profitable world of scientific publishing
* Brian Nosek's 2012 “scientific utopia” paper
* Stuart's 2022 Guardian article on how we could do away with scientific papers altogether
* The new Octopus platform for publishing scientific research
* Roger Giner-Sorolla's article on “aesthetic standards” in scientific publishing and how they damage science
* The Transparency and Openness Practices guidelines that journals can be rated on
* Registered Reports: a description, and a further discussion from Chris Chambers
* 2021 paper showing fewer positive results in Registered Reports compared with standard scientific publication

Credits
The Studies Show is produced by Julian Mayers at Yada Yada Productions.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.thestudiesshowpod.com/subscribe
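A side note on the h-index, since the show notes link the original paper right next to the Goodhart's Law entry: the metric is deliberately simple (an author has index h if h of their papers have at least h citations each), which is also what makes it easy to optimize for. A sketch of the definition, illustrative only and not from the episode:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank   # the rank-th most-cited paper still has >= rank citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each
print(h_index([100, 1, 1]))       # 1: one blockbuster paper doesn't raise h
```

Salami slicing and citation rings, both discussed in the notes above, are exactly the behaviors such a threshold metric rewards.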

On Tech Ethics with CITI Program
Open Science Principles, Practices, and Technologies - On Tech Ethics

Aug 6, 2024 · 29:07


This episode discusses the principles, practices, and technologies associated with open science and underscores the critical role that various stakeholders, including researchers, funders, publishers, and institutions, play in advancing it. Our guest today is Brian Nosek, the co-founder and Executive Director of the Center for Open Science and a professor at the University of Virginia, who focuses on research credibility, implicit bias, and aligning practices with values. Brian also co-developed the Implicit Association Test and co-founded Project Implicit and the Society for the Improvement of Psychological Science.

Additional resources:
Center for Open Science: https://www.cos.io/
The Open Science Framework: https://www.cos.io/products/osf
FORRT (Framework for Open and Reproducible Research Training): https://forrt.org/
The Turing Way: https://book.the-turing-way.org/
CITI Program's “Preparing for Success in Scholarly Publishing” course: https://about.citiprogram.org/course/preparing-for-success-in-scholarly-publishing/
CITI Program's “Protocol Development and Execution: Beyond a Concept” course: https://about.citiprogram.org/course/protocol-development-execution-beyond-a-concept/
CITI Program's “Technology Transfer” course: https://about.citiprogram.org/course/technology-transfer/

Freakonomics Radio
573. Can Academic Fraud Be Stopped?

Jan 18, 2024 · 62:36


Probably not — the incentives are too strong. Scholarly publishing is a $28 billion global industry, with misconduct at every level. But a few reformers are gaining ground. (Part 2 of 2)

SOURCES:
Max Bazerman, professor of business administration at Harvard Business School.
Leif Nelson, professor of business administration at the University of California, Berkeley Haas School of Business.
Brian Nosek, professor of psychology at the University of Virginia and executive director at the Center for Open Science.
Ivan Oransky, distinguished journalist-in-residence at New York University, editor-in-chief of The Transmitter, and co-founder of Retraction Watch.
Joseph Simmons, professor of applied statistics and operations, information, and decisions at the Wharton School at the University of Pennsylvania.
Uri Simonsohn, professor of behavioral science at Esade Business School.
Simine Vazire, professor of psychology at the University of Melbourne and editor-in-chief of Psychological Science.

RESOURCES:
"The Harvard Professor and the Bloggers," by Noam Scheiber (The New York Times, 2023).
"They Studied Dishonesty. Was Their Work a Lie?" by Gideon Lewis-Kraus (The New Yorker, 2023).
"Evolving Patterns of Extremely Productive Publishing Behavior Across Science," by John P.A. Ioannidis, Thomas A. Collins, and Jeroen Baas (bioRxiv, 2023).
"Hindawi Reveals Process for Retracting More Than 8,000 Paper Mill Articles" (Retraction Watch, 2023).
"Exclusive: Russian Site Says It Has Brokered Authorships for More Than 10,000 Researchers" (Retraction Watch, 2019).
"How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data," by Daniele Fanelli (PLOS One, 2009).

EXTRAS:
"Why Is There So Much Fraud in Academia?" by Freakonomics Radio (2024).
"Freakonomics Goes to College, Part 1," by Freakonomics Radio (2012).

Freakonomics Radio
572. Why Is There So Much Fraud in Academia?

Jan 11, 2024 · 74:06


Some of the biggest names in behavioral science stand accused of faking their results. Last year, an astonishing 10,000 research papers were retracted. We talk to whistleblowers, reformers, and a co-author who got caught up in the chaos. (Part 1 of 2)

SOURCES:
Max Bazerman, professor of business administration at Harvard Business School.
Leif Nelson, professor of business administration at the University of California, Berkeley Haas School of Business.
Brian Nosek, professor of psychology at the University of Virginia and executive director at the Center for Open Science.
Joseph Simmons, professor of applied statistics and operations, information, and decisions at the Wharton School at the University of Pennsylvania.
Uri Simonsohn, professor of behavioral science at Esade Business School.
Simine Vazire, professor of psychology at the University of Melbourne and editor-in-chief of Psychological Science.

RESOURCES:
"More Than 10,000 Research Papers Were Retracted in 2023 — a New Record," by Richard Van Noorden (Nature, 2023).
"Data Falsificada (Part 1): 'Clusterfake,'" by Joseph Simmons, Leif Nelson, and Uri Simonsohn (Data Colada, 2023).
"Fabricated Data in Research About Honesty. You Can't Make This Stuff Up. Or, Can You?" by Nick Fountain, Jeff Guo, Keith Romer, and Emma Peaslee (Planet Money, 2023).
Complicit: How We Enable the Unethical and How to Stop, by Max Bazerman (2022).
"Evidence of Fraud in an Influential Field Experiment About Dishonesty," by Joseph Simmons, Leif Nelson, and Uri Simonsohn (Data Colada, 2021).
"False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant," by Joseph Simmons, Leif Nelson, and Uri Simonsohn (Psychological Science, 2011).

EXTRAS:
"Why Do We Cheat, and Why Shouldn't We?" by No Stupid Questions (2023).
"Is Everybody Cheating These Days?" by No Stupid Questions (2021).

BJKS Podcast
84. Brian Nosek: Improving science, the past & future of the Center for Open Science, and failure in science

Dec 8, 2023 · 62:09 · Transcription available


Brian Nosek is a professor of psychology at the University of Virginia, and co-founder and Executive Director of the Center for Open Science. In this conversation, we discuss the Center for Open Science, Brian's early interest in improving science, how COS got started, what Brian would like to do in the future, and how to figure out whether ideas are working.

BJKS Podcast is a podcast about neuroscience, psychology, and anything vaguely related, hosted by Benjamin James Kuper-Smith.

Support the show: https://geni.us/bjks-patreon

Timestamps
00:00: Brian's early interest in improving science
15:24: How the Center for Open Science got funded (by John and Laura Arnold)
26:08: How long is COS financed into the future?
29:01: What if COS isn't benefitting science anymore?
35:42: Is Brian a scientist or an entrepreneur?
40:58: The future of the Center for Open Science
51:13: A book or paper more people should read
54:42: Something Brian wishes he'd learnt sooner
58:53: Advice for PhD students/postdocs

Podcast links
Website: https://geni.us/bjks-pod
Twitter: https://geni.us/bjks-pod-twt

Brian's links
Website: https://geni.us/nosek-web
Google Scholar: https://geni.us/nosek-scholar
Twitter: https://geni.us/nosek-twt

Ben's links
Website: https://geni.us/bjks-web
Google Scholar: https://geni.us/bjks-scholar
Twitter: https://geni.us/bjks-twt

References & Links
Article about John Arnold: https://www.wired.com/2017/01/john-arnold-waging-war-on-bad-science/
Scientific virtues (including stupidity): https://slimemoldtimemold.com/2022/02/10/the-scientific-virtues/
Cohen (1994). The earth is round (p < .05)

Meikles & Dimes
103: Brian Nosek | From Ruining His Career to Revolutionizing Science

Nov 28, 2023 · 22:17


Brian Nosek is a social-cognitive psychologist, professor at the University of Virginia, and co-founder and director of the Center for Open Science. In 2011, Brian and his colleagues launched the Reproducibility Project, which would ultimately transform science forever.

In this episode we discuss the following:
Reputation is how people perceive us. But integrity is what we get to choose for ourselves.
We can hold ourselves accountable for our integrity, but when we worry about our reputation, we're prone to get led astray.
If we try to control our reputation, we're prone to avoid risk (e.g., we don't do the things we should do because we might make people mad).
If we try to control our reputation, we may deviate from our values in an attempt to keep other people happy.
We undermine ourselves when we prioritize reputation over integrity. Our long-term reputation will ultimately derive from our integrity.
You can't control your reputation. You can control your integrity.

Brian was told he was ruining his career. But by focusing on integrity over reputation, Brian and his colleagues revolutionized science.

Follow Brian:
Twitter: https://twitter.com/BrianNosek
LinkedIn: https://www.linkedin.com/in/brian-nosek-682b17114/

Follow Me:
Twitter: https://twitter.com/nate_meikle
LinkedIn: https://www.linkedin.com/in/natemeikle/
Instagram: https://www.instagram.com/nate_meikle/

Fularsız Entellik
Behavioral Economics 4: Honesty Lies and the Replication Crisis

Nov 11, 2023 · 21:56


We inflated the behavioral economics bubble over and over; now we're letting the air out. It has turned into a detective story built from all kinds of fraud. There are two lead actors, so I've split the finale into two parts: today's part focuses on Dan Ariely's honesty studies. Along the way, we'll also see, through the replication crisis, how science stumbles forward. All sources are below as usual; thanks to my patrons.

Topics:
(00:05) The behavioral bubble
(02:05) Recap of the previous episodes
(04:20) Dan Ariely
(06:41) The Ten Commandments study
(08:13) The replication crisis
(11:57) p-hacking
(15:50) Skin color and red cards
(17:28) Many Labs
(19:15) The replication of Ariely's Ten Commandments study
(21:26) Patreon thanks

Sources:
Paper (pdf): The Dishonesty of Honest People (2008)
Article: The Mind of a Con Man (about Diederik Stapel)
Video: The scientist who faked over 50 studies
Paper: The Reproducibility Project: Psychology (2015)
Brian Nosek and Many Labs
Video: The Fall of a Superstar Psychologist
Article: The replication crisis has engulfed economics
Paper: Replication of the Ten Commandments study
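For context on the p-hacking segment (11:57): one common form is optional stopping, peeking at the data after every batch of participants and stopping as soon as p < .05. A minimal simulation sketch (an illustration, not from the episode; the batch size, sample cap, and run count are assumed) shows the inflated false-positive rate:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def peeking_experiment(batch=10, max_n=100):
    """Add `batch` subjects per group, test after each batch, stop at p < .05.
    The null is true here, so every 'significant' stop is a false positive."""
    a = np.empty(0)
    b = np.empty(0)
    while len(a) < max_n:
        a = np.append(a, rng.normal(0, 1, batch))
        b = np.append(b, rng.normal(0, 1, batch))
        if stats.ttest_ind(a, b).pvalue < 0.05:
            return True
    return False

runs = 5_000
rate = sum(peeking_experiment() for _ in range(runs)) / runs
print(f"false-positive rate with peeking: {rate:.1%}")  # far above the nominal 5%
```

Preregistration and Registered Reports, the reforms associated with Nosek and the Center for Open Science, close off exactly this garden of forking paths by fixing the stopping rule and the analysis in advance.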

AUDIT 15 FUN
Episode 108 - Data Controversy - Brian Nosek

Oct 1, 2023 · 21:10


In the latest AUDIT 15 FUN episode, I interviewed Brian Nosek, a professor at the University of Virginia and Executive Director at the Center for Open Science, about Francesca Gino, a dishonesty researcher accused by the Data Colada team of data fraud. What evidence did the Data Colada team provide to support the fraud allegations? Are there alternative explanations that could clear Gino of any wrongdoing? In the broader context, how can we ensure researchers maintain honesty in their work? Listen to the episode to hear Brian's insights and opinions.

Full transparency for this episode: Brian is a fundraising organizer for the Data Colada team. The views, thoughts, and opinions expressed by those interviewed on the AUDIT 15 FUN podcast are solely their own and do not represent the views, thoughts, and opinions of Jon Taber or of the AUDIT 15 FUN podcast.

R3ciprocity Podcast
Brian Nosek On Courage & Creating The Open Science Foundation

Sep 28, 2023 · 54:23


Professor Brian Nosek discusses the importance of courage and how to navigate difficult challenges and questions in science. Brian Arthur Nosek is a social-cognitive psychologist, professor of psychology at the University of Virginia, and co-founder and director of the Center for Open Science. He also co-founded the Society for the Improvement of Psychological Science and Project Implicit. Brian received his Ph.D. from Yale University in 2002.

The Ongoing Transformation
Open Science: Moving from Possible to Expected to Required

Sep 26, 2023 · 31:22


A decade ago, University of Virginia psychology professor Brian Nosek cofounded an unusual nonprofit, the Center for Open Science. It's been a cheerleader, enabler, and nagger to convince scientists that making their methods, data, and papers available to others makes for better science. The Center for Open Science has built tools to register analysis plans and hypotheses before data are collected. It campaigns for authors and journals to state explicitly whether and where data and other research materials are available. Gradually, practices that were considered fringe are becoming mainstream. The White House declared 2023 the Year of Open Science.

Nosek refers to the pyramid of culture change as his strategy to push for reforms: first make a better practice possible, then easy, expected, rewarding, and finally, required. It starts with building infrastructure, then experience, reward systems, and ultimately policy.

In this podcast, Brian Nosek joins host Monya Baker to discuss the movement of scientific ideals toward reality.

Resources:
Center for Open Science
Transparency and Openness Guidelines
Reproducibility Project: Psychology
Reproducibility Project: Cancer Biology
Pyramid of Social Change

The HPS Podcast - Conversations from History, Philosophy and Social Studies of Science
BONUS EPISODE - Simine Vazire on Making Science Better

Aug 18, 2023 · 23:47 · Transcription available


Welcome to a special bonus episode of The HPS Podcast with Professor of Psychology Simine Vazire, discussing the ways in which HPS scholars and scientists can work together to create better science. We are releasing the episode to coincide with the campaign put together by Simine and others to support the legal defence of Data Colada, a group of professors who identify concerns with the integrity of published research. Members of Data Colada are being sued by Francesca Gino, a Harvard Business School professor, after they published blog posts raising concerns about the data integrity of four papers on which Gino was a co-author. As the group says, “defending science requires defending legitimate scientific criticism against legal bullying”.

In this podcast episode, Indigo Keel talks with Simine about more than just this one issue. They discuss Simine's connection to History and Philosophy of Science, the need for scientists to reflect on the practices of their discipline, and issues that have arisen out of the replication crisis and cases of alleged scientific misconduct, including the Francesca Gino case. Simine highlights how philosophers of science can contribute to making science better.

Relevant links:
The GoFundMe Campaign to Support Data Colada's Legal Defense
Vox Article: Is it Defamation to Point out Scientific Research Fraud?
Data Colada Post (Part 1): “Clusterfake”
Data Colada Post (Part 2): “My Class Year is Harvard”
Data Colada Post (Part 3): “The Cheaters are Out of Order”
Data Colada Post (Part 4): “Forgetting the Words”

A transcript of the episode can be found here: https://www.hpsunimelb.org/post/bonus-episode-transcript

Simine studies the research methods and practices used in psychology, as well as structural systems in science, such as peer review. Simine is editor-in-chief of Collabra: Psychology, one of the PIs on the repliCATS project (with Fiona Fidler), and co-founder (with Brian Nosek) of the Society for the Improvement of Psychological Science.

Thanks for listening to The HPS Podcast with your current hosts, Samara Greenwood and Carmelina Contarino. You can find more about us on our blog, website, Bluesky, Twitter, Instagram, and Facebook feeds. This podcast would not be possible without the support of the School of Historical and Philosophical Studies at the University of Melbourne. www.hpsunimelb.org

Historically Thinking: Conversations about historical knowledge and how we achieve it
Episode 313: Intellectual Humility, Social Psychologically Speaking

Apr 17, 2023 · 51:16


This is the second of our continuing series on intellectual humility and historical thinking. Today I'm interested in exploring the social science of intellectual humility. Igor Grossmann is a social psychologist, an Associate Professor of Psychology at the University of Waterloo in Canada. “Most of our work,” he writes, describing his lab, “either focuses on how people make sense of the world around them—their expectations, lay theories, meta-cognitions, forecasts—or it concerns how larger cultural forces impact human behavior and societal change.” That makes him the perfect person to talk to about intellectual humility and historical thinking.

For Further Investigation:
Tenelle Porter, Abdo Elnakouri, Ethan A. Meyers, Takuya Shibayama, Eranda Jayawickreme, and Igor Grossmann, "Predictors and consequences of intellectual humility"
The Wisdom and Culture Lab
World After COVID
Igor Grossmann, Oliver Twardus, Michael E. W. Varnum, Eranda Jayawickreme, and John McLevey, "Expert Predictions of Societal Change: Insights from the World after COVID Project"
Brian Nosek of the University of Virginia (and the Center for Open Science) discusses the replication crisis with Russ Roberts
The Center for Open Science has been a force for change in the "replication crisis"

The Science of Success
Self Help For Smart People - How You Can Spot Bad Science & Decode Scientific Studies with Dr. Brian Nosek

Apr 13, 2023 · 54:43


In this episode, we show how you can decode scientific studies and spot bad science by digging deep into the tools and skills you need to be an educated consumer of scientific information. Are you tired of seeing seemingly outrageous studies published in the news, only to see the exact opposite published a week later? What makes scientific research useful and valid? How can you, as a non-scientist, read and understand scientific information in a simple and straightforward way that gets you closer to the truth, and apply those lessons to your life? We discuss this and much more with Dr. Brian Nosek.

Dr. Brian Nosek is the co-founder and Executive Director of the Center for Open Science and a professor of psychology at the University of Virginia. Brian led the Reproducibility Project, in which some 270 of his peers attempted to reproduce 100 published psychology studies to see whether the results would hold up. This work shed light on publication bias in the science of psychology and much more.

The Not Unreasonable Podcast
Brian Nosek on the Gap Between Values and Actions

Oct 8, 2022 · 70:39 · Transcription available


Brian Nosek has been at the center of the two most important recent social revolutions in academia. First is implicit bias, where Brian co-founded Project Implicit (http://projectimplicit.net/) based on a pretty incredible idea: that we don't do what we say we value. The concept of implicit bias has really taken off, and the practice of implicit bias detection and training has gone "way out in front of the research," as we discuss.

While he was busy kicking off a fundamental change in our society (felt very strongly in academia), he decided to upend (and massively upgrade) the culture of research itself by discovering that huge swaths of empirical research fail to replicate. I'm no academic, but I would find this terrifying if I were. As Brian says in the interview, "in some fields, people still don't like getting an email from me," because that means he's about to try to replicate their work!

How was Brian able to pull all this off? There's even a technology innovation hidden in all this that makes his work possible. He's a true innovator, and it's an honor to have him on the show!

show notes: https://notunreasonable.com/?p=7611
youtube: https://youtu.be/NkKuF--5V60

Everything Hertz
161: The memo (with Brian Nosek)

Sep 12, 2022 · 47:58


Dan and James are joined by Brian Nosek (co-founder and Executive Director of the Center for Open Science) to discuss the recent White House Office of Science and Technology Policy memo ensuring free, immediate, and equitable access to federally funded research. They also cover the implications of this memo for scientific publishing, as well as the mechanics of culture change in science.

Links:
Open Science Framework hits half a million users (https://www.cos.io/blog/celebrating-a-global-open-science-community)
The White House memo (https://www.whitehouse.gov/wp-content/uploads/2022/08/08-2022-OSTP-Public-Access-Memo.pdf)
Brian on Twitter (https://twitter.com/BrianNosek)

Other links:
Dan on Twitter (https://www.twitter.com/dsquintana)
James on Twitter (https://www.twitter.com/jamesheathers)
Everything Hertz on Twitter (https://www.twitter.com/hertzpodcast)
Everything Hertz on Facebook (https://www.facebook.com/everythinghertzpodcast/)

Support us on Patreon (https://www.patreon.com/hertzpodcast) and get bonus stuff!
$1 per month: a 20% discount on Everything Hertz merchandise, access to the occasional bonus episode, and the warm feeling you're supporting the show
$5 per month or more: all the stuff you get in the one-dollar tier PLUS a bonus episode every month

Citation:
Quintana, D.S., Heathers, J.A.J. (Hosts). (2022, August 31). "161: The memo (with Brian Nosek)," Everything Hertz [Audio podcast]. DOI: 10.17605/OSF.IO/A7D86

Special Guest: Brian Nosek.

The Body of Evidence
078 - Immunity / Omicron / Reproducibility

Jan 9, 2022 · 62:50


What does the body of evidence have to say on the topic of the immune system? Plus: the omicron variant and its accompanying public health measures, and we go over what happens when scientists try to fulfill one of the promises of science: replicating results.

Block 1: (2:46) Immunity: what it does, self versus non-self, antigens, HLA, innate versus adaptive, fever, skin
Block 2: (12:07) Immunity: Geert Vanden Bossche, eosinophils, neutrophils, G-CSF, immune boosters, vitamin C, antibodies, T and B cells, immune memory, vaccines
Block 3: (30:40) Omicron, public health measures, and booster doses
Block 4: (48:34) Reproducibility of cancer studies

* Jingle by Joseph Hackl
* Theme music: “Fall of the Ocean Queen“ by Joseph Hackl.
* Assistant researcher: Nicholas Koziris

To contribute to The Body of Evidence, go to our Patreon page at: http://www.patreon.com/thebodyofevidence/.
To make a one-time donation to our show, you can now use PayPal! https://www.paypal.com/donate?hosted_button_id=9QZET78JZWCZE
Patrons get a bonus show on Patreon called “Digressions”! Check it out!

References:
1) Driving doggos: https://www.bbc.com/news/av/uk-15864761
2) Explanatory videos on the immune system: https://www.osmosis.org/learn/Introduction_to_the_immune_system
3) Understanding how vaccines work: https://www.cdc.gov/vaccines/hcp/conversations/understanding-vacc-work.html
4) Understanding how COVID-19 vaccines work: https://www.cdc.gov/coronavirus/2019-ncov/vaccines/different-vaccines/how-they-work.html
5) Cochrane review on vitamin C for the common cold: https://www.cochrane.org/CD000980/ARI_vitamin-c-for-preventing-and-treating-the-common-cold
6) Measles vaccine information: https://www.canada.ca/en/public-health/services/publications/healthy-living/canadian-immunization-guide-part-4-active-vaccines/page-12-measles-vaccine.html#p4c11a4 and https://www.canada.ca/en/public-health/services/publications/diseases-conditions/measles-rubella-surveillance/2021/week-46.html
7) Interview with Dr. Paul Offit for New York Mag's Intelligencer: https://nymag.com/intelligencer/2021/12/omicron-dr-paul-offit-is-skeptical-of-boosters-for-all.html
8) Brian Nosek paper 1: https://elifesciences.org/articles/67995
9) Brian Nosek paper 2: https://elifesciences.org/articles/71601
10) New Scientist article: https://www.newscientist.com/article/2300455-investigation-fails-to-replicate-most-cancer-biology-lab-findings/
11) Amgen's paper on the lack of replicability of cancer research: https://www.nature.com/articles/483531a
12) Jonathan's paper on circulating microRNA signatures for cancer: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5528532/

Music Credits:
Just Deep 2 by Sascha Ende®
Link: https://filmmusic.io/song/312-just-deep-2
License: https://filmmusic.io/standard-license

Stanford Psychology Podcast
25 - Brian Nosek: The Pursuit of Open and Reproducible Science

Dec 23, 2021 · 51:23


Joseph chats with Brian Nosek, co-founder and Executive Director of the Center for Open Science. The Center's mission is to increase the openness, integrity, and reproducibility of scientific research. Brian is also a professor of psychology at the University of Virginia, where he runs the Implicit Social Cognition Lab. Brian studies the gap between values and practices, with the goal of understanding why the gap exists, its consequences, and how to reduce it. Brian co-founded Project Implicit, a collaborative research project that examines implicit cognition: thoughts and attitudes that occur outside our awareness. In 2015, he was named one of Nature's 10 and to the Chronicle of Higher Education Influence list. He won the 2018 Golden Goose Award from the American Association for the Advancement of Science, only the second time a psychologist has won the award. Brian received his PhD from Yale University in 2002.

In this episode, Brian discusses his 2021 Annual Review piece titled "Replicability, Robustness, and Reproducibility in Psychological Science"; the paper reflects on the progress and challenges of the science reform movement in the last decade. Brian and Joseph talk about measures researchers and institutions can take to improve research reliability; they also reimagine how we fund and publish studies, share lessons learnt from the pandemic, and share resources for learning more about the reform movement.

Paper: Nosek, B. A., Hardwicke, T. E., Moshontz, H., Allard, A., Corker, K. S., Almenberg, A. D., ... & Vazire, S. (2021). Replicability, robustness, and reproducibility in psychological science.
Accessible preprint: https://psyarxiv.com/ksfvq/

The Accad and Koka Report
Ep. 191 The Myth of Statistical Inference (Part 2): At the Heart of the Replication Crisis

Dec 11, 2021 · 48:31


Our guest is Michael Acree, author of The Myth of Statistical Inference (https://amzn.to/3Do4aIT), published by Springer earlier this year. Dr. Acree is a former statistician at the University of California San Francisco. This is the second episode in a two-part series.

GUEST: Michael Acree, PhD.

RELATED EPISODES:
Ep. 49: Many Statisticians, Many Answers: The Methodological Factor in the Replication Crisis (with Brian Nosek): https://accadandkoka.com/episode49/
Ep. 57: Neither Fisher nor Bayes: The Limits of Statistical Inference: https://accadandkoka.com/episode57/
Ep. 190: The Myth of Statistical Inference (Part 1): Historical Background: https://accadandkoka.com/episode190

WATCH ON YOUTUBE: Watch the episode on our YouTube channel: https://youtu.be/8YElVMz763E

Note: The Accad and Koka Report participates in the Amazon Affiliate program and may earn a small commission from purchases completed from links on the website. Support this podcast

Speaking of Psychology
How ‘open science’ is changing psychological research, with Brian Nosek, PhD

Jun 16, 2021 · 41:25


Is psychology research in a crisis or a renaissance? Over the past decade, scientists have realized that many published research results, including some classic findings in psychology, don't always hold up to repeat trials. Brian Nosek, PhD, of the Center for Open Science, discusses how psychologists are leading a movement to address that problem, in psychology and in other scientific fields, by changing the way that research studies get funded, conducted and published. Listener Survey - https://www.apa.org/podcastsurvey

The Escaped Sapiens Podcast
Brian Nosek: Is science in (reproducibility) crisis? | Escaped Sapiens Podcast #12

May 9, 2021 · 51:55


Brian Nosek discusses the Reproducibility Project, a crowdsourced collaboration of 270 authors that attempted to repeat 100 published experimental and correlational psychology studies. When the results were released in 2015, they shocked the scientific world: only 36% of the studies replicated, and among those that did, many of the effects were smaller than in the original papers. So is the problem confined to psychology, or does it go much further? Find out more about Brian's work at the Center for Open Science: https://www.cos.io/

The Sports Medicine Podcast
Hilliard Discussion 10 Announcement Video

Nov 5, 2020 · 3:12


The award-winning Hilliard Discussions are back! But there's a twist... Rather than the typical short-lecture format, the Hilliard Discussions will be hosted as debates. What will this look like, you ask? Well, we will have two experts on opposing sides of a certain topic in the area of sports medicine and human performance. Then, they will debate and respectfully argue their points, backed by evidence from their research and experience. Sounds fun, I know! Additionally, Hilliard Discussions will now occur semi-annually: one in January and one in September. The next Hilliard Discussion will be on January 27th, 2021 from 7:00pm to 8:30pm CST in an online format! A link to the event will be posted on huffinesinstitute.org soon. The topic that will be discussed is: Does the scientific literature inhibit scientific truth? Talking on the pro side of the topic will be Dr. Brian Nosek, and talking on the con side will be Dr. Andy Young. Hope to see you there!

Future Tense - ABC RN
Reinventing research – Part One: future scenarios and moving away from the publish or perish mantra

Sep 20, 2020 · 29:20


The research community is facing a “crisis of reproducibility”, according to the head of the Center for Open Science, Professor Brian Nosek. He says many of the traditional practices designed to make research robust actually distort and diminish its effectiveness. In this episode, he details his ideas for reform. We also explore three plausible scenarios for how the academic sector could look in 2030.


The Body of Evidence
Interview - Elisabeth Bik, Science Detective

Jun 21, 2020 · 53:39


Jonathan and Chris interview Dr. Elisabeth Bik, a science detective who has dedicated herself to identifying and reporting image duplications in the scientific literature that can be due to errors… or to fraud. This interview is part of a continuing series on bad science, including the special “Science Is a Human Enterprise” (part 1 and part 2) and the interview with Dr. Brian Nosek.

1:16 How Dr. Bik became a science detective
3:28 Duplications
10:43 Music or no music?
11:10 Screening 20,000 papers for duplications
16:38 Speed versus accuracy in the age of COVID
17:41 The failures of peer review
21:37 Dr. Bik's frustration
24:55 Justice League versus Paper Mill
34:33 Make a wish for a healthier science
39:27 Dr. Bik's drinking game
43:02 How fraudsters may cheat automation in the future
46:05 Will the cockroaches scare the public?

* Theme music: "Troll of the Mountain Swing" by the Underscore Orkestra.

To contribute to The Body of Evidence, go to our Patreon page at: http://www.patreon.com/thebodyofevidence/.

Links:
1) Dr. Bik's paper on screening 20,000 articles for duplications: https://mbio.asm.org/content/7/3/e00809-16
2) Dr. Bik on Twitter: https://twitter.com/MicrobiomDigest
3) Dr. Bik's website: https://scienceintegritydigest.com/
4) PubPeer: https://pubpeer.com/

The Joy of x
Rebecca Goldin and Brian Nosek on Hard Truths in Math and Psychology

Mar 24, 2020 · 46:02


The mathematician Rebecca Goldin and the psychology researcher Brian Nosek speak with host Steven Strogatz about what it’s like to be the bearers of unpopular truths.

RoRICast
RoRICast 001 - Brian Nosek

Feb 13, 2020 · 46:49


In Episode 1 of RoRICast, Adam Dinsmore and James Wilsdon speak to Brian Nosek, co-Founder and Executive Director of the Center for Open Science, about his hopes and expectations for the Open Science movement in the coming decade.

Big Ideas
Brian Nosek on Open Science & The Replication Crisis

Feb 9, 2020 · 57:38


Brian Nosek (@BrianNosek) joins Erik Torenberg (@eriktorenberg) and Laura Deming (@laurademing) to discuss open science, the replication crisis, and incentives in scientific research.

Philosophy Talk Starters
434: Cognitive Bias

Dec 14, 2019 · 51:44


More at www.philosophytalk.org/shows/cognitive-bias. Aristotle thought that rationality was the faculty that distinguished humans from other animals. However, psychological research shows that our judgments are plagued by systematic, irrational, unconscious errors known as ‘cognitive biases.’ In light of this research, can we really be confident in the superiority of human rationality? How much should we trust our own judgments when we are aware of our susceptibility to bias and error? And does our awareness of these biases obligate us to counter them? John and Ken shed their biases with Brian Nosek from the University of Virginia, co-Founder and Executive Director of the Center for Open Science.

The Psychology Podcast
174: Brian Nosek on Implicit Bias and Open Science

Aug 1, 2019 · 63:53


Today we have Brian Nosek on the podcast. Nosek is co-Founder and Executive Director of the Center for Open Science (http://cos.io/), which operates the Open Science Framework (http://osf.io/). The Center for Open Science is enabling open and reproducible research practices worldwide. Brian is also a Professor in the Department of Psychology at the University of Virginia. He received his Ph.D. from Yale University in 2002. He co-founded Project Implicit (http://projectimplicit.net/), a multi-university collaboration for research and education investigating implicit cognition: thoughts and feelings that occur outside of awareness or control. Brian investigates the gap between values and practices, such as when behavior is influenced by factors other than one's intentions and goals. Research applications of this interest include implicit bias, decision-making, attitudes, ideology, morality, innovation, and barriers to change. Nosek applies this interest to improve the alignment between personal and organizational values and practices. In 2015, he was named one of Nature's 10 and to the Chronicle of Higher Education Influence list.

In this episode we discuss:
The genesis of Project Implicit
The current state of the field of implicit bias
Overuses of the Implicit Association Test (IAT)
The common desire people have for simple solutions
The potential for misuse of the IAT for real-world selection
How hard it is to study human behavior
What the IAT is really capturing
How the degree to which the IAT is trait- or state-like varies by the topic you are investigating
Cultural influences on the IAT
Brian's criticism of implicit bias training
The latest state of the science on implicit bias
How our ideologies creep in even when we are trying to be unbiased
The difference between implicit attitudes and conscious attitudes
What would an equality of implicit associations look like?
Why bias is not necessarily bad
The genesis of The Reproducibility Project
What are some classic psychological studies that haven't replicated?
The importance of having compassion for the scientist
The importance of having the intellectual humility of uncertainty
The importance of cultivating the desire to get it right (instead of the desire to be right)
What is open science?
What is #BroOpenScience?
How hostility on social media can cause us to lose the view of the majority
The importance of balancing getting it right with being kind to others

Circle of Willis
Brian Nosek

Circle of Willis

Play Episode Listen Later Jun 13, 2019 61:33


Welcome to Circle of Willis! For this episode I'm sharing a conversation I had a while ago with BRIAN NOSEK, professor of Psychology here, with me, at the University of Virginia, as well as co-Founder and Executive Director of the CENTER FOR OPEN SCIENCE, also here in Charlottesville. Brian earned his PhD at Yale University way back in 2002, only about a year before I first met him here, when I was just a jittery job candidate. Brian has been in the public eye quite a lot in the past decade or so, not only due to his work with the Implicit Association Test, otherwise known as the IAT, but also and perhaps mainly for his more recent pathbreaking efforts to increase the transparency and reproducibility of the work scientists do. I think you'll find that in our conversation, Brian is relentlessly thoughtful about everything that comes up. And I want to say here, publicly, that I think he's absolutely right, at the very least, about the toxicity of the current system of incentives and rewards faced by academic scientists. Occasionally you'll hear that "science is broken." It's a great, click-baity phrase that thrives in our current social media ecosystem. But it's completely wrong. Science is not and has never been broken. Even now, science is our most precious, life-affirming, life-saving human activity. Literally nothing humans have invented has done more than science has to improve our welfare, to increase our sensitivity to the natural world, or to reveal the forces and mechanisms that form and constrain our miraculous universe. But the institutional structures within which science is done are in bad shape. At the foundation, public funding for science is dismal, and that problem is yoked to the steadily declining public commitment to higher education in general. Our institutions have come to rely on bloated federal grants to just keep the lights on, and the responsibility for securing those federal dollars has fallen heavily on the shoulders of scientists who ought to be focused on making discoveries and solving the world's problems. And because that is a heavy burden, institutional structures have formed to incentivize -- some would say coerce -- scientists into striving for those federal dollars. Want to get tenure? Better bring in some big federal grants. Want 12 months of continuous salary? Better bring in some big federal grants. You get the idea. But there are other problems, too. Want to get a good raise? You'd better publish a lot. Note that I didn't say you'd better publish excellent work. No one would say that excellent work isn't valued -- it is -- but what you really want is good numbers, because numbers are easier to evaluate. And we love indices we can point to, that can help us evaluate each other as algorithmically as possible. So each individual scientist has an h-index associated with their name (Google Scholar thinks mine is 44). Journals come with impact factors. And all of these indices are relatively easy to game, so professional advancement and stability orients itself toward gaming the indices at least as much as doing high-quality work. In the meantime, a profession -- a passion, and even an art, really -- can gradually transform into a cynical race for money and prestige. And though a scientist may well grow skilled at reeling in the money during their career, whatever level of prestige they attain will ultimately fail them.
As John Cacioppo argued in a previous episode of this very podcast, you and your specific work are not likely to be remembered for long, if at all. Prestige and recognition are understandable but ultimately foolish goals. Far better, Cacioppo argued, to focus your attention on the process -- on the doing of your work. And your best shot at enjoying that work -- perhaps at enjoying your life -- is to make sure that the work that you do is aligned with your values. Brian Nosek and I are in full agreement on at least one point: The system within which science is done -- particularly within which American science is done -- discourages a process-oriented focus, and, by extension, discourages us from aligning our scientific process with our values. Why? Because our institutions have to keep the lights on. So, science isn't broken at all. How could it be? Science is a system, a philosophy, perhaps even a moral commitment... to transparency and openness, to verifiability, to repeatability, to discovery, and, I would argue, to humility. Science is far more than a collection of methods and techniques, and, by the way, there is nothing about science that requires coverage by the New York Times to be valid. What may be broken is the system within which science manifests as a profession. So here's why I admire Brian Nosek so much: He isn't just complaining about things, the way I do. Instead, he's working hard to develop an alternative system -- a system based on the scientific process instead of rewarding outcomes, and, by extension, a scientific process based on deeply held scientific values. You and I may not agree with all the details in Brian's approach, but, you know, it's easy to criticize, right? Anyway, here are Brian Nosek and me, having a conversation in one of the conference rooms at the Center for Open Science. *    *    * Music for this episode of Circle of Willis was written and performed by Tom Stauffer of Tucson, Arizona. For information about how to purchase Tom's music, as well as the music of his band THE NEW DRAKES, visit his Amazon page. Circle of Willis is produced by Siva Vaidhyanathan and brought to you by VQR and the Center for Media and Citizenship. Plus, we're a member of the TEEJ.FM podcast network. Special thanks to VQR Editor Paul Reyes, WTJU FM General Manager Nathan Moore, as well as NPR reporter and co-founder of the very popular podcast Invisibilia, Lulu Miller.

Circle of Willis
Preview: Brian Nosek

Circle of Willis

Play Episode Listen Later May 30, 2019 4:40


Hi Everyone! My conversation with BRIAN NOSEK is coming soon, but it isn't quite ready yet. In this preview we talk about how despite being the most successful endeavor in human history, science can be improved upon, not least through changing how we evaluate the success of individual scientists. Our current incentives might be encouraging us to make scientific “beauty out of mush.”  This conversation is priceless. More soon! Jim  

The Black Goat
Don't Be Told What You Want, Don't Be Told What You Need

The Black Goat

Play Episode Listen Later May 1, 2019 65:20


What if there were no journals? Would academic life be barren and empty, noisy and chaotic, happy and egalitarian, or something else entirely? In this episode we conduct an extended thought experiment about life without journals, in order to probe questions about what journals actually do for us anyway, what are other ways to achieve those things, and how we might overcome the downsides of the current scientific publishing ecosystem. How else could peer review work? How would researchers find information and know what to read? Would we just replace our current heuristics and biases with new ones? Plus: We answer a letter about whether to slow down to do higher-quality research or to focus on flashy results at top journals. Links: Scientific Utopia: I. Opening scientific communication, by Brian Nosek and Yoav Bar-Anan Mike Frank's Twitter thread on an ethical framework for open science The Black Goat is hosted by Sanjay Srivastava, Alexa Tullett, and Simine Vazire. Find us on the web at www.theblackgoatpodcast.com, on Twitter at @blackgoatpod, on Facebook at facebook.com/blackgoatpod/, and on instagram at @blackgoatpod. You can email us at letters@theblackgoatpodcast.com. You can subscribe to us on iTunes or Stitcher. Our theme music is Peak Beak by Doctor Turtle, available on freemusicarchive.org under a Creative Commons noncommercial attribution license. Our logo was created by Jude Weaver. This is episode 57. It was recorded on April 17, 2019.

The Body of Evidence
Interview - Brian Nosek on Open Science

The Body of Evidence

Play Episode Listen Later Apr 21, 2019 59:01


Jonathan and Chris interview Brian Nosek, a professor of psychology and the co-founder and director of the Center for Open Science. They discuss problems and solutions in modern scientific research, such as committing scientists… to stick to a protocol.

Table of contents:
2:00 The culture of science
4:18 Publications as currency for career advancement
7:53 What researchers tell each other at the bar
10:22 Cynicism
12:48 The solution to climate change (not really)
18:24 The paper is advertising for the research
22:16 Weaknesses of the peer review process
23:58 One data set, many scientists, different conclusions
27:29 Resistance to sharing
29:52 The road to the Center for Open Science
37:49 Signs of success
44:10 The generational gap in openness
46:55 Registered reports

LINKS:
The Center for Open Science website: http://www.cos.io
Project Implicit: https://implicit.harvard.edu/implicit/
For scientists, the Open Science Framework: http://www.osf.io

Theme music: "Troll of the Mountain Swing" by the Underscore Orkestra.

To contribute to The Body of Evidence, go to our Patreon page at: http://www.patreon.com/thebodyofevidence/.

The Accad and Koka Report
Ep. 65 James Heathers: Why Science Needs Data Thugs

The Accad and Koka Report

Play Episode Listen Later Feb 22, 2019 72:12


James Heathers, PhD. Will it take data vigilantes to restore some order in the House of Science? With the replication crisis showing no sign of letting up, some committed scientists have taken it upon themselves to find ways to sniff out cases of egregious fraud. As it turns out, identifying scientific misbehavior is surprisingly easy! Our guest is a full-time research scientist, author, and consultant at Northeastern University in Boston, in a computational behavioral science lab. James Heathers completed his undergraduate work in psychology and industrial relations at the University of Sydney and obtained his doctorate on methodological improvements in heart rate variability at the same institution in 2015. He and a couple of his colleagues captured the limelight after exposing problems in the work of a world-famous nutrition researcher, which led to the retraction of 5 papers. These "data thugs" have since designed a couple of tools that can identify suspicious data through a simple analysis of descriptive statistics (see the sketch below). GUEST: James Heathers, PhD: Twitter (https://twitter.com/jamesheathers), podcast (https://everythinghertz.com/), and website (http://jamesheathers.com/) LINKS: Brian Wansink, The Grad Student Who Never Said "No" (https://web.archive.org/web/20170312041524/http:/www.brianwansink.com/phd-advice/the-grad-student-who-never-said-no) (from the Wayback Machine internet archives) James Heathers, Introducing SPRITE and the Case of the Carthorse Child (https://hackernoon.com/introducing-sprite-and-the-case-of-the-carthorse-child-58683c2bfeb) Adam Marcus and Ivan Oransky, Meet the 'Data Thugs' Out to Expose Shoddy and Questionable Research (https://www.sciencemag.org/news/2018/02/meet-data-thugs-out-expose-shoddy-and-questionable-research) (Science, February 2018) Tom Bartlett, "I Want to Burn Things to the Ground": Are the foot soldiers behind psychology's replication crisis saving science -- or destroying it? (https://www.chronicle.com/article/I-Want-to-Burn-Things-to/244488) (The Chronicle of Higher Education, September 2018) RELATED EPISODES: Ep. 48 Many Statisticians, Many Answers: The Methodological Factor in the Replication Crisis (https://accadandkoka.com/episode48/) (with Brian Nosek) Ep. 57 Neither Fisher Nor Bayes: The Limits of Statistical Inference (https://accadandkoka.com/episode57/) (with Michael Acree) WATCH ON YOUTUBE: Watch the episode on our YouTube channel. Support this podcast
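The episode doesn't spell out how those tools work, but the best-known of them, GRIM (Brown and Heathers), rests on a simple arithmetic fact: a mean of n integer-valued responses can only take values T/n for integer totals T, so a reported mean that is impossible given the reported sample size is a red flag. A minimal sketch of that check -- my own simplification, not the authors' code:

```python
# GRIM-style granularity check: can `reported_mean`, rounded to
# `decimals` places, arise from n integer-valued responses?
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    target = round(reported_mean, decimals)
    base = int(reported_mean * n)  # candidate integer totals near mean * n
    return any(round(total / n, decimals) == target
               for total in range(base - 1, base + 2))

# Example: with n = 28 integer responses, a reported mean of 5.19 is
# impossible (145/28 = 5.18, 146/28 = 5.21), so it deserves scrutiny.
print(grim_consistent(5.19, 28))  # False
print(grim_consistent(5.21, 28))  # True
```

SPRITE extends the same idea beyond means, iteratively reconstructing whole plausible samples from the reported descriptive statistics.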

Idea Machines
Medical (d)Evolution with Dr. Robert McNutt [Idea Machines #10]

Idea Machines

Play Episode Listen Later Feb 12, 2019 69:54


In this episode I talk to Dr. Robert McNutt about medical innovation, medical research and publishing, and patient choice. Robert has been practicing medicine for decades and has published many dozens of medical research papers. He is a former editor of JAMA, the Journal of the American Medical Association. He has created pain care simulation programs, run hospitals, sat on the National Board of Medical Examiners, taught at the University of North Carolina and Wisconsin schools of medicine, and published dozens of articles and several books. On top of all of that he is a practicing oncologist. We draw on this massive experience with different sides of medicine to dig into how medical innovations happen, and also some less-than-positive changes. It's always fascinating to crack open the box of a different world, so I hope you enjoy this conversation with Dr. Robert McNutt. Major takeaways: The practice of medicine has changed significantly over the past several decades; there has been an explosion of research and specialization. This proliferation has led to many innovations, but has also decreased the ratio of signal to noise in medical advice, both for doctors and patients. For another perspective on the explosion of research, listen to my conversation with Brian Nosek. While it would be amazing to have a process based purely on a very strict scientific method, health is so complicated that the ideal is impossible. That means, as in so many imperfect systems, that ultimately much comes down to human judgment. Notes: Robert's Blog, Robert's Book, Tomaxin Case Study, Observational Trials, Dictaphones

You Are Not So Smart
147 - The Replication Crisis (rebroadcast)

You Are Not So Smart

Play Episode Listen Later Feb 10, 2019 45:11


"Science is wrong about everything, but you can trust it more than anything." That's the assertion of psychologist Brian Nosek, director of the Center for Open Science, who is working to correct what he sees as the temporarily wayward path of psychology. Currently, psychology is facing what some are calling a replication crisis. Much of the most headline-producing research in the last 20 years isn't standing up to attempts to reproduce its findings. Nosek wants to clean up the processes that have lead to this situation, and in this episode, you'll learn how. - Show notes at: www.youarenotsosmart.com - Become a patron at: www.patreon.com/youarenotsosmart SPONSORS • The Great Courses: www.thegreatcoursesplus.com/smart See omnystudio.com/listener for privacy information.

Idea Machines
Changing How We Do Science with Brian Nosek [Idea Machines #3]

Idea Machines

Play Episode Listen Later Dec 7, 2018 58:17


My guest this week is Brian Nosek, co-Founder and Executive Director of the Center for Open Science. Brian is also a professor in the Department of Psychology at the University of Virginia doing research on the gap between values and practices, such as when behavior is influenced by factors other than one's intentions and goals. The topic of this conversation is how incentives in academia lead to problems with how we do science, how we can fix those problems, the Center for Open Science, and how to bring about systemic change in general.

Show Notes
- Brian's Website
- Brian on Twitter (@BrianNosek)
- Center for Open Science
- The Replication Crisis
- Preregistration
- Article in Nature about preregistration results
- The Scientific Method
- If you want more, check out Brian on EconTalk

Transcript

Intro

[00:00:00] In this podcast I talk to Brian Nosek about innovating on the very beginning of innovation: research itself. I met Brian at the Dartmouth 60th anniversary conference and loved his enthusiasm for changing the way we do science. Here's his official biography: Brian Nosek is co-founder and Executive Director of the Center for Open Science. COS is a nonprofit dedicated to enabling open and reproducible research practices worldwide. Brian is also a professor in the Department of Psychology at the University of Virginia. He received his PhD from Yale University in 2002. In 2015 he was named one of Nature's 10 and to the Chronicle of Higher Education Influence list. Some quick context about Brian's work and the Center for Open Science. There's a general consensus in academic circles that there are glaring problems in how we do research today. The way research works is generally like this: researchers, usually based at a university, do experiments; when they have a result, they write it up in a paper; that paper goes through the peer-review process; and then a journal publishes it. The number of journal papers you've published and their popularity make or break your career. They're the primary consideration for getting a position, receiving tenure, getting grants, and prestige in general. That system evolved in the 19th century, when many fewer people did research and grants didn't even exist; we get into how things have changed in the podcast. You may also have heard of what's known as the replication crisis. This is the fairly alarming name for a recent phenomenon in which people have tried and failed to replicate many well-known studies. For example, you may have heard that power posing will make you act bolder, or that self-control is a limited resource. Both of the studies that originated those ideas failed to replicate. Since replicating findings is a core part of the scientific method, unreplicated results becoming part of the [00:02:00] canon is a big deal. Brian has been heavily involved in the crisis, and several of the Center for Open Science's initiatives target replication. So with that, I invite you to join my conversation with Brian Nosek.

How does open science accelerate innovation, and what got you excited about it?

Ben: So the theme that I'm really interested in is: how do we accelerate innovations? Just to start off, I'd love to ask you a really broad question: in your mind, how does having a more open science framework help us accelerate innovations? And, parallel to that, what got you excited about it in the first place?
Brian: Yeah, so this is really the core of why we started the Center for Open Science: to figure out how we can maximize the progress of science, given that we see a number of different friction points in the [00:03:00] pace and progress of science. There are a few ways that openness accelerates innovation, and you can think of them as multiple stages. At the opening stage, openness in terms of planning -- pre-registering what your study is about, why you're doing the study, that the study exists in the first place -- helps improve innovation by increasing the credibility of the outputs. Particularly by making a clear distinction between the things we planned in advance -- hypotheses and ideas we have, where we're acquiring data in order to test those ideas -- and the exploratory results, the things we learn once we've observed the data and get insights, which are necessarily more uncertain. Having a clear distinction between those two practices is a mechanism for knowing the credibility of the results [00:04:00] and then more confidently applying results one observes in the literature for doing next steps. And the reason that's really important, I think, is that we have so many incentives in the research pipeline to dress up exploratory findings -- which are exciting and sexy and interesting, but uncertain -- as if they were hypothesis-driven, right? We apply P values to them; we apply a story up front to them; we present them as results that are highly credible, from a confirmatory framework. And that has been really hard for innovation. So I'll pause there, because there's lots more, but yeah.

What has changed to make the problem worse?

Ben: There's a lot right there. You mentioned the incentives to basically make things that aren't really following the scientific method look like they're [00:05:00] following the scientific method. One of the things I'm always really interested in is what has changed in the incentives, because there's definitely this notion that the problem has gotten worse over time, and that means that something has changed. So in your mind, what changed to pull science away from the idealized model -- you have your hypothesis, then you test that hypothesis, then you create a new hypothesis -- toward this system that you're pushing back against?

Brian: You know, it's a good question. So let me start by making the case for why we could say that nothing has changed, and then what might lead to thinking something has changed. The potential reason to think that nothing has [00:06:00] changed is that the kinds of results that are the most rewarded have always been the most rewarded. If I find a novel finding, rather than repeating something someone else has done, I'm likely to be rewarded more with publication, et cetera. If I find a positive result, I'm more likely to gain recognition for it than for a negative result. Nothing's there, versus this treatment is effective -- which one's more interesting? Well, we know which one's more interesting. Yeah.
And then a clean and tidy story, right? It all fits together, and it works, and now I have this new explanation for this new phenomenon that everyone can take seriously. So the novel, positive, clean-and-tidy story is the ideal in science, because it breaks new ground and offers a new idea, a new way of thinking about the world. And that's great: we want those; we've always wanted those things. So the reason to think this is a perennial challenge is [00:07:00]: who doesn't want that, and who hasn't wanted that? "It turns out my whole career is a bunch of nulls where I don't find anything, and nothing fits together -- it's just a big mess" is not a way to pitch a successful career. So that challenge has always been there, and what pre-registration, or committing in advance, does is give us the constraints to be honest about which parts are actual results of credible confirmations of pre-existing hypotheses, versus stuff that is exploring and unpacking what it is we can find. Okay -- so the incentive landscape, I don't think, has changed. What things have changed? Well, there are a couple we can point to as potential reasons to think the problem has gotten worse. One is that data acquisition in many fields is a lot easier than it ever was, [00:08:00] and so we have access to more data and more ways to analyze it, more efficient analysis -- we have computers that do this instead of slide rules. We can do a lot more adventuring in the data, so we have more opportunity to explore, and to exploit that flexibility and transform noise into seeming signal. The second is that the competitive landscape is stronger: the ratio of people who want jobs to jobs available is getting larger and larger, and the same with competition for grants. That competition can [00:09:00] very easily amplify these challenges: people who are more willing to exploit researcher degrees of freedom are going to be able to get the kinds of results that are rewarded in the system more easily, and that would amplify the presence of those people among the ones who manage to survive that competitive market. So I think it's a reasonable hypothesis that it's gotten worse. I don't think there's definitive evidence, but those would be the theoretical points I would point to.
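A toy demonstration of that last point -- my own construction, not anything from the episode -- shows how fast flexible analysis manufactures "signal" from noise. If a researcher measures ten outcomes with no true effect and reports whichever test comes out best, a "significant" result appears roughly 40 percent of the time, against the nominal 5 percent a single preregistered test would allow:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, hits = 1000, 0
for _ in range(n_sims):
    group = np.repeat([0, 1], 20)          # two groups, no true difference
    outcomes = rng.normal(size=(40, 10))   # ten outcome variables of pure noise
    # "Researcher degrees of freedom": test every outcome, keep the best p.
    pvals = [stats.ttest_ind(y[group == 0], y[group == 1]).pvalue
             for y in outcomes.T]
    hits += min(pvals) < 0.05
print(f"flexible-analysis false-positive rate: {hits / n_sims:.0%}")  # ~40%
```

Pre-registration removes exactly this freedom: with the single outcome named in advance, the same simulation comes back at the nominal 5 percent.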
Ben: That makes a lot of sense. So, jumping back: you had a couple of points, and we've just touched on the first one.

Point Number Two about Accelerating Innovation

Ben: So I want to give you the chance to go back and keep going through that.

Brian: Right. Yeah. So accelerating innovation is the idea, and pre-registration's part in it is accelerating innovation by clarifying the credibility of claims as they are produced. If we do that better, I think we'll be much more efficient: we'll have a better understanding of the evidence base as it comes out. The second phase is the openness of the data and materials, for the purpose of verifying those [00:10:00] initial claims. I do a study, I pre-registered it, it's all great, and I share it with you and you read it. And you say, well, that sounds great -- but did you actually get that? And what would have happened if you'd made different decisions here, here, and there? Because I don't quite agree with the decisions you made in your analysis pipeline, and I see some gaps there. Being able to access the materials I produced, and the data that came from them, makes it so that you can, one, simply verify that you can reproduce the findings I reported -- that I didn't just screw up the analysis script or something. That as a minimum standard is useful. But even more than that, you can test the robustness in ways that I didn't: I came to the question with one approach, and you might look at it and say, well, I would do it differently. The ability to reassess the data for the same question is a very useful thing for [00:11:00] robustness, particularly in areas that have complex analytic pipelines, where there are many choices to make. So that's the second part. The third part is reuse. Not only should we be able to verify and test the robustness of claims as they happen, but data can be used for lots of different purposes -- sometimes things that were not at all anticipated by the data originator. So we can accelerate innovation by making it a lot easier to aggregate evidence for claims across multiple studies by having the data be more accessible, and also by making that data more accessible and usable for studying things that no one ever anticipated investigating. The efficiency gain of making better use of the data that already exists, rather than redundantly re-acquiring data for every new question, is massive. [00:12:00] There is a lot of data, and a lot of work goes into it -- why not make the most use of it?

What is enabled by open science?

Ben: Yeah, that makes a lot of sense. Do you have any really good keystone examples of these things in action -- places where, because people could replicate the study, or go back to the pipeline, or reuse the data, something was enabled that wouldn't have been possible otherwise?

Brian: Yeah. Well, let's see -- I'll give a couple of local, I mean personal, examples just to illustrate some of the points. We have this super fun project that we did to illustrate the second part of the pipeline, the robustness phase: people may make different choices, and those choices may have implications for the reliability of results. What we did in this project was acquire a very rich dataset [00:13:00] of lots of players and referees and outcomes in soccer, and we took that data set and recruited different teams -- 29, in the end -- with lots of varied expertise in statistics and analyzing data, and had them all investigate the same research question, which is: are players with darker skin tone more likely to get a red card than players with lighter skin tone? That's a question of real interest that people have studied, and we provided this data set: here's a data set you can use to analyze that. The teams worked on their own and developed analysis strategies for how they were going to test that hypothesis. They came up with their analysis strategies, and they submitted their analyses and their results to us.
We removed the results and [00:14:00] took their analysis strategies and shared them among the teams for peer review -- different people, having made different choices, looking at each other's approaches. Then they went back: they took those peer reviews -- they didn't know what each other had found, but they took the reviews -- and if they wanted to update their analysis, they could. They did all that and then submitted their final analyses. And what we observed was huge variation in analysis choices, and variation in the results. As a simple criterion to illustrate the variation in results: two-thirds of the teams found a significant result -- P less than 0.05, the standard for deciding whether you see something there in the data -- and a third of the teams found a null. Then, of course, they debated among themselves about which analysis strategy was the right one, but in the end it was very clear among the teams that there were lots of reasonable choices that could be made, and [00:15:00] those reasonable choices had implications for the results that were observed from the same data. In the standard process, we don't see that: it's not easy to observe how the analytic choices influence the results. We see a paper; it has an outcome; we say those are the facts, the outcomes the data revealed. But what's actually the case is that those are the outcomes the data revealed contingent on all the choices the researcher made. So that project helps to figure out the robustness of a particular finding given the many different reasonable choices one could make -- where if we had just seen one, we would have had a totally different interpretation either way: it's there, or it's not there.
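The mechanism the 29 teams ran into is easy to reproduce in miniature. In this mock-up -- my own simulated data, not the actual study's -- the same dataset yields different p-values for the "same" question depending on a single defensible choice, whether to adjust for a covariate:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 200
position = rng.normal(size=n)                     # confounding covariate
skin_tone = 0.7 * position + rng.normal(size=n)   # correlated with position
red_cards = 0.4 * position + rng.normal(size=n)   # no direct skin-tone effect

for label, covs in [("unadjusted", (skin_tone,)),
                    ("adjusted for position", (skin_tone, position))]:
    X = sm.add_constant(np.column_stack(covs))
    p = sm.OLS(red_cards, X).fit().pvalues[1]     # p-value on skin tone
    print(f"{label:>22}: p = {p:.3f}")
```

Both specifications are defensible analyses of the identical rows, yet they will typically land on opposite sides of p = 0.05 -- the two-thirds-versus-one-third split of the real project, writ small.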
How do you encode context for experiments, especially with people?

Ben: Yeah. In terms of the data, and really exposing the study more: something that I've seen, especially in these fields, is that the context really matters, and people very often say, well, there's a lot of context going on in addition to just the procedure that's reported. Do you have any thoughts on better ways of encoding and recording that context, especially for experiments that involve people?

Brian: Yeah. This is a big challenge, because we presume, particularly in the social and life sciences, that there are many interactions between the different variables -- the climate, the temperature, the time of day, the circadian rhythms, the personalities, whatever the different elements of the subjects of the study are, whether they be plants or people or otherwise. [00:17:00] So there are a couple of different challenges here to unpack. One is that in our papers, we state claims at the maximal level of generality we possibly can -- that's just a normal pattern of human communication and reasoning. I do my study in my lab at the University of Virginia on University of Virginia undergraduates; I don't conclude "in University of Virginia undergraduates, on this particular date, in this particular time period, in this particular class." This is what people do, with the recognition that it might be wrong -- that there might be boundary conditions -- but not often with articulating where we think, theoretically, those boundary conditions could be. So one step is actually what some colleagues in psychology suggest in this great paper about "constraints on [00:18:00] generality": what we need in all discussion sections of all papers is a section to say when won't this hold. Just tell us what you know: where is this not going to hold? Giving people an occasion to think about that for a second -- to say, oh, okay, actually we do think this is limited to people who live in Virginia, for these reasons; or, no, we don't really think this applies to everybody, but now we have to say so and call it out. That alone, I think, would make a huge difference, just because it would provide that occasion for us, as the originators of findings, to put in the constraints ourselves. A second factor, of course, is just sharing as much of the materials as possible. But often that doesn't provide a lot of the context, particularly for more complex experimental studies or where there are particular procedural factors -- in a lot of the biomedical sciences there's a lot of nuance [00:19:00] in how a particular reagent needs to be dealt with, how the intervention needs to be administered, etc. So I like the moves toward video of procedures. There is a journal, JoVE -- the Journal of Visualized Experiments -- that gives people opportunities to show the actual experimental protocol as it is administered. And a lot of people using the OSF put videos up of the experiment as they administered it, to maximize your ability to see how it was done. Those steps can really help to maximize the transparency of the things that are hard to put in words or aren't digitally encoded. Yeah, and those are real gaps.

What is the ultimate version of open science?

Ben: Got it. And so, in your mind, what is the endgame of all this? What [00:20:00] would be the ideal, best-case scenario for science -- how would it be conducted? Say you get to control the world, and you get to tell everybody practicing science exactly what to do. What would that look like?

Brian: Well, if I really had control, we would all just work on Wikipedia, and we would just be revising one big paper with the new evidence added continuously, and we'd get all of our credit by logging how many words I changed, or words that survived after people made their revisions, and whether those changed words are on pages that are more important for the overall scientific record versus the less important spandrels. And so we would output one paper that is the summary of knowledge, which is what Wikipedia summarizes. All right -- maybe that's going a little bit further than [00:21:00] what we can consider the realm of the conceptually possible. So if we imagine a little bit nearer term: what I would love to see is the ability to trace the history of any research project, and that seems more achievable. In fact, my laboratory is getting close to this: every study that we do is registered on the OSF, and once we finish the studies -- or as we're doing them, if we're managing the materials and data there -- we post the materials and the data, and then we attach a paper, preprint or final report, if we write one at the end, so that people can discover it, and all of those things are linked together.
It would be really cool if I had those data in a standardized framework for how they are [00:22:00] coded, so that they could be automatically and easily integrated with other similar kinds of data -- so that someone going onto the system would be able to say: show me all the studies that ever investigated this variable's association with this variable, and tell me what the aggregate result is. Real-time meta-analysis of the entire database of all data that has ever been collected. That kind of flexibility would help very rapidly, I think, not just to spur innovations and new things, but to point out where there are gaps: particular kinds of relationships between things, particular effects of interventions, where we know a ton, and then we have this big assumption in our theoretical framework about how we get from X to Y -- and as we look for variables that help us identify whether X gets us to Y, we find there just isn't anything there; the literature has not filled that gap. So I think there are huge benefits to that [00:23:00] kind of aggregability.
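What would that aggregate query return? For standardized effect estimates with known standard errors, the classic building block is inverse-variance weighting. A minimal sketch with hypothetical numbers (a real system would also need random-effects models, heterogeneity statistics, and a shared variable vocabulary):

```python
import numpy as np

def fixed_effect_meta(effects, ses):
    """Fixed-effect (inverse-variance-weighted) pooled estimate."""
    effects, ses = np.asarray(effects), np.asarray(ses)
    w = 1.0 / ses**2                      # precision weights
    pooled = np.sum(w * effects) / np.sum(w)
    return pooled, np.sqrt(1.0 / np.sum(w))

# Five hypothetical registered studies of the same X -> Y effect:
est, se = fixed_effect_meta([0.42, 0.18, 0.31, -0.05, 0.25],
                            [0.20, 0.10, 0.15, 0.30, 0.12])
print(f"pooled estimate: {est:.2f} +/- {1.96 * se:.2f}")
```

The hard part of the vision is not this arithmetic; it is the standardized coding of variables across studies that would let the query find the right effects automatically.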
But mostly what I want is this: instead of saying you have to do research in any particular way, the only requirement is that you have to show us how you did your research, in your particular way, so that the marketplace of ideas can operate as efficiently as possible. That really is the key thing. It's not about preventing bad ideas from getting into the system; it's not about making sure that only the best things get through immediately; it's not about gatekeepers. It's about efficiency in how we cull that literature -- figuring out which things are credible and which are not -- because it's really useful to get ideas into the system, as long as they can be self-corrected efficiently as well. And that's where I think we are not doing well in the current system. We're doing great on generation: [00:24:00] we generate all kinds of innovative ideas. What we're not doing is parsing through those ideas as efficiently as we could, to decide which ones are worth actually investing more resources in.

Talmud for Science

Ben: That makes a lot of sense. I've definitely come across papers just on the internet -- you go to Google Scholar, you search, you find a paper, and in fact it has been refuted by another paper, and there's no way to know that. Does the Open Science Framework address that in any way?

Brian: No, it doesn't yet. And this is a critical issue: the connectivity between findings, and the updating of knowledge. It does in an indirect way, but not in the systematic way that would actually solve this problem. The [00:25:00] main challenge is that we treat papers as static entities, when what they're summarizing is happening very dynamically. It may be that a year after a paper comes out, one realizes we should have analyzed that data totally differently -- that the way we analyzed it is indefensible. There are very few mechanisms for efficiently updating that paper in a way that would actually update the knowledge, even in a case where we all agree it was analyzed the wrong way. What are my options? I could retract the paper, so it's no longer in existence at all -- supposedly, although even retracted papers still get cited, which is nuts. So that's a basic problem. Or I could write a correction, which is another paper that comments on the original paper -- and may not itself even be discoverable alongside the original paper it corrects. And that takes months and years. [00:26:00] So really, what I think is fundamental for addressing this challenge is integrating version control with scholarly publishing, so that papers are seen as dynamic objects, not static objects. Here's another milestone, if I could control everything: a researcher could have a very productive career working on only a single paper for his or her whole life. They have a really interesting idea, and they just continue to investigate it, build the evidence, challenge it, continue to unpack it, and revise that paper over time: this is what we understand now; this is where it is now; this is what we've learned; here are some exceptions. They keep fine-tuning it, and you get to see the versions of that paper over its [00:27:00] 50-year history as that phenomenon got unpacked. That, plus the integration with other literature, would make things much more efficient for exactly the problem you raised, which is that with papers, we don't know what the current knowledge base is. We have no real good way except these attempts to summarize the existing literature with yet another paper -- which doesn't supersede the old papers; it's just another paper. It's a very inefficient system.

Can Social Sciences 'advance' in the same way as the physical sciences?

Ben: Yeah, that totally makes sense. I have a sort of meta question that I've argued with several people about, which is: do you feel like we can make advances in our understanding of [00:28:00] human-centered science in the same way that we can in chemistry or physics? We very clearly have building blocks of physics, and it builds on itself. I've had debates with people about whether you can do this in the humanities and the social sciences. What are your thoughts on that?

Brian: Yeah, it is an interesting question. What seems to be the biggest barrier is not anything about methodology in particular, but complexity -- the problem being that many different inputs can cause similar kinds of outcomes, singular inputs can have multivariate outcomes that they influence, and all of those different inputs, as causal elements, may have interactive effects on the [00:29:00] outcomes. So how can we possibly develop rich enough theories to predict, and then ultimately explain, the actions of humans in complex environments? It doesn't seem that we will get to the beautiful equations that underlie a lot of physics and chemistry and account for a substantial amount of the evidence. The thing I don't have any good handle on is whether that's a theoretical or a practical limit. Is it just not possible, because it's so complex and there isn't that predictability? Or is it just really damn hard -- if we had big enough computers, enough data, and complex enough models, we would be able to predict it? It's like Asimov's psychohistorians, right? They figured it out.
[00:30:00] In the Foundation series, right, they could account for 99.9 percent of the variance in what people do next -- but of course, even there it went wrong, and that was sort of the basis of the whole series. So yeah, I just don't know. I don't yet have a framework for thinking about how I could answer the question of whether it's a practical or a theoretical limit. What do you think?

Ben: What do I think? Yeah -- I usually come down on it being a practical limit, though how much it would take to get there might make it effectively a theoretical limit. But there's nothing actually preventing us: if you could theoretically measure everything, why not? I [00:31:00] think it's really a measurement problem, and we do get better at measuring things. So that's where I come down, but that's just purely a hunch -- I have no good argument for it.

How do you shift incentives in science?

Ben: Going back to the incentives: I'm completely convinced that these changes would accelerate the number of innovations we have, and it seems like a lot of these changes require shifting scientists' incentives. That's a notoriously hard thing. So: how are you going about shifting those incentives right now, and how might they be shifted in the future? [00:32:00]

Brian: Yeah, that's a great question. That's what we spend a lot of our time worrying about, in the sense that there is very little disagreement, at least in my experience, on the problems, or on the opportunities for improving the pace of discovery and innovation with these solutions. It really is about the implementation. How do you change those cultural incentives so that we can align the values we have for science with the practices that researchers do on a daily basis? That's a social problem. There are technical supports, but ultimately it's a social problem. So the near-term approach we take is to recognize the systems of reward as they are, and see how we could refine them to align with some of these improved practices. We're not pitching "let's all work on [00:33:00] Wikipedia," because that is so far distant from what the systems reward -- from how scientists actually survive and thrive in science -- that we wouldn't get pragmatic traction. So I'll give one example -- I can give a few, but here's one -- of an approach that integrates with current incentives but changes them in a fundamental way, and that is the publishing model of registered reports. In the standard process, I do my research, I write up my studies, and then I submit them for peer review at the most prestigious journal I can, hoping that the reviewers will not see all the flaws and will accept it. If they don't, I repeat that process at the next journal down the prestige hierarchy, and eventually it gets accepted somewhere. The registered report model makes one change to the process.
It moves the critical point of peer review [00:34:00] from after the results are known -- when I've written up the report and I'm all done with the research -- to after I've figured out what question I want to investigate and what methodology I'm going to use. I haven't observed the outcomes yet; all I've done is frame the question, articulate why it's important, and design the methodology I'm going to use to test it. And that's what the peer reviewers evaluate. The key part is that it fits into the existing system perfectly: the currency of advancement is publication -- I need to get as many publications as I can, in the most prestigious outlets I can, to advance my career. We don't try to change that. Instead, we change the basis for making the decision about publication, and moving the primary stage of peer review to before the results are known makes a fundamental change in what I'm being rewarded for as the author. [00:35:00] What I'm rewarded for as the author in the current system is sexy results: get the best, most interesting, most innovative results I can. And the irony of that is that the results are the one thing I'm not supposed to be able to control in my study. What I am supposed to be able to control is asking interesting questions and developing good methodologies to test those questions. Of course that's oversimplifying a bit -- the presumption behind emphasizing results is that my brilliant insights at the outset of the project are the reason I was able to get those great results; but that depends on the credibility of the entire pipeline, so put that aside. Moving review to the design stage means that my incentive as an author is to ask the most important questions I can, [00:36:00] and to develop the most compelling, effective, and valid methodologies I can to test them. That changes it to what we are presumably supposed to be rewarded for in science. There are a couple of other elements of the incentive landscape that this has an impact on that are important for the whole process. For reviewers, for instance: when I am asked to review a paper in my area of research after all the results are in, I have skin in the game as a reviewer. I'm an expert in that area; I may have made claims about things in that area. If the paper challenges my claims, I'm sure to find all kinds of problems with the methodology -- "I can't believe they did this; this is ridiculous," right? Challenge my results? Well, forget about you. [00:37:00] And of course, if it's aligned with my findings and cites me gratuitously, then I will find lots of reasons to like the paper. So I have these twisted incentives to reinforce findings and behave ideologically as a reviewer in the existing system. Moving peer review to the design stage fundamentally changes my incentives too. Say I'm in a very contentious area of research and there are two opponents on a particular claim. When we are dealing with results, you can predict the outcome of review -- people behave ideologically even when they're not trying to. When you don't know the results, both people have the same interests, right?
If I truly believe in the phenomenon I'm studying, and the opponents of my point of view also believe in their perspective, then both of us want to review that study, that design, that methodology, to maximize its quality -- to reveal the truth, which each of us thinks we [00:38:00] have. That alignment actually makes adversaries, to some extent, allies in review, and makes the reviewer and the author more collaborative: the feedback I give on that paper can actually help the methodology get better. Whereas in the standard process, when I say "here are all the things you did wrong," all the author can say is, "well, geez, you're a jerk -- I can't do anything about that; I've already done the research, so I can't fix it." So shifting review earlier is much more collaborative. Then the other question is the incentives for the journal. Journal editors have strong incentives of their own: they want readership, they want impact, and they don't want to be the one who destroyed their journal. [00:39:00] The incentives in the existing model are to publish sexy results, because more people will read those results, might cite those results, might bring more attention to the journal. Shifting the decision to the quality of designs shifts their priorities to publishing the most rigorous, most robust research, and to being valued based on that. So I'll pause there -- there are lots of other things to say, but those I think are some critical changes to the incentive landscape that still fit into the existing way that research is done and communicated.

Don't people want to read sexy results?

Ben: Yeah. I have a bunch of questions, just to poke at that last point a little bit. Wouldn't people still read the journals that are publishing the sexiest results, regardless of at what stage they're doing that peer review?
So at least the initial data suggests that who knows if that will sustain generalize, but the argument that I would make in terms of a conceptual argument is that if Studies have been vetted. In terms of without knowing the results. These are important results to know [00:42:00] right? So that's what the actors and the reviewers have to decide is do we need to know the outcome of this study? Yeah, if the answer is yes that this is an important enough result that we need to know what happened that any result is. Yeah, right. That's the whole idea is that we're doing the study harder find out what the world says about that particular hypothesis that particular question. Yeah, so it become citable. Whereas when were only evaluating based on the results. Well, yeah things that Purity people is that that's crazy, but it happened. Okay, that's exciting. But if you have a paper where it's that's crazy and nothing happened. Then people say well that was a crazy paper. Yeah, and that paper would be less likely to get through the register report kind of model that makes a lot of sense. You could even see a world where because they're being pre-registered especially for more like the Press people can know to pay attention to it. [00:43:00] So you can actually almost like generate a little bit more height. In terms of like oh we're not going to do this thing. Isn't that exciting? Yeah, exactly. So we have a reproducibility project in cancer biology that we're wrapping up now where we do we sample a set of studies and then try to replicate findings from those papers to see where where can we reproduce findings in the where are their barriers to be able to reproduce existing? And all of these went through the journal elife has registered reports so that we got peer review from experts in advance to maximize the quality of the designs and they published instead of just registering them on OSF, which they are they also published the register reports as an article of its own and those did generate lots of Interest rule that's going to happen with this and that I think is a very effective way to sort of engage the community on. The process of actual Discovery we don't know the answer to these [00:44:00] things. Can we build in a community-based process? That isn't just about let me tell you about the great thing that I just found and more about. Let me bring you into our process. How does were actually investigating this problem right and getting more that Community engagement feedback understanding Insight all along the life cycle of the research rather than just as the end point, which I think is much more inefficient than it could be.   Open Science in Competitive Fields and Scooping   Ben: Yeah and. On the note of pre-registering. Have you seen how it plays out in like extremely competitive Fields? So one of the world's that I'm closest to is like deep learning machine learning research and I have friends who keep what they're doing. Very very secret because they're always worried about getting scooped and they're worried about someone basically like doing the thing first and I could see people being hesitant to write down to [00:45:00] publicize what they're going to do because then someone else could do it. So, how do you see that playing out if at all? 
Brian: Yeah, scooping is a real concern in the sense that people have it -- and, I think, also a highly inflated concern relative to what actually happens in practice. But nevertheless, because people have the concern, systems have to be built to address it. So, one simple answer on addressing the concern, and then reasons to be skeptical. On addressing it: with the OSF, you can embargo your pre-registrations for up to four years. That still gets all the benefits of registering -- committing, putting the plan into an external repository so you have independent verification of the time and date and of what you said you were going to do -- but it gives you, as the researcher, the flexibility to [00:46:00] say: I need this to remain private for some period of time, for whatever reason. I don't want the research participants engaged in this project to discover what the design is, or I don't want competitors to discover what the design is. So that is a pragmatic solution: okay, you have that concern; let's meet it with technology to help manage the current landscape. Now, there are a couple of reasons to be skeptical that the concern is much of a real issue in practice. One example comes from preprints. A preprint is sharing the paper prior to its going through peer review and being published in a journal. In some domains, like physics, it is standard practice: the arXiv, which is housed at Cornell, is the standard way for [00:47:00] anybody in physics to share their research prior to publication. In other fields it's very new, or unknown but emerging. And the exact same concern about scooping comes up regularly: "there are so many people in our field; if I share a preprint, someone else with a productive lab is going to see my paper, they're going to run the studies really fast, they're going to submit to a journal that publishes quickly, and then I'll lose my publication because it'll come out in this other one." That's a commonly articulated concern, and I think there are very good reasons to be skeptical of it in practice; the experience of arXiv is a good example. It's been operating since 1991; physicists early in its life articulated similar concerns, and none of them have that concern now. Why not? The norms have shifted: the way you establish priority [00:48:00] is not when it's published in the journal -- it's when you get it onto arXiv. A new practice became the standard for when the community knows what it is you did; that's how you get the first-finder accolade, and that still carries through to things like publication. A second reason is that we all have a very inflated sense of the importance of our own ideas. There's an old saw in venture capital: take your best idea and try to give it to your competitor -- most of the time, you can't. We think our own ideas are amazing, and everyone else doesn't. So the idea that there are people licking their chops, waiting for your paper or your registration to show up so they can steal your [00:49:00] idea and claim it as their own -- well, it shows high self-esteem, and I am all for high self-esteem. And the last part is that it is a strong norm violation to do that -- to steal and not credit someone else for their work -- and it's actually very addressable in the daily practice of how science operates. If you can show that you put that registration or that paper up on an independent service, and that it appeared prior to the other person's, and that other group did try to steal it and claim it as their own -- that's misconduct. If they don't credit you as the originator, that's a norm violation, and I'm actually pretty confident in the process of dealing with norm [00:50:00] violations in the scientific community. I think this very rarely happens, but I have had my own experience with it. I've posted papers on my website -- before there were preprint services in the behavioral sciences -- ever since I've been a faculty member. And I was on Google Scholar one day, reading the papers that come up in these alerts I have set up for things related to my work, and a paper showed up, and I thought, oh, that sounds related to some things I've been working on. So I clicked on the link and went to the website, and I'm reading this paper from authors I didn't recognize -- and then I realized: wait, that's my paper. It took me a second: I'm an author, and I didn't submit it to that journal. And it was my paper. They had taken a paper off of my website; they had changed the abstract -- they ran it through Google Translate, so it looked like gobbledygook -- but the rest of it was [00:51:00] essentially a carbon copy of our paper, and they published it. So what did I do? I contacted the editor -- there's actually a story on Retraction Watch about someone stealing my paper, with Retraction Watch laughing about it -- and it got retracted. As far as we heard, the person who had done it lost their job; I don't know if that's true, and I never followed up. But there are systems in place, is the basic point, to deal with the egregious forms of this. So I am sanguine about these not being real issues -- but I also recognize that they are real concerns, so we have to have our technology solutions address the concerns as they exist today. I think the concerns will just disappear as people gain experience.
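One way to picture what that "independent verification of time and date" buys you -- a conceptual sketch, not how the OSF actually implements registrations -- is a cryptographic commitment: hash the plan, record the hash and a timestamp with a third party, and optionally keep the plan text itself embargoed:

```python
import hashlib
from datetime import datetime, timezone

plan = b"""Hypothesis: X increases Y.
Primary outcome: Y at 30 days. Analysis: two-sided t-test, alpha = .05."""

registration = {
    "sha256": hashlib.sha256(plan).hexdigest(),   # commits to the exact text
    "registered_at": datetime.now(timezone.utc).isoformat(),
    "embargo_years": 4,   # the plan stays private; the commitment does not
}
print(registration)
# When the plan is later made public, anyone can re-hash it and confirm
# it matches what was committed before the data existed.
```

The embargo hides the design from competitors and participants, while the registry still proves priority if a scooping dispute ever arises.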
I am all for high self. I don't know and then the last part is that. It is a norm violation to do that to such a strong degree to do the stealing of and not crediting someone else for their work, but it's actually very addressable in the daily practice of how science operates which is if you can show that you put that registration or that paper up on a independent service and then it was it appeared prior to the other person doing it. And then that other group did try to steal it and claim it as their own. Well, that's misconduct. And if they did if they don't credit you as the originator then that's something that is a norm violation and how science operates and I'm actually pretty confident in the process of dealing with Norm [00:50:00] violations in the scientific Community. I've had my own experience with the I think this very rarely happens, but I have had an experience with it. I've posted papers on my website before there were pretty print services in the behavioral sciences since I. Been a faculty member and I've got a Google Scholar one day and was reading. Yeah, the papers that I have these alerts set up for things that are related to my work and I paper showed up and I was like, oh that sounds related to some things. I've been working on. So I've clicked on the link to the paper and I went to the website. So I'm reading the paper. I from these authors I didn't recognize and then I realized wait that's that's my paper. I need a second and I'm an author and I didn't submit it to that journal. And it was my paper. They had taken a paper off of my website. They had changed the abstract. They run it through Google translate. It looks like it's all Gobbledy gook, but it was an abstract. But the rest of it was [00:51:00] essentially a carbon copy of our paper and they published. Well, you know, so what did I do? I like contacted the editor and we actually is on retraction watch this story about someone stealing my paper and retraction watch the laughing about it and it got retracted. And as far as we heard the person that had gone it lost their job, and I don't know if that's true. I never followed. But there are systems place is the basic point to deal with the Regis forms of this. And so I have I am sanguine about those not be real issues. But I also recognize they are real concerns. And so we have to have our Technology Solutions be able to address the concerns as they exist today. And I think the those concerns will just disappear as people gain experience.   Top down v Bottom up for driving change   Ben: Got it. I like that distinction between issues and concerns that they may not be the same thing. To I've been paying attention to   sort of the tactics that you're [00:52:00] taking to drive this adoption. And there's  some bottom up things in terms of changing the culture and getting  one Journal at a time to change just by convincing them and there's also been some some top-down approaches that you've been using and I was wondering if you could just sort of go through those and what you feel like. Is is the most effective or what combinations of things are are the most effective for really driving this change? Brian: Yeah. No, it's a good question because this is a culture change is hard especially with the decentralized system like science where there is no boss and the different incentive drivers are highly distributed. Right, right. He has a richer have a unique set of societies. 
Are relevant to establishing my Norms you could have funders that fund my work a unique set of journals that I publish in and my own institution. And so every researcher [00:53:00] has that unique combination of those that all play a role in shaping the incentives for his or her behavior and so fundamental change if we're talking about just at the level of incentives not even at the level of values and goals requires. Massive shift across all of those different sectors not massive in terms of the amount of things they need to shift but in the number of groups that need to make decisions tissue. Yeah, and so the we need both top-down and bottom-up efforts to try to address that and the top down ones are. That we work on at least are largely focused on the major stakeholders. So funders institutions and societies particularly ones that are publishing right so journals whether through Publishers societies, can we get them like with the top guidelines, which is this framework that that has been established to promote? What are the transparency standards? What could we [00:54:00] require of authors or grantees or employees of our organizations? Those as a common framework provide a mechanism to sort of try to convince these different stakeholders to adopt new standards new policies to that that then everybody that associated with that have to follow or incentivised to follow simultaneously those kinds of interventions don't necessarily get hearts and minds and a lot of the real work in culture change. Is getting people to internalize what it is that mean is good science is rigorous work and that requires a very bottom up community-based approach to how Norms get established Within. What are effectively very siloed very small world scientific communities that are part of the larger research community. And so with that we do a lot [00:55:00] of Outreach to groups search starting with the idealists right people who already want to do these practices are already practicing rigorous research. How can we give them resources and support to work on shifting those Norms in their small world communities and so. Out of like the preprint services that we host or other services that allow groups to form. They can organize around a technology. There's a preprint service that our Unity runs and then drive the change from the basis of that particular technology solution in a bottom-up way and the great part is that to the extent that both of these are effective they become self reinforcing. So a lot of the stakeholder leaders and editor of a journal will say that they are reluctant. They agree with all the things that we trying to pitch to them as ways to improve rigor and [00:56:00] research practices, but they don't they don't have the support of their Community yet, right. They need to have people on board with this right well in we can the bottom. It provides that that backing for that leader to make a change and likewise leaders that are more assertive are willing to sort of take some chances can help to drive attention and awareness in a way that facilitates the bottom-up communities that are fledgling to gain better standing and we're impact so we really think that the combination of the two is essential to get at. True culture change rather than bureaucratic adoption of a process that now someone told me I have to do yeah, which could be totally counterproductive to Scientific efficiency and Innovation as you described. Ben: Yeah, that seems like a really great place to to end. 
I know you have to get running. So I'm really grateful. [00:57:00] This is this has been amazing and thank you so much. Yeah, my pleasure.  
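A side note on the embargo feature discussed above: the OSF exposes registrations through a public REST API, which makes the "independent verification of time and date" point concrete. Below is a minimal Python sketch that lists a few public registrations. The /v2/registrations/ endpoint is part of the documented OSF API, but the attribute names and the pagination parameter used here are assumptions based on the public API documentation, not anything stated in the conversation.

```python
# Minimal sketch: list public registrations from the OSF v2 API.
# Assumed details (not from the episode): the JSON:API response shape
# ("data" -> "attributes") and the "page[size]" pagination parameter.
import requests

OSF_API = "https://api.osf.io/v2/registrations/"

def list_registrations(page_size=5):
    """Print the registration date and title of a few public registrations.

    Embargoed registrations stay private until the embargo lifts, so they
    will not appear here; that is the trade-off Nosek describes: a
    verifiable timestamp without revealing the design to competitors.
    """
    resp = requests.get(OSF_API, params={"page[size]": page_size}, timeout=30)
    resp.raise_for_status()
    for reg in resp.json()["data"]:
        attrs = reg["attributes"]
        print(attrs.get("date_registered"), "|", attrs.get("title"))

if __name__ == "__main__":
    list_registrations()
```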

The Accad and Koka Report
Ep. 48 Many Statisticians, Many Answers: The Methodological Factor in the Replication Crisis

The Accad and Koka Report

Play Episode Listen Later Dec 5, 2018 55:19


Brian Nosek, PhD. The Greek philosopher Heraclitus famously declared: “No man ever steps in the same river twice, for it’s not the same river and he’s not the same man.” In this episode, we learn from our guest whether scientists can step into the same data pool and obtain the same research results twice. Brian Nosek is Professor of Psychology at the University of Virginia. He is also the co-founder and Executive Director of the Center for Open Science, an organization dedicated to fostering transparency and collaboration in scientific research. In 2015, Professor Nosek and his team published in the journal Science a widely acclaimed and widely discussed paper that shed light on the extent to which psychological research findings may not be reproducible when the research is conducted anew. More recently, his Center conducted a unique project in which a single data set was sent to be analyzed by about 30 independent teams of statisticians for the purpose of answering a single question. The variability in the methods chosen and in the answers obtained was sobering, if not perplexing. GUEST: Brian Nosek, PhD. https://cos.io/about/team/brian-nosek-co-founder-and-executive-director/ (Profile) and https://twitter.com/BrianNosek?lang=en (Twitter) LINKS: Silberzahn R, Uhlmann EL, Martin DP, et al. Many analysts, one dataset: Making transparent how variations in analytical choices affect results. (2018, Advances in Methods and Practices in Psychological Science, open access pre-print https://psyarxiv.com/qkwst/ (here)) Klein RA, Vianello M, Hasselman F, et al. Many Labs 2: Investigating Variation in Replicability Across Sample and Setting. (2018, open access pre-print https://psyarxiv.com/9654g (here)) Open Science Collaboration. Estimating the reproducibility of psychological science. (2015, Science, open access https://www.researchgate.net/publication/281286234_Estimating_the_Reproducibility_of_Psychological_Science (here))
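To make the "many analysts, one dataset" result less abstract, here is a toy sketch showing how two defensible model specifications can return different answers to the same question from identical data. The data, the covariate, and the helper function are all invented for illustration; nothing here reproduces the Silberzahn et al. analysis itself.

```python
# Toy illustration: two reasonable analysis teams, one dataset,
# two different effect estimates. All data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 500
covariate = rng.normal(size=n)               # hypothetical confounder
x = 0.8 * covariate + rng.normal(size=n)     # predictor of interest
y = 0.3 * covariate + 0.1 * x + rng.normal(size=n)

def slope_on_x(y, X):
    """OLS coefficient on the first column of X (intercept included)."""
    design = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta[1]

# Team A regresses y on x alone; Team B also adjusts for the covariate.
print("Team A estimate:", round(slope_on_x(y, x), 3))   # roughly 0.25
print("Team B estimate:",
      round(slope_on_x(y, np.column_stack([x, covariate])), 3))  # roughly 0.10
```

Both teams run an ordinary least-squares regression on the same rows of data; the only difference is whether the covariate enters the model, and the estimated effect more than doubles depending on that choice.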

Analysis
The Replication Crisis

Analysis

Play Episode Listen Later Nov 12, 2018 28:15


Many key findings in psychological research are under question, as the results of some of its most well-known experiments – such as the marshmallow test, ego depletion, stereotype threat and the Zimbardo Stanford Prison Experiment – have proved difficult or impossible to reproduce. This has affected numerous careers and led to bitter recriminations in the academic community. So can the insights of academic psychology be trusted, and what are the implications for us all? Featuring contributions from John Bargh, Susan Fiske, John Ioannidis, Brian Nosek, Stephen Reicher, Diederik Stapel and Simine Vazire. Presenter: David Edmonds. Producer: Ben Cooper

Everything Hertz
69: Open science tools (with Brian Nosek)

Everything Hertz

Play Episode Listen Later Oct 9, 2018 49:02


We’re joined by Brian Nosek (Centre for Open Science and University of Virginia) to chat about building technology to make open science easier to implement, and shifting the norms of science to make it more open. We also discuss his recent social sciences replication project in which researchers accurately predicted which studies would replicate. Here’s what we cover: - What is the Centre for Open Science? - How did Brian go from psychology professor to the director of a tech organisation? - How can researchers use the Open Science Framework (OSF)? - How does OSF remove friction for conducting open science? - Registered reports (now available at 131 journals!) - What factors converged to cause the emerging acceptance of open science? - The social sciences replication project - Can researchers anticipate which findings will replicate? - What happened when Brian and his team tried to submit their replication attempts of Science papers to Science? - The experience of reviewing registered reports Links: Centre for open science https://cos.io Open Science Framework https://osf.io Project Implicit https://www.projectimplicit.net/index.html The social sciences replication project paper https://www.nature.com/articles/s41562-018-0399-z Brian on Twitter https://www.twitter.com/briannosek Dan on twitter https://www.twitter.com/dsquintana James on twitter https://www.twitter.com/jamesheathers Everything Hertz on twitter https://www.twitter.com/hertzpodcast Everything Hertz on Facebook https://www.facebook.com/everythinghertzpodcast/ Music credits: Lee Rosevere freemusicarchive.org/music/Lee_Rosevere/ Special Guest: Brian Nosek.

Tatter
Episode 20: The Humean Stain, Part 2

Tatter

Play Episode Listen Later Jul 9, 2018 56:33


ABOUT THIS EPISODE Implicit bias has been studied by many social psychologists, and one particular measure, the Implicit Association Test (or IAT), has often been used in that research. It has also been used by practitioners, often for purposes of raising participants' awareness of their own biases. And millions have completed IATs online at the Project Implicit website. In this episode, I continue a discussion with six people who have all thought about the IAT, with the conversation covering such topics as (a) how well the IAT predicts discriminatory behavior and other behavior, (b) whether it's appropriate for the Project Implicit website to give individualized feedback to visitors who complete online IATs there, and (c) the content and effectiveness of implicit bias training. My guests are psychologists Calvin Lai, Brian Nosek, Mike Olson, Keith Payne, and Simine Vazire, as well as journalist Jesse Singal. LINKS --Interpreting correlation coefficients (by Deborah J. Rumsey) (https://www.dummies.com/education/math/statistics/how-to-interpret-a-correlation-coefficient-r/) --Project Implicit (where you can take an IAT) (https://implicit.harvard.edu/implicit/) --Brian Nosek's departmental web page (https://med.virginia.edu/faculty/faculty-listing/ban2b/) --Calvin Lai's departmental web page (https://psychweb.wustl.edu/lai) --"Psychology's favorite tool for measuring racism isn't up to the job" (Jesse Singal, in The Cut) (https://www.thecut.com/2017/01/psychologys-racism-measuring-tool-isnt-up-to-the-job.html) --Keith Payne's departmental web page (http://bkpayne.web.unc.edu/) --Michael Olson's departmental web page (https://psychology.utk.edu/faculty/olson.php) --Simine Vazire's departmental web page (http://psychology.ucdavis.edu/people/svazire) --The Black Goat (podcast on which Simine Vazire is a co-host) (http://www.theblackgoatpodcast.com/) --"Understanding and using the Implicit Association Test: III.
Meta-analysis of predictive validity" (Greenwald, Poehlmann, Uhlmann, & Banaji, 2009) (http://faculty.washington.edu/agg/pdf/GPU&B.meta-analysis.JPSP.2009.pdf) --"Statistically small effects of the Implicit Association Test can have societally large effects" (Greenwald, Banaji, & Nosek, 2015) (https://faculty.washington.edu/agg/pdf/Greenwald,Banaji&Nosek.JPSP.2015.pdf) --"Using the IAT to predict ethnic and racial discrimination: Small effect sizes of unknown societal significance" (Oswald, Mitchell, Blanton, Jaccard, & Tetlock, 2015) (https://s3.amazonaws.com/academia.edu.documents/44267412/Using_the_IAT_to_predict_ethnic_and_raci20160331-25218-20vauz.pdf?AWSAccessKeyId=AKIAIWOWYYGZ2Y53UL3A&Expires=1530481600&Signature=lS5rybckXwezHZrqSzHTlW%2FgKtI%3D&response-content-disposition=inline%3B%20filename%3DUsing_the_IAT_to_predict_ethnic_and_raci.pdf) --"Arbitrary metrics in psychology" (Blanton & Jaccard, 2006) (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.314.2818&rep=rep1&type=pdf) --"The bias of crowds: How implicit bias bridges personal and systemic prejudice" (Payne, Vuletich, & Lundberg, 2017; access is subscription-controlled) (https://www.tandfonline.com/doi/full/10.1080/1047840X.2017.1335568) --"Measuring individual differences in implicit cognition: The Implicit Association Test" (Greenwald, McGhee, & Schwartz, 1998) (http://faculty.fortlewis.edu/burke_b/Senior/BLINK%20replication/IAT.pdf) --A summary of David Hume's thoughts on the association of ideas (http://www.livingphilosophy.org.uk/philosophy/David_Hume/the_Association_of_Ideas.htm) --Two Psychologists Four Beers (podcast featuring psychologists Yoel Inbar and Mickey Inzlicht) (https://fourbeers.fireside.fm/) --Very Bad Wizards (podcast featuring psychologist David Pizarro and philosopher Tamler Sommers) (https://verybadwizards.fireside.fm/) Cover art credit: "Still Life with Bottles, Wine, and Cheese," John F. Francis (1857; public domain, from Wikimedia Commons, copyright tag: PD-US) Special Guests: Brian Nosek, Calvin Lai, Jesse Singal, Keith Payne, Michael Olson, and Simine Vazire.

The Science of Success
Self Help For Smart People - How You Can Spot Bad Science & Decode Scientific Studies with Dr. Brian Nosek

The Science of Success

Play Episode Listen Later Jul 5, 2018 56:09


In this episode, we show how you can decode scientific studies and spot bad science by digging deep into the tools and skills you need to be an educated consumer of scientific information. Are you tired of seeing seemingly outrageous studies published in the news, only to see the exact opposite published a week later? What makes scientific research useful and valid? How can you, as a non-scientist, read and understand scientific information in a simple and straightforward way that helps you get closer to the truth, and apply those lessons to your life? We discuss this and much more with Dr. Brian Nosek. Dr. Brian Nosek is the co-founder and Executive Director of the Center for Open Science and a professor of psychology at the University of Virginia. Brian led the Reproducibility Project, which involved leading some 270 of his peers in reproducing 100 published psychology studies to see if they could reproduce the results. This work shed light on publication bias in the science of psychology and much more. Topics covered:
- Does the science show that extrasensory perception is real?
- Is there something wrong with the rules of science, or the way that we conduct science?
- What makes academic research publishable is not the same thing as what makes academic research accurate
- Publication is the currency of advancement in science: novel, positive, clean results
- What does null hypothesis significance testing / a p-value less than .05 even mean? Less than 5% of the time would you observe this evidence if there were no relationship
- The incentives of scientific publishing often skew, even without conscious intent by scientists, toward only publishing studies that support their hypothesis and conclusions
- The conclusions of many scientific studies may not be reproducible and may, in fact, be wrong
- How the reasoning challenges and biases of human thinking, such as confirmation bias and outcome bias, skew scientific results and create false conclusions
- "The Reproducibility Project" in psychology: it took a sample of 100 studies, the evidence was able to be reproduced only 40% of the time, and the effect sizes were about 50% of what they originally were
- What the Reproducibility Project spawned was not a conclusion, but a QUESTION
- How do we as lay consumers determine if something is scientifically valid or not? The basic keys to reading and consuming scientific studies as a non-scientist, and to judging the quality of evidence: watch out for any DEFINITIVE conclusions; sample size is very important, and the larger the better; aggregation of evidence is better ("hundreds of studies show"); meta-studies / meta-analyses are important and typically more credible; look up the original paper; check whether doubt is expressed in the story or report about the data (how the evidence could be wrong, what needs to be proven next, etc.)
- Valid scientific research often isn't newsworthy; it takes lots of time to reach valid scientific conclusions
- It's not just about the OUTCOME of a scientific study; confidence in those outcomes depends on the PROCESS
- Where do we go from here, as individuals and as scientists? How can we do better? Transparency is key; pre-registration means committing to a design in advance
- The powerful tool of "pre-registration" and how you can use it to improve your own thinking and decision-making
- Homework: deliberately seek out people who disagree with you; build a "team of rivals"
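The two central numbers in this episode summary, the p < .05 publication filter and replication effect sizes shrinking to roughly half, are connected, and a short simulation makes the link concrete. The parameters below (true effect, sample size, number of studies) are invented for illustration and are not the Reproducibility Project's data or method.

```python
# Toy simulation: when only p < .05 results get published, the published
# effect sizes systematically overstate the true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect, n, n_studies = 0.2, 30, 20_000   # small effect, small samples

published = []
for _ in range(n_studies):
    sample = rng.normal(loc=true_effect, scale=1.0, size=n)
    result = stats.ttest_1samp(sample, popmean=0.0)
    if result.pvalue < 0.05 and sample.mean() > 0:   # the publication filter
        published.append(sample.mean())

print(f"true effect:           {true_effect:.2f}")
print(f"mean published effect: {np.mean(published):.2f}")
```

Under these assumptions only the luckiest samples clear the significance bar, so the published average lands well above the true value (more than double here), which is the same direction of bias the replication estimates revealed.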

Tatter
Episode 19: The Humean Stain, Part 1

Tatter

Play Episode Listen Later Jul 2, 2018 58:27


On April 12, 2018, Donte Robinson and Rashon Nelson, two African-American men, were arrested for trespassing at a Philadelphia Starbucks (https://www.npr.org/sections/thetwo-way/2018/04/14/602556973/starbucks-police-and-mayor-weigh-in-on-controversial-arrest-of-2-black-men-in-ph). They were waiting for another person to join them for a meeting when a manager called the police because they hadn't made a purchase. In the face of ensuing controversy, Starbucks closed stores nationwide one afternoon at the end of May in order to hold anti-bias training sessions (https://www.npr.org/2018/05/17/611909506/starbucks-training-focuses-on-the-evolving-study-of-unconscious-bias) for employees. As in this case and elsewhere (https://www.theatlantic.com/politics/archive/2017/12/implicit-bias-training-salt-lake/548996/), the topic of implicit racial bias has captured many imaginations. Implicit bias has been studied by many social psychologists, and one particular measure, the Implicit Association Test (or IAT), has often been used in that research. It has also been used by practitioners, often for purposes of raising participants' awareness of their own biases. And millions have completed IATs online at the Project Implicit website. In this episode, I talk with six people who have all thought about the IAT, with the conversation covering such topics as (a) what kinds of mental associations might be revealed by performance on the IAT, (b) how reliable it is as a measure, and (c) whether or not the research debates surrounding the IAT are an example of good science. My guests are psychologists Calvin Lai, Brian Nosek, Mike Olson, Keith Payne, and Simine Vazire, as well as journalist Jesse Singal. LINKS --Scientific American Frontiers episode on implicit bias (https://cosmolearning.org/documentaries/scientific-american-frontiers-796/7/) --Project Implicit (where you can take an IAT) (https://implicit.harvard.edu/implicit/) --Brian Nosek's departmental web page (https://med.virginia.edu/faculty/faculty-listing/ban2b/) --Calvin Lai's departmental web page (https://psychweb.wustl.edu/lai) --Michael Olson's departmental web page (https://psychology.utk.edu/faculty/olson.php) --Keith Payne's departmental web page (http://bkpayne.web.unc.edu/) --Simine Vazire's departmental web page (http://psychology.ucdavis.edu/people/svazire) --"Psychology's favorite tool for measuring racism isn't up to the job" (Jesse Singal, in The Cut) (https://www.thecut.com/2017/01/psychologys-racism-measuring-tool-isnt-up-to-the-job.html) --"Statistically small effects of the Implicit Association Test can have societally large effects" (Greenwald, Banaji, & Nosek, 2015) (https://faculty.washington.edu/agg/pdf/Greenwald,Banaji&Nosek.JPSP.2015.pdf) --"Using the IAT to predict ethnic and racial discrimination: Small effect sizes of unknown societal significance" (Oswald, Mitchell, Blanton, Jaccard, & Tetlock, 2015) (https://s3.amazonaws.com/academia.edu.documents/44267412/Using_the_IAT_to_predict_ethnic_and_raci20160331-25218-20vauz.pdf?AWSAccessKeyId=AKIAIWOWYYGZ2Y53UL3A&Expires=1530481600&Signature=lS5rybckXwezHZrqSzHTlW%2FgKtI%3D&response-content-disposition=inline%3B%20filename%3DUsing_the_IAT_to_predict_ethnic_and_raci.pdf) --A summary of David Hume's thoughts on the association of ideas (http://www.livingphilosophy.org.uk/philosophy/David_Hume/the_Association_of_Ideas.htm) Cover art credit: "Still Life with Bottles, Wine, and Cheese," John F.
Francis (1857; public domain, from Wikimedia Commons, copyright tag: PD-US) Special Guests: Brian Nosek, Calvin Lai, Jesse Singal, Keith Payne, Michael Olson, and Simine Vazire.

Library Channel (Video)
Improving Openness and Innovation in Scholarly Communication with Brian Nosek

Library Channel (Video)

Play Episode Listen Later May 12, 2018 59:28


Brian Nosek, co-founder and executive director of the Center for Open Science, outlines the most urgent challenges in achieving a more open science future and how the scholarly communication community can change practices to validate and recognize open research. Nosek, a professor of psychology at the University of Virginia, is presented by the UC San Diego Library. Series: "Library Channel" [Science] [Education] [Show ID: 33455]

Library Channel (Audio)
Improving Openness and Innovation in Scholarly Communication with Brian Nosek

Library Channel (Audio)

Play Episode Listen Later May 12, 2018 59:28


Brian Nosek, co-founder and executive director of the Center for Open Science, outlines the most urgent challenges in achieving a more open science future and how the scholarly communication community can change practices to validate and recognize open research. Nosek, a professor of psychology at the University of Virginia, is presented by the UC San Diego Library. Series: "Library Channel" [Science] [Education] [Show ID: 33455]

The Podcast @ DC
Brian Nosek - How To Make the Core Principles Of Research Part of Daily Practice

The Podcast @ DC

Play Episode Listen Later Nov 4, 2017 44:59


Lab Director David Yokum and Executive Director of the Center for Open Science, Brian Nosek, discuss how the core principles of research are not part of daily practice, and they offer some ideas for how we might make them so. ****************************** About our guest: Brian Nosek is co-founder and Executive Director of the Center for Open Science, which operates the Open Science Framework. COS is enabling open and reproducible research practices worldwide. Brian is also a Professor in the Department of Psychology at the University of Virginia. He received his Ph.D. from Yale University in 2002. He co-founded Project Implicit, a multi-university collaboration for research and education investigating implicit cognition--thoughts and feelings that occur outside of awareness or control. Brian investigates the gap between values and practices, such as when behavior is influenced by factors other than one's intentions and goals. Research applications of this interest include implicit bias, decision-making, attitudes, ideology, morality, innovation, barriers to change, open science, and reproducibility. In 2015, he was named one of Nature's 10 and to the Chronicle of Higher Education Influence list.

Adam Ruins Everything
Ep. 38: Professor Brian Nosek On Science's Reproducibility Crisis and Opportunity

Adam Ruins Everything

Play Episode Listen Later Nov 1, 2017 50:35


We've seen it time and time again. A journal publishes a seemingly significant scientific study that gains traction in the press, only to be subsequently deemed irreproducible.

Parsing Science: The unpublished stories behind the world’s most compelling science, as told by the researchers themselves.

Tim Errington and Brian Nosek from the Center for Open Science share insights from replicating a high-profile anti-cancer treatment study.  For more information, including materials discussed during this episode, visit ParsingScience.org. Subscribe: iTunes | Android | RSS.

Parsing Science: The unpublished stories behind the world’s most compelling science, as told by the researchers themselves.

Brian Nosek and Tim Errington from the Center for Open Science talk about the important role of open science in accelerating scientific progress.  For more information, including materials discussed during this episode, visit ParsingScience.org. Subscribe: iTunes | Android | RSS.

Circle of Willis
Circle of Willis, Trailer 2

Circle of Willis

Play Episode Listen Later Sep 4, 2017 1:45


Hey Everyone! It's Trailer 2 of CIRCLE OF WILLIS, featuring lightning-fast excerpts from my conversations with Lisa Diamond, John Cacioppo, Nilanjana Dasgupta, David Sloan Wilson, Jay Van Bavel, Lisa Feldman Barrett, Brian Nosek, Susan Johnson, and Eli Finkel. And there's SO MUCH MORE!  Episodes 1 and 2 are almost ready! Watch this space! Jim

The Black Goat
SIPSapalooza

The Black Goat

Play Episode Listen Later Aug 23, 2017 62:27


The Society for the Improvement of Psychological Science, or SIPS, held its second conference July 30 - August 1, 2017. SIPS is a new organization that works to improve methods and practices in psychology. The conference is unlike a typical academic meeting -- instead of symposia and keynotes, the schedule is filled with hackathons, unconferences, and more. In the first part of this episode, we talk about where SIPS came from and what it is all about. Then we present conversations that we recorded with SIPS attendees. Interviews: Alexa talks to three SIPS veterans: Brett Mercier, Dylan Wiwad, and Alex Uzdavines. Simine talks to Mike Frank and Brian Nosek about whether there is space for morality and politics in science. Sanjay talks to Rich Lucas, Bill Chopik, and Katie Corker about unconferences, Spartans, and beer city USA. Alexa talks to Danielle Young, Joanna Schug, and Leigh Wilton about whether you can be a productive researcher and keep up with Netflix. Simine talks to Rodica Damian, Cory Costello, & Dan Morgan about the worst thing about SIPS. Sanjay talks to Koji Takahashi and Nick Mikulak about optimism vs. pessimism. Alexa talks to Melissa Kline about memorable SIPS moments. Simine talks to Roger Giner-Sorolla, Michèle Nuijten, and Eric Vanman about interesting conversations, and a new goat mascot. Sanjay talks to Ivy Onyeador, Alex Danvers, and Victor Keller about diversity, scientific self-correction, and their favorite member of The Black Goat. The Black Goat is hosted by Sanjay Srivastava, Alexa Tullett, and Simine Vazire. Find us on the web at www.theblackgoatpodcast.com, on Twitter at @blackgoatpod, or on Facebook at facebook.com/blackgoatpod/. You can email us at letters@theblackgoatpodcast.com. You can subscribe to us on iTunes. Our theme music is Peak Beak by Doctor Turtle, available on freemusicarchive.org under a Creative Commons noncommercial attribution license. This is episode 15. It was recorded August 18, 2017, with interviews conducted August 1, 2017.

Sci-gasm
Can We Still Trust Science? with Professor Brian Nosek

Sci-gasm

Play Episode Listen Later Jul 17, 2017 30:19


Byrne freaks out as he learns of a psychology paper that found less than 40% of psych studies can actually have their results repeated. Byrne begins to question the scientific method, as well as his motives for creating a science podcast. In an attempt to help Byrne with his existential crisis, Wade decides to organise an interview with the lead author of the paper responsible for changing the face not only of psychology but of scientific research as a whole: Professor Brian Nosek.

Select Episodes
Cognitive Bias

Select Episodes

Play Episode Listen Later Jul 9, 2017 51:44


More at https://www.philosophytalk.org/shows/cognitive-bias. Aristotle thought that rationality was the faculty that distinguished humans from other animals. However, psychological research shows that our judgments are plagued by systematic, irrational, unconscious errors known as 'cognitive biases.' In light of this research, can we really be confident in the superiority of human rationality? How much should we trust our own judgments when we are aware of our susceptibility to bias and error? And does our awareness of these biases obligate us to counter them? The Philosophers shed their biases with Brian Nosek from the University of Virginia, co-founder and Executive Director of the Center for Open Science.

Podcast Historique Hystérique
Charlottesville, 2015 : Le Reproducibility Project

Podcast Historique Hystérique

Play Episode Listen Later Jun 19, 2017 4:50


In 2012, a shiver runs through the field: Brian Nosek, professor of psychology at the University of Virginia, announces that he is going to coordinate the replication of … Read more

You Are Not So Smart
100 - The Replication Crisis

You Are Not So Smart

Play Episode Listen Later Apr 20, 2017 49:52


"Science is wrong about everything, but you can trust it more than anything." That's the assertion of psychologist Brian Nosek, director of the Center for Open Science, who is working to correct what he sees as the temporarily wayward path of psychology. Currently, psychology is facing what some are calling a replication crisis. Much of the most headline-producing research in the last 20 years isn't standing up to attempts to reproduce its findings. Nosek wants to clean up the processes that have lead to this situation, and in this episode, you'll learn how. - Show notes at: www.youarenotsosmart.com - Become a patron at: www.patreon.com/youarenotsosmart SPONSORS • The Great Courses: www.thegreatcoursesplus.com/smart • Squarespace: www.squarespace.com | Offer Code = sosmart See omnystudio.com/listener for privacy information.

Hi-Phi Nation
Hackademics II: The Hackers

Hi-Phi Nation

Play Episode Listen Later Mar 14, 2017 44:25


One scientist decided to put the entire field of psychology to the test, to see how many of its findings hold up to scrutiny. At the same time, he had scientists bet on the success rate of their own field. We look at the surprising paradoxes of humans being human, trying to learn about humans, and the elusive knowledge of human nature. Guest voices include Brian Nosek of the Center for Open Science, Andrew Gelman of Columbia University, Deborah Mayo of Virginia Tech, and Matthew Makel of Duke TiP. A philosophical take on the replication crisis in the sciences.

Special Issue
Episode 6: 3 Things Societies Can Do To Promote Research Integrity

Special Issue

Play Episode Listen Later Feb 17, 2017 13:08


How can the scientific community incentivize openness and reproducibility as part of the research process? Brian Nosek, Executive Director of the Center for Open Science, says it might be easier than you think.

Bold Signals
S3E01 Science and Technology with Brian Nosek

Bold Signals

Play Episode Listen Later Dec 9, 2016 64:01


Welcome to the third season of Bold Signals! In this episode: 1. Scenes from the Replication Crisis: Ronald Fisher and the P-Value. [0:01:00] 2. An extended interview with Brian Nosek, Social Psychologist and Director of the Center for Open Science. [0:09:25] 3. Bold Signals Documentary Club: Cosmos: A Personal Journey (Episode 1). [0:56:58] Music in this Episode: "Enterprise 1" by Languis "Trees Don't Sleep" by Zachary Cale, Mighty Moon & Ethan Schmid "Shoegaze" by Jahzzar "Lights of Tomorrow" by Starover Blue "Not a Song" by Scrapple "Cabalista" by Wild Flag Cover Art: "Homebrew" by Robert Tinney https://dx.doi.org/10.6084/m9.figshare.4308719

Rationally Speaking
Rationally Speaking #172 - Brian Nosek on "Why science needs openness"

Rationally Speaking

Play Episode Listen Later Nov 13, 2016 48:14


There's a growing anxiety about the quality of scientific research, as a depressingly large fraction of articles fail to replicate. Could "openness" solve that problem? This episode features Brian Nosek, a professor of psychology and co-founder of the Center for Open Science. He and Julia discuss what openness means, some clever approaches to boosting openness, and whether openness could have any downsides (for example, in the cases of peer review or data sharing).

EARadio
EA Global: The Replication Crisis (Julia Galef, Stuart Buck, Brian Nosek, Ivan Oransky, and Stephanie Wykstra)

EARadio

Play Episode Listen Later Oct 12, 2016 61:03


Source: Effective Altruism Global (original video).

Science Soapbox
Brian Nosek: on the nature of capital-T Truth (and Transparency) in science

Science Soapbox

Play Episode Listen Later Mar 4, 2016 43:34


How can we make the scientific endeavor more open and transparent? Dr. Brian Nosek is taking on that very question as the executive director of the Center for Open Science, a non-profit technology company developing software that stands to revolutionize the practice of science. The Science Soapbox team chats with Dr. Nosek about the nature of scientific capital-T Truth, the dawn of the “Science Internet,” and why the graduate students of the future should be so jazzed about his Open Science Framework. For show notes, visit sciencesoapbox.org/podcast and subscribe on iTunes or Stitcher.

Health News Watchdog
Brian Nosek - Center for Open Science

Health News Watchdog

Play Episode Listen Later Dec 4, 2015 8:26


Brian Nosek, PhD, is director of the Center for Open Science and a psychology professor at the University of Virginia. That center’s mission is to increase openness, integrity and reproducibility of scientific research. I talked with him at the Stanford METRICS conference, “Improving Biomedical Research 2015.” This audio is wrapped into a broader blog post at http://www.healthnewsreview.org/?p=42582

EconTalk
Brian Nosek on the Reproducibility Project

EconTalk

Play Episode Listen Later Nov 16, 2015 67:18


Brian Nosek of the University of Virginia and the Center for Open Science talks with EconTalk host Russ Roberts about the Reproducibility Project--an effort to reproduce the findings of 100 articles in three top psychology journals. Nosek talks about the findings and the implications for academic publishing and the reliability of published results.

Science Signaling Podcast
Moralizing gods, scientific reproducibility, and a daily news roundup

Science Signaling Podcast

Play Episode Listen Later Aug 27, 2015 35:44


Brian Nosek discusses the reproducibility of science, and Lizzie Wade delves into the origin of religions with moralizing gods. David Grimm talks about debunking the young Earth, a universal flu vaccine, and short, sweet paper titles. Hosted by Sarah Crespi. [Image credit: DIPTENDU DUTTA/AFP/GETTY IMAGES]

Science Magazine Podcast
Moralizing gods, scientific reproducibility, and a daily news roundup

Science Magazine Podcast

Play Episode Listen Later Aug 27, 2015 34:27


Brian Nosek discusses the reproducibility of science, and Lizzie Wade delves into the origin of religions with moralizing gods. David Grimm talks about debunking the young Earth, a universal flu vaccine, and short, sweet paper titles. Hosted by Sarah Crespi. [Image credit: DIPTENDU DUTTA/AFP/GETTY IMAGES]

Rob Wiblin's top recommended EconTalk episodes v0.2 Feb 2020
Nosek on Truth, Science, and Academic Incentives

Rob Wiblin's top recommended EconTalk episodes v0.2 Feb 2020

Play Episode Listen Later Sep 10, 2012 56:28


Brian Nosek of the University of Virginia talks with EconTalk host Russ Roberts about how incentives in academic life create a tension between truth-seeking and professional advancement. Nosek argues that these incentives create a subconscious bias toward making research decisions in favor of novel results that may not be true, particularly in empirical and experimental work in the social sciences. In the second half of the conversation, Nosek details some practical innovations occurring in the field of psychology, to replicate established results and to publicize unpublished results that are not sufficiently exciting to merit publication but that nevertheless advance understanding and knowledge. These include the Open Science Framework and PsychFileDrawer.

EconTalk at GMU
Nosek on Truth, Science, and Academic Incentives

EconTalk at GMU

Play Episode Listen Later Sep 10, 2012 56:28


Brian Nosek of the University of Virginia talks with EconTalk host Russ Roberts about how incentives in academic life create a tension between truth-seeking and professional advancement. Nosek argues that these incentives create a subconscious bias toward making research decisions in favor of novel results that may not be true, particularly in empirical and experimental work in the social sciences. In the second half of the conversation, Nosek details some practical innovations occurring in the field of psychology, to replicate established results and to publicize unpublished results that are not sufficiently exciting to merit publication but that nevertheless advance understanding and knowledge. These include the Open Science Framework and PsychFileDrawer.

EconTalk
Nosek on Truth, Science, and Academic Incentives

EconTalk

Play Episode Listen Later Sep 10, 2012 56:28


Brian Nosek of the University of Virginia talks with EconTalk host Russ Roberts about how incentives in academic life create a tension between truth-seeking and professional advancement. Nosek argues that these incentives create a subconscious bias toward making research decisions in favor of novel results that may not be true, particularly in empirical and experimental work in the social sciences. In the second half of the conversation, Nosek details some practical innovations occurring in the field of psychology, to replicate established results and to publicize unpublished results that are not sufficiently exciting to merit publication but that nevertheless advance understanding and knowledge. These include the Open Science Framework and PsychFileDrawer.

EconTalk Archives, 2012
Nosek on Truth, Science, and Academic Incentives

EconTalk Archives, 2012

Play Episode Listen Later Sep 10, 2012 56:28


Brian Nosek of the University of Virginia talks with EconTalk host Russ Roberts about how incentives in academic life create a tension between truth-seeking and professional advancement. Nosek argues that these incentives create a subconscious bias toward making research decisions in favor of novel results that may not be true, particularly in empirical and experimental work in the social sciences. In the second half of the conversation, Nosek details some practical innovations occurring in the field of psychology, to replicate established results and to publicize unpublished results that are not sufficiently exciting to merit publication but that nevertheless advance understanding and knowledge. These include the Open Science Framework and PsychFileDrawer.

The 7th Avenue Project
The Life Unconscious: Psychologist Brian Nosek

The 7th Avenue Project

Play Episode Listen Later Oct 3, 2011 73:02


For the last 15 years, Brian Nosek has been studying the hidden biases, preferences and thought patterns that lurk just below the threshold of self-awareness. Those unconscious attitudes are often at odds with our conscious account of ourselves, yet they may influence our outlook, our choices and even our actions. One of the tools Nosek and colleagues have used to expose latent racial preferences and other forms of bias is a simple online test, the Implicit Association Test, or IAT. In this edition of the show, I take the test myself and talk to Brian about the implications of his research for our understanding of the mind, decision-making, politics and society.