Podcasts about replicability

  • 43 podcasts
  • 45 episodes
  • 42m avg duration
  • Infrequent episodes
  • Latest: Nov 7, 2024

POPULARITY

[Popularity trend chart, 2017-2024]



Latest podcast episodes about replicability

Matters Microbial
Matters Microbial #64: Making Sense of the Microbiome

Nov 7, 2024 · 61:26


Today, Dr. Patrick Schloss, Professor in the Department of Microbiology and Immunology in the School of Medicine at the University of Michigan, joins the #QualityQuorum to discuss how the human microbiome is studied, possible pitfalls in such data analysis, and what tools he and his coworkers have developed to lead toward repeatable, hypothesis-driven science.

Host: Mark O. Martin. Guest: Patrick Schloss. Subscribe: Apple Podcasts, Spotify. Become a patron of Matters Microbial!

Links for this episode:
  • An overview of how the gut microbiome is analyzed.
  • One of the articles discussed by Dr. Schloss, exploring reproducibility in microbiome studies: "Identifying and Overcoming Threats to Reproducibility, Replicability, Robustness, and Generalizability in Microbiome Research."
  • Another article discussed by Dr. Schloss, regarding the link between the microbiome and obesity: "Looking for a Signal in the Noise: Revisiting Obesity and the Microbiome."
  • An article from Dr. Schloss' research team that explores a link between the human microbiome and a type of colorectal cancer.
  • A link to the MOTHUR project, used to analyze microbiome data.
  • A link to a video by Dr. Schloss: "Understanding Disease Through the Lens of the Microbiome."
  • Dr. Schloss' YouTube channel about data analysis.
  • Dr. Schloss' research group website.
  • Dr. Schloss' faculty website.

Intro music is by Reber Clark. Send your questions and comments to mattersmicrobial@gmail.com.

EUVC
David Dana, Head of VC Investments at EIF on how emerging managers can show performance | E314

May 21, 2024 · 36:02


Today, we are joined by David Dana. David is the Head of VC Investments at the European Investment Fund, leading the Disruptive Tech & Innovation VC Team of 10 investment professionals. He contributed to developing the EU's new #InvestEU financing program, which aims to ensure the technological sovereignty of Europe. Previously, he was in charge of EIF investments in VC funds in France and Israel, and of the Luxembourg Future Fund, and he also oversaw EIF's activities with accelerators. Over 15 years, he has invested more than €3.5B of capital in over 120 VC funds, mainly following deeptech strategies. Before that, he spent six years as an investment professional at SGAM AI Funds of Funds and as a senior advisor in Corporate Finance at PwC. David is also a regular contributor to podcasts and a speaker at industry events, and an expert in deeptech fields such as AI, space, Web 3.0, blockchain, quantum technologies, semiconductors, and tech transfer. Go to eu.vc for our core learnings and the full video interview.

Chit Chat Money
The Best Investor In Britain? How Terry Smith Beats The Market With Quality Stocks

Mar 20, 2024 · 73:26


On this episode of Chit Chat Stocks, Ryan and Brett analyze the British investment fund Fundsmith and its founder, Terry Smith. We go through:

(00:00) Introduction and Background
(02:19) Buy Right and Sit Tight
(03:18) Don't Over-Diversify
(04:16) About Terry Smith
(08:55) Margin of Safety
(16:14) Fundsmith Performance
(26:09) Criteria for Good Companies
(32:57) Businesses Resilient to Change
(39:55) Do Performance Fees Work?
(41:36) Fee Structures and Performance Fees
(44:50) 10 Rules for Investors
(49:43) Fundsmith Holdings
(54:20) Portfolio Management and Strategy
(01:00:45) Fundsmith's Standout Features
(01:03:16) Replicability and Outperformance

*****************************************************

Subscribe to our YouTube channel: https://www.youtube.com/@ChitChatStocks
Follow us on Twitter/X: https://twitter.com/chitchatstocks
Follow us on Substack: https://chitchatstocks.substack.com/

*********************************************************************

Public.com just launched options trading, and they're doing something no other brokerage has done before: sharing 50% of their options revenue directly with you. That means instead of paying to place options trades, you get something back on every single trade.
- Earn a $0.18 rebate per contract traded
- No commission fees
- No per-contract fees

By sharing 50% of their options revenue, Public has created a more transparent options trading experience. You'll know exactly how much they make from each trade because they literally give you half of it. Activate options trading at Public.com/chitchatstocks by March 31 to lock in your lifetime rebate.

Options are not suitable for all investors and carry significant risk. Certain complex options strategies carry additional risk. See the Characteristics and Risks of Standardized Options to learn more. For each options transaction, Public Investing shares 50% of their order flow revenue as a rebate to help reduce your trading costs. This rebate will be displayed as a negative number in the "Additional Fees" column of your Trade Confirmation Statement and will be immediately reflected in the total dollars paid or received for the transaction. Order flow rebates are only issued for options trades and not for transactions involving other assets, including equities. For more information, refer to the Fee Schedule. All investing involves the risk of loss, including loss of principal. Brokerage services for US-listed, registered securities, options, and bonds in a self-directed account are offered by Open to the Public Investing, Inc., member FINRA & SIPC. See public.com/#disclosures-main for more information.

*********************************************************************

FinChat.io is The Complete Stock Research Platform for fundamental investors. With its beautiful design and institutional-quality data, FinChat is incredibly powerful and easy to use. Use our link and get 25% off any premium plan: https://finchat.io/chitchat/?lmref=J3bklw

*********************************************************************

Disclosure: Chit Chat Stocks hosts and guests are not financial advisors, and nothing they say on this show is formal advice or a recommendation.

Tiny Marketing
Ep. 66: Master The Art of Time Management and Client Experience | Expert Guest: Savanna Kahle, Knapsack Creative

Mar 17, 2024 · 31:43 · Transcription Available


Head over to leadfeeder.com and sign up for a 14-day (no strings attached) free trial!

In this episode, Sarah Noel Block shares her incredible experience with Knapsack Creative's web design process. Discover how their efficient, collaborative approach and structured project management lead to seamless client experiences. Learn how you can apply these principles to your own projects.

Key Takeaways:
  • Efficient and collaborative process: Knapsack Creative's approach to web design is highly efficient and collaborative, with a focus on completing websites in as little as a day. The process involves time blocking, screen sharing, and video chats for feedback, ensuring that decisions are made quickly and the project moves forward smoothly.
  • Client experience and fit: The importance of ensuring a good fit between the client and the agency is emphasized, as the fast-paced nature of the process requires clients to be quick decision-makers. The initial fit call and clear communication throughout the process are crucial to align expectations and ensure a seamless experience.
  • Structured project management: The use of tools like Asana and Dropbox Paper for project management, and Content Snare for gathering information, allows for a highly organized approach. This structure, along with a clear project map and a time-blocked schedule, helps both the team and the client stay on track.
  • Preparation and flexibility: The agency prepares a significant portion of the website before the design day, allowing for a focus on finer details and revisions during the collaborative session. There's also a buffer in scheduling to accommodate any unexpected changes or additional work.
  • Replicability and scalability: The processes and systems Knapsack Creative has developed are not only efficient for their own projects but are also applicable and adaptable to other industries and project types. The emphasis on clear SOPs, time blocking, and structured project management can be replicated to improve efficiency and client experience in various contexts.

Meet Savanna:
Savanna is the Director of Knapsack Creative, a Squarespace web design company on a mission to create the world's best web design experience. She began working at Knapsack in 2018 as an "intern" (they weren't hiring, so she offered to start coming in for free). She was eventually hired on as a web designer, and for several years Savanna cranked out websites for Knapsack. In 2022 she took over from the founder, Ben Manley, to run the day-to-day. Now she brings her background in design and her experience building hundreds of sites to perfecting the systems and processes for the business. When she's not meeting with new clients or heading up new initiatives, she can be found exploring Lynchburg with her husband, going for walks in nature, or enjoying a day out on the lake.

Website | Instagram

Website: https://www.sarahnoelblock.com
LinkedIn: https://www.linkedin.com/in/sarahnoelblock/
Newsletter: https://tinymarketing.me/newsletter
Tiny Marketing Community
Click here to ask a question about the episode

BJKS Podcast
92. Tom Hardwicke: Meta-research, reproducibility, and post-publication critique

Feb 2, 2024 · 66:48 · Transcription Available


Tom Hardwicke is a Research Fellow at the University of Melbourne. We talk about meta-science, including Tom's work on post-publication critique and registered reports, what his new role as editor at Psychological Science entails, and much more.

BJKS Podcast is a podcast about neuroscience, psychology, and anything vaguely related, hosted by Benjamin James Kuper-Smith. Support the show: https://geni.us/bjks-patreon

Timestamps
0:00:00: What is meta-science/meta-research?
0:03:15: How Tom got involved in meta-science
0:21:51: Post-publication critique in journals
0:39:30: How Tom's work (registered reports) led to policy changes at journals
0:44:08: Tom is now the STAR (statistics, transparency, and rigor) editor at Psychological Science
0:48:17: How to best share data that can be used by people with different backgrounds
0:54:51: A book or paper more people should read
0:56:36: Something Tom wishes he'd learnt sooner
1:00:13: Jobs in meta-science
1:03:29: Advice for PhD students/postdocs

Podcast links
Website: https://geni.us/bjks-pod
Twitter: https://geni.us/bjks-pod-twt

Tom's links
Website: https://geni.us/hardwicke-web
Google Scholar: https://geni.us/hardwicke-scholar
Twitter: https://geni.us/hardwicke-twt

Ben's links
Website: https://geni.us/bjks-web
Google Scholar: https://geni.us/bjks-scholar
Twitter: https://geni.us/bjks-twt

References & links
Episodes with Nosek, Vazire, & Chambers:
https://geni.us/bjks-nosek
https://geni.us/bjks-vazire
https://geni.us/bjks-chambers
Foamhenge: https://en.wikipedia.org/wiki/Foamhenge
METRICS: https://metrics.stanford.edu/
AIMOS: https://www.youtube.com/@aimosinc4164
Chambers & Mellor (2018). Protocol transparency is vital for registered reports. Nature Human Behaviour.
Hardwicke, Jameel, Jones, Walczak & Weinberg (2014). Only human: Scientists, systems, and suspect statistics. Opticon1826.
Hardwicke & Ioannidis (2018). Mapping the universe of registered reports. Nature Human Behaviour.
Hardwicke, Serghiou, Janiaud, Danchev, Crüwell, Goodman & Ioannidis (2020). Calibrating the scientific ecosystem through meta-research. Annual Review of Statistics and Its Application.
Hardwicke, Thibault, Kosie, Tzavella, Bendixen, Handcock, ... & Ioannidis (2022). Post-publication critique at top-ranked journals across scientific disciplines: A cross-sectional assessment of policies and practice. Royal Society Open Science.
Hardwicke & Vazire (2023). Transparency is now the default at Psychological Science. Psychological Science.
Kidwell, Lazarević, Baranski, Hardwicke, Piechowski, Falkenberg, ... & Nosek (2016). Badges to acknowledge open practices: A simple, low-cost, effective method for increasing transparency. PLoS Biology.
Nosek, Hardwicke, Moshontz, Allard, Corker, Dreber, ... & Vazire (2022). Replicability, robustness, and reproducibility in psychological science. Annual Review of Psychology.
Ritchie (2020). Science fictions: Exposing fraud, bias, negligence and hype in science.

The Ensemble Podcast, by CrunchDAO
Replicability in AI, P-Hacking and implications in Quant. Finance - Prof. Lopez De Prado & Dr. Simon

Dec 1, 2023 · 37:46 · Very Popular


During the Awards Ceremony of the ADIA Lab Market Prediction Competition, we discuss replicability in AI, p-hacking, and the implications for quantitative finance.

Panel:
  • Prof. Marcos Lopez de Prado, Global Head of Quantitative R&D at ADIA
  • Dr. Horst Simon, Director at ADIA Lab
  • Matteo Manzi, Cofounder & Lead Quant Researcher at CrunchDAO

Follow us:
Join the group on LinkedIn: https://www.linkedin.com/groups/12920374/
CrunchDAO on LinkedIn: https://linkedin.com/crunchdao-com
CrunchDAO on X: https://x.com/CrunchDAO

What is CrunchDAO? CrunchDAO serves as a secure intermediary, enabling data scientists to keep control of their models while powering financial institutions.

Predict & Compete: Register here: https://crunchdao.com

BJKS Podcast
80. Simine Vazire: scientific editing, the purpose of journals, and the future of psychological science

Nov 10, 2023 · 81:29 · Transcription Available


Simine Vazire is a Professor of Psychology at the University of Melbourne. In this conversation, we talk about her work on meta-science, the purpose of journals and peer review, Simine's plans for being Editor-in-Chief at Psychological Science, the hidden curriculum of scientific publishing, and much more.

BJKS Podcast is a podcast about neuroscience, psychology, and anything vaguely related, hosted by Benjamin James Kuper-Smith. Support the show: https://geni.us/bjks-patreon

Timestamps
0:00:00: What is SIPS and why did Simine cofound it?
0:05:10: Why Simine resigned from the NASEM Reproducibility & Replicability committee
0:13:07: Do we still need journals and peer review in 2023?
0:28:04: What does an Editor-in-Chief actually do?
0:37:09: Simine will be EiC of Psychological Science
0:59:44: The 'hidden curriculum' of scientific publishing
1:04:03: Why Simine created a GoFundMe for Data Colada
1:15:10: A book or paper more people should read
1:17:10: Something Simine wishes she'd learnt sooner
1:18:44: Advice for PhD students and postdocs

Podcast links
Website: https://geni.us/bjks-pod
Twitter: https://geni.us/bjks-pod-twt

Simine's links
Website: https://geni.us/vazire-web
Google Scholar: https://geni.us/vazire-scholar
Twitter: https://geni.us/vazire-twt

Ben's links
Website: https://geni.us/bjks-web
Google Scholar: https://geni.us/bjks-scholar
Twitter: https://geni.us/bjks-twt

References/links
Episode of the Black Goat Podcast I mentioned: https://blackgoat.podbean.com/e/simine-flips-out/
Mini-interview with Simine in Science: https://www.science.org/content/article/how-reform-minded-new-editor-psychology-s-flagship-journal-will-shake-things
My 2nd interview with Adam Mastroianni, and his blog post on peer review: https://geni.us/bjks-mastroianni_2
Interview with Chris Chambers and Peer Community in RR: https://geni.us/bjks-chambers
Simine's vision statement for Psychological Science: https://drive.google.com/file/d/1mozmB2m5kxOoPvQSqDSguRrP5OobutU6/view
GoFundMe for Data Colada's legal fees: https://www.gofundme.com/f/uhbka-support-data-coladas-legal-defense
Francesca Gino's response: https://www.francesca-v-harvard.org/
NYT Magazine article about Amy Cuddy (and Joe Simmons): https://www.nytimes.com/2017/10/18/magazine/when-the-revolution-came-for-amy-cuddy.html
Streisand effect: https://en.wikipedia.org/wiki/Streisand_effect
Holcombe (during dogwalk). On peer review. Personal communication to Simine.
Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science.
Reich (2009): Plastic fantastic: How the Biggest Fraud in Physics Shook the Scientific

The Nonlinear Library
EA - Discounts in cost-effectiveness analyses [Founders Pledge] by Rosie Bettle

Aug 16, 2023 · 69:30


Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Discounts in cost-effectiveness analyses [Founders Pledge], published by Rosie Bettle on August 16, 2023, on The Effective Altruism Forum.

Replicability and Generalisability

This report aims to provide guidelines for producing discounts within cost-effectiveness analyses: how to take an effect size from an RCT and apply discounts to best predict the real-world effect of an intervention. The goal is to have guidelines that produce accurate estimates and are practical for researchers to use. These guidelines are likely to be updated further, and I especially invite suggestions and criticism for the purpose of further improvement. A Google Docs version of this report is available here.

Acknowledgements: I would like to thank Matt Lerner, Filip Murár, David Reinstein and James Snowden for helpful comments on this report. I would also like to thank the rest of the FP research team for helpful comments during a presentation on this report.

Summary

This document provides guidelines for estimating the discounts that we (Founders Pledge) apply to RCTs in our cost-effectiveness analyses for global health and development charities. To skip directly to these guidelines, go to the 'Guidance for researchers' sections (here, here and here; separated by each type of discount). I think that we should separate out discounts into internal reliability and external validity adjustments, because these components have different causes (see Fig 1).

For internal reliability (the degree to which the study accurately assesses the intervention in the specific context of the study, i.e., if an exact replication of the study were carried out, would we see the same effect?): all RCTs will need a Type M adjustment, an adjustment that corrects for potential inflation of the effect size (Type M error). The RCTs that are likely to have the most inflated effect sizes are those that are low-powered (where the statistical test used has only a small chance of successfully detecting an effect; see more info here), especially if they are providing some of the first evidence for the effect. Factors to account for include publication bias, researcher bias (e.g., motivated reasoning to find an exciting result; running a variety of statistical tests and only reporting the ones that reach statistical significance would be an example of this), and methodological errors (e.g., inadequate randomisation of trial subjects). See here for guidelines, and here to assess power. Many RCTs are likely to need a 50-60% Type M discount, but there is a lot of variation here; table 1 can help to sense-check Type M adjustments. A small number (
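To make the Type M idea concrete, here is a minimal simulation sketch, not from the report itself: the true effect, group size, and alpha below are invented for illustration. It shows how, when power is low, the estimates that clear the significance bar systematically overstate the true effect (Gelman and Carlin's "exaggeration ratio").

```python
import numpy as np

rng = np.random.default_rng(0)
true_d = 0.2                    # assumed true standardized effect (invented)
n_per_group = 25                # small groups -> low power
se = np.sqrt(2 / n_per_group)   # approximate standard error of Cohen's d

# Sampling distribution of the estimated effect across many hypothetical RCTs
estimates = rng.normal(true_d, se, size=100_000)
significant = np.abs(estimates) > 1.96 * se   # two-sided test, alpha = 0.05

power = significant.mean()
exaggeration = np.abs(estimates[significant]).mean() / true_d
print(f"power ~ {power:.2f}; significant estimates overstate the effect ~{exaggeration:.1f}x")
```

Under these assumptions power comes out near 10%, and the statistically significant estimates run roughly three to four times the true effect; that is the kind of inflation a Type M discount is meant to strip out.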

FSR Energy & Climate
OneNet series: Reference IT Implementation for OneNet (WP6)

Jun 29, 2023 · 6:03


In this podcast, we asked Vassilis Sakas and Konstantinos Kotsalo (European Dynamics) what has already been achieved with Work Package 6, the main challenges faced, the main requirements they have identified for the decentralized middleware layer, and what solutions have been adopted to develop it. The overall objective of this WP is to set the basis for the work to be done in the OneNet proposal. That is to say, it will look back at the market solutions and digital platforms presented so far in the EU pilot projects, revisit European policy frameworks, summarize their contributions and benefits, and build on this information to sketch the new products and business use cases proposed in the OneNet approach. These products and business use cases will strongly engage consumers in order to maximize the flexibility resources that grid operators can use to meet the clean-energy challenges. The differences among EU markets will be reviewed, and specific priorities for KPIs, scalability, and replicability of OneNet solutions will be devised in order to enable the pan-EU integration of these new services and products digitally procured for system operation. Read more about the OneNet project: https://onenet-project.eu/

PaperPlayer biorxiv neuroscience
Predicting Parkinson's disease progression using MRI-based white matter radiomic biomarker and machine learning: a reproducibility and replicability study

May 5, 2023


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.05.05.539590v1?rss=1

Authors: Arafe, M., Bhagwat, N. P., Chatelain, Y., Dugre, M., Sokolowski, A., Wang, M., Xiao, Y., Sharp, M., Poline, J.-B., Glatard, T.

Abstract: Background: The availability of reliable biomarkers of Parkinson's disease (PD) progression is critical to the understanding of the disease and the development of treatment options. Magnetic resonance imaging (MRI) provides a promising source of PD biomarkers; however, neuroimaging results have been shown to be markedly sensitive to analytical conditions and population sampling, which motivates investigations of their robustness. This study is part of a project to investigate the replicability of 11 structural MRI measures of PD identified in a recent review. Objective: This paper attempts to reproduce (similar data, similar analysis) and replicate (variations in data and analysis) the design of the machine learning (ML) model described in [1] to predict PD progression from T1-weighted MRIs. Methods: We used the Parkinson's Progression Markers Initiative dataset (PPMI, ppmi-info.org) used in [1], and we followed the original methods as closely as possible. We also investigated slight methodological variations in cohort selection, feature extraction, ML model design, and evaluation techniques. Results: The area under the ROC curve (AUC) achieved by our model closely reproducing the original study remained lower than 0.5. Across all tested models, we obtained a peak AUC of 0.685, which is better than chance performance but remained lower than the AUC value of 0.795 reported in [1]. Conclusion: We managed to train a model that predicts disease progression with performance better than chance on a cohort extracted from the PPMI dataset, using methods adapted from [1]. However, the performance of this model remains substantially lower than the one reported in [1]. Our difficulty in reproducing or replicating the original work is likely explained by the relatively low sample size in the original study. We provide recommendations on how to improve the reproducibility of MRI-based ML models of PD in the future.

Copyright belongs to the original authors. Visit the link for more info.

Podcast created by Paper Player, LLC
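As a quick illustration of the metric at stake, here is a sketch with synthetic stand-in data, not the study's PPMI pipeline: when features carry no information about the labels, cross-validated AUC lands near 0.5, which is why the reported 0.685 counts as better than chance while still falling short of the original 0.795.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 30))     # stand-in for white-matter radiomic features
y = rng.integers(0, 2, size=200)   # stand-in progression labels (no real signal)

# With no feature-label relationship, the mean AUC should land near 0.5 (chance)
auc = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                      cv=5, scoring="roc_auc").mean()
print(f"mean 5-fold AUC on pure noise: {auc:.3f}")
```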

Two Psychologists Four Beers
Episode 104: Quantifying the Narrative of Replicable Science

Mar 29, 2023 · 69:18


Yoel and Alexa discuss a recent paper that takes a machine learning approach to estimating the replicability of psychology as a discipline. The researchers' investigation begins with a training process, in which an artificial intelligence model identifies ways that textual descriptions differ for studies that pass versus fail manual replication tests. This model is then applied to a set of 14,126 papers published in six well-known psychology journals over the past 20 years, picking up on the textual markers that it now recognizes as signals of replicable findings. In a mysterious twist, these markers remain hidden in the black box of the algorithm. However, the researchers hand-examine a few markers of their own, testing whether things like subfield, author expertise, and media interest are associated with the replicability of findings. And, as if machine learning models weren't juicy enough, Yoel trolls Alexa with an intro topic hand-selected to infuriate her.
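For readers curious about the general recipe, here is a deliberately toy sketch of the idea, not the paper's actual model or features; the texts and labels below are invented. The pattern: fit a text classifier on studies whose manual-replication outcome is known, then score unseen papers.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: study descriptions labeled by replication outcome (1 = replicated)
texts = [
    "large preregistered sample, main effect replicates across sites",
    "small convenience sample, surprising interaction, p = .049",
    "direct replication with high power confirms original finding",
    "novel priming effect in one lab, marginal significance",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score an unseen (invented) description; the output is an estimated replication probability
print(model.predict_proba(["moderately powered study reports a novel effect"])[0, 1])
```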

Access 2 Perspectives – Conversations. All about Open Science Communication
Reproducibility and replicability in teaching and learning as a road to Open Science - A conversation with Claudia Frick

Mar 20, 2023 · 49:49


Claudia Frick is Professor of Information Services, Science, and Scholarly Communication at TH Köln. After her doctorate in meteorology, she completed a Master's degree in library and information science. Claudia and Jo discuss open science, including concerns around data sensitivity, the relationship between data sets and publications, and the importance of collaboration and metadata. They also touch on the need for researchers to consider the value and accessibility of their work, and the limitations of open science in terms of time and resources.

More details at https://access2perspectives.pubpub.org/pub/a-conversation-with-claudia-frick/

Host: Dr Jo Havemann, ORCID iD 0000-0002-6157-1494
Editing: Ebuka Ezeike
Music: Alex Lustig, produced by Kitty Kat
License: Attribution 4.0 International (CC BY 4.0)

At Access 2 Perspectives, we guide you through your complete research workflow toward state-of-the-art research practices, in full compliance with funding and publishing requirements. Leverage your research projects to higher efficiency and increased collaboration opportunities while fostering your explorative spirit and joy.

Website: https://access2perspectives.pubpub.org

Send in a voice message: https://podcasters.spotify.com/pod/show/access2perspectives/message

Psychology Tidbits
PSYCHOLOGY STUDIES SHOW LOW REPLICABILITY

Mar 6, 2023 · 5:36


FSR Energy & Climate
OneNet series: Products and services definition in support of OneNet (WP2)

Feb 3, 2023 · 3:43


In this podcast, we asked Anastasis Tzoumpas (Ubitech) what has already been achieved with Work Package 2, the main challenges faced, and how OneNet can contribute to harmonising the EU electricity markets. The overall objective of this WP is to set the basis for the work to be done in the OneNet proposal. That is to say, it will look back at the market solutions and digital platforms presented so far in the EU pilot projects, revisit European policy frameworks, summarize their contributions and benefits, and build on this information to sketch the new products and business use cases proposed in the OneNet approach. These products and business use cases will strongly engage consumers in order to maximize the flexibility resources that grid operators can use to meet the clean-energy challenges. The differences among EU markets will be reviewed, and specific priorities for KPIs, scalability, and replicability of OneNet solutions will be devised in order to enable the pan-EU integration of these new services and products digitally procured for system operation. Read more about the OneNet project: https://onenet-project.eu/

Homo Fabulus
Which scientific concept deserves to be better known? 200 intellectuals answer.

Dec 8, 2022 · 21:59


If you could gather 100 of your favorite contemporary intellectuals in one room and ask them a single question, what would it be? Here I propose "Which scientific concept deserves to be better known?", to help you (re)discover the website https://www.edge.org/

The link to help my next book see the light of day: https://fr.ulule.com/livre-homofabulus/ It is only thanks to your support that I can put my genes to work producing videos! If you like this work and want it to continue, raise my fitness on uTip or Tipeee: https://utip.io/homofabulus https://tipeee.com/homofabulus/

You can also buy my book to support me (Grand Prix du livre sur le cerveau 2022!): https://amzn.to/3ytE7kH (the book can also be ordered from any bookstore). Note that I earn less than €2 per copy sold, so your support on uTip will always matter more to me.

A selection of books to read at least once in your life (less pompously: if you like the themes I cover, you will like these books; note that this Amazon link is sponsored, which means I earn a commission if you buy through it, at no extra cost to you): https://amzn.to/3cDhU6i

On Facebook: https://www.facebook.com/H0moFabulus/
On Twitter: https://twitter.com/homofabulus for news strictly related to the channel, and https://twitter.com/stdebove for my personal account, updated more regularly
On Instagram: https://www.instagram.com/stephanedebove/

Music: Epidemic Sound

References (the small numbers shown at the bottom right of the video):
1. Heyer, E. L'odyssée des gènes. Illustrated edition. ISBN 978-2-08-142822-5 (Flammarion, Paris, 2020).
2. Heyer, E. & Mazel, A. La vie secrète des gènes. Illustrated edition. ISBN 978-2-08-028975-9 (Flammarion, Oct. 2022).
3. Dehaene, S. Apprendre ! Les talents du cerveau, le défi des machines. ISBN 978-2-7381-4542-0 (Odile Jacob, Paris, Sept. 2018).
4. Marr, D. & Ullman, S. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. ISBN 978-0-262-51462-0 (The MIT Press, Cambridge, Mass., 1982).
5. Jablonski, N. G. & Chaplin, G. The Evolution of Human Skin Coloration. Journal of Human Evolution 39, 57-106 (2000).
6. Jensen, T. Z. T. et al. A 5700-Year-Old Human Genome and Oral Microbiome from Chewed Birch Pitch. Nature Communications 10, 5520 (Dec. 2019).
7. Forsell, E. et al. Predicting Replication Outcomes in the Many Labs 2 Study. Journal of Economic Psychology 75, 102117 (Dec. 2019).
8. Nosek, B. A. et al. Replicability, Robustness, and Reproducibility in Psychological Science. Annual Review of Psychology 73, 719-748 (2022).

TOK Talk
Replicability: 2023 TOK Essay Title 1

Oct 25, 2022 · 33:18


In this episode, I sat down with Donna Gee (IB Design Technology teacher) and Michael Stewart (IB Psychology and TOK teacher) to unpack and wrestle with 2023 TOK Essay Title 1: Is replicability necessary in the production of knowledge? Discuss with reference to two areas of knowledge. We had a rich discussion that I hope gives you insight into the role and relevance of replicability in different Areas of Knowledge. Links to several of the examples discussed can be found at www.TOKTalk.org

Neural Information Retrieval Talks — Zeta Alpha
Open Pre-Trained Transformer Language Models (OPT): What does it take to train GPT-3?

Jun 16, 2022 · 47:12


Andrew Yates (Assistant Professor at the University of Amsterdam) and Sergi Castella i Sapé discuss the recent "Open Pre-trained Transformer (OPT) Language Models" from Meta AI (formerly Facebook). In this replication work, Meta developed and trained a 175-billion-parameter Transformer very similar to GPT-3 from OpenAI, documenting the process in detail to share their findings with the community. The code, pretrained weights, and logbook are available on their GitHub repository (links below).

Links
❓ Feedback Form: https://scastella.typeform.com/to/rg7a5GfJ

The Interactome
Episode 6: Swab Story (with Dr. Ramy Arnaout, MD, DPhil)

Apr 18, 2022 · 60:09


Join us and special guest Dr. Ramy Arnaout, MD, DPhil for a discussion of how he led his team to resolve a nationwide COVID swab shortage within three weeks through 3D printing and effective cooperation. Quite possibly the coolest episode we've ever done, it's a conversation with takeaways you don't want to miss!

Dr. Arnaout's reflection in the Journal of Clinical Microbiology

02:12 Introducing Dr. Arnaout
06:25 Swab crisis basics, team formation, influences from prior studies of cooperation
25:36 Team size and roles, effective utilization of resources, swab design and 3D printing
34:02 Luck, complex networks, Kevin Bacon
42:53 Replicability beyond crises, importance of institutional support, effective leadership

Stanford Psychology Podcast
25 - Brian Nosek: The Pursuit of Open and Reproducible Science

Dec 23, 2021 · 51:23


Joseph chats with Brian Nosek, co-founder and Executive Director of the Center for Open Science. The Center's mission is to increase the openness, integrity, and reproducibility of scientific research. Brian is also a professor of psychology at the University of Virginia, where he runs the Implicit Social Cognition Lab. Brian studies the gap between values and practices, with the goal of understanding why the gap exists, its consequences, and how to reduce it. Brian co-founded Project Implicit, a collaborative research project that examines implicit cognition - thoughts and attitudes that occur outside our awareness. In 2015, he was named one of Nature's 10 and to the Chronicle of Higher Education Influence list. He won the 2018 Golden Goose Award from the American Association for the Advancement of Science - only the second time a psychologist has won the award. Brian received his PhD from Yale University in 2002.

In this episode, Brian discusses his 2021 Annual Review piece, "Replicability, Robustness, and Reproducibility in Psychological Science"; the paper reflects on the progress and challenges of the science reform movement in the last decade. Brian and Joseph talk about measures researchers and institutions can take to improve research reliability; they also reimagine how we fund and publish studies, share lessons learned from the pandemic, and point to resources for learning more about the reform movement.

Paper: Nosek, B. A., Hardwicke, T. E., Moshontz, H., Allard, A., Corker, K. S., Almenberg, A. D., ... & Vazire, S. (2021). Replicability, robustness, and reproducibility in psychological science.

Accessible preprint: https://psyarxiv.com/ksfvq/

Talk the Talk - a podcast about linguistics, the science of language.
42: Replicability Crisis (with Martine Grice and Bodo Winter)

Dec 1, 2021 · 94:30


The sciences are facing a replicability crisis. Some landmark studies were once considered settled, but then failed when they were retested. So have any linguistic experiments been toppled? And how do we fix this problem? Dr Martine Grice and Dr Bodo Winter have contributed to a special issue of Linguistics, and they join us for this fun episode.

The Gradient Podcast
Peter Henderson on RL Benchmarking, Climate Impacts of AI, and AI for Law

Oct 28, 2021 · 88:42


In episode 14 of The Gradient Podcast, we interview Stanford PhD candidate Peter Henderson.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Peter is a joint JD-PhD student at Stanford University advised by Dan Jurafsky. He is also an Open Philanthropy AI Fellow and a Graduate Student Fellow at the Regulation, Evaluation, and Governance Lab. His research focuses on creating robust decision-making systems, with three main goals: (1) use AI to make governments more efficient and fair; (2) ensure that AI isn't deployed in ways that can harm people; (3) create new ML methods for applications that are beneficial to society.

Links:
  • Reproducibility and Reusability in Deep Reinforcement Learning
  • Benchmark Environments for Multitask Learning in Continuous Domains
  • Reproducibility of Benchmarked Deep Reinforcement Learning Tasks for Continuous Control
  • Deep Reinforcement Learning that Matters
  • Reproducibility and Replicability in Deep Reinforcement Learning (and Other Deep Learning Methods)
  • Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning
  • How blockers can turn into a paper: A retrospective on 'Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning'
  • When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset
  • How US law will evaluate artificial intelligence for Covid-19

Podcast theme: "MusicVAE: Trio 16-bar Sample #2" from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music"

Get full access to The Gradient at thegradientpub.substack.com/subscribe

Casual Inference
Metascience with Noah Haber | Season 3 Episode 4

Oct 25, 2021 · 68:13 · Very Popular


In this episode Lucy D'Agostino McGowan and Ellie Murray chat with Noah Haber about metascience, causal language in the literature, and more!

MinDesign
Episode 08: Fraud, Fabrications, and the Replication Crisis with Dr. Hagai Elkayam Shalem

Sep 2, 2021 · 39:51


This time we hosted Hagai Elkayam Shalem for a special episode devoted entirely to current events in behavioral economics. Hagai is a political psychologist, strategic consultant, and researcher, and he hosts the podcasts "The Spinner: Behind the Scenes of Politics" and "Asur Lehashvot" ("You Mustn't Compare"). He holds a doctorate from the Hebrew University.

What did we talk about? On August 17 (that is, two weeks ago), three researchers, together with several anonymous contributors, exposed fraud and data fabrication in a study about... fraud and fabrication! The post set off an earthquake in the behavioral economics, social psychology, and behavioral science communities. Since then, heated discussions have surrounded the scandal: many articles and news stories have been written, and of course a great deal of noise has been generated online.

To bring some order to the chaos, we put together an episode that serves as a guide for the perplexed to the recent events surrounding the allegations of fabrication in research led by Dan Ariely, to the replication crisis in psychology and in science more broadly, and to the relationship between academia and real-world application. We had the honor of interviewing Dr. Hagai Elkayam Shalem, who was the first to summarize in Hebrew the post that exposed the fraud, and who has since become an emissary and source of knowledge on the subject.

The Insightful Thinkers Podcast
The Replication Crisis

Jul 27, 2021 · 21:35


Replicability is the hallmark of science. Science values replication so much that, as long as a study is sufficiently replicated, the claims it makes are considered valid even if they conflict with accepted theories. We trust scientific findings because experiments repeated under the same conditions produce the same results. Or do they?

https://www.insightfulthinkersmedia.com/

References:
Bausell, R. B. (2021). The Problem with Science: The Reproducibility Crisis and What to Do about It. Oxford University Press.
Fidler, Fiona and John Wilcox, "Reproducibility of Scientific Results", The Stanford Encyclopedia of Philosophy (Summer 2021 Edition), Edward N. Zalta (ed.).
Romero, F. (2019). Philosophy of Science and the Replicability Crisis.

Bedletter
Research Replication & the Fundamental Attribution Error w/ Richard Nisbett

Jun 1, 2021 · 15:18


Continuing the conversation with Richard E. Nisbett! Richard recently released his new book Thinking: A Memoir, in which he details different human reasoning errors, why those errors occur, and how to improve your reasoning. Today is the second part of Richard and Christian's conversation, where they discuss the replicability of psychological research and the Fundamental Attribution Error. Episode 44.

Find out more at cashliman.com
Thinking: A Memoir by Richard Nisbett on Amazon
Follow Christian on Instagram @cashliman
Support the show (https://www.patreon.com/cashliman)

Data & Science with Glen Wright Colopy
Irina Gaynanova | Replicability, Reproducibility, Responsibility, and Optimism for the Future of Science

Apr 27, 2021 · 62:49 · Very Popular


Irina Gaynanova (Texas A&M) describes why she thinks that replicability is a prerequisite for reproducibility in science and how scientists can (personally) start improving the replicability of research. We also discuss how the concepts of replicability/reproducibility can differ according to the domain-specific context and the methods used. Please forward to any students or colleagues who would find this of interest!

Upside Swings
Replicability, Scalability, and the Draft

Apr 14, 2021 · 65:26


The guys mix it up a bit and talk about the philosophy behind the draft. They have some fun disagreements and discussions about the philosophies of replicability and scalability in prospects. Follow the guys on Twitter: @BryceHendrick14, @sportsbydavis, @report_court

Deadset Podcasting
Mistakes Made in Conversational Audio p3/5 - Too Much Editing / Too Little Humanity

Feb 5, 2021 · 21:59


Find DSP: https://www.deadsetpodcasting.com
Support DSP: https://buymeacoffee.com/deadpod

Topic: Too Much Editing / Too Little Humanity.

My editing principle (2011-2017): "Make everyone sound super smart, and cut out any and all verbal disfluencies."
My editing principle (2018-2021): "Edit the show so it doesn't sound edited. Aim for intelligibility, enjoyability, and replicability."

Intelligibility: remove some but not all of the disfluencies.
Enjoyability: have an organic feel and tone, and limit choppy edits.
Replicability: commit to an editing style and amount that can work for months and years, not just weeks.

Episode production chain:
Intro/Outro: Sennheiser MD46 > Scarlett 2i2 G3 > Audition/Auphonic
Segment 1: EarPods
Segment 2: AirPods 2
Segment 3: AirPods Pro

Socials: @joshuacliston on Twitter, Instagram, and Facebook.
Email: hello@deadsetpodcasting.com (use FREE30 in the email subject line to save 30% on your first editing job).
Hire us to edit your show(s): https://www.deadsetpodcasting.com/services

Support this podcast

From Fish To Philosopher
28: Some Thoughts on the Future of Libraries, Journals, Impact Factors, and Replicability | Winter 2016

Sep 4, 2020 · 10:05


This episode acknowledges that some academic libraries are finding it increasingly difficult to meet the costs of current journal subscriptions while also recognizing that many papers that are published defy efforts to replicate their findings because of implausible, unreliable data.

PaperPlayer biorxiv neuroscience
Replicability, repeatability, and long-term reproducibility of cerebellar morphometry

Sep 2, 2020


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.09.02.279786v1?rss=1

Authors: Sörös, P., Wölk, L., Bantel, C., Bräuer, A., Klawonn, F., Witt, K.

Abstract: To identify robust and reproducible methods of cerebellar morphometry that can be used in future large-scale structural MRI studies, we investigated the replicability, repeatability, and long-term reproducibility of three fully automated software tools: FreeSurfer, CERES, and CNN. Replicability was defined as computational replicability, determined by comparing two analyses of the same high-resolution MRI data set performed with identical analysis software and computer hardware. Repeatability was determined by comparing the analyses of two MRI scans of the same participant taken during two independent MRI sessions on the same day for the Kirby-21 study. Long-term reproducibility was assessed by analyzing two MRI scans of the same participant in the longitudinal OASIS-2 study. We determined the percent difference, the image intraclass correlation coefficient, the coefficient of variation, and the intraclass correlation coefficient between two analyses. Our results show that CERES and CNN use stochastic algorithms that result in surprisingly high differences between identical analyses for CNN and small differences for CERES. Changes between two consecutive scans from the Kirby-21 study were less than ±5% in most cases for FreeSurfer and CERES (i.e., demonstrating high repeatability). As expected, long-term reproducibility was lower than repeatability for all software tools. In summary, CERES is an accurate (as demonstrated before) and reproducible tool for fully automated segmentation and parcellation of the cerebellum. We conclude with recommendations for the assessment of replicability, repeatability, and long-term reproducibility in future studies on cerebellar structure.

Copyright belongs to the original authors. Visit the link for more info.
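For orientation, the agreement statistics named in the abstract can be sketched with standard textbook formulas; the volumes below are invented, and this is not the authors' code:

```python
import numpy as np

# Invented cerebellar volumes (cm^3) from two analyses of the same scans
a1 = np.array([12.1, 9.8, 14.3, 11.0])
a2 = np.array([12.3, 9.7, 14.0, 11.4])

pair_mean = (a1 + a2) / 2
percent_diff = 100 * np.abs(a1 - a2) / pair_mean        # percent difference
cv = 100 * (np.abs(a1 - a2) / np.sqrt(2)) / pair_mean   # within-subject CV for paired measures

# One-way random-effects ICC(1,1) with k = 2 measurements per subject
n, k = len(a1), 2
grand = pair_mean.mean()
msb = k * np.sum((pair_mean - grand) ** 2) / (n - 1)    # between-subject mean square
msw = np.sum((a1 - a2) ** 2 / 2) / n                    # within-subject mean square
icc = (msb - msw) / (msb + (k - 1) * msw)

print(percent_diff.round(2), cv.round(2), round(icc, 3))
```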

Research Matters Podcast
Jessica Schleider, PhD, on Open Science and Replicability Practices and Diversity, Equity, and Inclusion in Academia

Aug 14, 2020 · 65:45


Jessica Schleider, PhD, is an assistant professor of clinical psychology at Stony Brook University and a graduate of the Clinical Psychology Program at Harvard University. When in graduate school, she learned about open science – not from her courses but from the Twitter-sphere and later from The Black Goat Podcast. What she learned was compelling and unsettling and kept her up at night as she thought about the state of scientific research in general and her research in particular. Wanting to sleep better, she “made an inner commitment to myself that if I got the chance to build a lab, open science would be part of it from the start… Especially if someone was pursuing a relatively new area of research, I didn’t feel like there was any other way to go about it… The curtain had been pulled up, so I couldn’t trust my own work anymore unless these things were more clearly and rigorously incorporated.”

In today’s episode, Dr. Schleider and I discuss open science principles, how open science differs from run-of-the-mill research, and why it can feel daunting and intimidating to embrace open-science principles. Dr. Schleider is also a strong advocate for diversity, equity, and inclusivity in academia. We discuss the ways academia has traditionally favored those from privileged backgrounds. We also discuss specific steps she has used to ensure that her lab is a safe place for people from underrepresented groups, that opportunities in her lab are clear and transparent, and that a protocol has been set in place should there be any discriminatory behavior or remarks that originate in the lab over which she presides.

In this episode, you’ll learn…
  • How Dr. Schleider stumbled upon open science and the replicability revolution
  • Why she decided to implement open science practices
  • That Dr. Schleider thought she had been doing pre-registration because she had been registering clinical trials
  • How open science pre-registration differs from traditional registrations
  • Where Dr. Schleider registers her studies
  • Why open science can be frustrating to implement
  • Why open science requires a mindset change
  • The stages of registered reports

Tips from the episode

On where to learn about open science…
  • Improve Your Statistical Inferences Coursera course (see link below)
  • The Black Goat Podcast (see link below)

On the differences between regular registration and open science preregistration…
  • Open science preregistration aims to make sure researchers don’t fall into biases, outcome-switch, or p-hack.
  • In open science, when you deviate from the plan, you’re transparent about it.
  • Traditional preregistrations don’t require an analytic plan or explain how the data will be analyzed.

On open science procedures she uses…
  • Always file a preregistration
  • Detail how effect size is computed
  • Streamline the process for double-checking data set preparation and analysis
  • Document code
  • Make all of your work accessible to the public

On leveling the playing field in research and academia…
  • Reconsider the GRE
  • Make admissions more transparent
  • Make education less expensive
  • Formalize opportunities to get involved in research (so that those opportunities are not reserved for those who know to seek and ask for them)

Links from the episode:
  • Daniel Lakens’ Improve Your Statistical Inferences course
  • The Black Goat Podcast
  • Dr. Schleider’s lab
  • Dr. Schleider’s lab manual
  • AsPredicted template
  • Template for pre-registration for beginners (from her lab)
  • Jamovi – an easy-to-use statistics tool built on R
  • Documents to guide those who are considering applying to her lab or grad school in general: How to apply to her lab; Guide to applying to grad school in clinical psychology
  • Find Dr. Schleider on Twitter

Research Matters Podcast is hosted by Jason Luoma, who can be found on Twitter @jasonluoma or Facebook at facebook.com/jasonluomaphd. You can download the podcast through iTunes, Stitcher, or Spotify.

The Black Goat
Does Not Compute

Aug 13, 2020 · 61:37


Scientific journal articles have a lot of numbers. Scientists are smart people with even smarter computers, so an outsider might think that, if nothing else, you can count on the math checking out. But modern data analysis is complicated, and computational reproducibility is far from guaranteed. In this episode, we discuss a recent set of articles published at the journal Cortex. A group of authors set out to replicate an influential 2010 article which claimed that if you reactivate a fear-laden memory, it becomes possible to change the emotional association - something with clear relevance to clinical practice. Along the way, the replicating scientists encountered anomalies which led them to try to reproduce the analyses in the original study - and they discovered that they could not. We talk about what this means for science. What are the implications of knowing that for a nontrivial number of scientific studies, the math doesn't add up? Will a new era of open data and open code be enough to fix the problem? How much will Verification Reports - a new publication format that Cortex has introduced - help with that process? Plus: we answer a letter about swinging for the fences when your dream job comes up but you don't feel ready yet.

Links:
  • The three R's of scientific integrity: Replicability, reproducibility, and robustness, by Robert McIntosh and Chris Chambers
  • The Validity of the Tool "statcheck" in Discovering Statistical Reporting Inconsistencies, by Michèle Nuijten et al.
  • Analytic reproducibility in articles receiving open data badges at Psychological Science: An observational study, by Tom Hardwicke et al.

The Black Goat is hosted by Sanjay Srivastava, Alexa Tullett, and Simine Vazire. Find us on the web at www.theblackgoatpodcast.com, on Twitter at @blackgoatpod, on Facebook at facebook.com/blackgoatpod/, and on Instagram at @blackgoatpod. You can email us at letters@theblackgoatpodcast.com. You can subscribe to us on iTunes or Stitcher.

Our theme music is Peak Beak by Doctor Turtle, available on freemusicarchive.org under a Creative Commons noncommercial attribution license. Our logo was created by Jude Weaver.

This is episode 82. It was recorded on August 10, 2020.
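As a concrete illustration of "the math not adding up", here is a sketch of the kind of consistency check that tools like statcheck automate; the reported result below is invented, and this is not the statcheck tool itself:

```python
from scipy import stats

# Invented reported result: "t(28) = 2.20, p = .01"
t_value, df, reported_p = 2.20, 28, 0.01

# Recompute the two-sided p-value implied by the test statistic and df
recomputed_p = 2 * stats.t.sf(abs(t_value), df)
print(f"recomputed p = {recomputed_p:.3f}")  # ~0.036, inconsistent with the reported .01
```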

Everything Hertz
94: Predicting the replicability of research

Oct 21, 2019 · 58:10


Dan and James chat with Fiona Fidler (University of Melbourne), who is leading the repliCATS project (https://replicats.research.unimelb.edu.au/), which aims to develop accurate techniques for eliciting estimates of the replicability of research. This is also the first time they interview a guest live! Here's what they discuss...
  • The story behind repliCATS
  • Australia's best export, Tim Tams (https://en.wikipedia.org/wiki/Tim_Tam)
  • The SCORE project (https://www.wired.com/story/darpa-wants-to-solve-sciences-replication-crisis-with-robots/) organised by DARPA
  • Can anyone use the repliCATS methodology?
  • Dan, Fiona, and James talk about their honours theses (roughly the Australian equivalent of a Masters)
  • What would a successful repliCATS project look like?
  • What sort of heuristics do people use to assess replicability?
  • The AIMOS conference (https://www.aimos2019conference.com/)
  • The role of replicability in public policy
  • This is Bob Katter (https://www.youtube.com/watch?v=1i739SyCu9I)
  • Should we be keeping the replication crisis behind closed doors?

Other links
  • Dan on Twitter (www.twitter.com/dsquintana)
  • James on Twitter (www.twitter.com/jamesheathers)
  • Everything Hertz on Twitter (www.twitter.com/hertzpodcast)
  • Everything Hertz on Facebook (www.facebook.com/everythinghertzpodcast/)

Music credits: Lee Rosevere (freemusicarchive.org/music/Lee_Rosevere/)

Support us on Patreon (https://www.patreon.com/hertzpodcast) and get bonus stuff!
  • $1 a month or more: monthly newsletter + access to behind-the-scenes photos & video via the Patreon app + the warm feeling you're supporting the show
  • $5 a month or more: all the stuff you get in the $1 tier PLUS a bonus mini episode every month (extras + the bits we couldn't include in our regular episodes)

Episode citation and permanent link
Quintana, D.S., Heathers, J.A.J. (Hosts). (2019, October 21). "Predicting the replicability of research", Everything Hertz [Audio podcast], DOI: 10.17605/OSF.IO/KZPYG. Retrieved from https://osf.io/kzpyg/

Special Guest: Fiona Fidler.

Towards Data Science
Joel Grus - The case against the jupyter notebook

Jul 15, 2019 · 47:32


To most data scientists, the Jupyter notebook is a staple tool: it’s where they learned the ropes, it’s where they go to prototype models or explore their data — basically, it’s the default arena for all their data science work. But Joel Grus isn’t like most data scientists: he’s a former hedge fund manager and former Googler, and the author of Data Science From Scratch. He currently works as a research engineer at the Allen Institute for Artificial Intelligence, and maintains a very active Twitter account. Oh, and he thinks you should stop using Jupyter notebooks. Now.

When you ask him why, he’ll provide many reasons, but a handful really stand out:
  • Hidden state: let’s say you define a variable like a = 1 in the first cell of your notebook. In a later cell, you assign it a new value, say a = 3. This results in fairly predictable behavior as long as you run your notebook in order, from top to bottom. But if you don’t — or worse still, if you run the a = 3 cell and delete it later — it can be hard, or impossible, to know from a simple inspection of the notebook what the true state of your variables is.
  • Replicability: one of the most important things to do to ensure that you’re running repeatable data science experiments is to write robust, modular code. Jupyter notebooks implicitly discourage this, because they’re not designed to be modularized (awkward hacks do allow you to import one notebook into another, but they’re, well, awkward). What’s more, to reproduce another person’s results, you need to first reproduce the environment in which their code was run. Vanilla notebooks don’t give you a good way to do that.
  • Bad for teaching: Jupyter notebooks make it very easy to write terrible tutorials — you know, the kind where you mindlessly hit “shift-enter” a whole bunch of times, and make your computer do a bunch of stuff that you don’t actually understand? It leads to a lot of frustrated learners, or even worse, a lot of beginners who think they understand how to code, but actually don’t.

Overall, Joel’s objections to Jupyter notebooks seem to come in large part from his somewhat philosophical view that data scientists should follow the same set of best practices that any good software engineer would. For instance, Joel stresses the importance of writing unit tests (even for data science code), and is a strong proponent of using type annotations (if you aren’t familiar with those, you should definitely learn about them here). But even Joel thinks Jupyter notebooks have a place in data science: if you’re poking around at a pandas dataframe to do some basic exploratory data analysis, it’s hard to think of a better way to produce helpful plots on the fly than the trusty ol’ Jupyter notebook.

Whatever side of the Jupyter debate you’re on, it’s hard to deny that Joel makes some compelling points. I’m not personally shutting down my Jupyter kernel just yet, but I’m guessing I’ll be firing up my favorite IDE a bit more often in the future.
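To make the hidden-state objection concrete, here is the scenario above written out as notebook cells; a minimal sketch, with the cells marked in comments:

```python
# --- cell 1 ---
a = 1

# --- cell 2 (run once, then deleted from the notebook) ---
a = 3

# --- cell 3 ---
# Prints 3 if cell 2 ever ran in this kernel session, 1 otherwise.
# Once cell 2 is deleted, nothing in the saved .ipynb explains why the
# output shows 3: the state lives in the kernel, not in the file.
print(a)
```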

Brain Buzz
Crisis Alert! Replicability in Science with Dr. Wolf Vanpaemel

Brain Buzz

Play Episode Listen Later Jun 28, 2019 50:55


In Episode 11 we are joined by Dr. Wolf Vanpaemel from the Faculty of Psychology and Educational Sciences at KU Leuven to discuss the crisis of confidence in the scientific community. Wolf shares with us how statistics and scientific replication have led to a crisis of confidence in scientific research, and what this means for scientists, journalists, and the general public. What is the crisis of confidence, and what is the role of replication and statistics in contributing to scientific discourse? Can we ever ‘prove’ anything or should we acknowledge that there is always room for error? How can researchers limit their ‘degrees of freedom’ to make for better science? Is there any reason for optimism or are we doomed? All this and much more in Crisis Alert! Replicability in Science with Dr. Wolf Vanpaemel!
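To make "researcher degrees of freedom" concrete, here is a small simulation (our illustration, not something computed on the show): even when there is no true effect, letting the analyst pick among a few reasonable-looking analyses of the same data pushes the false-positive rate well past the nominal 5%.

```python
import random
import statistics

def t_stat(xs: list[float]) -> float:
    # One-sample t statistic against a true mean of zero.
    m = statistics.mean(xs)
    se = statistics.stdev(xs) / len(xs) ** 0.5
    return m / se

random.seed(1)
crit = 2.06     # approximate two-sided 5% cutoff for these sample sizes
hits, trials = 0, 2000
for _ in range(trials):
    data = [random.gauss(0, 1) for _ in range(40)]  # pure noise: no real effect
    # Three analyst choices on the same null data: full sample,
    # "outliers" trimmed, or stopping data collection early at n = 26.
    candidates = [data, sorted(data)[2:-2], data[:26]]
    if any(abs(t_stat(x)) > crit for x in candidates):
        hits += 1

print(f"False-positive rate with flexible analysis: {hits / trials:.2%}")  # well above 5%
```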

At Play In The Garden of Eden
MHCLG says replicability is key benefit sought as first 16 Local Digital Fund projects announced

At Play In The Garden of Eden

Play Episode Listen Later Dec 21, 2018 28:30


Observers surprised that some of the projects selected for funding under the programme - eg managing missed bins, providing taxi licensing services or dealing with FoI requests - are not at the forefront of innovation are missing a key point. The Fund is not about blue-sky innovation, but rather about ‘fixing the plumbing’ of everyday services by making it easy for all councils to replicate a good digital solution that has been proven elsewhere. Take missed bins. Many councils have worked out good solutions for the particular context, collection practice and supplier set-up that operates in their patch. But such solutions cannot easily be replicated elsewhere, because typically they won't integrate easily with the back-office systems and processes operated by other authorities and the technologies used by their contractors. Local Digital Funding is for creating, testing and documenting solutions in such a way that they can be easily plugged in anywhere, rather than being reinvented endlessly to accommodate 300+ different local set-ups.
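One way to picture what "plugged in anywhere" could mean in practice — this is purely a hypothetical sketch, with names of our own invention rather than anything MHCLG or the funded projects have specified — is shared service logic written once against a small adapter interface, with each council supplying only the adapter for its own back-office system and contractor technology:

```python
from typing import Protocol

class BackOfficeAdapter(Protocol):
    """The only council-specific code: glue to the local back-office system."""

    def create_case(self, address: str, bin_type: str) -> str:
        """Raise a case with the council's contractor; return a case reference."""
        ...

class MissedBinService:
    """Shared, replicable service logic - identical for every council."""

    def __init__(self, adapter: BackOfficeAdapter) -> None:
        self.adapter = adapter

    def report_missed_bin(self, address: str, bin_type: str) -> str:
        # Validation, logging, resident notifications etc. would live here,
        # once, rather than being reinvented for 300+ local set-ups.
        if not address:
            raise ValueError("address is required")
        return self.adapter.create_case(address, bin_type)
```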

Charlie McMahan Leadership Podcast
Leadership Development Tools - Intro to the Shapes

Charlie McMahan Leadership Podcast

Play Episode Listen Later Oct 22, 2018 13:07


Charlie shares the visual tools that are our language of leadership development. These tools are the learning circle, the semi-circle, the triangle, the square, and the matrix of invitation and challenge. Why such tools? Simply put, they're easily replicable: you can teach others, who can then teach others, and so on. Replicability, discipleship, is the core of being a follower of Jesus.

ReproducibiliTea Podcast
Episode 4 - Reproducibility now

ReproducibiliTea Podcast

Play Episode Listen Later Jul 31, 2018 42:53


Episode 4 - Reproducibility Now

This week we dive into the Open Science Collaboration’s (2015) paper “Estimating the reproducibility of psychological science” http://science.sciencemag.org/content/349/6251/aac4716

Highlights:
[1:00] This paper has all of the authors
[1:30] Direct vs conceptual replications
[4:30] PhD students running replications as the basis of extending a paradigm
[6:00] The 100 studies paper methods in brief
[8:00] Everything’s available for this collaborative effort, and that is awesome (https://osf.io/ezcuj)
[9:00] Reproducibility vs replicability - what are we actually talking about?
[9:30] Oxford summer school in reproducibility (https://www.eventbrite.co.uk/e/oxford-reproducibility-school-tickets-48405892327)
[11:00] A paper discussing the computational reproducibility of papers
[15:00] Replication is not only about the p value, folks!
[17:30] Sam brings up Bayes purely to be a douchebag
[19:30] A Bayesian approach - Sophia gives us the paper (http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0149794) and we move on
[20:00] Replications as a method to diagnose problems in science. Are replications a viable problem solver?
[24:00] Psychology is only a teenager, really
[26:00] If the original paper is trash, there’s probably no need to replicate it. Maybe just burn it down?
[27:00] Figure 1 - the average effect size halved in the replication attempt and most effects did not replicate
[31:30] Do the results hint at more than publication bias? Are other QRPs involved?
[33:00] Comparing reproducibility across subfields of psychology. But are these studies representative of an entire subfield?
[35:30] Does journal impact factor mean anything?
[39:30] Are we actually being critical of previous research in general?
[41:00] “Our foundations have as many holes as a Swiss cheese”

Music credit: Kevin MacLeod - Funkeriffic freepd.com/misc.php
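The effect-size shrinkage at [27:00] is what you would expect when original studies are selected for statistical significance. A quick simulation (purely illustrative numbers, not the OSC data) shows the mechanism:

```python
import random
random.seed(0)

TRUE_D, N = 0.3, 30        # assumed true effect and per-group sample size
SE = (2 / N) ** 0.5        # approx. standard error of a standardized effect
THRESHOLD = 1.96 * SE      # originals must be "significant" to get published

def observed_effect() -> float:
    """One noisy estimate of the effect from a single study."""
    return random.gauss(TRUE_D, SE)

published, replications = [], []
while len(published) < 5000:
    d_orig = observed_effect()
    if d_orig > THRESHOLD:                      # selection on significance
        published.append(d_orig)
        replications.append(observed_effect())  # unbiased re-run, no filter

print(f"mean published effect:   {sum(published) / len(published):.2f}")
print(f"mean replication effect: {sum(replications) / len(replications):.2f}")
```

With these (arbitrary) settings the replication effects come out at roughly half the published ones, echoing the pattern in the paper's Figure 1, even though every replication here is a perfectly faithful re-run.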

Conversations for Research Rockstars
Research Replicability: Lessons from Amy Cuddy

Conversations for Research Rockstars

Play Episode Listen Later Nov 6, 2017 21:29


Social psychologist Amy Cuddy’s research on "power poses" was questioned because other researchers were unable to replicate the results. Do perceptions of “bad science” in other social science fields cast a negative halo on perceived market research reliability? What are the risk factors in survey research that reduce likely replicability and validity? As a profession, could we be doing a better job of helping clients understand survey research reliability? A conversation with Kathryn Korostoff and special guest, Jeffrey Henning of Researchscape International. Stay tuned for our newest conversations on market research by subscribing to this podcast or to our YouTube channel: https://goo.gl/6HxkpW Thanks for listening!        

Nourish Balance Thrive
A Ketogenic Diet Extends Longevity and Healthspan in Adult Mice

Nourish Balance Thrive

Play Episode Listen Later Oct 27, 2017 56:10


Our Scientific Director Megan Hall (née Roberts) recently had some of the work from her Master’s degree published in the journal Cell Metabolism, which is seriously impressive. The paper appeared on Science Daily, and generally caused a bit of a stir in the low carb community. As we have direct access to the horse’s mouth, I’ve asked Megan to join me in this episode of the podcast to summarise the findings and give some thoughts on how it might relate to human health.

Here’s the outline of this interview with Megan Hall:
[00:00:55] Mastermind Talks.
[00:01:47] The lead up to the study.
[00:02:17] Time-restricted feeding.
[00:02:38] Are they eating longer because of a less crappy diet?
[00:04:21] Calorie restriction was the focus of Megan's lab.
[00:05:27] Stephen Phinney, MD, PhD and Jon Ramsey, PhD.
[00:06:13] Study design.
[00:07:36] High-fat diets in rodents.
[00:08:39] Two arms: longevity and healthspan.
[00:10:55] Grip strength in a rodent.
[00:11:40] Novel object test.
[00:12:55] fMRI for body composition using the EchoMRI.
[00:13:13] The results. Study: Roberts, Megan N., et al. "A Ketogenic Diet Extends Longevity and Healthspan in Adult Mice." Cell Metabolism 26.3 (2017): 539-546.
[00:15:40] Valter Longo, PhD and USC Longevity Institute. Studies: Brandhorst, Sebastian, et al. "A periodic diet that mimics fasting promotes multi-system regeneration, enhanced cognitive performance, and healthspan." Cell Metabolism 22.1 (2015): 86-99 and Wei, Min, et al. "Fasting-mimicking diet and markers/risk factors for aging, diabetes, cancer, and cardiovascular disease." Science Translational Medicine 9.377 (2017): eaai8700.
[00:16:27] Study: Sleiman, Sama F., et al. "Exercise promotes the expression of brain derived neurotrophic factor (BDNF) through the action of the ketone body β-hydroxybutyrate." Elife 5 (2016): e15092.
[00:17:34] Motor function and coordination.
[00:18:58] The importance of preserving type IIA muscle fibers. Podcasts: The Most Reliable Way to Lose Weight with Dr Tommy Wood and The High-Performance Athlete with Drs Tommy Wood and Andy Galpin.
[00:19:18] Study: Zou, Xiaoting, et al. "Acetoacetate accelerates muscle regeneration and ameliorates muscular dystrophy in mice." Journal of Biological Chemistry 291.5 (2016): 2181-2195.
[00:20:04] Exercise performance.
[00:21:13] Physiologic insulin resistance.
[00:22:06] Podcast: Real Food for Gestational Diabetes with Lily Nichols.
[00:24:21] Keto vs low-carb.
[00:27:05] Studies: β-Hydroxybutyrate: A Signaling Metabolite and Ketone bodies as signalling metabolites.
[00:27:49] YouTube: Histone deacetylation and inhibition.
[00:29:19] I mentioned the Khan Academy, but in the end Megan liked these videos on HDAC inhibitors and cancer and Histone deacetylation and inhibition (also mentions p53!).
[00:30:49] FOXO proteins.
[00:31:30] Lysine residues.
[00:31:48] Mn SOD.
[00:32:10] mTOR, Dr. Ron Rosedale.
[00:34:04] REDD1 protein.
[00:34:32] P53 protein, metformin.
[00:35:30] Less cancer in KD mice.
[00:36:00] Warburg Effect.
[00:36:21] Replicability.
[00:36:57] Study: Newman, John C., et al. "Ketogenic Diet Reduces Midlife Mortality and Improves Memory in Aging Mice." Cell Metabolism 26.3 (2017): 547-557.
[00:38:28] Press coverage of the study, “Eat Fat, Live Longer” at Sciencedaily.com.
[00:41:01] Soybean oil in rodent diets.
[00:41:34] Sex-dependent differences.
[00:43:23] Takeaways.
[00:44:21] Dogma displacement inertia.
[00:45:19] Exogenous ketones. Study: Stubbs, Brianna Jane, et al. "On the metabolism of exogenous ketones in humans." Frontiers in Physiology 8 (2017): 848.
[00:46:34] What does this mean for humans?
[00:47:42] Weight loss.
[00:48:36] Micromanaging the details.
[00:50:33] Who are you and what are your goals -- Robb Wolf. Podcast: Wired to Eat with Robb Wolf.
[00:51:55] Nourish Balance Thrive Highlights Series sign up.
[00:53:11] Megan's purpose.
[00:53:39] Book: Find Your Why: A Practical Guide for Discovering Purpose for You and Your Team by Simon Sinek and David Mead.

NLP Highlights
35 - Replicability Analysis for Natural Language Processing, with Roi Reichart

NLP Highlights

Play Episode Listen Later Oct 19, 2017 31:07


TACL 2017 paper by Rotem Dror, Gili Baumer, Marina Bogomolov, and Roi Reichart. Roi comes on to talk to us about how to make better statistical comparisons between two methods when there are multiple datasets in the comparison. This paper shows that there are more powerful methods available than the occasionally used Bonferroni correction, and using the better methods can let you make stronger, statistically valid conclusions. We also talk a bit about how the assumptions you make about your data can affect the statistical tests that you perform, and briefly mention other issues in replicability / reproducibility, like training variance. https://www.semanticscholar.org/paper/Replicability-Analysis-for-Natural-Language-Proces-Dror-Baumer/fa5129ab6fd85f8ff590f9cc8a39139e9dfa8aa2
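For a flavour of why the correction method matters, here is a minimal sketch (not code from the paper) comparing the Bonferroni correction with the Holm step-down procedure on the same per-dataset p-values; Holm controls the same family-wise error rate but never rejects fewer hypotheses:

```python
def bonferroni(p_values: list[float], alpha: float = 0.05) -> list[bool]:
    """Reject H0 wherever p <= alpha / m; simple but conservative."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

def holm(p_values: list[float], alpha: float = 0.05) -> list[bool]:
    """Holm step-down: test sorted p-values against increasingly lenient
    thresholds, stopping at the first failure."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break
    return reject

# Hypothetical p-values for "method A beats method B" on five datasets:
ps = [0.001, 0.012, 0.021, 0.04, 0.3]
print(bonferroni(ps))  # [True, False, False, False, False]
print(holm(ps))        # [True, True, False, False, False]
```

On these made-up numbers, Holm lets you claim a statistically valid improvement on two datasets where Bonferroni supports only one — the kind of extra power the paper's more sophisticated replicability analysis is after.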

Mindwise Podcast
The Replicability Crisis - Maarten Derksen

Mindwise Podcast

Play Episode Listen Later Oct 18, 2017 22:17


In this podcast, we talk to Maarten Derksen about the Replicability Crisis in Psychology based on a lecture in the Controversies in Psychology course. We touch on the pros and cons of direct versus conceptual replications and the ethical principles of conducting replications, but also on bullying in academia and the impact it has on the way science is done.

OpenAnesthesia Multimedia
Article of the Month - July 2016 - Thomas Vetter

OpenAnesthesia Multimedia

Play Episode Listen Later Jun 20, 2016 19:32


Replicability, Reproducibility, and Fragility of Research Findings--Ultimately, Caveat Emptor

Archaeology Conferences
0029 - CAA2016 - Nick Waber

Archaeology Conferences

Play Episode Listen Later May 9, 2016 8:48


Wireless Lithics: An Open Hardware Approach to Stroke Quantification and Replicability in Lithic Use-wear Experiments

Law Technology Now
Crash or Soar? Predictive Coding

Law Technology Now

Play Episode Listen Later Oct 5, 2010 21:08


In this October edition of Law Technology Now, host Monica Bay chats with Anne Kershaw, principal of A. Kershaw Attorneys & Consultants and co-founder of the eDiscovery Institute, and Joseph Howie, principal of Howie Consulting and EDI’s director of metrics development and communications. Kershaw and Howie are co-authors of Law Technology News’ October cover story, "Crash or Soar," and they discuss how predictive coding (using computers with some guidance from lawyers) can streamline document review and cut costs.