Dr. Moria Levy is one of the foremost thought leaders driving the knowledge management (KM) discipline forward. In 1998, Dr. Levy established ROM Knowledgeware, a firm specializing in KM. ROM grew into one of the largest KM firms worldwide, today with 40 employees serving KM organizations around the globe. Moria didn't settle for owning and leading a large KM firm, exceptional in its size in this evolving market. She pushed forward to develop new methodologies addressing intranets, lessons learned, knowledge retention, collaborative knowledge development, and more. These methodologies, grounded in fieldwork, later became the basis for her research papers and books, which now have over 1,000 citations. In 2017, Moria Levy was included in the Journal Impact Factor listing of Thomson Reuters. Dr. Levy led the first worldwide KM standardization initiative: the Israeli KM standard SII25006. This standard was published in 2012 and was the foundation on which KM in Israel later developed. In 2015 she was chosen by the International Organization for Standardization (ISO) to lead a global team of experts in developing a KM standard. The result was ISO 30401 – Knowledge management systems, an agreed-upon, comprehensive compass for knowledge management excellence. https://www.youtube.com/playlist?list=PLtUccSCFMDzVh1D0pS143uh7Jk1rl893U #knowledgemanagement #knowledgeretention #KiSure #ISO30401 #KMGN
Those who do research must publish, and academic publishers profit handsomely from it. And not everything runs smoothly in quality assurance. Whenever a study was publicly criticized during the coronavirus pandemic, it caused a stir. Behind this lies a complex system: it is about quality assurance, but also about a lot of money, because those who do research must publish. Before a journal accepts an article, it is reviewed by other scientists. But how does that actually work? Why is no fee paid for it, and why is it anonymous? In conversation with host Lucie Kluth, Yasmin Appelhans explains why some academic publishers have a larger profit margin than Amazon and what that means for access to knowledge. Drawing on her own time as a marine biologist, Appelhans recounts her experiences with the mysterious "Reviewer No. 2" - and she brings along music on the topic.
THE BACKGROUND INFORMATION
• Song parody of Bohemian Rhapsody: Bohemian Rhapsody (aka ‘The tale of a Post Doc') - Laboratory Parody. 2013. https://www.youtube.com/watch?v=Q1YIYx8VBkI [Accessed 6 December 2022].
• What review articles and meta-analyses can accomplish, using medicine as an example: Ressing M, Blettner M, Klug SJ. Systematische Übersichtsarbeiten und Metaanalysen. Deutsches Ärzteblatt. 2009;106(27): 456–463. https://www.aerzteblatt.de/archiv/65225/Systematische-Uebersichtsarbeiten-und-Metaanalysen
• Opinion articles in science: Goh HH, Bourne P. Ten simple rules for writing scientific op-ed articles. PLOS Computational Biology. 2020;16(9): e1008187. https://doi.org/10.1371/journal.pcbi.1008187
• How the Journal Impact Factor is calculated: DeGroote S. Subject and Course Guides: Measuring Your Impact: Impact Factor, Citation Analysis, and other Metrics: Journal Impact Factor (IF). https://researchguides.uic.edu/if/impact [Accessed 4 January 2023].
• How the h-index is calculated: Universität Zürich UZH. What is your h-index?. Blog der Hauptbibliothek. https://www.uzh.ch/blog/hbz/2018/05/29/what-is-your-h-index/ [Accessed 4 January 2023].
• Reasons why many medical studies are never published: Song F, Loke Y, Hooper L. Why Are Medical and Health-Related Studies Not Being Published? A Systematic Review of Reasons Given by Investigators. PLOS ONE. 2014;9(10): e110418. https://doi.org/10.1371/journal.pone.0110418.
• Unconscious bias in the peer-review system: Kuehn BM. Rooting out bias. eLife. 2017;6: e32014. https://doi.org/10.7554/eLife.32014.
• The history of the scientific peer-review process: Spier R. The history of the peer-review process. Trends in Biotechnology. 2002;20(8): 357–358. https://doi.org/10.1016/S0167-7799(02)01985-6.
• How scholarly publishing has evolved: Bargheer M. Historische Umbrüche im wissenschaftlichen Publikationswesen und ihr Widerhall in heutigen Techniken. In: Lackner K, Schilhan L, Kaier C (eds.) Publikationsberatung an Universitäten. 1st ed. Bielefeld, Germany: transcript Verlag; 2020. p. 21–52. https://doi.org/10.14361/9783839450727-003. [Accessed 2 December 2022].
• Podcast of the ZBW - Leibniz Information Centre for Economics: Podcast der ZBW. https://podcast.zbw.eu/ [Accessed 7 December 2022].
• Elsevier's 2021 revenue: Online iXBRL Viewer. https://reports.relx.com/2021/esef-ar-uk/549300WSX3VBUFFJOO66-2021-12-31-Viewer.html [Accessed 7 December 2022].
• Open-access fees deter researchers from the global south: Kwon D. Open-access publishing fees deter researchers in the global south. Nature. 2022; https://doi.org/10.1038/d41586-022-00342-w.
• List of diamond open access journals in Germany: Bruns A, Taubert NC, Cakir Y, Kaya S, Beidaghi S. Diamond Open Access Journals Germany (DOAG). 2022; https://pub.uni-bielefeld.de/record/2963331
• Position paper of the German Research Foundation (DFG) on scholarly publishing: Deutsche Forschungsgemeinschaft | AG Publikationswesen. Wissenschaftliches Publizieren als Grundlage und Gestaltungsfeld der Wissenschaftsbewertung. May 2022. https://zenodo.org/record/6538163 [Accessed 4 January 2023].
Traditionally, the Journal Impact Factor has been the way to measure the impact of one's scientific articles. However, social media platforms are increasingly being used to disseminate publications. Join the faculty as they debate novel metrics for measuring the impact of articles.
Faculty:
Prof. Jesus Banales (Moderator)
Dr Atoosa Rabiee (Faculty)
Dr Elliot Tapper (Faculty)
Dr Juan Turnes (Faculty)
Related episode: New era for science dissemination: The role of social media
All EASL Studio Podcasts are available on EASL Campus.
Join host Florence Theberge for the second part of a two-part series as she speaks to experts about responsible research assessment, open research, DORA, the Leiden Manifesto, bibliometrics, the Journal Impact Factor, citations, and the "Are you in?" Impact Manifesto. https://www.emeraldgrouppublishing.com/
Join host Florence Theberge for the first part of a two-part series as she speaks to experts about responsible research assessment, open research, DORA, the Leiden Manifesto, bibliometrics, the Journal Impact Factor, citations, and the "Are you in?" Impact Manifesto. https://www.emeraldgrouppublishing.com/
Duane Peters from the Lupus Foundation of America interviews the Lupus Science & Medicine Editors-in-Chief, Professor Jill Buyon and Professor Ronald van Vollenhoven. They discuss the recent release of LSM's first Journal Impact Factor™, explaining how it has been calculated, and what it means for the journal, its authors and readers. Access all the LSM journal metrics: https://lupus.bmj.com/pages/top-cited-articles/ and https://lupus.bmj.com/pages/about/.
Thank you for listening to parts 1 and 2 of this podcast. In this third part, I describe the natural flow found in the research stage itself, especially publication. The key is simply whether or not you are willing to write. Choosing the research problem and the target journal can be easy, as long as you are actually willing to write. To be safe, publish at least two of your articles in international journals based outside Southeast Asia (again, to be safe) that have a Journal Impact Factor of at least 0.5 or are in the first quartile of Scimago. There are more journals in Scimago's first quartile than journals with a JIF >= 0.5, and the Scimago list is freely accessible without a paid account, unlike the JIF. Once again, applying for promotion is a heavily administrative process, so don't get too bogged down in substantive matters; that will only keep you from writing. That concludes this podcast. Happy listening. I hope it helps.
Can science be truly open if it doesn't allow for all perspectives? In this episode we talk diversity and inclusion with associate professor at the UMC Utrecht Gönül Dilaver. Also, we catch up on the news and talk education with Gönül.
Shownotes:
- Interview with Paul Boselie on not using the Journal Impact Factor: https://www.nature.com/articles/d41586-021-01759-5?fbclid=IwAR14xtmGL5yCfesD5ziYSbkdrB99lPcDidEAl1e4sVZDUWaodj0d9JNofag
- Ouvrir la Science, the French government's bold Open Science Plan: https://www.ouvrirlascience.fr/second-national-plan-for-open-science/
- Two introductory open science workshops now available for all UU & UMCU groups: https://www.uu.nl/en/news/university-library-helps-research-groups-to-get-started-with-open-science
This episode marks the official end of the second year of this podcast! (Unfortunately, there was still no present for Bart - consider becoming a Patron to help!) Apart from the plans for year 3, Bart & Dennis discussed the hot topic of the week: a provocative tweet by Richard Dawkins on eugenics, and the dos and don'ts, and pros and cons, of university rankings. Listen to the Full Conversation on Patreon!
AN AMBIGUOUS(?) TWEET
Richard Dawkins (a famous evolutionary biologist and member of the Royal Society) tweeted that, while he deplores eugenic practice, "it" would still "work", as "it" works in farm animals and pets - as if breeding animals were the same as eugenics. https://twitter.com/RichardDawkins/status/1229060502984306689 https://twitter.com/RichardDawkins/status/1229083369641824266 His tweet - as you have probably guessed - led to a heated debate on eugenics. And geneticists joined the uproar, arguing that breeding would not even work in humans: all inbreeding (animals or crops) leads to genetic weakness - just look at pugs! Or the Lannister family! Bart acknowledges that certain opinions based on science are offensive to certain groups. But how could we communicate such opinions, especially on a platform like Twitter? Regularly, the lives of people at the center of outrage are affected negatively. In the end, you can't separate a term like eugenics from its fascist ideology. And if you make divisive tweets, you must expect a strong reaction.
EVALUATING UNIVERSITIES
Bart and Dennis compare two very distinct ways of ranking universities, and they discuss whether such a ranking is useful at all - or possibly even harmful. On the one hand, there is the "Nature Index" by Nature Publishing Group. It ranks institutes based on the number of papers published in a selected set of journals and by how often these papers are shared with authors from other institutes (as a measure of collaboration).
Dennis points out that the journals considered are subjectively selected by an undisclosed number of scientists with undisclosed affiliations. The selection is further restricted to journals listed on Web of Science, a database that excludes many journals described as "local" - mostly journals that don't publish in English, or simply aren't "Western". It also comes as no surprise that more than 20% of the journals considered by the Nature Index are publications of Nature Publishing Group. At least they didn't use the Journal Impact Factor, you might say - but is this any better? On the other hand, there is the NGO "CHE" (Centrum für Hochschulentwicklung, which Dennis would translate as Centre for the Development of Higher Education), which ranks universities in German-speaking areas. Their criteria are very different: they look, for example, at the job prospects of graduates, the facilities at the university (gym, library, dorm), and even at the town and the general quality of life. High school graduates looking for a place to live and study are certainly better served by a CHE-style ranking than by the publication-based Nature Index. Having more research funding may not reflect the quality of life and education. Bart argues that he never even used a ranking system to choose where to study - he went for the best party location! And, on a more serious note, Bart and Dennis agreed that rankings in general may not only serve as an indicator of how an institution could improve; they could also give unhelpful incentives to publish in certain journals and intensify "money makes money" funding biases.
PLANS FOR YEAR 3
As per usual, the interview episodes on big science, science communication, and academia will of course continue. Interviews are recorded - or at least planned - up until summer! For the talk episodes, we will try something new: "Bart & Dennis Against Humanities".
Both biologists with PhDs in neuroscience, they want to explore the strange world of the Humanitie...
Who gets positions and funding in academia should depend on the merit of the researcher, project, or institute. But how do we assess these merits fairly, meaningfully, and in a way that makes them comparable? I talked about metrics with Steffen Lemke, PhD student at the Leibniz Information Centre for Economics (ZBW) in Kiel, Germany. He is part of the *metrics project, which investigates new research metrics and their applicability. The project is funded by the German Research Foundation (DFG). Listen to the Full Conversation on Patreon!
Citation Based Metrics
In episode 9 I talked with Björn Brembs about the most prevalent metric in use: the Journal Impact Factor. It turns out that the "JIF" is not a good metric. Another commonly used metric is the "h-index". Like the JIF, it is based on citations - the number of times a scientific paper is mentioned in other scientific papers. But it aims to measure the output of a researcher rather than of a journal. Both the h-index and the JIF have their own specific disadvantages, but they also share problems due to the source of data they use: citation indices. Citations are slow to accrue, which means it takes time to build enough data for proper evaluation. The indices are also incomplete and mostly locked behind paywalls. And finally, they are focused solely on journal articles. But peer-reviewed research articles aren't the only output scientists generate. The social sciences in particular often publish in other formats, like books and monographs. STEM researchers, too, often create other outputs, such as designs for experimental setups, or code. Finally, citation-based metrics focus solely on communication between scientists, not with the public.
Altmetrics
New, alternative metrics aim to change all that. "Altmetrics" is an umbrella term for a range of still-experimental metrics. They use data one can find openly on the internet. This makes them fast and diverse.
They look, for example, at the dissemination of research articles on social media. But they also look at the download numbers of open repositories for code, lecture videos, presentation slides, and other resources. In this way they may cover any research product you can find on the internet. Whether a metric predicts scientific impact (citations) fast and well can be tested. So far it appears that data from online reference managers predict citations well: you don't need to wait for citing authors to write and publish their own papers, you just look at whether they bookmarked your paper for later use. An obvious disadvantage of altmetrics is that they can be gamed. One can buy services from social media providers to advertise posts. Or one can use bots to amplify the impact on social media, or download files thousands of times. Soberingly, researchers found that altmetrics do not cover the humanities and social sciences sufficiently: less than 12% of the research output from these fields showed up in the altmetrics tested.
Social Media Use
Steffen Lemke and his co-authors asked why there is so little representation of the social sciences on social media. Surprisingly, while social scientists usually justify their work by its relevance to the public, they see interacting with the public on social media as a waste of time. Some answered in the survey that they would be overwhelmed by information; it was hard to tell the quality of information on the internet. Others said they would not be seen as serious if their supervisors caught them using social media, even for work.
Metric-Wiseness
In Steffen's article you will find the interesting term "metric-wiseness". Coined by a different research group, it describes researchers' knowledge about metrics and their ability to understand their meaning and applicability. In their research surveys, the *metrics project asks researchers about their knowledge of metrics. Even very junior researchers know about the JIF.
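For readers unfamiliar with the h-index mentioned above: it is the largest number h such that a researcher has h papers each cited at least h times. A minimal sketch of that definition, using invented citation counts purely for illustration:

```python
def h_index(citations):
    """Largest h such that the researcher has h papers with >= h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # paper at this rank still has enough citations
        else:
            break
    return h

# Invented citation counts for seven papers, most cited rarely:
papers = [25, 8, 5, 3, 3, 1, 0]
print(h_index(papers))  # prints 3: three papers have at least 3 citations each
```

Note how the single highly cited paper (25 citations) barely moves the h-index - one reason it behaves differently from citation averages like the JIF.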
What is the Journal Impact Factor, and what does it say about the quality of a scientific paper?
In this week's episode of the Recovering Academic podcast we talk with Dr. Dennis Eckmeier about his journey outside academia and his current advocacy projects, including the Science for Progress podcast and its rotating Twitter account @SfPRocur. This was a joint episode between the Recovering Academic and Science for Progress podcasts, so we all discuss our reasons for leaving academia - and realized Amanda was the only one of us who never did any experiments in the dark!
"There's always the transition period, but after you decide that you're going to do it, it feels good." @Doctor_PMS
One of the challenges Dennis is facing is that, although advocacy is supposed to be free, he is still trying to find ways to proceed with it and make money from it.
"Some people make it sound like networking is another skill, like learning to act, but I've learned that it's not like that." @DennisEckmeier
The goal of the Science for Progress podcast is to explain how academia works to people who are not academics - sort of the opposite of our Recovering Academic podcast, which tries to show what PhDs can do outside academia. You can contact Dennis through his webpage or his Twitter account: @DennisEckmeier
Mentioned in this podcast:
Marcha pela ciencia // March for Science Lisbon
EAT CHEESE LIVE FOREVER
This Is Why The 2018 Nobel Prize In Physics, For Lasers, Is So Important
Science Magazine article: Sunshine outside the ivory tower
Science for Progress podcast: The Journal Impact Factor: how (not) to evaluate researchers – with Björn Brembs
Just some announcements this time. In contrast to what was promised in the last podcast episode, we don't have a full question and answer episode this time. I hope this will not happen too often in future.
Dennis is a freelancer now. First thing: I quit my postdoctoral fellowship to become a freelancer. You can see how I approach this on my website. Basically, I want to offer my skills and expertise in scholarship and neuroscience to help people with their academic writing, be it papers or funding applications. This means that I am currently a bit low on finances, which of course makes financing Science for Progress more difficult. More about how you can help me with that further down.
Science for Progress News
New volunteers for @sfprocur: Susan Leemburg and Katharina Hennig are now helping me to find curators and manage the schedule.
Looking for a Facebook page moderator: I have not given our Facebook page the love it deserves. So I am looking for someone who would share relevant articles on there and in general keep it lively. If you are interested, send me an email at socialadmin@scienceforprogress.eu
Inviting opinion pieces: I hope you noticed that I made some design changes to make these blog posts more pleasing to the eye, in particular for longer reads. This is because I want to invite writers to publish opinion pieces with us. Sadly, I cannot pay for such articles. I would really like to commission pieces from professional writers, but I simply can't. So if you are thinking about contributing an article despite that, make sure it includes some promotion for yourself or your own project.
More podcast episodes! I want to add some more discussion to the podcast. But because the interview episodes are usually already pretty dense in information and easily fill 30 minutes, I don't want to add this discussion to the interview. It is also good to have some time between interview and discussion so I can gather some feedback from you, our listeners.
So we will alternate interviews and 'Q&A episodes', in which we will talk about some news and what is going on in Science for Progress, and then discuss the previous interview. This format should also be about 30 minutes in duration. This also means that we are moving from an episode every three weeks to an episode every two weeks.
Feedback
"Intersting interview with @brembs about journal impact factors- for people who know about the issues always interesting, for those who don’t even more important! #science #WhatScienceIsImpacting https://t.co/RnQwpajLc5" — Simon Sprecher (@simon_sprecher) 17 September 2018
This is a great comment! Being interesting to people who know about the issue while being important to those who don't is pretty much the sweet spot where I want the podcast to be. I hope there will be many more episodes receiving praise like this!
"I'm listening to @SciForProgress podcast on impact factor. Everyone should listen at least to the 1st 5 minutes of it. When they say this is known: I did not know! And I've been doing science for 10 years now." — Science is not Glamorous (@Science_glamour) 29 September 2018
I have been thinking the exact same thing! I knew things weren't 100% correct with the Journal Impact Factor, but I didn't know about the details, either! When Björn Brembs says 'it is known' he doesn't mean everybody is aware, but that the information is openly available if you look for it. Which I think most of us don't!
"Well, its wonderfull. As authentic and on spot as everything in the project" — Zé (@93Antidote93) 13 September 2018
What more can I say than that this warms my heart. :D
BECOMING A PATREON COMMUNITY!
Become a Patron! As I mentioned further up, I currently do not have a steady income and it may take a while to get there. This is why I need your help to continue investing my time and money into improving and growing Science for Progress activities.
What is the Journal Impact Factor? The Journal Impact Factor is widely used as a tool to evaluate studies and researchers. It supposedly measures the quality of a journal by scoring how many citations an average article in that journal achieves. Committees making hiring and funding decisions use the 'JIF' as an approximation for the quality of the work a researcher has published, and by extension as an approximation for the capabilities of an applicant. Listen to the Full Conversation on Patreon!
JIF as a measure of researcher merit
I find this practice already highly questionable. First of all, the formula calculates a statistical mean. However, no article can receive fewer than 0 citations, while there is no upper limit to citations. Most articles - across all journals - receive only very few citations, and only a few receive a lot. This means we have a 'skewed distribution' when we plot how many papers received how many citations, and the statistical mean is a poor summary of a skewed distribution. Moreover, basic statistics and probability tell us that if you blindly choose one paper from a journal, it is impossible to predict - or even roughly estimate - its quality from the average citation rate alone. It is further impossible to know the author's actual contribution to said paper. Thus, we are already stacking three statistical fallacies by applying the JIF to evaluate researchers. But this is just the beginning! Journals don't have an interest in the Journal Impact Factor as a tool for science evaluation; their interest is in its advertising effect. As we learn from our guest, Dr. Björn Brembs (professor for neurogenetics at the University of Regensburg), journals negotiate with the private company Clarivate Analytics (formerly Thomson Reuters), which provides the numbers. Larger publishers especially have a lot of room to influence the numbers above and below the division line in their favor. Reputation is not quality.
There is one thing the Journal Impact Factor can tell us: how good the reputation of the journal is among researchers. But does that really mean anything substantial? Björn Brembs reviewed a large body of studies that compared different measures of scientific rigor with the impact factor of journals. He finds that in most research fields the impact factor doesn't tell you anything about the quality of the work. In some fields it may even be a predictor of unreliable science! This reflects the tendency of high-ranking journals to prefer novelty over quality.
How does this affect science and academia?
The JIF is omnipresent. A CV (the academic resume) is not only judged by the names of the journals in a publication list. Another factor is the funding a researcher has been able to get - but funding committees may also use the JIF to evaluate whether an applicant is worthy of funding. Another point on a CV is the reputation of the advisers, who were themselves evaluated by their publications and funding. Yet another is the reputation of the institute one worked at, which is to some degree evaluated by the publications and funding of its principal investigators. It is easy to see how this puts a lot of power into the hands of the editors of high-ranking journals. Björn Brembs is concerned about the probable effect this has on the quality of science overall. If the ability to woo editors and write persuasive stories leads to more success than rigorous science, researchers will behave accordingly - and they will also teach their students to put even more emphasis on their editor-persuasion skills. Of course not all committees use the JIF to determine who gets an interview. But still, the best strategy for early career researchers is to put all their efforts into pushing their work into high-ranking journals.
What now?!
We also talk about possible solutions to the problem.
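As a quick illustration of the skewed-distribution argument above: the citation counts below are invented, but the shape is typical - most articles are cited rarely, one is cited very often. A JIF-style arithmetic mean is dominated by that one outlier, while the median reflects what a typical article actually receives.

```python
from statistics import mean, median

# Invented citation counts for one year of a hypothetical journal
# (a skewed distribution: many lowly cited papers, one outlier).
citations = [0, 0, 1, 1, 2, 2, 3, 120]

# A JIF-style score is essentially this arithmetic mean.
print(mean(citations))    # prints 16.125 -- inflated by the single outlier
print(median(citations))  # prints 1.5 -- the "typical" article
```

Picking a random paper from this journal and expecting ~16 citations would be wrong for seven papers out of eight, which is exactly the fallacy described above.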