Podcasts about item response theory

  • 10 PODCASTS
  • 13 EPISODES
  • 27m AVG DURATION
  • INFREQUENT EPISODES
  • Mar 19, 2024 LATEST

POPULARITY (2017–2024)


Best podcasts about item response theory

Latest podcast episodes about item response theory

Quantitude
S5E19 Item Response Theory, Q.E.D.

Quantitude

Play Episode Listen Later Mar 19, 2024 49:37


In this week's episode Patrick and Greg provide an introduction to the Item Response Theory model: what it is, how it relates to traditional factor analysis, and how this modern approach improves upon some of the limitations of classical test theory. Along the way they also mention weinerness, memorizing Latin for punishment, eggszampke, in ether words, ITR, switching a and b, I'm not defensive - you are, why biostatisticians hate us (page 3 subsection 8), binary babble, EAPs and MAPs, computer adaptive testing on the playground, Bob's your uncle, and the liberal arts mic drop. Stay in contact with Quantitude! Twitter: @quantitudepod Web page: quantitudepod.org Merch: redbubble.com

Research in Action | A podcast for faculty & higher education professionals on research design, methods, productivity & more
Ep 185: Dr. Mary Ellen Dello Stritto and Dr. Kathleen Preston on Item Response Theory

Research in Action | A podcast for faculty & higher education professionals on research design, methods, productivity & more

Play Episode Listen Later Feb 17, 2020 37:45


On this episode, Dr. Mary Ellen Dello Stritto is joined by Dr. Kathleen Preston, an Associate Professor in the Department of Psychology at California State University, Fullerton. Dr. Preston teaches several statistics courses, including introductory, advanced, and multivariate statistics, as well as psychometrics and structural equation modeling. She earned her Ph.D. in quantitative psychology from UCLA in 2011. Her research interests are in using Item Response Theory, specifically the nominal response model, to develop and refine psychological measurement tools. Dr. Preston is co-director of the Fullerton Longitudinal Study, where she applies advanced statistical techniques to long-term longitudinal data. She is considered an expert in statistical analysis using R and has recently published a textbook on analyzing multivariate statistics using R. She has given numerous invited statistical presentations and workshops at national and regional conferences, universities, and federal government agencies.

Segment 1: Psychometrics and Item Response Theory [00:00–20:11]. In this first segment, Kathleen discusses psychometrics and how she got interested in quantitative psychology; she explains item response theory and the nominal response model and their applications. Resources mentioned in this segment: Item Response Theory; Nominal Response Model.

Segment 2: Analysis of the Fullerton Longitudinal Study [20:12–37:44]. In segment two, Kathleen discusses the Fullerton Longitudinal Study, the benefits and drawbacks of the study, and the statistical methods she employs in her research. Resources mentioned in this segment: Fullerton Longitudinal Study.

Some of Kathleen's publications on the nominal response model and the Fullerton Longitudinal Study:
Preston, K., Parral, S., Gottfried, A. W., Oliver, P., Gottfried, A. E., Ibrahim, S., & Delany, D. (2015). Applying the Nominal Response Model Within a Longitudinal Framework to Construct the Positive Family Relationships Scale. Educational and Psychological Measurement, 75, 901–930.
Preston, K. S. J., Gottfried, A. W., Park, J. J., Manapat, P. D., Gottfried, A. E., & Oliver, P. H. (2018). Simultaneous Linking of Cross-Informant and Longitudinal Data Involving Positive Family Relationships. Educational and Psychological Measurement, 78(3), 409–429.

To share feedback about this podcast episode, ask questions that could be featured in a future episode, or share research-related resources, post a comment below or contact the "Research in Action" podcast: Twitter: @RIA_podcast or #RIA_podcast; Email: riapodcast@oregonstate.edu; Voicemail: 541-737-1111. If you listen to the podcast via iTunes, please consider leaving us a review. The views expressed by guests on the Research in Action podcast do not necessarily represent the views of Oregon State University Ecampus or Oregon State University.

Stats + Stories
Measuring the Data that Shapes Public Policy | Stats + Stories Episode 109

Stats + Stories

Play Episode Listen Later Aug 29, 2019 4:55


Libby Pier is the Research Manager at Education Analytics, overseeing and executing EA's diverse educational research portfolio, encompassing social-emotional learning, predictive analytics, academic growth measures, human capital analytics, and program evaluation. Nichole Webster is a research analyst at Education Analytics. She examines the item properties and performance of Social and Emotional Learning surveys and estimates teacher and school performance metrics in R. She's part of ongoing research that examines how Item Response Theory models estimate error. She studied Mathematics and Applied Economics at the University of Wisconsin.

Stats + Stories
Back to School Statistics | Stats + Stories Episode 107

Stats + Stories

Play Episode Listen Later Aug 15, 2019 27:56


Nichole Webster is a research analyst at Education Analytics. She examines the item properties and performance of Social and Emotional Learning surveys and estimates teacher and school performance metrics in R. She's part of ongoing research that examines how Item Response Theory models estimate error. She studied Mathematics and Applied Economics at the University of Wisconsin. Libby Pier is the Research Manager at Education Analytics, overseeing and executing EA's diverse educational research portfolio, encompassing social-emotional learning, predictive analytics, academic growth measures, human capital analytics, and program evaluation. Prior to joining EA, Libby worked as a postdoctoral researcher at UW-Madison's Center for Women's Health Research. She earned her PhD in Educational Psychology (Learning Sciences) with a focus on quantitative methods from UW-Madison in 2017, and a Master's in Urban Education from Loyola Marymount in 2010. Libby served as an 8th, 9th, and 10th grade English teacher in Los Angeles in both a traditional public school and a charter school through Teach For America.

Survival Skills Podcast
#95 Damon Bryant PhD - The Power of Behavioral Economics

Survival Skills Podcast

Play Episode Listen Later Oct 8, 2018 57:59


Dr. Damon Bryant is a business psychologist with over 15 years of experience in data analytics and psychological assessment tools. He is currently the CEO of Light Pay Coin, a next-generation cryptocurrency focused on digital payments in business and government. His expertise is in psychological and behavioral assessments, with an emphasis on AI-based systems and behavioral economics. He has derived mathematical information functions to quantify the usefulness of any observable behavior by humans or machines. He also conceived and developed the internet-based Smart Test Technology® platform, an artificially intelligent assessment system. His research is published in several professional journals, including Applied Psychological Measurement, the Journal of Managerial Psychology, the International Journal of Testing, and Psychometrika. Past honors include IBM's Young Innovator Award for developing human resources software for use in international markets, the Society for Industrial and Organizational Psychology's Robert J. Wherry Award for research in the area of test bias, and a letter of commendation from the Louisiana State Senate for developing and administering an internet-based police officer assessment center after Hurricane Katrina. Books mentioned: Fundamentals of Item Response Theory by Ronald K. Hambleton; Item Response Theory for Psychologists by Susan E. Embretson; Flow: An Information-Based Theory of Challenge-Skill Balance by Damon Bryant and Larry Davis.

Department 12: An I-O Psychology Podcast
Nathan Thompson on Computerized Adaptive Testing

Department 12: An I-O Psychology Podcast

Play Episode Listen Later Apr 12, 2017 12:11


Nathan Thompson, Chief Product Officer for Assessment Systems Corporation (ASC), defends computerized adaptive testing from a hostile cross-examination. You can read Dr. Thompson's blog here, or connect with him on LinkedIn, Twitter, or email. Click here for Dr. Thompson's recommended introduction to CAT. He also recommended the book Item Response Theory for Psychologists and the Concerto package for developing adaptive tests.

Learning Machines 101
LM101-046: How to Optimize Student Learning using Recurrent Neural Networks (Educational Technology)

Learning Machines 101

Play Episode Listen Later Feb 22, 2016 23:19


In this episode, we briefly review Item Response Theory and Bayesian Network Theory methods for the assessment and optimization of student learning and then describe a poster presented on the first day of the Neural Information Processing Systems conference in December 2015 in Montreal which describes a Recurrent Neural Network approach for the assessment and optimization of student learning called “Deep Knowledge Tracing”. For more details check out: www.learningmachines101.com and follow us on Twitter at: @lm101talk    

Linear Digressions
Item Response Theory: how smart ARE you?

Linear Digressions

Play Episode Listen Later Feb 7, 2016 11:46


Psychometrics is all about measuring the psychological characteristics of people; for example, scholastic aptitude. How is this done? Tests, of course! But there's a chicken-and-egg problem here: you need to know both how hard a test is and how smart the test-taker is in order to get the results you want. How to solve this problem, one equation with two unknowns? Item response theory: the data science behind such tests, including the GRE. Relevant links: https://en.wikipedia.org/wiki/Item_response_theory
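The "one equation, two unknowns" tension the episode describes can be seen in a minimal sketch of the two-parameter logistic (2PL) IRT model, where the probability of a correct answer depends jointly on the test-taker's ability and the item's difficulty (the parameter values below are made up for illustration):

```python
import math

def p_correct(theta, a, b):
    """2PL IRT: probability of a correct response given
    ability theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# The same response probability can come from an average test-taker
# on an easy item or a strong test-taker on an average item --
# the two-unknowns problem the episode describes.
print(round(p_correct(theta=0.0, a=1.0, b=-1.0), 3))  # 0.731
print(round(p_correct(theta=1.0, a=1.0, b=0.0), 3))   # 0.731
```

IRT resolves the ambiguity by estimating abilities and item parameters jointly from many people answering many items.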

The Rights Track
Are we better at human rights than we used to be?

The Rights Track

Play Episode Listen Later Dec 10, 2015 22:51


In this Episode of the Rights Track, and on International Human Rights Day, Todd asks Professor Chris Fariss of Pennsylvania State University about the methods he uses to look at the human rights performance of countries around the world and whether over time we have become better at practising and upholding people's human rights.

0:00–5:00 they discuss whether:
  • the way we measure the human rights performance of different countries has improved in recent years
  • there is more information available on people's lived experiences of human rights abuses
  • our increased awareness of human rights problems has led to increased condemnation of countries
  • our expectations of how a country will behave are higher than they used to be

5:00–13:02 is a discussion of Chris' research, specifically "Respect for human rights has improved over time: modelling the changing standard of accountability". This part of the episode includes:
  • an explanation of the 'changing standard of accountability'
  • the data Chris used and the model he created to account for how the quality of his source information might change
  • mention of the Political Terror Scale and the Cingranelli and Richards Human Rights Data Project
  • an explanation of Item Response Theory
  • an explanation of how the model Chris developed works to measure the human rights performance of countries more scientifically

13:03–22:20 Todd and Chris discuss:
  • the availability of the data
  • getting students/coders to work with the data
  • dealing with possible bias in data
  • how Chris is taking his research further by updating the data, using new sources of information, and applying it to different types of human rights abuse, including civil rights abuses
  • our perspectives of human rights abuses over time compared with what the evidence tells us: do events like the Paris attacks and Syria influence our perspective and make us think that human rights are less protected than before?
  • the importance of putting events like these into context systematically
  • what Chris' research tells us about what's really happening with human rights over time

Learning Machines 101
LM101-038: How to Model Knowledge Skill Growth Over Time using Bayesian Nets

Learning Machines 101

Play Episode Listen Later Oct 26, 2015 23:55


In this episode, we examine the problem of developing an advanced artificially intelligent technology which is capable of tracking knowledge growth in students in real time, representing the knowledge state of a student as a skill profile, and automatically defining the concept of a skill without human intervention! The approach can be viewed as a sophisticated state-of-the-art extension of the Item Response Theory approach to Computerized Adaptive Testing Educational Technology described in Episode 37. Both tutorial notes and advanced implementational notes can be found in the show notes at: www.learningmachines101.com.

Learning Machines 101
LM101-037: How to Build a Smart Computerized Adaptive Testing Machine using Item Response Theory

Learning Machines 101

Play Episode Listen Later Oct 12, 2015 34:56


In this episode, we discuss the problem of how to build a smart computerized adaptive testing machine using Item Response Theory (IRT). Suppose that you are teaching a student a particular target set of knowledge. Examples of such situations obviously occur in nursery school, elementary school, junior high school, high school, and college. However, such situations also occur in industry when top professionals in a particular field attend an advanced training seminar. All of these situations would benefit from a smart adaptive assessment machine which attempts to estimate a student’s knowledge in real-time. Such a machine could then use that information to optimize the choice and order of questions to be presented to the student in order to develop a customized exam for efficiently assessing the student’s knowledge level and possibly guiding instructional strategies. Both tutorial notes and advanced implementational notes can be found in the show notes at: www.learningmachines101.com .
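One common way such an adaptive testing machine chooses its next question, under the 2PL IRT model the episode discusses, is to pick the item with maximum Fisher information at the current ability estimate. This is a minimal sketch of that selection rule; the item bank and parameter values are hypothetical:

```python
import math

def p(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    prob = p(theta, a, b)
    return a * a * prob * (1.0 - prob)

def next_item(theta_hat, item_bank, administered):
    """Pick the unadministered item that is most informative
    at the current ability estimate -- the core CAT step."""
    candidates = [i for i in range(len(item_bank)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta_hat, *item_bank[i]))

# Hypothetical item bank of (discrimination, difficulty) pairs.
bank = [(1.2, -1.5), (1.0, 0.0), (0.8, 1.0), (1.5, 0.2)]
print(next_item(0.0, bank, administered=set()))  # 3
```

After each response the ability estimate is updated and the rule is applied again, which is what lets an adaptive test converge on a student's level with fewer questions than a fixed-form exam.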

Mathematik, Informatik und Statistik - Open Access LMU - Teil 03/03
A Unified Framework for Visualization and Inference in Item Response Theory Models

Mathematik, Informatik und Statistik - Open Access LMU - Teil 03/03

Play Episode Listen Later Jan 1, 2014


Wed, 1 Jan 2014 12:00:00 +0100. Abou El-Komboz, Basil. https://epub.ub.uni-muenchen.de/25523/1/MA_Komboz.pdf

Fakultät für Mathematik, Informatik und Statistik - Digitale Hochschulschriften der LMU - Teil 02/02

The aim of this thesis is to develop new statistical methods for the evaluation of assumptions that are crucial for reliably assessing group differences in complex studies in the field of psychological and educational testing. The framework of item response theory (IRT) includes a variety of psychometric models for scaling latent traits, such as the widely used Rasch model. The Rasch model ensures objective measures and fair comparisons between groups of subjects. However, this important property holds only if the underlying assumptions are met. One essential assumption is the invariance property. Its violation is extensively discussed in the literature and termed differential item functioning (DIF). This thesis focuses on the methodology of DIF detection. Existing methods for DIF detection are briefly discussed, and new statistical methods for DIF detection are introduced together with new anchor methods. The methods introduced in this thesis make it possible to classify items with and without DIF more accurately and, thus, to improve the evaluation of the invariance assumption in the Rasch model. This thesis thereby provides a contribution to the construction of objective and fair tests in psychology and educational testing.
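The invariance property at the heart of DIF can be illustrated numerically with the Rasch model: under invariance an item has the same difficulty in every group, so at equal ability the probability of a correct response is equal; DIF means it is not. A minimal sketch with made-up difficulty values:

```python
import math

def rasch_p(theta, b):
    """Rasch model: probability of a correct response given
    ability theta and a single item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Under invariance, one difficulty b holds for all groups.
# DIF: the item behaves as if it were harder for one group
# even at the same ability level (difficulties are hypothetical).
theta = 0.5
b_reference, b_focal = 0.0, 0.8
print(round(rasch_p(theta, b_reference), 3))  # 0.622 (reference group)
print(round(rasch_p(theta, b_focal), 3))      # 0.426 (focal group, same theta)
```

DIF detection methods such as those the thesis develops are, in essence, procedures for deciding from data whether such group-specific difficulty shifts are present.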
