Are you responsible for understanding your employees' experience? Have you tried to incorporate people analytics in your organization but struggled? Have you ever wondered what it means to have a data culture? Would you like to make more data-driven decisions?
In this rich and timely conversation, Dr. Julianne Brown Meola joins Jennifer and Ron to explore the evolving landscape of talent management. Drawing on her extensive experience across multiple industries, including retail giants and Fortune 500 companies, Julianne offers a candid, practical, and strategic view of how organizations can navigate workforce challenges in a rapidly shifting environment.

Key Takeaways:
- Talent management is more than hiring. It spans workforce planning, development, retention, and transitions, all aimed at growing both individuals and the organization.
- Measure what matters. Organizations often collect too much data without knowing how to use it. The focus should be on meaningful, actionable insights, not just numbers.
- AI is accelerating talent work. From training design to communication, AI offers efficiency, but ethical use and human oversight are key.
- Future readiness is top of mind. Organizations must build skill-based pipelines and prepare current employees for emerging challenges.
- Career paths are evolving. The traditional step-by-step ladder is giving way to skill-based growth and internal mobility. Flexibility and visibility are essential.

Contact Dr. Brown Meola: LinkedIn Profile

Contact Millan: We'd love to hear from you! If there's a people analytics topic you're curious about or would like to explore further, don't hesitate to reach out to us at survey@millanchicago.com. For more information about our services and insights, visit our website at www.millanchicago.com, or connect with us on LinkedIn: Millan Chicago.
In this episode, Shea Smith, an executive coach who helps leaders get unstuck, shares her insights on the importance of feedback for team and organizational growth. We discuss how to capture and measure the behavior of giving and receiving feedback, as well as the challenges in quantifying feedback. The discussion highlights that ineffective feedback often stems from vague delivery, poor timing, and a lack of connection to individual and organizational goals. To improve, leaders need to be intentional, ensure feedback is truly heard and understood, and foster a culture of trust where feedback is seen as an opportunity for growth. Ultimately, the conversation explores how organizations can move beyond anecdotal evidence to quantify the ROI of investing in feedback processes and training, linking it to tangible outcomes like improved performance, reduced turnover, and increased engagement.

Key Takeaways:
- Effective feedback is a two-way street that requires both skillful delivery and a receptive mindset.
- Measuring the impact of feedback goes beyond simply tracking conversations and involves looking at tangible outcomes and changes in behavior.
- Building a culture of trust and psychological safety is essential for individuals to be open to receiving and acting on feedback.
- Organizations should consider investing in training for both giving and receiving feedback to maximize its positive impact on performance and culture.
- Connecting feedback to individual and organizational goals increases its relevance and drives meaningful change.

Contact Us: We'd love to hear from you! If there's a people analytics topic you're curious about or would like to explore further, don't hesitate to reach out to us at survey@millanchicago.com. For more information about our services and insights, visit our website at www.millanchicago.com, or connect with us on LinkedIn: Millan Chicago.
Contact Shea Smith: Shea Smith is an executive coach who equips high-performing teams and individuals to break through barriers, expand their vision, and deliver outstanding results. Since 2014, she's been helping leaders get unstuck, think bigger, and achieve lasting transformation that redefines what's possible. Shea is passionate about empowering her clients to push beyond high performance and unlock extraordinary growth. thesheasmith.com, thesheasmith@gmail.com
In this episode, we're diving into the world of meta-analysis and its importance for decision-making in professional settings. While we won't get into the technical aspects, we will focus on how to use a meta-analysis effectively and what to consider when reading one. We'll break down why this technique is critical, especially for those making policy or procedural decisions based on research.

Key Takeaways:
- What is a meta-analysis? A meta-analysis combines data from multiple studies to provide a clearer and more reliable picture of the effect of a certain phenomenon. The goal is to take the variability from individual studies and make informed decisions based on a broader data set.
- Why does a meta-analysis matter? Meta-analyses are essential when making significant decisions in work settings, like policy or procedural changes. Instead of relying on anecdotal evidence or small-sample studies, practitioners can reference meta-analyses that aggregate findings from hundreds of studies for more reliable data.
- Are all meta-analyses created equal? No. It's crucial to consider the quality of the studies included and the transparency of the meta-analysis process. Look for proper reporting guidelines and study inclusion/exclusion criteria.

If you're making decisions in your organization, consider reviewing meta-analyses related to your topic. Whether it's remote work, employee well-being, or another area, see what broader research trends tell you.

Contact Us: We'd love to hear from you! If there's a people analytics topic you're curious about or would like to explore further, don't hesitate to reach out to us at survey@millanchicago.com. For more information about our services and insights, visit our website at www.millanchicago.com, or connect with us on LinkedIn: Millan Chicago.
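For readers who want to see the basic mechanics, here is a minimal Python sketch of a fixed-effect meta-analysis, which pools effect sizes by weighting each study by the inverse of its variance. The three "studies" and their values are invented purely for illustration:

```python
# Illustrative sketch: a fixed-effect meta-analysis combines effect sizes
# from several studies, weighting each by the inverse of its variance.
# All study values below are hypothetical.

def fixed_effect_summary(effects, variances):
    """Inverse-variance weighted mean effect size and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# Three hypothetical studies of the same intervention (Cohen's d and variance)
effects = [0.30, 0.45, 0.20]
variances = [0.04, 0.02, 0.05]

summary, var = fixed_effect_summary(effects, variances)
print(round(summary, 3))  # pooled effect, pulled toward the most precise study
```

Note that the pooled estimate sits closest to the most precisely estimated study (the one with the smallest variance), which is exactly the "broader data set" logic described above.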
In this episode, we sit down with Regina Taute, a seasoned Talent Management expert with over 20 years of experience. Regina shares her insights on the evolving role of data in performance management, exploring how different work environments, including hybrid models, can impact performance evaluation. We discuss why technology should support, not define, your organizational culture, and the critical components needed for successful performance management, no matter where your team is working from.

Key Takeaways:
- The Role of Data in Performance Evaluation: Using data in performance management is crucial for accurate assessments. The shift away from traditional numeric evaluations has brought challenges when performance isn't evaluated using quantitative approaches.
- Culture Drives the Process, Not Technology: Technology alone can't create an effective performance management system. Company culture should guide the process, ensuring it aligns with core values, mission, and business goals.
- Goal Setting is Key: Clear, measurable goals are at the heart of successful performance management. Regina highlights how effective goal setting connects to broader performance strategies and how it serves as a tool for growth and improvement.
- Adapting Performance Management to Hybrid Environments: Performance management needs to adapt to hybrid work environments. The key components for success include clear communication, flexibility, and consistent feedback, regardless of whether employees are in the office or working remotely.

Contact Regina Taute: LinkedIn | Collective Growth | Email - regina@collectivegrowthcc.com

Contact Millan: We'd love to hear from you! If there's a people analytics topic you're curious about or would like to explore further, don't hesitate to reach out to us at survey@millanchicago.com. For more information about our services and insights, visit our website at www.millanchicago.com, or connect with us on LinkedIn: Millan Chicago.
In this episode, Jennifer and Ron sit down with Dr. David Costanza, a Professor of Commerce at the University of Virginia, to explore his groundbreaking research on generational differences in the workplace. While it's widely believed that different generations have distinct traits, Dr. Costanza reveals there's no solid scientific evidence to support this idea. They dig into the dangers of using generational labels, how these labels shape workplace dynamics, and uncover the real factors behind what we often misinterpret as generational divides. If you're ready for a fresh perspective on how age-related assumptions influence the workplace, don't miss this eye-opening discussion!

Key Takeaways:
- Generational differences? Not so much. Dr. Costanza walks us through his research (including two extensive meta-analyses!) showing that there's no scientific basis for generational differences in the workplace.
- The harm in labeling. When we use generational labels, we risk stereotyping individuals based solely on when they were born, oversimplifying the unique experiences and skills they bring to the table.
- The real causes of 'differences.' Many of the traits we attribute to specific generations are more likely to be cohort or period effects, shaped by historical or cultural events rather than age.
- Focus on opportunities, not generations. Instead of making decisions based on generational assumptions, organizations should ensure that opportunities are accessible to everyone, regardless of age. Tailoring solutions to perceived generational gaps often misses the mark.

Contact Dr. Costanza: Academic Website | LinkedIn | Email - david.p.costanza@gmail.com

Contact Millan: We'd love to hear from you! If there's a people analytics topic you're curious about or would like to explore further, don't hesitate to reach out to us at survey@millanchicago.com. For more information about our services and insights, visit our website at www.millanchicago.com, or connect with us on LinkedIn: Millan Chicago.
People Analytics Deconstructed is back for Season 2! In this season opener, hosts Ron and Jennifer kick things off by discussing what's in store for the upcoming season. From guest interviews with industry and academic experts to deep dives into workforce analytics, this season promises insightful conversations for anyone passionate about using data to drive decisions in the workplace. In this episode, Ron and Jennifer discuss the foundational framework that organizations should follow when solving challenges using analytics. They break down the key steps to take when making strategic decisions that are informed by data.

Key Takeaways:
- As we kick off the new year, think about what's truly necessary for your organization. Use a data-informed approach to make strategic decisions.
- The essential steps for making data-driven decisions:
  1. Understand the focus of the effort: What are you trying to solve? What decision needs to be made?
  2. Collect key information: With a clear focus, gather the right data to inform your decision.
  3. Analyze the information: Understand what the data is telling you and apply the right techniques to summarize and identify patterns or differences.
  4. Interpret the analysis: Relate the analysis to the initial challenge to guide you toward a data-informed decision.

Tune in for a fresh start to Season 2, where we'll continue exploring how analytics can shape the future of work!

Contact Us! We'd love to hear from you! If there's a people analytics topic you're curious about or would like to explore further, don't hesitate to reach out to us at podcast@millanchicago.com. For more information about our services and insights, visit our website at www.millanchicago.com, or connect with us on LinkedIn: Millan Chicago.
In this episode, co-hosts Jennifer Miller and Ron Landis continue their conversation about developing a measure of employee engagement. This episode is the second part of a discussion about how to use a statistical technique called factor analysis to examine the dimensions of employee engagement.

In this episode, we had conversations around these questions: What is rotation in factor analysis? How do we determine the number of factors to retain? How do we identify which items load on which factors? How do we interpret results?

Key Takeaways:
- We should look at the results after rotating the initial solution. Factor rotation can be orthogonal or oblique. Oblique solutions allow for correlations between factors, and orthogonal solutions force them to remain independent. In most situations, we would likely start with an oblique rotation.
- We can determine the number of factors using a few different approaches. Kaiser's criterion retains factors with eigenvalues greater than 1.00. A scree plot visualizes the eigenvalues of each factor from largest to smallest; we look for where the plot "flattens out" to determine the number of factors to retain. A parallel analysis simulates results from random data that have the same structure as our focal data (i.e., same number of observations and items) and produces associated eigenvalues. We compare our observed eigenvalues to those produced through the parallel analysis and retain the factors whose observed eigenvalues are greater.
- When we associate a given item with a particular factor, we look for the largest loading. We generally set a cutoff of at least .30 or .40 to associate an item with a factor. If an item has no loadings higher than our cutoff, we say the item doesn't load on any factor and discard it from further analysis. If an item has a high loading on multiple factors, we seek to understand why and may choose to either drop or retain the item based on the context.

We ended the episode by briefly talking about confirmatory factor analysis (CFA) as an alternative to EFA.
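The retention rules discussed above can be sketched in a few lines of Python. The eigenvalues and parallel-analysis values below are invented for illustration; in practice the "random" eigenvalues would come from simulating data with the same dimensions as your survey:

```python
# A minimal sketch of two factor-retention rules, applied to hypothetical
# eigenvalues from a 6-item engagement survey.

def kaiser_retain(eigenvalues):
    """Kaiser's criterion: retain factors with eigenvalues above 1.0."""
    return sum(1 for ev in eigenvalues if ev > 1.0)

def parallel_retain(observed, simulated):
    """Parallel analysis: retain factors until the observed eigenvalue no
    longer exceeds the eigenvalue produced from random data."""
    count = 0
    for obs, sim in zip(observed, simulated):
        if obs <= sim:
            break
        count += 1
    return count

observed = [2.8, 1.6, 0.7, 0.4, 0.3, 0.2]      # hypothetical eigenvalues
random_means = [1.4, 1.2, 1.0, 0.9, 0.8, 0.7]  # hypothetical parallel-analysis values

print(kaiser_retain(observed))                  # prints 2
print(parallel_retain(observed, random_means))  # prints 2
```

Here both rules agree on retaining two factors; when they disagree, parallel analysis is generally the more defensible choice.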
In this episode, co-hosts Jennifer Miller and Ron Landis continue their conversation about developing a measure of employee engagement. In this episode they focus on how to use a statistical technique called factor analysis to examine the dimensions of employee engagement.

In this episode, we had conversations around these questions: What is factor analysis? What is the difference between exploratory factor analysis (EFA) and confirmatory factor analysis (CFA)? What is the difference between an EFA and principal components analysis (PCA)? What format should the data be in to complete the EFA? How does EFA work? What is a loading matrix?

Key Takeaways:
- A factor analysis is useful to determine whether you are measuring what you intend to measure with a survey. We continue our example of measuring engagement with four dimensions (satisfaction with manager, satisfaction with co-workers, satisfaction with compensation, satisfaction with working conditions).
- There are three main aspects to conducting an EFA. First, you need to decide on the type of analysis (i.e., PCA, EFA). Second, you need to rotate the solution. Third, you need to interpret the results. In this episode, we cover the first step and part of the second.
- We also discussed the concept of a loading matrix. First, each item is correlated with each factor, and each correlation can be squared to get the percentage of variance explained. Second, the sum of all the squared values down a column is computed; this is the eigenvalue. Third, communality is determined by summing the squared values across the row for each item. Finally, the uniqueness can be computed as 1 - communality.
- The ultimate goal is to associate each item with a factor. The initial solution will almost never allow us to see the underlying structure. The concept of rotation was briefly mentioned and is covered in additional detail in the next episode.
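The loading-matrix arithmetic described above can be checked with a short Python sketch. The loading matrix itself is made up for illustration (four items, two factors):

```python
# A worked example of loading-matrix arithmetic: eigenvalues are column sums
# of squared loadings, communalities are row sums, and uniqueness is
# 1 - communality. The loadings are hypothetical.

loadings = [
    [0.80, 0.10],   # item 1
    [0.75, 0.20],   # item 2
    [0.15, 0.70],   # item 3
    [0.10, 0.65],   # item 4
]

n_factors = len(loadings[0])

# Eigenvalue of each factor: sum of squared loadings down its column
eigenvalues = [sum(row[j] ** 2 for row in loadings) for j in range(n_factors)]

# Communality of each item: sum of squared loadings across its row
communalities = [sum(l ** 2 for l in row) for row in loadings]

# Uniqueness of each item: variance not explained by the factors
uniquenesses = [1 - h for h in communalities]

print([round(ev, 4) for ev in eigenvalues])    # [1.235, 0.9625]
print([round(h, 4) for h in communalities])    # [0.65, 0.6025, 0.5125, 0.4325]
```

Notice that items 1 and 2 load mainly on the first factor and items 3 and 4 on the second, which is the kind of clean structure rotation is meant to reveal.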
In this episode, co-hosts Jennifer Miller and Ron Landis continue their conversation about developing a measure of employee engagement. They identify and discuss several steps/phases that should be present when we develop measures of any kind.

In this episode, we had conversations around these topics:
- Aspects of item writing. In particular, we talked about making sure that we carefully consider how items will be read and interpreted by the people who will respond to them.
- Approaches to writing items (for example, do we do it as a group? do we ask stakeholders to comment? do we pilot test?). In particular, we talked about balancing the need to get feedback from different stakeholders with being as efficient as possible.
- The negative outcomes that may occur if we don't carefully consider whether individuals will interpret the items in the same way.
- The number of scale points that we might include on our engagement measure. We emphasized the importance of balancing giving people options (so that we get variability) with the need to ensure our measures are reliable. We also talked about the importance of using the best anchors, with the goal of ensuring that we collect the most accurate information.
- The importance of a clear communication plan for launching and administering the survey.

Key Takeaways:
- When creating an engagement measure, be sure that your items are clear and fit within the test blueprint. You should also ensure that the items are interpreted consistently by respondents. This can be facilitated by engaging various stakeholders during the item-writing process.
- Think carefully about the number of scale points you will use and the anchors that will accompany each of those scale points. Good items can be undercut if the anchors and/or scale points create confusion or frustration for respondents.
- Make sure that whenever a survey is launched, there is a clear and comprehensive communication plan in place to ensure that people understand what is being measured, how the data are being used, and why it is important to respond.
In this episode, co-hosts Jennifer Miller and Ron Landis start their conversation about developing a measure of employee engagement. They identify and discuss several steps/phases that should be present when we develop measures of any kind.

In this episode, we had conversations around:
- The importance of clearly, accurately, and comprehensively defining what it is that we are interested in measuring (e.g., engagement).
- As part of the definition process, the importance of thinking about whether our survey measures one big factor or is comprised of several subdimensions.
- The importance of specificity in defining what you want to measure.
- The importance of considering how engagement scores will ultimately be used. Specifically, are we interested in using overall scores, or are dimension scores more helpful for decision making or building models (such as flight risk)?
- The definition as the foundation for creating a test blueprint. The blueprint helps ensure that we measure exactly what we want to measure (nothing more, nothing less).
- Aspects of item writing. In particular, we talked about making sure that we carefully consider how items will be read and interpreted by the people who will respond to them.

Key Takeaways:
- When creating an engagement measure (or any measure), a critical first step is to carefully define what it is you are interested in measuring. Incompleteness, ambiguity, or inaccuracy at the outset will lead to a poor measure.
- Carefully consider the dimensionality of what it is you are measuring. The dimensionality will serve as a foundation for creating the measure and for evaluating its validity.
- Every measure you use should always have a test blueprint that serves as a framework for guiding item writing.
- When you write items, keep in mind that they are the only way you have of communicating with your respondents. Write items that are clear, to the point, and presented in a manner that facilitates ease of understanding.
In this episode, co-hosts Jennifer Miller and Ron Landis continue their discussion on the importance of data cleaning and management. They review three of the five aspects of data cleaning that are critical to check prior to the analytic phase: linearity and normality, outliers, and multicollinearity.

In this episode, we had conversations around these questions: How do you check for linearity and normality in a data set? Why is normality important to check for in a data set? What are outliers, both univariate and multivariate? How do you identify outliers in your data? What are some ways to handle outliers? What is multicollinearity? Why is multicollinearity important to check and consider during the data analytic process?

Key Takeaways:
- We should always consider the distribution of a variable with respect to our expectations regarding the distribution. If the distribution is inconsistent with what we expect, we should devote time and energy toward understanding why. In cases where our ultimate analyses require assumptions of normality, we need to ensure that our data are consistent with that assumption. We may elect to transform our data on the basis of these analyses, but should always be able to explain why we have done so.
- Outliers are cases that are inconsistent with other cases. In the univariate case, these are scores that are either extremely high or low. In the multivariate situation, we inspect the "profile" of scores across measured variables to assess the degree to which the case is consistent with others. Once cases are identified as outliers, determining what to do with them is important; our discussion focused on some common ways of dealing with outliers.
- Multicollinearity exists when two or more of the predictors are moderately or highly correlated. This is typically of concern when conducting analyses using the multiple regression framework. Specifically, we need to assess the degree to which predictor variables are overly redundant (highly correlated) prior to including them in our models. The variance inflation factor (VIF) or tolerance are commonly used to assess multicollinearity.
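As a rough illustration of the VIF check: with exactly two predictors, the VIF reduces to 1 / (1 - r^2), where r is the correlation between them. The sketch below uses made-up tenure and salary figures that rise almost in lockstep:

```python
# A small illustration of a multicollinearity check. With exactly two
# predictors, the variance inflation factor (VIF) simplifies to
# 1 / (1 - r^2), where r is the correlation between the predictors.
# The tenure and salary values are hypothetical.
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x)
    sy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(sx * sy)

def vif_two_predictors(x1, x2):
    r = pearson(x1, x2)
    return 1.0 / (1.0 - r ** 2)

tenure = [1, 2, 3, 4, 5, 6]
salary = [40, 44, 50, 55, 61, 66]   # rises almost in lockstep with tenure

vif = vif_two_predictors(tenure, salary)
print(round(vif, 1))
```

A VIF well above the commonly cited thresholds of 5 or 10, as here, signals that the two predictors are largely redundant and probably should not both enter the same regression model.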
In this episode, co-hosts Jennifer Miller and Ron Landis discuss the importance of data cleaning and management. They identify five aspects of data cleaning that are critical to check prior to the analytic phase. In some cases, data management is embedded in the data encoding and storage process (i.e., certain rules are in place to ensure that data fields can only handle one type of data, such as a date). In this episode, they discuss how to check for data accuracy and what to do with missing data.

In this episode, we had conversations around these questions: What is data cleaning? Why is data cleaning important prior to the analytic phase? What are the five steps of data cleaning? How do you check for data accuracy in a data set? What does it mean to have missing data? What are some of the ways that you can evaluate your missing data? What do you do with missing data?

Key Takeaways:
- Data management is imperative to the data analytic process. Without a strong focus on the management process, the analyses and subsequent interpretation and use may be misleading and incorrect. While this topic may seem boring or perhaps intuitive, it is necessary to have a plan for data cleaning.
- There are five broad aspects of data cleaning. Some of this depends on the data and focal question, but in general, some or all of these steps should be considered when conducting analytics. As noted above and also in the episode, some of these steps may be more automatic due to platform and storage restrictions. The five steps include checking for data accuracy, missing data, linearity and normality, outliers, and multicollinearity.
- Data accuracy refers to whether the data are accurate and conform to the fields in which they are included. A missing data analysis checks for missing values. Depending on the type and kind of data, there are various procedures for handling these missing values.
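A minimal Python sketch of the first two cleaning steps, an accuracy check and a missing-data count, using invented employee records:

```python
# Sketch of two basic cleaning checks: flag values a field cannot
# legitimately hold, and count missing values per field. Records are
# hypothetical.

records = [
    {"id": 1, "hire_year": 2019, "engagement": 4.2},
    {"id": 2, "hire_year": 2035, "engagement": None},   # implausible year, missing score
    {"id": 3, "hire_year": 2021, "engagement": 3.8},
]

# Accuracy: flag values outside the range the field can legitimately hold
bad_years = [r["id"] for r in records if not (1950 <= r["hire_year"] <= 2025)]

# Missingness: count None values per field
missing = {field: sum(1 for r in records if r[field] is None)
           for field in ("hire_year", "engagement")}

print(bad_years)   # record ids needing review
print(missing)     # missing-value count per field
```

In practice these rules would be driven by the field definitions in your HR system, but the logic of "check plausibility, then quantify missingness, then decide how to handle it" is the same.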
In another technically focused episode, co-hosts Jennifer Miller and Ron Landis discuss how to use multiple linear regression to test models involving moderation (or interaction). In episode 18, we discussed multiple linear regression, in which we used multiple variables to predict the outcome or criterion variable. But what happens if you have a situation in which the relation between the predictor and outcome variable is actually dependent upon (or conditional upon) the level of a third variable? In this episode, we deconstruct moderation and some applications of moderation.

In this episode, we had conversations around these questions: What is moderation/interaction? Why might we want to use multiple linear regression (as opposed to analysis of variance, ANOVA) to test for moderation? What are some applications of moderation in People Analytics? What's the best way to communicate moderation results? What are some of the concerns when presenting visualizations depicting moderation?

Key Takeaways:
- Moderation (or interaction) involves evaluating whether the relation between a predictor and outcome variable is dependent (or conditional) on the level of a third variable. For example, we might be interested in whether employee engagement predicts job performance. In this case, we have a simple linear regression. If we add a third variable, such as working environment (e.g., remote or hybrid), we can now ask whether the relation between engagement and job performance is the same across different working environments.
- Moderation and interaction can be used interchangeably. One can use regression-based approaches or ANOVA to test for the presence of interactions, though regression allows for the use of continuous predictor variables.
- Moderation is an application of multiple linear regression. In multiple linear regression, the effects are additive, meaning that each variable contributes additively to explaining the outcome variable. In moderation, the effects are multiplicative: a product term is included in the model to examine whether it explains variance in the outcome variable over and above the variance explained when each variable is added independently to the model.

Related Links: Millan Chicago
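The product-term logic can be sketched with hypothetical coefficients. In this invented model, the slope of performance on engagement shifts depending on the moderator (remote vs. office), which is exactly what a significant interaction term implies:

```python
# Sketch of a moderated regression prediction equation:
#   performance = b0 + b1*engagement + b2*remote + b3*(engagement * remote)
# All coefficients are made up; in practice they come from fitting the model.

b0, b1, b2, b3 = 2.0, 0.50, -0.30, 0.25   # hypothetical estimates

def predicted_performance(engagement, remote):
    """Model prediction, including the engagement-by-remote product term."""
    return b0 + b1 * engagement + b2 * remote + b3 * engagement * remote

def simple_slope(remote):
    """Slope of performance on engagement at a given level of the moderator."""
    return b1 + b3 * remote

print(simple_slope(0))   # office workers: 0.5
print(simple_slope(1))   # remote workers: 0.75
```

Plotting these two simple slopes on one chart is the standard way to communicate a moderation result, which connects to the visualization concerns raised in the episode.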
In this episode, co-hosts Jennifer Miller and Ron Landis discuss the emerging field of artificial intelligence (AI). In particular, they discuss machine learning and two broad categories of algorithms, unsupervised and supervised learning.

In this podcast episode, we had conversations around these machine learning questions: What is artificial intelligence? What is machine learning? What are some applications of machine learning in People Analytics? What is the difference between supervised and unsupervised learning?

4 Key Takeaways on Machine Learning:
- AI is the field of computers simulating human capabilities to process data. Several examples of AI exist in our everyday environment, including products like Alexa and Siri and processes like financial fraud detection, purchasing recommendations, and driverless cars.
- Machine learning helps automate the analytic process.
- Supervised learning is an approach that predicts or classifies outcomes via "labeled" datasets. In this approach, the user has to determine the outcome and inputs that are used by the algorithm. Regression is a common type of supervised learning.
- Unsupervised learning is an approach that uncovers hidden patterns in the data utilizing "unlabeled" datasets. In this approach, the user does not contribute to the initial model-building process. Cluster analysis is one example of an unsupervised learning technique.

Ron and Jennifer discuss how machine learning can be used in the context of People Analytics.
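A toy Python sketch of the distinction: the supervised step fits a slope from labeled tenure/performance pairs, while the unsupervised step groups unlabeled scores around two centroids (a single step of a cluster-analysis idea). All values are invented:

```python
# Toy contrast between supervised and unsupervised learning. The supervised
# step uses labels (performance) to fit a model; the unsupervised step only
# has unlabeled scores and looks for grouping structure. Data are made up.

# Supervised: labeled data (tenure -> performance); fit a slope through origin
tenure = [1, 2, 3, 4]
performance = [2.1, 3.9, 6.2, 7.8]          # the "labels"
slope = sum(t * p for t, p in zip(tenure, performance)) / sum(t * t for t in tenure)

# Unsupervised: unlabeled scores; assign each to the nearest of two centroids
scores = [1.0, 1.2, 0.9, 5.1, 4.8, 5.3]
centroids = [1.0, 5.0]
clusters = [min(range(2), key=lambda c: abs(s - centroids[c])) for s in scores]

print(round(slope, 2))   # learned relationship between tenure and performance
print(clusters)          # group membership discovered without any labels
```

A full k-means algorithm would also re-estimate the centroids and iterate, but even this one assignment step shows the key difference: no outcome variable is ever provided to the unsupervised step.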
Earlier in this season, we discussed a commonly used technique called simple linear regression. In this technique, we used one variable to predict an outcome. But, let's face it: life is a little more complex than having just one predictor, and many times organizations have lots of data that can be used to predict an outcome. In another technically focused episode, co-hosts Ron Landis and Jennifer Miller deconstruct multiple linear regression. They focus on using multiple predictors to predict a single criterion variable.

In this episode, we had conversations around the following multiple linear regression questions: What is multiple linear regression? What are some applications of multiple linear regression? What are some of the ways in which models can be built using multiple linear regression? What are mediation and moderation?

2 Key Takeaways on Multiple Linear Regression:
- Multiple linear regression uses multiple variables to predict an outcome (i.e., criterion) variable. The ultimate goal is to explain the variation in the criterion variable. One aspect to consider in this analysis is the relation between variables; that is, to what degree do the predictor variables correlate and how does that relation predict the outcome variable. Depending on the relation between predictors, either partial or full redundancy might be present.
- Ron and Jennifer discussed three questions that can be asked using multiple linear regression. First, you can assess the effects of particular predictors while controlling for others. Second, you can compare different sets of variables to find the most efficient model. Third, you can test for moderation and mediation.

Related Links: Millan Chicago | What is Linear Regression?
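A minimal Python sketch of two-predictor regression, solved by hand through the normal equations on mean-centered data (centering removes the need for an intercept term). The engagement, tenure, and performance values are invented so that the true coefficients are exactly 1 and 2:

```python
# A bare-bones two-predictor OLS sketch via the normal equations on
# mean-centered data. All values are hypothetical and constructed so that
# performance = engagement + 2 * tenure exactly.

def center(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ols_two_predictors(x1, x2, y):
    """Solve the 2x2 normal equations for slopes (b1, b2) by Cramer's rule."""
    x1, x2, y = center(x1), center(x2), center(y)
    a, b, c = dot(x1, x1), dot(x1, x2), dot(x2, x2)
    d, e = dot(x1, y), dot(x2, y)
    det = a * c - b * b
    return ((c * d - b * e) / det, (a * e - b * d) / det)

engagement  = [3, 4, 2, 5, 4]
tenure      = [1, 3, 2, 4, 5]
performance = [5, 10, 6, 13, 14]

b1, b2 = ols_two_predictors(engagement, tenure, performance)
print(round(b1, 6), round(b2, 6))   # recovers the true slopes, 1 and 2
```

The off-diagonal term `b` in the normal equations is exactly where predictor correlation (the redundancy discussed above) enters: when predictors are highly correlated, `det` shrinks and the coefficient estimates become unstable.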
In this episode, Ron Landis and Jennifer Miller deconstruct the importance of utilizing descriptive statistics as the foundation of the data analytic process. As many advanced statistical techniques are built on descriptives such as the mean and standard deviation, it is imperative to understand the characteristics of the data set being analyzed.

In this episode, they have conversations around the following questions: What are the various ways in which central tendency is used to understand the nature of a data set? What are the advantages and disadvantages of using different measures of central tendency? What are the different measures of dispersion? What are some contexts in which certain measures of dispersion should be used?

Links: Exercise | Exercise Solution
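The central-tendency and dispersion measures discussed here can be computed directly with Python's standard library. The engagement scores below are invented:

```python
# Descriptive statistics over a hypothetical set of 1-5 engagement scores,
# using Python's built-in statistics module.
from statistics import mean, median, mode, pstdev

scores = [3, 4, 4, 5, 2, 4, 5, 3, 4, 1]

print(mean(scores))    # 3.5  (sensitive to extreme values)
print(median(scores))  # 4.0  (robust to outliers)
print(mode(scores))    # 4    (most frequent response)
print(pstdev(scores))  # population standard deviation (dispersion)
```

The gap between the mean (3.5) and median (4.0) here comes from the single low score of 1, a small example of why the choice of central-tendency measure matters.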
In this episode, Ron Landis and Jennifer Miller deconstruct the key characteristics to consider when developing visualizations. In working with data, many are faced with decisions about how to communicate results. Given that one of the primary functions of analytics is to inform various stakeholders of the results, visualizations and other representations of data often play an important role in communicating findings. In this episode, we had conversations around these questions: What are some of the best ways to design visualizations? What are the best practices when designing visualizations? What are some ways in which visualizations can be improved? Send us your questions! We're interested in answering people analytic questions! Let us know what challenges or opportunities you're currently working on. You can either send us a description or record a short audio file and send them to info[at]millanchicago.com. We will answer questions in future podcast episodes.
In the first "Analytics in Practice" episode, co-hosts Ron Landis and Jennifer Miller deconstruct how to utilize the data analytic process for performance appraisal. Given the widespread and varied use of performance assessments in organizations, there are numerous opportunities to reap the benefits of applying data analytic thinking to the process.

In this episode, we had conversations around these questions: What are some of the decisions to consider for each step of the data analytic process in the context of performance assessment? What are some of the ways in which performance assessment can be improved by thinking about it through the lens of data analysis? Is there information collected during performance appraisal that could be used in ways to learn more about employee performance? Data analytic thinking takes place well before the actual analysis; we talked about how many of the choices we make when conducting performance assessments impact the data we ultimately can use.

Key Takeaways:
- Performance appraisal involves numerous choices that can be informed by taking a data analytic perspective on the process.
- The process of more fully using data analytics within performance assessment is something that HR departments can address in an incremental fashion. That is, we can steadily build performance assessment on a data analytic foundation based on specific organizational goals and objectives.

Send us your questions! We're interested in discussing real challenges and opportunities in the people analytics space! Let us know what people analytics challenges or opportunities you're currently working on. You can either send us a description or record a short audio file and send them to info[at]millanchicago.com. We will answer questions in future podcast episodes.

Related Links: Millan Chicago | Landis article - Selecting response anchors with equal intervals for summated rating scales
In this episode, co-hosts Ron Landis and Jennifer Miller deconstruct building predictive models and, specifically, utilizing forecasting in organizational contexts. In this episode, we had conversations around these questions: What are different types of data analytics? What are some of the decisions to consider when building predictive models? What are some contexts in which predictive models can be used in organizations? What are some of the data analytic requirements needed to utilize forecasting in organizational contexts? What are some clear steps that HR professionals can take to use predictive models? Key Takeaways: In general, we can think about three broad categories of data analytics: descriptive, inferential, and predictive. Ron and Jennifer provide a framework for building predictive models. First, all the relevant variables and relations among those variables need to be in the model. Second, the data need to be divided into a training set and a test set to determine how well the model predicts new data. Third, they discuss how the model can be used in organizational contexts. Related Links Millan Chicago
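The train/test workflow the hosts describe can be sketched in a few lines of Python. The monthly headcount figures and the simple linear-trend model below are hypothetical, chosen only to illustrate the idea of holding out recent data to check forecast accuracy before trusting a model:

```python
# A minimal sketch of the train/test split described in the episode, using
# a hypothetical monthly headcount series and a simple linear trend model.

def fit_trend(series):
    """Fit a least-squares line y = a + b*t to an evenly spaced series."""
    n = len(series)
    t = list(range(n))
    t_mean = sum(t) / n
    y_mean = sum(series) / n
    b = (sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, series))
         / sum((ti - t_mean) ** 2 for ti in t))
    a = y_mean - b * t_mean
    return a, b

def mean_abs_error(actual, predicted):
    """Average absolute gap between forecasts and what actually happened."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Illustrative monthly headcount (not real data).
headcount = [100, 103, 105, 108, 110, 113, 115, 118, 121, 123, 126, 128]

# Hold out the last 4 months as a test set; fit only on the training set.
train, test = headcount[:8], headcount[8:]
a, b = fit_trend(train)

# Forecast the held-out months and measure accuracy on data the model
# never saw -- the check the hosts recommend before using a model.
forecast = [a + b * t for t in range(8, 12)]
mae = mean_abs_error(test, forecast)
```

With this toy series the trend fit on the first eight months predicts the held-out months to within about half an employee; a real forecasting model would be vetted the same way, only on genuine organizational data.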
In this episode, co-hosts Ron Landis and Jennifer Miller deconstruct natural language processing (NLP), a technique used to derive insights from text-based information. They focus on how natural language processing can uncover information from different types of text such as performance management reviews, employee engagement responses, pulse survey responses, and job descriptions. In this episode, we had conversations around these questions: What is natural language processing? How can natural language processing be used in HR? What are some of the data analytic requirements needed to use natural language processing? What are some clear steps that HR professionals can take to use natural language processing? Key Takeaways: Natural language processing utilizes machine learning algorithms to interpret and process text. Ron and Jennifer provide an in-depth example of how performance feedback in the form of text could be used in conjunction with quantitative ratings. They discuss the example in the context of the data analytic process. First, what problem are you trying to solve? Second, what kind of data do you have to answer the question? Third, they discuss some of the NLP techniques. Finally, they provide recommendations on interpretation and communication to other key stakeholders. At the end of the episode, Jennifer and Ron recommend steps for folks just starting out in this space all the way to the more advanced HR professional. Related Links Millan Chicago Basics of Text Analysis for HR What is Natural Language Processing, and How is it Used in Workforce Analytics?
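A tiny, hand-rolled sketch can make the episode's example concrete: turning free-text performance comments into word counts and a crude positive/negative keyword tally. The comments, stopword list, and keyword sets below are invented for illustration; production NLP would use a real library and validated lexicons:

```python
# A minimal sketch of an NLP starting point for performance-review text.
# All comments and keyword lists are hypothetical.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "to", "of", "is", "with"}
POSITIVE = {"strong", "exceeds", "reliable", "collaborative"}
NEGATIVE = {"late", "missed", "inconsistent"}

def tokenize(text):
    """Lowercase the text, keep alphabetic tokens, drop stopwords."""
    return [w for w in re.findall(r"[a-z]+", text.lower())
            if w not in STOPWORDS]

def sentiment(tokens):
    """Crude score: positive keyword hits minus negative keyword hits."""
    return (sum(w in POSITIVE for w in tokens)
            - sum(w in NEGATIVE for w in tokens))

comments = [
    "Strong collaborator, exceeds goals and is reliable with deadlines.",
    "Missed several deadlines; inconsistent follow-through.",
]

tokenized = [tokenize(c) for c in comments]
term_counts = Counter(w for toks in tokenized for w in toks)
scores = [sentiment(toks) for toks in tokenized]
```

The resulting scores could then sit alongside quantitative ratings, as in the episode's example, to check whether narrative feedback and numeric scores tell the same story.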
In this episode, co-hosts Ron Landis and Jennifer Miller deconstruct the concept of a flight risk model. They focus on how these types of models can be used to predict the degree to which employees are at risk of leaving an organization. In this episode, we had conversations around these questions: What is a flight risk model? Why are flight risk models important? How do you build a flight risk model? What are some clear steps that HR professionals can take to build a flight risk model? Key Takeaways: Employee attrition has a variety of consequences. Organizations want to predict who is most likely to leave so that they can better forecast future staffing needs, intervene as necessary to enhance retention, and/or estimate dollar costs associated with predicted attrition. A flight risk model determines the employee characteristics, job characteristics, and organizational characteristics that relate to whether an employee voluntarily leaves an organization. There are different analytic approaches to developing flight risk models (with differing strengths and weaknesses). It is essential to choose the approach that is most appropriate for the given situation and to assess the accuracy of the model for making predictions. At the end of the episode, Jennifer and Ron recommend steps for folks just starting out in this space all the way to the more advanced HR professional. Related Links Millan Chicago Bureau of Labor Statistics Attrition Costs SHRM Study
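Once a flight risk model produces a leave probability per employee, the forecasting and dollar-cost uses mentioned above reduce to simple arithmetic. The probabilities and the replacement-cost figure below are hypothetical, meant only to sketch how model output feeds those decisions:

```python
# A back-of-the-envelope sketch of using flight-risk probabilities to
# forecast attrition and estimate its cost. All figures are hypothetical.

# Predicted probability of leaving in the next year, per employee,
# as a flight risk model might output.
leave_probability = {"E01": 0.72, "E02": 0.15, "E03": 0.55, "E04": 0.08}

REPLACEMENT_COST = 30_000  # assumed cost to replace one employee (USD)

# Expected number of leavers is the sum of the individual probabilities.
expected_leavers = sum(leave_probability.values())
expected_cost = expected_leavers * REPLACEMENT_COST

# Flag employees above a chosen risk threshold for retention outreach.
high_risk = [emp for emp, p in leave_probability.items() if p >= 0.5]
```

The 0.5 threshold is itself a modeling choice; as the episode notes, the right cutoff depends on the situation and on how accurate the model's predictions turn out to be.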
In the third episode of a special three part mini-series about measurement, co-hosts Ron Landis and Jennifer Miller discuss how validity of measurement is critical in People Analytics. In this episode, we had conversations around these questions: What is validity? Why is it important to consider the validity of measures? What are the different types of validity? How do I determine the validity of a measure? What are some of the steps that HR professionals can take to assess the validity of the assessments used in their organization? Key Takeaways: In order to measure what we intend to measure, we must ensure that our measures are valid. One of the most critical questions when selecting or developing a new measure or metric is whether that measure accurately represents what we intend to measure. But how do we make this judgment? There are three approaches to validity: content validity, construct validity, and criterion-related validity. Different types of validity require different types of data collection efforts, but the central idea is the same. Ron and Jennifer discuss ways in which we can assess the validity of any measure that we use. At the end of the episode, Jennifer and Ron recommend steps for folks just starting out in this space all the way to the more advanced HR professional. Related Links Millan Chicago
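Criterion-related validity, one of the three approaches named above, is commonly quantified as the correlation between assessment scores and a later criterion such as job performance. The scores below are invented for illustration; only the Pearson correlation computation is standard:

```python
# A minimal sketch of a criterion-related validity check: correlate a
# selection assessment with later performance ratings. Scores are
# hypothetical.
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

assessment = [62, 75, 81, 55, 90, 70]          # scores at hire
performance = [3.1, 3.8, 4.0, 2.9, 4.5, 3.5]   # later ratings

# The validity coefficient: how strongly the assessment predicts the
# criterion it was meant to predict.
validity_coefficient = pearson_r(assessment, performance)
```

A higher coefficient means the assessment more accurately represents what it was intended to measure; the toy data here are deliberately clean, so the coefficient is far higher than typical real-world validity evidence.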
In the second episode of a special three part mini-series about measurement, co-hosts Ron Landis and Jennifer Miller discuss how reliability of measurement is critical in People Analytics. In this episode, we had conversations around these questions: What is reliability? Why is it important to consider the reliability of measures? What are the different types of reliability? How do I determine the reliability of a measure? What are some of the steps that HR professionals can take to assess the reliability of the assessments used in their organization? Key Takeaways: In order to measure something accurately, we must ensure that our measures are first reliable. One of the most critical questions when selecting or developing a new measure or metric is whether that measure provides consistent scores. In the People Analytics field, the term reliability is defined as the consistency of a measure. In essence, we want to know whether what we are measuring is consistent across time (i.e., test-retest reliability), content (i.e., internal consistency), and/or raters (i.e., observer reliability). Different types of reliability evidence require different types of data collection efforts, but the central idea is the same. Ron and Jennifer talked about ways by which we could operationally assess the reliability of any measures that we use. In addition, they also talked about some rules of thumb for what is "reliable enough." At the end of the episode, Jennifer and Ron recommend steps for folks just starting out in this space all the way to the more advanced HR professional. Related Links Millan Chicago Organizational Testing: Assessment Do's & Don'ts
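Internal consistency, one of the reliability types above, is typically estimated with Cronbach's alpha. The 1-5 survey responses below are hypothetical, but the alpha formula itself is standard:

```python
# A minimal sketch of estimating internal consistency (Cronbach's alpha)
# for a short survey scale. Rows are respondents, columns are the
# scale's items; the ratings are hypothetical.
from statistics import pvariance

responses = [
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
]

k = len(responses[0])  # number of items in the scale
item_vars = [pvariance([row[i] for row in responses]) for i in range(k)]
total_scores = [sum(row) for row in responses]

# Cronbach's alpha:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
alpha = (k / (k - 1)) * (1 - sum(item_vars) / pvariance(total_scores))
```

A common rule of thumb, of the kind the hosts discuss, treats alpha of roughly 0.7 or above as acceptable; this toy scale clears that bar comfortably.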
In the first episode of a special three part mini-series about measurement, co-hosts Ron Landis and Jennifer Miller discuss how measures impact People Analytics. In this episode, we had conversations around these questions: What does it mean to measure employee behavior in an organizational context? Why is measurement foundational to People Analytics? What is the process for ensuring that measures are good? What are some of the challenges that organizations face when assessing employee behavior? What are some clear steps that HR professionals can take in the field of People Analytics to ensure that their assessments are effective? Key Takeaways: In the People Analytics field, the term measurement refers to the assessment or evaluation of employee behavior in organizational contexts. While companies have access to large volumes of data, it is important to consider what is actually being measured. Without a strong foundation in measurement, any decision based on the data analytic process may lead to unintended consequences. Ron and Jennifer discuss the data analytic process in the context of measurement. First, the topic or area needs to be clearly defined. Second, the data collection process should be articulated, including who, what, when, and where data will be collected. Third, an analysis of the data should be conducted. Finally, we interpret our results and determine next steps in the analytic process. An organizational example related to the measurement of training program impact is presented. How do you measure ROI of training? Jennifer and Ron discuss this example in the context of the data analytic process. At the end of the episode, Jennifer and Ron recommend steps for folks just starting out in this space all the way to the more advanced HR professional. Related Links Millan Chicago
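The training ROI question raised above ultimately rests on one formula, ROI = (benefit - cost) / cost; the hard measurement work is producing defensible numbers for the two inputs. The figures below are hypothetical placeholders for whatever the measurement process yields:

```python
# A back-of-the-envelope sketch of the training ROI calculation.
# Both dollar figures are hypothetical; in practice each would come
# from the measurement process the episode describes.

def training_roi(benefit, cost):
    """Return ROI as a proportion: (benefit - cost) / cost."""
    return (benefit - cost) / cost

# Assumed program cost and assumed dollar value of improved performance.
roi = training_roi(benefit=80_000, cost=50_000)  # 0.6, i.e., 60% return
```

The formula is trivial; the episode's point is that without sound measurement of the benefit term, the resulting ROI number is not trustworthy.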
In another technically-focused episode, co-hosts Ron Landis and Jennifer Miller deconstruct a statistical technique called logistic regression. They focus on how logistic models can be used to predict the likelihood of a particular outcome. Given the numerous organizational outcomes that are binary in nature (for example, turnover, absence, or promotion), logistic models can provide important insights as to the drivers of such variables. In this episode, we had conversations around these questions: What is logistic regression? How is logistic regression used in organizational contexts? How can logistic regression be used to drive optimal business decisions? What are some steps an organization can take to more effectively utilize logistic regression models? Key Takeaways: Logistic Regression is a technique used to model relations between variables of interest and predict the probability of an outcome. The focus in this episode is on outcomes that take on one of two possibilities. For example, let's say we're interested in predicting whether an individual leaves an organization. Our outcome variable is turnover, which we can define as whether someone leaves or stays with the company. We also have characteristics about those individuals that we can include in the model as predictors of the outcome variable. The model will give information on the likelihood of an individual either staying or leaving the organization. At the end of the episode, Jennifer and Ron recommend steps for folks just starting out in this space all the way to the more advanced HR professional. Related Links Logistic Regression Resource Millan Chicago
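The turnover example can be sketched end to end in plain Python. The tenure data below are invented, and the model is fit with bare-bones gradient descent purely to show the mechanics; in practice one would use a statistics library rather than hand-rolling the optimizer:

```python
# A minimal sketch of logistic regression for a binary turnover outcome,
# fit with plain gradient descent on toy data (tenure in years).
import math

def sigmoid(z):
    """Map any real number to a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

# Predictor: years of tenure. Outcome: 1 = left, 0 = stayed (hypothetical).
tenure = [0.5, 1.0, 1.5, 2.0, 4.0, 5.0, 6.0, 8.0]
left = [1, 1, 1, 0, 0, 0, 0, 0]

# Fit intercept b0 and slope b1 by gradient descent on the log-loss.
b0, b1, lr = 0.0, 0.0, 0.1
for _ in range(5000):
    grad0 = sum(sigmoid(b0 + b1 * x) - y for x, y in zip(tenure, left))
    grad1 = sum((sigmoid(b0 + b1 * x) - y) * x for x, y in zip(tenure, left))
    b0 -= lr * grad0 / len(tenure)
    b1 -= lr * grad1 / len(tenure)

def leave_probability(x):
    """Predicted probability that an employee with tenure x leaves."""
    return sigmoid(b0 + b1 * x)
```

With this toy pattern (short-tenured employees left), the fitted slope is negative: predicted leave probability is high early in tenure and falls as tenure grows, which is exactly the kind of driver insight the episode describes.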
In this episode, co-hosts Ron Landis and Jennifer Miller deconstruct organizational network analysis, sometimes referred to as ONA. Social interactions are becoming increasingly important to understand in the context of organizational success. While many have access to data related to interactions (e.g., communication patterns from email and chat), little has been done to analyze those patterns. Using ONA to understand and quantify such relational data provides organizations with a means for identifying whether individuals (or groups of individuals) have similar or different employee experiences. In this episode, we had conversations around these questions: What is organizational network analysis? How can organizational network analysis be used? What kind of data do I need for an organizational network analysis? Key Takeaways: Network analysis is a field that studies the relations among a set of actors. Everyday examples include social media platforms (e.g., connections between individuals) and the electric grid. The goal of network analysis is to examine the nature and patterns of those connections. To establish a network, you need certain kinds of data. Networks have components such as actors, nodes, or vertices. The interactions between the components are sometimes referred to as links or edges. Organizational network analysis examines the patterns of interactions between the components. At the end of the episode, Jennifer and Ron recommend steps for folks just starting out in this space all the way to the more advanced HR professional. Related Links Millan Chicago
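The nodes-and-edges vocabulary above maps directly onto code. In this sketch, employees are nodes, hypothetical "who messaged whom" pairs are edges, and degree (the count of distinct contacts) is the simplest centrality measure an ONA might start with; dedicated network libraries offer far richer metrics:

```python
# A minimal sketch of an organizational network analysis on hypothetical
# message data: nodes are employees, edges are observed contacts.
from collections import defaultdict

# Each pair is one observed communication link (undirected for simplicity).
edges = [
    ("Ana", "Ben"), ("Ana", "Cal"), ("Ana", "Dee"),
    ("Ben", "Cal"), ("Dee", "Eli"),
]

# Build an adjacency structure: each person's set of distinct contacts.
adjacency = defaultdict(set)
for a, b in edges:
    adjacency[a].add(b)
    adjacency[b].add(a)

# Degree centrality: number of distinct contacts per employee.
degree = {person: len(contacts) for person, contacts in adjacency.items()}
most_connected = max(degree, key=degree.get)
```

Here Ana emerges as the hub with three contacts while Eli sits on the periphery with one, the kind of pattern that could flag very different employee experiences within the same team.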
In this episode, co-hosts Ron Landis and Jennifer Miller deconstruct data literacy. Data is increasingly important to driving important decisions. While many organizations have access to more data than ever before, most could gain significant benefits by using their data to its fullest potential. One of the primary reasons for not maximizing the use of data is that many employees do not have key data literacy skills. In this episode, we had conversations around these questions: What is data literacy? How can we assess data literacy? How can organizations consider data literacy in the context of so many different roles and positions? What are some examples across users in which aspects of data literacy would be useful? Key Takeaways: Organizations must have a robust data culture to make efficient and effective use of their data. Fundamental to data culture is ensuring that everyone is data literate. Gartner defines data literacy as "the ability to read, write and communicate data in context, including an understanding of data sources and constructs, analytical methods and techniques applied, and the ability to describe the use case, application, and result value." Recent studies have demonstrated that data-driven organizations have a 3-5% higher enterprise value. In our view, data literacy differs based on the role of the employee in the organization, from general users to data scientists to executives. At the end of the episode, Jennifer and Ron recommend steps for folks just starting out in this space all the way to the more advanced HR professional. Related Links Millan Chicago Data literacy training: What is it and why do you need it? A Data and Analytics Leader's Guide to Data Literacy The Human Impact of Data Literacy
In this episode, co-hosts Ron Landis and Jennifer Miller deconstruct employee experience. While many organizations have historically focused on employee engagement, there has been a shift to focus on the broader set of experiences employees have within their organization. This increased focus on the experience is in part due to the ongoing pandemic and the significant number of individuals leaving the workforce. In this episode, we had conversations around these questions: What is the employee experience? Why is the employee experience important? How do you measure employee experience within an organization? What are some clear steps that HR professionals can take to measure their organization's employee experience? Key Takeaways: The employee experience, sometimes abbreviated as EX, refers to all the ways an employee interacts with an organization, including both work-related tasks ("on tasks") and non-work-related tasks ("off tasks"). The employee experience is a key factor in driving employee well-being and productivity as well as overall successful business performance. A recent survey by Willis Towers Watson found that more than 9 in 10 organizations stated enhancing the employee experience will be a priority in the next three years. The employee experience can be measured and assessed using our four step data analytic process with the Employee Journey Mapping document as a guide. Check out our summary of the process here. At the end of the episode, Jennifer and Ron recommend steps for folks just starting out in this space all the way to the more advanced HR professional. Related Links Millan Chicago Willis Towers Watson Employee Experience Survey Employee Journey Mapping
In our first technically-focused episode, co-hosts Ron Landis and Jennifer Miller deconstruct a common statistical technique called linear regression. They focus on how regression can be used to better understand the relations between key drivers of important outcomes. In this episode, they had conversations around these questions: What is linear regression? How are regression analyses used in organizational contexts? How can linear regression be used to drive optimal business decisions? What are some steps an organization can take to more effectively utilize linear regression models? Key Takeaways: Linear Regression is a technique used to model relations between variables of interest and to use these relations to forecast future states. For example, in a simple linear regression, we might be interested in predicting a key outcome variable such as sales from other predictor variables such as number of customers. This kind of statistical technique can be used when the underlying relation between the predictor and outcome is linear (i.e., when the predictor and outcome are plotted, the points follow a relatively straight line). At the end of the episode, Jennifer and Ron recommend steps for folks just starting out in this space all the way to the more advanced HR professional. Related Links Millan Chicago
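The sales-from-customers example above can be worked through with the standard least-squares formulas. The data points below are invented for illustration; only the slope/intercept math is the technique itself:

```python
# A minimal sketch of simple linear regression: predicting sales from
# customer counts via least squares. The data points are hypothetical.

customers = [10, 20, 30, 40, 50]
sales = [120, 210, 290, 405, 500]  # e.g., dollars per day

n = len(customers)
x_mean = sum(customers) / n
y_mean = sum(sales) / n

# Least-squares slope and intercept for: sales = intercept + slope * customers
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(customers, sales))
         / sum((x - x_mean) ** 2 for x in customers))
intercept = y_mean - slope * x_mean

def predict_sales(n_customers):
    """Forecast sales from a hypothetical customer count."""
    return intercept + slope * n_customers
```

The fitted slope (about 9.55 here) is the business-relevant quantity: each additional customer is associated with roughly $9.55 of additional daily sales, which is the "key driver" reading the episode emphasizes.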
In this episode, co-hosts Ron Landis and Jennifer Miller discuss the importance of utilizing data to make data-driven decisions. While many organizations have turned to analytics to better understand customers, employees, and processes, many are struggling to get the most out of their data. Often, companies fail to use data to its fullest potential because they lack a strong data culture. In this episode, they had conversations around these questions: What is data culture? Why is data culture important? How do you assess an organization's data culture? What are some steps a company can take to improve their data culture? Key Takeaways: Data Culture refers to an organization's ability to utilize data to make decisions. Companies with a strong data culture consistently reinforce and facilitate the informed use of data in decision making. Data culture is driven by four dimensions: human resource capabilities, human resource processes, technological capabilities, and technological processes. To understand how your organization is currently doing with data culture, take the Data Culture Readiness Assessment for an initial snapshot. At the end of the episode, Jennifer and Ron recommend steps for folks just starting out in this space all the way to the more advanced HR professional. Related Links Millan Chicago Data Culture Readiness Assessment
In this special episode, co-hosts Ron Landis and Jennifer Miller discuss what's on the horizon for the field of people analytics in 2022. They first set the backdrop by discussing recent events, including the impact of the ongoing pandemic and recent exodus of people from the workforce. Next, they discuss what to watch for in people analytics for 2022. In this episode, we had conversations around these questions: How does the ongoing pandemic contribute to changes in the field of people analytics? What impact will the 'Great Resignation' have on people analytics? What trends can we look forward to in 2022? Key Takeaways: There have been several changes in the business landscape in 2021 that contribute to the trends we will see in people analytics. First, companies transitioned to new ways of working. Some went completely remote while others tried a hybrid approach by reopening the office. The pandemic and ongoing variants of COVID-19 continue to challenge ways in which work gets done. Towards the end of 2021, another challenge presented itself with The Great Resignation. While the Great Resignation is not impacting all industries, it will have a lasting impact on the broader world of work. Ron and Jennifer identify and discuss four people analytics trends for 2022. First, data-driven decision making will continue to be in the spotlight. Second, there will be a greater emphasis and focus on defining and assessing the employee experience. Third, there will be continued focus on measuring diversity, equity and inclusion efforts. Finally, the unknowns of work environments will continue to challenge people analytics functions. Related Links Millan Chicago Harnessing the power of analytics and technology
In the inaugural episode, co-hosts Ron Landis and Jennifer Miller introduce their podcast, People Analytics Deconstructed. They discuss the ever-growing field of People Analytics, sometimes also commonly referred to as HR Analytics or Workforce Analytics. In this episode, we had conversations around these questions: What is People Analytics? What are some of the challenges that organizations face in the area of People Analytics? What are some clear steps that HR professionals can take in the field of People Analytics to make the most out of their data? Key Takeaways: The term 'People Analytics' has been growing in popularity since its arrival around 2005 and has joined other terms like HR Analytics and Workforce Analytics to represent the use of data-driven insights about an organization's workforce. The term 'People Analytics' can be broken down into two components. First, analytics refers to the four step process of using data: determining what we want to know, collecting the right data to answer the question, using an appropriate technique to analyze the data, and interpreting the analysis to make a decision based on the question that was asked. Second, people refers to any aspect of the employee's experience at an organization, from interviewing to leaving the organization. Ron and Jennifer identify and discuss four challenges that organizations face in the field of People Analytics. First, organizations need to ask the right question. Ensuring that the question is specific enough and will lead to insights and next steps is key to ensuring success. Second, organizations need to ensure what they're measuring is meaningful, replicable, and valid. Third, organizations have struggled to obtain talent for the People Analytics function; individuals need to have skills in both human behavior and analytics. Finally, organizations need a robust data culture.
At the end of the episode, Jennifer and Ron recommend steps for folks just starting out in this space all the way to the more advanced HR professional. Related Links Millan Chicago What is People Analytics