POPULARITY
Notes to the talkHere is a brief that summarizes the main themes and important ideas discussed in the provided sources: an audio recording ("Data Analysis_042025.mp3") of a thesis seminar session and excerpts from a document titled "Making Sense of Stories: Analyzing Qualitative Data in ELT Teacher Training." The primary focus is on data analysis techniques, particularly qualitative coding, triangulation, and the potential for incorporating quantitative elements.I. Key Themes and Important Ideas:A. Importance of Completing Data Collection Before Analysis:* The seminar leader emphasizes that data analysis should only begin after all data collection is complete. "Today's discussion is about data analysis. All of you have collected or very close to having completed uh collecting all of your data and this is an important requirement to continue the process of data analysis... If you are still trying to collect some information, know that what we talk about today uh you need to wait."* Starting analysis prematurely, before all data is gathered, is considered a "mistake."B. Understanding the Purpose of Data Analysis:* Data analysis is crucial for understanding the collected data and determining what is relevant and significant to report in the results and discussion sections of the thesis.* It helps researchers move from a large amount of raw data to focused and insightful findings. "Think of it like this. All of you are at this point, you've collected, if not all, most of your data. So you have all this data that you've collected... Ahora con todo esta información which data is not relevant... So you're going to then include this circle represents now only the information that relates to your research questions... Now from your data analysis... you're going to then figure out ok of all this information that now is relevant to my study, what is worth What is including in mys discussion?"* Not all relevant data needs to be reported; the analysis helps identify the most "important, surprising, insightful, interesting" findings.C. The Concept and Importance of Triangulation:* Triangulation involves bringing together different data sources (e.g., interviews, observations, documents) to gain a more comprehensive understanding of the research topic.* It allows for comparison between what participants say they do/believe, what they actually do (observed), and their planning/reflection processes. "Think of this if it helps to look at it like this. Your um your information here is allowing you to compare different things. For example, what people say they do or believe... What do they actually do? Well, to know that, what do we have to do? Have to observe."* The seminar leader stresses the importance of having sufficient data to triangulate and encourages participants to address any concerns about this. "If anybody today right now has concerns about whether or not you have the types of data to allow you to triangulate, we need to have a discussion today."* The "Making Sense of Stories" document provides specific examples of triangulation in ELT teacher training research, such as comparing planned instructions in lesson plans with delivered instructions observed in the classroom. "Compare the planned instructions (document) with the delivered instructions (observation). Were planned ICQs actually used?"D. Introduction to Qualitative Coding:* Qualitative coding is defined as a systematic process of labeling and organizing segments of text data (transcriptions, observation notes, documents) to identify patterns, themes, and concepts relevant to the research questions. "The process of coding is the process of labeling text. Coding is a systematic way to make sense of rich, complex, and often messy reality of language."* All audio and video data must be transcribed into text before coding. Microsoft Word Online's transcription feature is suggested as a tool.* The coding process involves identifying text segments (words, phrases, sentences, paragraphs) that relate to the research questions and assigning specific labels or "codes" to them. "You're coding things that relate to your research questions... Porque estamos en este proceso distinguendo, tenemos que distinguir qué sirve para nuestro estudio, qué no sirve, vamos a dejarlo fuera."E. Levels of Qualitative Coding:* The seminar introduces a three-level inductive coding approach:* Level One (Initial Codes): Creating very specific labels directly from the text, the literature review, or using in vivo codes (participant's exact words). "The first you create... the code, the label comes from your literature review... Using a label a code directly. If anx dijo eso... Tú puedes seleccionar esta frase. ¿Qué lebo puedes poner? Anxious, anxiety."* Level Two (Categories): Grouping the initial, specific codes into broader, more conceptual categories. "When we finish, you should have a long list of codes. And so I would do it in something like Excel... Les Segundo nivel es ordenar. Este grupo de códigos initial codes va aquí y voy a crear otra código. Puede ser como en category que representen todos sus códigos que son más específicos."* Level Three (Themes): Grouping the categories into overarching themes that provide a higher level of understanding and relate directly to the research questions. "Level three, yo voy a poner este themes. Ya vamos a tener categorías, ¿verdad? Cada categoría va a tener sus initial codes. ¿Qué hicimos? ¿Qué hacemos para este nivel level? Categoriz group these categories into these yes."* The "Making Sense of Stories" document also describes a similar iterative coding process, including immersion, initial/open coding, developing a codebook, focused/axial coding, and identifying themes/selective coding.F. The Codebook:* The outcome of the coding process is a codebook, which is a crucial part of the methodology section of the thesis.* The codebook will list all the codes used, potentially organized by categories and themes, and may include definitions and examples. "Cuando terminen, you're going to have a codebook... you're going to include your codebook that's going to include all of the codes that you used and it's going to be an outline como esema in word. categories initial codes."* The methodology section will describe the coding process and reference the codebook in the appendix.G. Incorporating Frequencies and Duration (Quantitative Elements):* The seminar leader emphasizes that qualitative data can be converted into quantitative data (frequencies, duration) for analysis. "How many of you think you'll need to analyze because we can convert qualitative information into quantitative information..."* This involves counting the occurrences of specific codes or measuring the length of certain events (e.g., teacher-student exchanges, use of relaxation techniques).* Examples discussed include tracking the frequency of positive/negative reinforcement, scaffolding, relaxation techniques, and the duration of collaborative work or interactions with specific students.* The "Making Sense of Stories" document provides detailed examples of how to quantify qualitative data by defining observable behaviors, developing coding rules, and using presence/absence or frequency counts in spreadsheets.H. Relationship Between Analysis and Reporting:* The analysis process directly informs what will be reported in the results and discussion sections. "We don't know what to write in the results and discussion until we understand the data. To understand the data, we need to analyze the data."* The evidence presented in the results section will often consist of direct quotes from the data that have been coded.* The analysis (coding, identifying themes, considering frequencies) helps determine the structure and content of the results and discussion.I. Openness to Modifying Research Questions:* Based on the initial findings during data analysis, it may be necessary to slightly modify the research questions to better align with the emerging answers. "It's very common at this point as you are analyzing your data and when you come back on May 5th that in some cases we may need to modify slightly your research question."* However, any modifications should remain within the scope of the literature review.J. Timeline and Expectations:* Participants are expected to begin the data analysis process (coding, considering frequencies) during the break before the next group session on May 5th.* This analysis is considered a crucial step that will significantly impact the quality of the thesis.* The final thesis paper is due on May 22nd, followed by mock presentations starting on May 26th and oral defenses.K. Utilizing Large Language Models (LLMs) as Research Assistants:* The "Making Sense of Stories" document introduces the potential of using LLMs to assist with qualitative data analysis.* LLMs can help with generating initial coding ideas, applying preliminary coding schemes, calculating frequencies of codes, and analyzing Likert scale questionnaires.* However, it is strongly emphasized that researchers must critically assess, validate, and cross-reference the output from LLMs to avoid bias and inaccuracies. LLMs should be seen as tools for augmentation, not replacements for rigorous methodological practices.II. Notable Quotes:* "Today's discussion is about data analysis. All of you have collected or very close to having completed uh collecting all of your data and this is an important requirement to continue the process of data analysis..."* "Please don't make that mistake. Okay. Today what we're going to be talking about is a process of analyzing qualitative information, but it's also a way to for you to start thinking about what you're going to report."* "This concept of triangulation is going to be very important in today's discussion for data analysis. Think of this if it helps to look at it like this. Your um your information here is allowing you to compare different things."* "Qualitative coding is the process of systematically identifying, labeling, and organizing segments of your data to discover patterns, themes, concepts, and relationships relevant to your research questions."* "Coding is simply labeling. It's giving a name to the text that you have."* "Repito, los códigos tien que ser super específico. Si comenzamos demasiado general, we don't have any place to go if we start to general."* "All qualitative can be converted to quantitative data and vice versa. When conducting qualitative data, you might find it useful to convert data to quantitative data and then analyze it."* "Correlation does NOT imply causation!"III. Implications for Thesis Work:* Participants need to prioritize transcribing their audio/video data and engaging in the initial levels of qualitative coding.* They should actively think about how triangulation will be achieved in their studies using their collected data sources.* Considering potential quantitative analysis (frequencies, duration) can add another layer of insight to their findings.* Developing a detailed and well-defined codebook is essential for a rigorous and transparent analysis process.* Researchers should remain flexible and open to refining their research questions based on the initial insights from the data analysis.* While LLMs can be helpful tools, they should be used judiciously and with critical evaluation.This briefing document provides a comprehensive overview of the key aspects of data analysis discussed in the provided sources, highlighting the importance of systematic qualitative methods and the potential for integrating quantitative elements in ELT teacher training research. Participants are encouraged to begin their analysis promptly and seek clarification on any doubts.ReviewQuiz* According to the speaker, what is the primary focus of today's session? Why is it being addressed at this particular point in the semester?* Explain the significance of triangulation in qualitative data analysis as described in the audio. Provide an example of how triangulation could be applied using different data sources mentioned.* Summarize the three levels of coding for qualitative data analysis discussed in the audio. What is the purpose of moving through these levels?* Describe what a codebook is and when it should be developed in the data analysis process. What key information does it contain?* Explain the difference between creating codes and using in vivo codes. Provide an example of each based on the provided material.* Why does the speaker emphasize the importance of transcribing all audio and video data to text before beginning the coding process?* What is the speaker's advice regarding modifying research questions at this stage of the thesis process? What important caveat does they mention?* Describe at least three examples from the audio of how qualitative data can be converted and analyzed using frequencies or duration.* According to the speaker, what constitutes the "results and discussion" section of the thesis paper in relation to the analyzed data? How does this differ from the literature review?* What reminders were given regarding the assessment components and attendance policy for the thesis seminar?Quiz Answer Key* The primary focus of today's session is data analysis, specifically for qualitative information. This is being addressed now because students have either completed or are very close to completing their data collection, which is a necessary prerequisite for starting the analysis process.* Triangulation is the process of bringing together and comparing information from different data sources (e.g., interviews, observations, documents) to gain a more nuanced and credible understanding of the research topic. For example, a researcher might compare a teacher's stated beliefs about differentiated instruction in an interview with their observed teaching practices and relevant lesson plans to see if these different sources of information align.* The three levels of coding are: (1) Initial/Level One Coding, which involves assigning specific labels or codes to segments of text; (2) Level Two Coding, where initial codes are grouped into broader categories; and (3) Level Three Coding, where categories are further grouped into overarching themes. The purpose of moving through these levels is to move from specific data points to more general analytical insights and patterns.* A codebook is a central document that is developed as the researcher codes their data. It lists all the codes being used, provides a clear definition for each code, outlines inclusion and exclusion criteria for applying the code, and often includes example snippets from the data that illustrate the code. It ensures consistency in the coding process.* Creating codes involves the researcher developing labels for segments of text based on their understanding of the data and research questions, potentially drawing from the literature review. Using in vivo codes involves using the exact words or phrases spoken by the participants as the codes themselves. For example, in the teacher interview snippet, "grammar mistake" is an in vivo code, while "delayed correction" is a created code.* The speaker emphasizes transcribing all audio and video data to text because the process of coding, which involves identifying and labeling segments of data, is primarily applied to text. Therefore, to analyze non-textual data in this way, it must first be converted into a textual format.* The speaker advises students to be open to slightly modifying their research questions based on the initial findings from the data analysis. However, they caution that any modifications should still align with the original literature review and the overall purpose of the research.* Examples of converting qualitative data to quantitative for analysis include: tracking the frequency of positive and negative reinforcement used by a teacher during a lesson; measuring the duration of student-teacher interactions; and counting the number of times a specific vocabulary strategy is implemented in a classroom.* The "results and discussion" section of the thesis paper primarily consists of the analyzed data, presented as evidence (results), and the researcher's interpretation and explanation of these findings in relation to the research questions and existing literature (discussion). This differs from the literature review, which presents findings from previous studies to provide context for the current research.* The speaker reminded students that their tutoring grade only makes up 40% of their final thesis seminar grade, with the oral defense and written thesis evaluation contributing the remaining 60%. They also reiterated the attendance policy, where missing a tutoring session equates to five absences, and exceeding three missed sessions may require taking an extraordinary exam.Essay Format Questions* Discuss the role of data analysis as a crucial bridge between data collection and the reporting of findings in qualitative research. Using examples from the provided audio, explain why skipping the data analysis stage can lead to significant challenges in the thesis writing process.* Critically evaluate the concept of triangulation in qualitative research, drawing on the examples and explanations provided in the sources. Discuss the strengths and potential limitations of using multiple data sources to enhance the credibility and depth of research findings in ELT teacher training.* Explain the three-level coding process for qualitative data analysis presented in the audio, emphasizing the importance of specificity in initial coding and the subsequent development of categories and themes. How does this systematic approach contribute to making sense of complex qualitative data?* Considering the information provided on analyzing frequencies and duration in qualitative data, discuss the value of incorporating quantitative elements into a primarily qualitative study. Provide specific examples from the audio of how this mixed-methods approach can enrich the analysis and provide additional insights in ELT research.* Reflect on the advice given regarding the iterative nature of qualitative research, including the potential need to modify research questions after initial data analysis. Discuss the importance of maintaining an open mind and flexibility throughout the research process while ensuring alignment with the existing literature review and overall research focus.Glossary of Key Terms* Coding (Qualitative): The process of systematically identifying, labeling, and organizing segments of qualitative data (text, audio transcripts, observation notes) to discover patterns, themes, concepts, and relationships relevant to the research questions.* Triangulation: The use of multiple data sources, methods, investigators, or theories to provide a more comprehensive and nuanced understanding of a research phenomenon, enhancing the credibility and validity of the findings.* Codebook: A central document that lists all the codes used in a qualitative study, along with their definitions, inclusion and exclusion criteria, and sometimes example data excerpts. It serves as a guide for consistent coding.* In Vivo Code: A type of code that uses the exact words or phrases spoken by the participants as the label for a segment of data.* Initial Coding (Open Coding/Level One Coding): The first stage of qualitative data analysis where researchers go through the data and assign preliminary, descriptive codes to segments of text, often staying close to the data itself.* Focused Coding (Axial Coding/Level Two Coding): A later stage of qualitative data analysis where initial codes are reviewed, refined, combined, and grouped into broader categories based on their relationships and patterns.* Thematic Analysis (Selective Coding/Level Three Coding): The process of identifying overarching themes or central ideas that emerge from the categories developed during focused coding, which help to answer the research questions.* Frequency Analysis: A method of quantitative data analysis that involves counting how often specific codes, behaviors, or events occur within the data.* Duration Analysis: A method of quantitative data analysis that involves measuring the length of time that specific events or interactions last within the data.* Transcription: The process of converting audio or video recordings into written text.* Likert Scale: A psychometric scale commonly involved in research that employs questionnaires. It is the most widely used approach to scaling responses in survey research, such that the term (or equivalently likert-type scale) is often used interchangeably with rating scale, although there are other types of rating scales.* Inductive Approach: A research approach that starts with specific observations and data, then moves towards identifying broader patterns, themes, and theories. The coding process described is largely inductive.* Deductive Approach: A research approach that starts with a general theory or hypothesis and then gathers data to test or confirm it.* Research Question: A specific inquiry that the research aims to answer. It guides the data collection and analysis processes.* Literature Review: A comprehensive summary and analysis of existing scholarly literature relevant to the research topic, providing context and identifying gaps in knowledge.* Methodology: The section of a research paper that describes the methods used to collect and analyze data. The coding process and codebook would be described in this section.* Results and Discussion: The section of a research paper where the findings of the data analysis are presented (results) and interpreted in relation to the research questions and existing literature (discussion).* Assessment (Thesis Seminar): The evaluation of a student's work in the thesis seminar, which includes the tutor's grading (40%), the oral defense (20%), and the evaluation of the written thesis (40%).* Oral Defense: A formal presentation of the completed thesis to a panel of examiners, who then ask questions about the research. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit benjaminlstewart.substack.com
How can we improve attendance when every school has a different process? In this episode, John Dues continues his exploration of Deming's philosophy in action, focusing on chronic absenteeism. As part of their third PDSA cycle, John's team shifts from individual interventions to process standardization—mapping how each of their four campuses handles attendance interventions. The surprising discovery? Each school follows a different process, revealing hidden variation and inefficiencies. By visualizing these systems, the team is not only grasping the current condition but also setting the stage for a reliable, scalable, and effective process. This methodical approach highlights how understanding systems and reducing variation are key to meaningful improvement. TRANSCRIPT 0:00:02.1 Andrew Stotz: My name is Andrew Stotz and I'll be your host as we dive deeper into the teachings of Dr. W. Edwards Deming. Today, I'm continuing my discussion with John Dues, who is part of the new generation of educators striving to apply Dr. Deming's principles to unleash student joy in learning. The topic for today is Mapping the Process. John, take it away. 0:00:26.7 John Dues: Hey Andrew. It's good to be back. Yeah. For the folks that have been following along for the past several episodes we've been working towards defining this problem more narrowly in terms of this chronic absenteeism issue we've been talking about. And for the last few episodes we've been talking about how the team didn't have enough information to write that precise problem statement. And we took a look at gathering additional information by running a couple PDSA cycles in those first two cycles that we've discussed so far. We know we had zeroed in on a handful of students and ran PDSAs with them and their families about their obstacles getting to school. And then we left off talking about how we were going to shift gears in PDSA cycle three. And instead we were going to focus on standardizing our process. So creating a process map for how we intervene with kids with our attendance teams across the network. So that's what the team is currently working on. But just as a sort of quick reminder to folks, and especially if you're watching, we have this model that we've been working through, this four step improvement model where you set the challenge or direction, grasp the current condition, establish your next target condition, and experiment to overcome obstacles. 0:01:48.1 John Dues: And then like we've talked about several times, we're doing this with the team and that includes people working in the system, people with the authority to change or work on the system, and then at least one person with significant knowledge of the System of Profound Knowledge, like an SOPK coach. And we've been using this model that's on the screen to sort of symbolize or I guess visualize what those four steps look like. You're sort of marching up this mountain towards this challenge or direction. And we've also talked about this long range goal that we've had and we've taken a look at some data where we have our chronic absenteeism rate mapped out over the last eight years or so. We have this long range goal. So this is the direction of the challenge where we're trying to take our chronic absenteeism from above 50% down to 5%. We have the data going back to the 2016/17 school year. Then we also talked about how there's this, not surprisingly, there's this sort of pre-pandemic level of chronic absenteeism, which was again too high. It's not where we wanted it, but we have this major shift up where we've seen this significant jump in chronic absenteeism since the pandemic hit. 0:03:15.0 John Dues: So in those four years, 2020/21, 21/22, 22/23 and 23/24 we were up in the 51, 52, even up into the close to the 60% range in chronic absenteeism at the height of the pandemic. So for PDSA cycle three, really doing two things. So, and we're going to talk about this in the episode today. If you remember back way at the start of this series, we looked at something I called a system flowchart. So we'll kind of revisit that and then we're going to take a look at two process maps that were created by two of our school teams to sort of map their current process. And then we'll walk through, sort of we'll take that, we'll walk through what the plan is for this PDSA cycle three. So let's start by looking back at this system flowchart. I'll sort of reorient you to this. So we have up on the, and this is the current state. So up on the top we have the target system which is attendance. And then we have this aim that is sort of a three part aim. 0:04:42.7 John Dues: We want to define strong attendance for students and staff, make sure everybody's on the same page. We want to ensure that students, families and staff have a shared understanding of what it means to have strong attendance. And then we are working on improving and creating systems that identify and remove barriers to strong attendance for students and staff. And then over on the left hand side we have sort of inputs. So these are things that contribute or their conditions that impact our system. And then in the middle we have our core activities. So the things that are happening that impact attendance and then there's outputs, both negative and positive outputs that come out of this system. And then we get feedback from our customers, we do research on this feedback and then we make design if it's a new system or redesign if it's a current system. And some of these things, some of those contributing conditions are, Ohio has a set of transportation laws. You know, there's our school model and our the way we operate our school hours, our expectations regarding student attendance, our various intervention systems, neighborhood dynamics, how far our families live from school. 0:06:03.4 John Dues: These are all things that contribute to our sort of inputs into our system. And then we have these core activities. And remember, we could just zero in on attendance systems. But there are many other parts of our system that impact whether or not kids come to school. So for one, many of our families are always going to be new to our system. So for example, in our middle schools, where they start with sixth grade some number of those kids are going to be from our elementary schools. Some number of those kids are going to come from other neighborhood schools, but they're all going to be new to that middle school. So whether they're coming from our elementary school or not, you have to think about how is the student and family being onboarded to our system. Another thing we're looking at is school culture and trust. You know, how much trust is in there, in the school. Do they have a strong culture between teachers and families or teachers and students, or the principal and teachers? Then there's academic systems how engaging are classes, those types of things. 0:07:05.7 John Dues: Then we have the attendance intervention systems, which is obviously a core focus. We have health and wellness and changes around mindset since we went through the pandemic. And then finally the third sort of, or sorry, the third, not the third, but the sixth core activity that we talked about was transportation. So we've talked about lots of problems with our busing system this year. So that's another thing that has a big impact on attendance. And so what this group, again is working on the core activity is the attendance intervention systems. What's the process for that? But I had mentioned in an earlier episode that we have another group that's working on transportation and busing and how we can improve that. So the whole point of the system flowchart is there's many, many things that go into something like an attendance rate. And many of these things are very challenging. Some are largely out of our control, but much of it is largely in our control. And we're trying to pull the levers that we think are most important when it comes to student attendance. 0:08:09.2 Andrew Stotz: And just one thing on that, one of the things I just find so frustrating and it's part of this class I'm teaching tonight is how do we scale a business. And one of the ways that's critical to scaling is simplifying. And sometimes, like, when I look at all of this complexity, on the one hand, you're like, okay, well, that's our job, right? Our job is to manage complexity. And that's the reason why we don't have a thousand competitors coming in, because it's complex and it's difficult. And on the other hand, it's like the simplifier in me is like, how do we simplify this? You know, like, I'm just curious about how you see complexity versus simplification. And in particular, it may just be in this stage, you're just putting everything up there, and it's just overwhelming. Like, oh, my God, there's so much involved in just fixing one thing, you know? What are your thoughts on that? 0:09:11.5 John Dues: Yeah, that's, I mean, that's a really good question. It's, I mean, I think it is a complex system because there's so many moving parts. And I think part of the nature of a complex system versus something like a complicated system is that when you try to impact some part of the system that has these ripple effects into other parts of the system, many of which are unattended or unintended consequences. So, yeah, I mean, I think one thing we have working in our favor is very stable senior leadership. So we're pretty good at understanding how we all work. We have a pretty good historical knowledge of how our school system has worked over time. And we have a pretty good holistic view of all of this complexity. Not that we're all able to improve it all at once, but I think we have a pretty good grasp of what's going on. And even a team like this there we could move faster perhaps, but I think we're trying to be pretty deliberate about the changes that we're making. 0:10:24.7 John Dues: And we're also deliberate about the levers that we're trying to pull for improvement. And these things change over time. So even something like transportation, I mean, the reason that we're working on that now and that we've chosen to work on that now is because the transportation that we're getting from the district is so untenable. Whereas 15 years ago, when I was a principal in our system, while the busing wasn't perfect, it was pretty consistent. You know, most days it dropped off at about the same time. It picked the kids up at about the same time every day. And while it was nowhere near where you would want it to be overall, it wasn't my biggest pain point as a principal. Now kids are literally missing hours or buses aren't showing up at all. And so we have to figure out a way to make this work. And to your point this was a system when charter schools were set up in Ohio, is just basically like the district, the nearby district, which is usually a big urban district, is going to do the busing for charter schools. 0:11:35.5 John Dues: And there really wasn't any more thought to it than that and so from the district's perspective, they they have to manage a lot of complexity. They have their own schools, they're busing for charters, which there's about 15,000 kids in charter schools in Columbus. And then they're also busing for private schools. And the district itself still has a very large geographic footprint, even though the number of students that attend there are about half what it was 50 years ago. So they have very spread out buildings, some of which are far below capacity, but they still have students attending them. So they haven't shrunk that geographic footprint. So that's a challenge as well. And at a time when it's become very difficult to find bus drivers. So I don't take lightly, like the challenges that the district is facing in this, but we got to get kids to school as a... Just as a basic starting point to be being able to do school well. 0:12:31.8 Andrew Stotz: Okay, keep going. 0:12:33.8 John Dues: I mean, it's also a really good segue. We'll take a look at a couple of the process maps. So we have our four campuses. We have something different going on. So even though our four campuses are geographically pretty proximate to each other, they have four different processes going on with their attendance intervention system. So take a look at this first process map, which is pretty simple from start to finish. What is that? 1, 2, 3, 4, 5, 6, 7, 8, 9. It's really nine steps and it really... 0:13:08.6 Andrew Stotz: And for the listeners out there that can't see it, he's got a process map, State Street. And what it shows is some circles and some squares and some tilted squares. I don't know what those are called. 0:13:23.5 John Dues: Yeah, I mean, it's just the circles are the start and end points. 0:13:26.9 Andrew Stotz: Okay. 0:13:27.8 John Dues: The squares are the steps in the process. And then the diamonds are, when there's a... Some decision has to be made in the process. 0:13:37.0 Andrew Stotz: Okay, great. 0:13:38.0 John Dues: So we're not going to go through all of these steps. But if you are watching this is a pretty simple process at one of our campuses, while there are multiple people sort of involved, it's also true that one person is driving a lot of this work. But the point is, especially for people that are watching, when you sort of walk through these 10 steps, you're going to see that this map is going to look very different and less complicated than the map at one of our other campuses. But the point is, especially if you can see things visually that you can tell just by looking at the two maps, there are two very different processes going on. And these two schools, this first one is actually an elementary school that feeds into the middle school. That is the map that we'll look at second, so this is the first process map. And then when we look at the second map, we can see very quickly, just visually speaking, there are far more steps, it's far more complicated. There's far more decision points. There's a lot more detail here, and there's a lot of interfacing between multiple people that all play a role in this particular process. 0:14:55.4 John Dues: And it's not that one is right and one is wrong. It's just that when you have these two campuses doing it differently, there very likely is inefficiencies. 0:15:06.8 Andrew Stotz: And are they mapping the same thing? And they... 0:15:10.6 John Dues: Yes, it's the same process. It's how they intervene as the state requires for kids that have some type of attendance issue. And there's different thresholds that mean different parts of the process kick in as a result. But they're operating within the same state process that you have to follow. But even so, you can see that they have a very different sort of illustration of what that process looks like. And if I had the other two campuses, we'd have four separate versions. And remember all these steps and you know, all these decision points. There's documents that exist. There's meetings that happen. There's agendas for those meetings. There's agendas for meeting with parents. There's letters that have to be mailed. And so you can imagine if everybody is creating separate forms, separate meeting agendas, keeping this information in different ways. There's probably a way to design this that's far more effective and efficient by pulling from the four different processes to create one process. And oh, by the way, if you do that, it makes training easier for anybody new that's going to take on some of the clerical roles or some of the interfacing with parents. 0:16:26.9 John Dues: And then if you have one process that you're working from, then you can also share best practices as they emerge as you're working. But if you have four variants, it's much harder to share that information. 0:16:43.4 Andrew Stotz: And you know, it's questionable whether this is a core function. It is an important process. Is it the core? 0:16:54.8 John Dues: Yeah, I mean, I would say it's a, I guess depending on how you define core. I mean, it's a required process. It's a process that the state requires and a lot of the sub steps are required components. Now, interestingly, this, the setup for this attendance intervention system came out of some legislation called Health...House Bill 410. And it's been in place for maybe five years or so, four or five years. And they're changing it right now. So there's new language. 0:17:30.2 Andrew Stotz: Just when we got it set. 0:17:32.2 John Dues: Just when we got it set. But we at least know the likely changes that are coming. So Ohio operates on a two year budget cycle. So in this new budget that will likely pass on. Well, it has to pass by June 30th. Right now there's language in there that changes this process for schools and actually gives schools way more leeway. So we'll sort of be ahead of the game because we're going to have our own process mapped and you know, we can remove some of those things that are a little more cumbersome on the school teams. And to your point, those things that were compliance related but didn't have really impact on improving attendance, we could just remove those now. We'll have some more freedom there. 0:18:13.8 Andrew Stotz: I mentioned about the core thing because there's a great book I read called Clockwork by Michael Michalowicz and he talks about identifying what is the core function in your business and then really focusing in on that. And it's interesting because one of the benefits of that is that if you don't do that, you can get caught up in every process like, and then all of a sudden it's just everything is seen as equal. 0:18:43.6 John Dues: Right. Yeah. 0:18:44.6 Andrew Stotz: Anyways, keep going. 0:18:45.9 John Dues: Yeah, it's one of those weird things and I'll stop sharing. Yeah, that was the last visual. But that's one of those things where like I said for the last five years or so these things have been required. And I think you'd be hard pressed to find a school system that would say these, the way things are outlined as requirements for schools to do on this front are not effective but people do them because they're required. And you know, I think with this updated language, we'll have some more flexibility to do this how we want to do it. 0:19:20.4 Andrew Stotz: And how does this, just to clarify how it fits into that mountain diagram, this is trying to assess or deal with the obstacle or is this the current state? I noticed that it said current state for the process map. But is the purpose of what you're... The original one you show. But is the purpose of what you're doing trying to overcome the, identify and overcome the obstacle? 0:19:46.8 John Dues: Well, I would say this is a part of grasping that current condition. You know, we did that early on in terms of that system flowchart, in terms of what the whole system looks like. And now what we're doing is learning about the processes at each individual school. Well, I'd say when you map out a process like this, and I think people would probably, my guess is, is that senior leaders would often say, well, no, we have a process and you know, everybody follows the same thing. And then if you actually mapped it like that, step by step, what you would see is tons of variation, tons of variation. 0:20:23.9 Andrew Stotz: So one of the benefits of that is it's not only, it's about facing the reality or understanding the true current state. Like everybody can say, no, no, no, we all know what the current situation is. No, we don't. 0:20:41.2 John Dues: No, you don't. And every time I sit with a team and make these process maps, we'll say, okay, what's the next step? And you know, maybe a couple people will pipe up and then someone inevitably goes, well, no, wait a second, that's not what we do. What we do next is X, Y or Z. I mean, it's... And that happens over and over and over again with this with this process just seems to be a part of it. It's not a bug. It's actually a feature of this mapping exercise. 0:21:08.5 Andrew Stotz: And many people try to solve these problems by just jumping in rather than taking the time to really, truly understand the current state. You know, what's the risk of the action taker? 0:21:22.7 John Dues: Well, yeah, I mean, I think what happens a lot of times is like when people don't really understand a process like this is they start blaming people for things that aren't going right. That's what typically happens. 0:21:35.8 Andrew Stotz: I want people to take responsibility around here. 0:21:38.3 John Dues: We have to hold people accountable, but you can't hold them accountable to a process that's unknown. Right. It's not well specified, but that's what typically happens. So, so yeah, so the objective for this PDSA cycle, so we're on this third cycle. So those first two were focused on talking to individual kids, interviewing with individual kids. And we said well let's actually look at our process for how we're intervening from a school perspective as a team at each of the schools and let's standardize that process. 0:22:13.1 John Dues: So that's what we're doing. We're sort of mapping it from start to finish, gathering feedback from key stakeholders as we sort of map a standardized process that works across all four schools. And really one of the things that we're doing right now is we're saying can we develop a process? And we have these four dimensions that we're looking for to sort of meet. One is functional, one is, is it reliable? Third, to your point about the business talk you're giving tonight is is it scalable? You know, does it work across the entire school and across the entire school system? And then is it effective? And we're basically, the attendance improvement team basically is going to put together the process and then they're going to put it in front of our senior leaders and we're going to rate sort of the process across those four dimensions and they've sort of predicted what they think is going to, how it's going to hold up when it's sort of tested by those senior leaders. 0:23:12.8 John Dues: So that's kind of what we're doing right now. So step one is mapping the four campuses and then we're going to map one standardized process, at least a rough draft. And again, so once that initial network wide or system wide map is created, we're going to put it in front of that senior leadership group. We're going to give them a brief survey, sort of a Likert scale across those four dimensions and see, see what they think basically. So that's our next step right now. 0:23:40.6 Andrew Stotz: Exciting, exciting. I want to tell a little story to wrap up my contribution here and that is after many years of living in Asia, I started to realize that everything's connected in Asia, people are connected. If you want to be mean to somebody, it's going to come back around to you. And if you want to push on somebody, it's going to come back around because everybody knows everybody. And I like to picture it like a circle. Let's just say a bunch of people in a circle facing the same direction. And then let's say they all put their right arm on the right shoulder of the man or woman in front of them. So now we have a circle that's connected in such a way. And if you think you're going to get something done by squeezing on the shoulder of the person in front of you, the problem you're going to face is that that's going to transmit all the way around the circle until all of a sudden you're going to be squeezed. And that is my visualization of the way influence works in Asia. Yeah, but I feel like it's the same type of thing when you just say, I want to hold people accountable and we need responsibility around here. 0:24:57.8 Andrew Stotz: What ends up happening is that the only choice that someone has is just to squeeze on the person in front of them. And when they do that, it just transmits a squeeze all the way around. It builds fear, it builds distrust and all of that. And so that. That was a visualization I was having when you were talking. 0:25:16.4 John Dues: Yeah, I mean, I think... And it can be convicting a little bit there. There's a Dr. Deming quote that I'll share to sort of wrap this. Before I do that, I think again, I go back to we... There are these unknown things about how to improve attendance. And so this PDSA, this plan, do study, act cycle, we're using one, again, was intervening with kids and trying to work with a handful of kids that had attendance issues and just see what works and what doesn't. We've shifted gears in this third cycle to something very different. But this is all part of one comprehensive effort by this team to put this new system in place. And all of these pieces of information are important, but this and this mapping, the process thing I think is a great... And I think maybe a lot of people wouldn't think about that as a PDSA to plan a new process, but you can absolutely use it in that way. But the Dr. Deming quote that I think of when I do process mapping is "if you can't describe what you're doing as a process, you don't know what you're doing." 0:26:21.7 John Dues: And I think that's true. Again, it's not to convict people, but I think often when we say, well, that's this thing is going wrong, we need to hold people accountable. And then you ask that person that's making that claim, well, what is the process for this thing? And they often can't tell you. Or they do, it's so vague that nobody could. 0:26:45.3 Andrew Stotz: Or they say, that's not my responsibility. My responsibility is to hold you accountable for getting the result. 0:26:51.4 John Dues: Right. Yeah. And, and, and many people, many organizations don't write these things down. You know, they don't write them down and share them with folks. So that's just some of these simple things are as part of the power making things exciting. 0:27:05.1 Andrew Stotz: Exciting. Well, yeah, how about we wrap it up there and so what are we going to get next time? 0:27:10.7 John Dues: Yeah, I think so. What we went through quickly here at the end was the plan for this PDSA cycle. So by the time we get back together, will have the process map for the system and we'll have had the feedback back and we'll be able to compare that to what the group predicted. 0:27:28.8 Andrew Stotz: So ladies and gentlemen, we're watching it in real time unfold the applications of Dr. Deming's principles. And isn't that what we want? You know, obviously we love theory and we love ideas, but we really need to be all thinking about how we apply these things. And so from my perspective, I'm really enjoying this series and I'm learning a lot. And as I mentioned before, I've been improving some of my thinking and some of my teaching in particular, based upon the discussions that we've had. So on behalf of everyone at the Deming Institute, I want to thank you again for this discussion and for listeners remember to go to deming.org to continue your journey. And also you can find John's book, Win Win: W. Edwards Deming the System of Profound Knowledge and the Science of Improving Schools on Amazon.com This is your host, Andrew Stotz. And I'll leave you with one of my favorite quotes from Dr. Deming. I know you've heard it before, but I'm going to say it again. Until we have joy. "People are entitled to joy in work."
Host Dr. Davide Soldato and guests Dr. Jessica Burris discuss the article "Longitudinal Results from the Nationwide Just ASK Initiative to Promote Routine Smoking Assessment in American College of Surgeons Accredited Cancer Programs" and how persistent smoking following cancer diagnosis causes adverse outcomes while smoking cessation can improve survival. TRANSCRIPT The guest on this podcast episode has no disclosures to declare. Dr. Davide SoldatoHello and welcome to JCO After Hours, the podcast where we sit down with authors from some of the latest articles published in the Journal of Clinical Oncology. I am your host, Dr. Davide Soldato, medical oncologist at Ospedale San Martino in Genoa, Italy. Today we are joined by JCO author Dr. Jessica Burris. Dr. Burris is an Associate professor of Psychology at the University of Kentucky and co leader of the Cancer Prevention and Control Research Program at the Markey Cancer Center. Her research focuses on smoking cessation among cancer survivors, health disparities, and behavioral interventions to promote health equity. She also leads the BIRDS Lab, which explores the intersection of smoking, social determinants of health, and cancer survivorship. Today I will be discussing with Dr. Burris on the article titled Longitudinal Results from the Nationwide Just Ask Initiative to Promote Routine Smoking Assessment in American College of Surgeons Accredited Cancer Program. So, thank you for Speaking with us, Dr. Burris. Dr. Jessica BurrisThank you for inviting me. Dr. Davide SoldatoSo today we'll be discussing an important study on the implementation of smoking assessment in cancer care and specifically through the Just Ask Initiative. So, we know that tobacco use is a critical factor in cancer treatment outcomes in general, and yet integrating systematic smoking assessment into oncology care has faced various challenges. So, Dr. Burris, to start off our interview, I would like to ask you to briefly introduce the Just Ask Initiative for those of our readers and listeners who may not be familiar with it. So, a little bit about the primary goals and why do you think that routine smoking assessment is such an important aspect of cancer care and why the Just Ask Initiative focuses on this specific issue? Dr. Jessica BurrisSure. So, as you mentioned before, smoking is a really critical factor in terms of cancer care and cancer outcomes. It impacts a lot of things, from complications after surgery up into cancer mortality, but it also impacts patient's quality of life. Their pain may be more severe, they're more tired, their distress levels are higher. So, there's just a lot of different reasons why we need to understand and address smoking in the context of cancer care. But like you said too, there's a lot of barriers as well. But in order to effectively treat nicotine dependence and tobacco use, we really need to know who is currently smoking. And so that was really the driver for Just Ask, wanting to make sure that we are asking every person with cancer at their diagnosis and as they go through treatment, what their smoking history is, if they are currently smoking, which we usually consider to be any smoking or other tobacco use in the past 30 days, so that once we can identify that person, then we know who we need to help. Dr. Davide SoldatoThank you very much. That was very clear. And in terms of methodology, Just Ask was really a quality improvement type of initiative that involved the programs that were contacted and approached to participate in this type of initiative. And the methodology is pretty standard for this type of implementation science, which is the Plan Do Study Act methodology. So just a little bit of background on this type of methodology and why do you think it might be so successful when implementing these types of changes at the structural level and when we are implementing these types of programs. Dr. Jessica BurrisRight. So, the American College of Surgeons requires all the accredited cancer programs, both Commission on Cancer and the NAPBC or the ones that focus on breast cancer, to do at least one quality improvement project annually. And most of the programs do use the evidence-based Plan Do Study Act approach. I think it's a great one. It has a lot of evidence behind it, but it also is very practical or pragmatic. So, you're using data from your local healthcare system or clinic or program to inform what it is that you do. And then you're constantly pulling data out to see how well you're addressing the clinical practice change that you're hoping to achieve. And so, data is going in and coming out and you're using that to inform exactly what it is that you're doing over time. So, it's an iterative approach to practice change and again, one that has proven successful time and time again. And so that's the program that these programs and Just Ask used in order to increase the frequency by which they ask patients about smoking. Dr. Davide SoldatoSo as you were saying, the main objective of the initiative was really to understand if we are asking patients diagnosed with cancer and survivors if they are smoking. And how can we better report this information inside of the medical chart of the patient. So, what was the primary endpoint or the objective that you had for this type of intervention? And can you give us a little bit of results? So, what did you find the implementation of this quality improvement? How did it change the percentages of patients that were asked about smoking habits? And a little bit, what is your opinion on the results that you obtain in the study? Dr. Jessica BurrisSure. So, the goal was simple and that was to have an ask rate that was at least 90%. The way that we defined an ask rate is among all newly diagnosed cancer patients, how many were asked about their smoking history and their current status at that initial visit? And so, we wanted all of the participating programs who opted in to Just Ask in 2022 to achieve that 90% ask rate by the end of this one-year quality improvement project. And again, using the Plan Do Study Act approach, it was a very pragmatic study in some ways. So, what we did was we provided an intervention change package that we made available online. And programs could access that whenever they needed to and pull-down educational resources, patient facing materials, practical tools for changing the EHR or pulling data out of the EHR, any of those number of things. And then we also hosted webinars over the course of the year. And those webinars were great because half the time they were in response to questions that programs were asking as they went through the Just Ask QI project. And the other half of time we were really just reminding programs of the rationale and the reason for making sure that they're asking. And then of course, letting them know that they don't have to stop there, they should be advising patients to quit and assisting them with cessation. Even though that wasn't the goal of Just Ask, the goal again of Just Ask was getting that 90% rate. And so, we had over 750 programs who opted in to Just Ask and did this QI study with us, and it was successful. So, we met the goal, or rather the programs met the goal of that 90% ask rate. And that was maintained over time. And that was just fantastic. So again, we know that the end goal is really to assist patients with quitting, but we can't do that unless we know who to help. And so, you have to ask first. And again, they were able to do that. Dr. Davide SoldatoSo thank you very much. The quality improvement program was absolutely successful. And to go a little bit in the numbers, by the end of the one-year implementation of the program, you report a 98% rate of asking patients who first approached the centers or over time if they were or not smokers. So, you said before that you targeted a 90% ask rate in terms of smoking habits. But when looking at the data, I noticed that you already had in the baseline survey where you asked the programs about what were the practice before the implementation of the Just Ask initiative, already something that was quite close to the 90%. And yet, despite starting from such a good point, which was basically your endpoint, you still observed a major change over the years of the implementation. So, I wanted to just underline a little bit what is the value of this type of programs. And still starting from such a very high standard still, we managed to further improve. And as you were saying, this is pivotal and I think it's fundamental to really understand and see who are the patients that we need to refer and then to help in the smoking cessation. So, I just wanted a little bit of a comment on these very important results, despite already starting from a very good background from the centers. Dr. Jessica BurrisYeah, I'm glad that you brought up the baseline. So, I think one thing that's important about this study is that we looked at our ask rate or the asking as a clinical practice in two different ways. So, the 98% that you referred to that we found at the final survey is based on a response to a question on the frequency of asking. So, it's a Likert type question. And essentially what we did was we combined programs that reported usually asking or almost always asking into one, and that's where we arrived at the 98%. And at baseline it was 92%. What's interesting though is that we also asked them to report the specific number of patients who were seen in their cancer program during the prior six months and the number of patients who were asked about smoking in the prior six months. And with that we could get a proportion. And in every case, the self-report Likert question had a higher outcome than the raw data based on the data that was pulled from the EHR. And so, we saw this increase significantly over time, both in the self-report Likert question, but also in the EHR based data. And so, it was a win in two ways. What I think is really interesting though is that at baseline, even though 92% of programs said that they regularly ask about their patient smoking status, 16% of programs could not provide data that would allow calculation of an ask rate. So, they were reporting that they were able to do so but then could not actually do so. So, I think what that means essentially is that there's a disconnect between what programs are doing regularly or they believe that they're doing regularly and what their data actually shows. And it could be an issue with the quality of the data that's going into the EHR, or it could be an issue with pulling the data out of the EHR. And so one of the things that we saw that I think is a second indicator of success of Just Ask is that the quality of the data that programs were inputting into the EHR related to their patients smoking history and smoking status did improve over time, which meant that by the end it really was the case that the vast majority of programs were asking. And not only that, but they were also documenting it in a way to where it could inform patient care. Does that make sense? Dr. Davide SoldatoAbsolutely. And I think that that explanation really is truly important because I think that it also connects a little bit to how the initiative was able also to change things at the structural level, to be sure that there was the best possible way of asking, but also of having that information readily available inside of the EHR. This also connects a little bit to my next question, which was a little bit about organizational structure and also implementation barriers, which you report also as a self-reported information by the specific programs. So, there was a little bit of implementation barriers that was reported by the programs and this was not a specific endpoint of the Just Ask initiative, but you kind of mentioned it a little bit. The difficulties in pulling data from the EHR in understanding whether the information was collected and how it was collected. This might be one of the implementation barrier when we are looking at initiatives like Just Ask. So, I just wanted a little bit of your opinion if you think that these implementational barriers are more on the organizational side or on the provider side. And how can we use these quality improvement programs to really tackle this type of barriers to improve overall the reach and the importance of our action regarding smoking cessation. Dr. Jessica BurrisThe devils in the details, right? So I think it's a “both and” situation and not either or I think for providers, for individual providers, oncologists, nurses, supportive care providers, the issue of feeling like they're not fully trained in tobacco use assessment and treatment, and also feeling because of a lack of training that they don't feel confident or competent or even comfortable having conversations with their patients about their smoking history or being in the position to where they can really help someone who wants to quit in choosing the best path and way forward to do that that really matters. And so organizational readiness, these programs that participated were pretty high even at baseline in terms of the organizational readiness. They understood that it's a problem and they wanted to do something about it. And they were really eager and chomping at the bit to do so. But that has to trickle down to individual providers. And so, I think one of the implementation strategies that was used was staff training and provider education. And a lot of the participating programs chose that strategy. And I think as staff and providers are trained in how to ask and how to do so in a way that is nonjudgmental and that doesn't lean into things like stigma or blame or making patients feel guilty that perhaps their behavior led to their cancer, but really just understanding tobacco history and understanding nicotine dependence and the best strategies that we have to address those things that helped and that made a difference but it also is things at the system level, like having good EHR data, being able to pull those data out at a regular interval every three months or every four months, or even every six months to make sure that you're tracking smoking and also quitting over time. Both of those things need to happen. And I think those were things that we saw change as a result of Just Ask participation. Dr. Davide SoldatoRelating to this, provider readiness also to counsel patients on how to stop smoking or what is the best strategy. Despite, as you said in the very beginning, this was not the objective of Just Ask because you just wanted to improve the rate of smoking assessment and the quality of reporting of smoking assessment. You still observed higher rates of patients and survivors that were actually referred to some kind of intervention for smoking cessation. So, I was just wondering, why do you think that even though that was not required, you still observe this type of improvement? Like, is it just inherent to the fact that we are improving and we are placing more interest and more attention on the fact that patients should quit smoking, or do you think that it relates to something else completely? Dr. Jessica BurrisI think there's probably multiple things going on. One is once you're fully aware of the fact of the impact of smoking after a cancer diagnosis, you're going to be compelled to do something, I think. And so just the simple fact of knowing now that the patient sitting in front of you has smoked in the past week or two, they may be under a lot of stress because they're coping with cancer and they're coping with the side effects of their treatment. They may even have increased their smoking since their cancer diagnosis. And now you have this information. I think people who are providing cancer care, they want to improve the health and the life of the person sitting in front of them. And if they understand that smoking is a detriment or a hurdle to their doing so, then they're also more inclined to try and help that person quit smoking. And so, I think the asking and the documenting likely led to an increase in assistance and referrals to tobacco treatment specialists or to a state quit line, which was also common, simply because that's part of providing quality care. I think also there's been a greater emphasis nationally, in part led by the National Cancer Institute and a cancer moonshot initiative that it led, they're really focused on getting more treatment to more patients with smoking and increasing the reach and the effectiveness of the treatments that we provide. And so, I think there has been a shift in oncology care broadly to put more attention on smoking and smoking cessation as part of standard cancer care. And so, I think this kind of shift in the field also informed things as well as, again, thinking about the patient and the individual who's in the room and wanting to do something about the problem that you've just identified. Dr. Davide SoldatoAnd one thing that I believe is truly exceptional about the Just Ask initiative is really also the diversity of the type of programs that you involved. Like, you went from community centers to more academic centers. And really, I did not have the impression reading the manuscript that there was any difference in the way this type of quality improvement initiative can really benefit all these programs and all these centers. So, I was just wanting to have your opinion or comment on how do you think this type of initiative could be transferable across the country and across different settings and different types of cancer care? Dr. Jessica BurrisYeah, I'm really glad that you brought that up, because I think most of the clinical trials that are done in this area are done at academic medical centers, which are admittedly kind of resource rich places to receive cancer care. And so, what works in academic medical center may not work in a small rural practice in the middle of Kansas, for example, or in Mississippi. And it may not work in other community-based practices, even if they're larger and set in an urban setting. And so, one of the things that frankly I loved about Just Ask is that it was very heterogeneous in terms of the sites and the participating groups. And so not only was it national and by far the largest initiative in this area, again with over 750 different programs, but the programs were diverse. So, we had large community-based programs, integrated networks, smaller community programs. And then the academic centers were actually the smallest. Only like 10 or 12 out of the 750 plus were academic. And so, it was very different than what is the norm in this research area and in this area generally in terms of clinical practice. And we were able to show that the type of program that participated had no bearing on their success. And so, when we think about initiatives that work and interventions that work, we also really have to think about what is scalable and what could be disseminated across different practices. And this is one of those things that can. It worked and it worked across different swaths of group, which was great. Dr. Davide SoldatoAbsolutely. And just one last comment about the intervention, and it's also a point that you raised in the manuscript. This initiative, like many others also at the national levels that have been reported previously, they rarely had really the participation or the perspective of the patients embodied inside of them. So, I was wondering, how do you see the field moving forward. Like you envision something that would implement sort of a co-creation with patients or cancer survivors in order to really create something that is more appealing and takes more into consideration what is the patient perspectives when we are approaching something like smoking cessation, which as you were mentioning before, it can have a lot of stigma or already some negative feelings by the patients and feelings of guilt regarding the fact that they smoked and that might have caused that cancer. So just a little bit of your opinion as to how you see the implementation science in smoking cessation moving forward while integrating also the patient perspectives. Dr. Jessica BurrisYeah, that's a great question. So, this is something that I've thought about a lot in my lab and at Market Cancer center, which I'll use as an example. But oftentimes what we see is that even when tobacco treatment is offered as part of standard cancer care, even when we try to remove barriers like the financial cost of treatment at Markey, we embed it within our psych oncology program. And so, all of those services are offered for free. The rate at which patients say, yes, they want to engage in treatment is much, much lower than what we would want. And so that means two things. One, we need to offer help repeatedly to patients and understand that their willingness to quit and their willingness to accept treatment likely would change over time. And so, we need to keep coming back to people. It's not a one and done situation. But then also we need to understand what the barriers are from a patient's perspective. So why are they saying no? That they're either not ready or that they don't want treatment. They want to, quote, unquote, go it alone. And oftentimes what we hear is that patients want to be able to do this by themselves. They want to feel like, I quit smoking and I did it all by myself. And this is this huge thing that I've overcome. Not too different from the perspective that a lot of patients have about fighting cancer. They want to fight this addiction, this dependence that they've had oftentimes for multiple decades. And so, I think one thing that might be beneficial is to think about having peer led tobacco treatment. So have a patient who was able to quit successfully and have them provide counseling alongside a trained provider so that patients see someone like them who's went through it in the context of cancer care and who was able to overcome and to fight and win against tobacco, essentially. I think the other thing is trying to make sure that when we're asking about smoking and when we're offering treatment that we are not accidentally harming patients by bringing up feelings of stigma or guilt or shame. And I think one way to make sure we don't do that is to really lean on clinicians who are trained in addressing social determinants of health and other supportive care. So, our social workers, I think would be great. They're oftentimes embedded within oncology care. They are surely able to be trained as tobacco treatment specialists. They're already working with patients; they're addressing other barriers to care. They're sensitive in how they ask questions oftentimes. And so, they're really an ideal partner for this work. And we have found in a lot of settings that social workers are great in terms of being tobacco treatment specialists, including what we saw in Just Ask. Dr. Davide SoldatoThank you very much. That was really very, very interesting. And so, last question, moving forward, we improved the rate of asking patients. We are able to document this addiction more clearly in the EHR. So how do you see the field moving forward? In the manuscript, you speak a little bit about the Beyond Ask initiative. So just a little bit of a background about what is this initiative, what you are planning to do, and what do you think would be the best way to really act on this information that we are starting to collect in a better way and more frequently. Dr. Jessica BurrisYeah. So Beyond Ask really took everything that we did in Just Ask and amplified it. So instead of focusing on asking, we really said to make a difference and to improve cancer outcomes, ultimately patients need to be able to quit smoking. It's not enough that we know who is smoking, but that we help that individual or those groups of people quit. And so Beyond Ask had the goal to increase cessation assistance. So, either prescribing medication to help with smoking cessation, referring to a quit line, or another evidence-based program, or personally providing cessation counseling on site at that cancer program and to try and improve again within assistance. It was another one-year study, but we increased the frequency of surveys. I think we ended up with five total surveys. So, we were capturing two to three months at a time instead of a six-month period. And the data that we were capturing was very similar to what we did in Just Ask. And I can say we're still doing the data analysis, but it was another major success. So, with Beyond Ask, we had about 350 participating programs, many of whom not all, but many did participate in Just Ask. So, I think Just Ask kind of energized people around addressing the issue of smoking in their patient population. And again, they were really chomping at the bit to do more. And so, we offered Beyond Ask just after Just Ask. So Just Ask was 2022. Beyond ask was 2023. It ended in the spring of 2024. And again, another success. Dr. Davide SoldatoThank you very much. So, we are eager to see the results of this study. So that leads us to the end of this interview. So, thank you again, Dr. Burris for joining us today and speaking about your work. Dr. Jessica BurrisThank you. Dr. Davide SoldatoSo we appreciate you sharing more on the JCO article titled Longitudinal Results from the Nationwide Just Ask Initiative to Promote Routine Smoking Assessment in American College of Surgeons Accredited Cancer Program. If you enjoy our show, please leave us a rating and a review and be sure to come back for another episode. You can find all ASCO shows at asco.org/podcast. The purpose of this podcast is to educate and to inform. This is not a substitute for professional medical care and is not intended for use in the diagnosis or treatment of individual conditions. Guests on this podcast express their own opinions, experience and conclusions. Guest statements on the podcast do not express the opinions of ASCO. The mention of any product, service, organization, activity or therapy should not be construed as an ASCO endorsement.
Télikert, fűthető üveggel. A technológiáról Rákosy Eszter a Rákosy Glass ügyvezetője mesél.
Ättestupa, Moxa, Typhoons, WordPress, Likert Scales, Algol, Josh Marpet, and more on the Security Weekly News. Visit https://www.securityweekly.com/swn for all the latest episodes! Show Notes: https://securityweekly.com/swn-440
Ättestupa, Moxa, Typhoons, WordPress, Likert Scales, Algol, Josh Marpet, and more on the Security Weekly News. Show Notes: https://securityweekly.com/swn-440
Ättestupa, Moxa, Typhoons, WordPress, Likert Scales, Algol, Josh Marpet, and more on the Security Weekly News. Visit https://www.securityweekly.com/swn for all the latest episodes! Show Notes: https://securityweekly.com/swn-440
Ättestupa, Moxa, Typhoons, WordPress, Likert Scales, Algol, Josh Marpet, and more on the Security Weekly News. Show Notes: https://securityweekly.com/swn-440
Student course evaluations are the primary way faculty receive feedback on their teaching. The challenge is in getting meaningful, actionable feedback from students that can be used to make improvements to instruction. Drs. Michelle Stubbs and Julie Reis share their recommendations for improving the quality of feedback students provide, including use of continuous feedback processes that go beyond surveys and Likert scales. In this podcast, you'll hear specific steps you can take to collect meaningful feedback and implement a dynamic and responsive instructional improvement cycle. Learn more about their work in their article.
If you've listened to the podcast for a while, you might have heard our ElevenLabs-powered AI co-host Charlie a few times. Text-to-speech has made amazing progress in the last 18 months, with OpenAI's Advanced Voice Mode (aka “Her”) as a sneak peek of the future of AI interactions (see our “Building AGI in Real Time” recap). Yet, we had yet to see a real killer app for AI voice (not counting music).Today's guests, Raiza Martin and Usama Bin Shafqat, are the lead PM and AI engineer behind the NotebookLM feature flag that gave us the first viral AI voice experience, the “Deep Dive” podcast:The idea behind the “Audio Overviews” feature is simple: take a bunch of documents, websites, YouTube videos, etc, and generate a podcast out of them. This was one of the first demos that people built with voice models + RAG + GPT models, but it was always a glorified speech-to-text. Raiza and Usama took a very different approach:* Make it conversational: when you listen to a NotebookLM audio there are a ton of micro-interjections (Steven Johnson calls them disfluencies) like “Oh really?” or “Totally”, as well as pauses and “uh…”, like you would expect in a real conversation. These are not generated by the LLM in the transcript, but they are built into the the audio model. See ~28:00 in the pod for more details. * Listeners love tension: if two people are always in agreement on everything, it's not super interesting. They tuned the model to generate flowing conversations that mirror the tone and rhythm of human speech. They did not confirm this, but many suspect the 2 year old SoundStorm paper is related to this model.* Generating new insights: because the hosts' goal is not to summarize, but to entertain, it comes up with funny metaphors and comparisons that actually help expand on the content rather than just paraphrasing like most models do. We have had listeners make podcasts out of our podcasts, like this one.This is different than your average SOTA-chasing, MMLU-driven model buildooor. Putting product and AI engineering in the same room, having them build evals together, and understanding what the goal is lets you get these unique results. The 5 rules for AI PMsWe always focus on AI Engineers, but this episode had a ton of AI PM nuggets as well, which we wanted to collect as NotebookLM is one of the most successful products in the AI space:1. Less is more: the first version of the product had 0 customization options. All you could do is give it source documents, and then press a button to generate. Most users don't know what “temperature” or “top-k” are, so you're often taking the magic away by adding more options in the UI. Since recording they added a few, like a system prompt, but those were features that users were “hacking in”, as Simon Willison highlighted in his blog post.2. Use Real-Time Feedback: they built a community of 65,000 users on Discord that is constantly reporting issues and giving feedback; sometimes they noticed server downtime even before the Google internal monitoring did. Getting real time pings > aggregating user data when doing initial iterations. 3. Embrace Non-Determinism: AI outputs variability is a feature, not a bug. Rather than limiting the outputs from the get-go, build toggles that you can turn on/off with feature flags as the feedback starts to roll in.4. Curate with Taste: if you try your product and it sucks, you don't need more data to confirm it. Just scrap that and iterate again. This is even easier for a product like this; if you start listening to one of the podcasts and turn it off after 10 seconds, it's never a good sign. 5. Stay Hands-On: It's hard to build taste if you don't experiment. Trying out all your competitors products as well as unrelated tools really helps you understand what users are seeing in market, and how to improve on it.Chapters00:00 Introductions01:39 From Project Tailwind to NotebookLM09:25 Learning from 65,000 Discord members12:15 How NotebookLM works18:00 Working with Steven Johnson23:00 How to prioritize features25:13 Structuring the data pipelines29:50 How to eval34:34 Steering the podcast outputs37:51 Defining speakers personalities39:04 How do you make audio engaging?45:47 Humor is AGI51:38 Designing for non-determinism53:35 API when?55:05 Multilingual support and dialect considerations57:50 Managing system prompts and feature requests01:00:58 Future of NotebookLM01:04:59 Podcasts for your codebase01:07:16 Plans for real-time chat01:08:27 Wrap upShow Notes* Notebook LM* AI Test Kitchen* Nicholas Carlini* Steven Johnson* Wealth of Nations* Histories of Mysteries by Andrej Karpathy* chicken.pdf Threads* Area 120* Raiza Martin* Usama Bin ShafqatTranscriptNotebookLM [00:00:00]: Hey everyone, we're here today as guests on Latent Space. It's great to be here, I'm a long time listener and fan, they've had some great guests on this show before. Yeah, what an honor to have us, the hosts of another podcast, join as guests. I mean a huge thank you to Swyx and Alessio for the invite, thanks for having us on the show. Yeah really, it seems like they brought us here to talk a little bit about our show, our podcast. Yeah, I mean we've had lots of listeners ourselves, listeners at Deep Dive. Oh yeah, we've made a ton of audio overviews since we launched and we're learning a lot. There's probably a lot we can share around what we're building next, huh? Yeah, we'll share a little bit at least. The short version is we'll keep learning and getting better for you. We're glad you're along for the ride. So yeah, keep listening. Keep listening and stay curious. We promise to keep diving deep and bringing you even better options in the future. Stay curious.Alessio [00:00:52]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Residence at Decibel Partners. And I'm joined by my co-host, Swyx, founder of Smol.ai.Swyx [00:01:01]: Hey, and today we're back in the studio with our special guest, Raiza Martin. And Raiza, I forgot to get your last name, Shafqat.Raiza [00:01:10]: Yes.Swyx [00:01:10]: Okay, welcome.Raiza [00:01:12]: Hello, thank you for having us.Swyx [00:01:14]: So AI podcasters meet human podcasters, always fun. Congrats on the success of Notebook LM. I mean, how does it feel?Raiza [00:01:22]: It's been a lot of fun. A lot of it, honestly, was unexpected. But my favorite part is really listening to the audio overviews that people have been making.Swyx [00:01:29]: Maybe we should do a little bit of intros and tell the story. You know, what is your path into the sort of Google AI org? Or maybe, actually, I don't even know what org you guys are in.Raiza [00:01:39]: I can start. My name is Raisa. I lead the Notebook LM team inside of Google Labs. So specifically, that's the org that we're in. It's called Google Labs. It's only about two years old. And our whole mandate is really to build AI products. That's it. We work super closely with DeepMind. Our entire thing is just, like, try a bunch of things and see what's landing with users. And the background that I have is, really, I worked in payments before this, and I worked in ads right before, and then startups. I tell people, like, at every time that I changed orgs, I actually almost quit Google. Like, specifically, like, in between ads and payments, I was like, all right, I can't do this. Like, this is, like, super hard. I was like, it's not for me. I'm, like, a very zero-to-one person. But then I was like, okay, I'll try. I'll interview with other teams. And when I interviewed in payments, I was like, oh, these people are really cool. I don't know if I'm, like, a super good fit with this space, but I'll try it because the people are cool. And then I really enjoyed that, and then I worked on, like, zero-to-one features inside of payments, and I had a lot of fun. But then the time came again where I was like, oh, I don't know. It's like, it's time to leave. It's time to start my own thing. But then I interviewed inside of Google Labs, and I was like, oh, darn. Like, there's definitely, like—Alessio [00:02:48]: They got you again.Raiza [00:02:49]: They got me again. And so now I've been here for two years, and I'm happy that I stayed because especially with, you know, the recent success of Notebook LM, I'm like, dang, we did it. I actually got to do it. So that was really cool.Usama [00:03:02]: Kind of similar, honestly. I was at a big team at Google. We do sort of the data center supply chain planning stuff. Google has, like, the largest sort of footprint. Obviously, there's a lot of management stuff to do there. But then there was this thing called Area 120 at Google, which does not exist anymore. But I sort of wanted to do, like, more zero-to-one building and landed a role there. We were trying to build, like, a creator commerce platform called Kaya. It launched briefly a couple years ago. But then Area 120 sort of transitioned and morphed into Labs. And, like, over the last few years, like, the focus just got a lot clearer. Like, we were trying to build new AI products and do it in the wild and sort of co-create and all of that. So, you know, we've just been trying a bunch of different things. And this one really landed, which has felt pretty phenomenal. Really, really landed.Swyx [00:03:53]: Let's talk about the brief history of Notebook LM. You had a tweet, which is very helpful for doing research. May 2023, during Google I.O., you announced Project Tailwind.Raiza [00:04:03]: Yeah.Swyx [00:04:03]: So today is October 2024. So you joined October 2022?Raiza [00:04:09]: Actually, I used to lead AI Test Kitchen. And this was actually, I think, not I.O. 2023. I.O. 2022 is when we launched AI Test Kitchen, or announced it. And I don't know if you remember it.Swyx [00:04:23]: That's how you, like, had the basic prototype for Gemini.Raiza [00:04:26]: Yes, yes, exactly. Lambda.Swyx [00:04:28]: Gave beta access to people.Raiza [00:04:29]: Yeah, yeah, yeah. And I remember, I was like, wow, this is crazy. We're going to launch an LLM into the wild. And that was the first project that I was working on at Google. But at the same time, my manager at the time, Josh, he was like, hey, I want you to really think about, like, what real products would we build that are not just demos of the technology? That was in October of 2022. I was sitting next to an engineer that was working on a project called Talk to Small Corpus. His name was Adam. And the idea of Talk to Small Corpus is basically using LLM to talk to your data. And at the time, I was like, wait, there's some, like, really practical things that you can build here. And just a little bit of background, like, I was an adult learner. Like, I went to college while I was working a full-time job. And the first thing I thought was, like, this would have really helped me with my studying, right? Like, if I could just, like, talk to a textbook, especially, like, when I was tired after work, that would have been huge. We took a lot of, like, the Talk to Small Corpus prototypes, and I showed it to a lot of, like, college students, particularly, like, adult learners. They were like, yes, like, I get it, right? Like, I didn't even have to explain it to them. And we just continued to iterate the prototype from there to the point where we actually got a slot as part of the I.O. demo in 23.Swyx [00:05:42]: And Corpus, was it a textbook? Oh, my gosh.Raiza [00:05:45]: Yeah. It's funny. Actually, when he explained the project to me, he was like, talk to Small Corpus. It was like, talk to a small corpse?Swyx [00:05:51]: Yeah, nobody says Corpus.Raiza [00:06:00]: It was like, a small corpse? This is not AI. Yeah, yeah. And it really was just, like, a way for us to describe the amount of data that we thought, like, it could be good for.Swyx [00:06:02]: Yeah, but even then, you're still, like, doing rag stuff. Because, you know, the context length back then was probably, like, 2K, 4K.Raiza [00:06:08]: Yeah, it was basically rag.Raiza [00:06:09]: That was essentially what it was.Raiza [00:06:10]: And I remember, I was like, we were building the prototypes. And at the same time, I think, like, the rest of the world was. Right? We were seeing all of these, like, chat with PDF stuff come up. And I was like, come on, we gotta go. Like, we have to, like, push this out into the world. I think if there was anything, I wish we would have launched sooner because I wanted to learn faster. But I think, like, we netted out pretty well.Alessio [00:06:30]: Was the initial product just text-to-speech? Or were you also doing kind of, like, synthesizing of the content, refining it? Or were you just helping people read through it?Raiza [00:06:40]: Before we did the I.O. announcement in 23, we'd already done a lot of studies. And one of the first things that I realized was the first thing anybody ever typed was, summarize the thing. Right?Raiza [00:06:53]: Summarize the document.Raiza [00:06:54]: And it was, like, half like a test and half just like, oh, I know the content. I want to see how well it does this. So it was part of the first thing that we launched. It was called Project Tailwind back then. It was just Q&A, so you could chat with the doc just through text, and it would automatically generate a summary as well. I'm not sure if we had it back then.Raiza [00:07:12]: I think we did.Raiza [00:07:12]: It would also generate the key topics in your document, and it could support up to, like, 10 documents. So it wasn't just, like, a single doc.Alessio [00:07:20]: And then the I.O. demo went well, I guess. And then what was the discussion from there to where we are today? Is there any, maybe, intermediate step of the product that people missed between this was launch or?Raiza [00:07:33]: It was interesting because every step of the way, I think we hit, like, some pretty critical milestones. So I think from the initial demo, I think there was so much excitement of, like, wow, what is this thing that Google is launching? And so we capitalized on that. We built the wait list. That's actually when we also launched the Discord server, which has been huge for us because for us in particular, one of the things that I really wanted to do was to be able to launch features and get feedback ASAP. Like, the moment somebody tries it, like, I want to hear what they think right now, and I want to ask follow-up questions. And the Discord has just been so great for that. But then we basically took the feedback from I.O., we continued to refine the product.Raiza [00:08:12]: So we added more features.Raiza [00:08:13]: We added sort of, like, the ability to save notes, write notes. We generate follow-up questions. So there's a bunch of stuff in the product that shows, like, a lot of that research. But it was really the rolling out of things. Like, we removed the wait list, so rolled out to all of the United States. We rolled out to over 200 countries and territories. We started supporting more languages, both in the UI and, like, the actual source stuff. We experienced, like, in terms of milestones, there was, like, an explosion of, like, users in Japan. This was super interesting in terms of just, like, unexpected. Like, people would write to us and they would be like, this is amazing. I have to read all of these rules in English, but I can chat in Japanese. It's like, oh, wow. That's true, right? Like, with LLMs, you kind of get this natural, it translates the content for you. And you can ask in your sort of preferred mode. And I think that's not just, like, a language thing, too. I think there's, like, I do this test with Wealth of Nations all the time because it's, like, a pretty complicated text to read. The Evan Smith classic.Swyx [00:09:11]: It's, like, 400 pages or something.Raiza [00:09:12]: Yeah. But I like this test because I'm, like, asking, like, Normie, you know, plain speak. And then it summarizes really well for me. It sort of adapts to my tone.Swyx [00:09:22]: Very capitalist.Raiza [00:09:25]: Very on brand.Swyx [00:09:25]: I just checked in on a Notebook LM Discord. 65,000 people. Yeah.Raiza [00:09:29]: Crazy.Swyx [00:09:29]: Just, like, for one project within Google. It's not, like, it's not labs. It's just Notebook LM.Raiza [00:09:35]: Just Notebook LM.Swyx [00:09:36]: What do you learn from the community?Raiza [00:09:39]: I think that the Discord is really great for hearing about a couple of things.Raiza [00:09:43]: One, when things are going wrong. I think, honestly, like, our fastest way that we've been able to find out if, like, the servers are down or there's just an influx of people being, like, it saysRaiza [00:09:53]: system unable to answer.Raiza [00:09:54]: Anybody else getting this?Raiza [00:09:56]: And I'm, like, all right, let's go.Raiza [00:09:58]: And it actually catches it a lot faster than, like, our own monitoring does.Raiza [00:10:01]: It's, like, that's been really cool. So, thank you.Swyx [00:10:03]: Canceled eat a dog.Raiza [00:10:05]: So, thank you to everybody. Please keep reporting it. I think the second thing is really the use cases.Raiza [00:10:10]: I think when we put it out there, I was, like, hey, I have a hunch of how people will use it, but, like, to actually hear about, you know, not just the context of, like, the use of Notebook LM, but, like, what is this person's life like? Why do they care about using this tool?Raiza [00:10:23]: Especially people who actually have trouble using it, but they keep pushing.Raiza [00:10:27]: Like, that's just so critical to understand what was so motivating, right?Raiza [00:10:31]: Like, what was your problem that was, like, so worth solving? So, that's, like, a second thing.Raiza [00:10:34]: The third thing is also just hearing sort of, like, when we have wins and when we don't have wins because there's actually a lot of functionality where I'm, like, hmm, IRaiza [00:10:42]: don't know if that landed super well or if that was actually super critical.Raiza [00:10:45]: As part of having this sort of small project, right, I want to be able to unlaunch things, too. So, it's not just about just, like, rolling things out and testing it and being, like, wow, now we have, like, 99 features. Like, hopefully we get to a place where it's, like, there's just a really strong core feature set and the things that aren't as great, we can just unlaunch.Swyx [00:11:02]: What have you unlaunched? I have to ask.Raiza [00:11:04]: I'm in the process of unlaunching some stuff, but, for example, we had this idea that you could highlight the text in your source passage and then you could transform it. And nobody was really using it and it was, like, a very complicated piece of our architecture and it's very hard to continue supporting it in the context of new features. So, we were, like, okay, let's do a 50-50 sunset of this thing and see if anybody complains.Raiza [00:11:28]: And so far, nobody has.Swyx [00:11:29]: Is there, like, a feature flagging paradigm inside of your architecture that lets you feature flag these things easily?Raiza [00:11:36]: Yes, and actually...Raiza [00:11:37]: What is it called?Swyx [00:11:38]: Like, I love feature flagging.Raiza [00:11:40]: You mean, like, in terms of just, like, being able to expose things to users?Swyx [00:11:42]: Yeah, as a PM. Like, this is your number one tool, right?Raiza [00:11:44]: Yeah, yeah.Swyx [00:11:45]: Let's try this out. All right, if it works, roll it out. If it doesn't, roll it back, you know?Raiza [00:11:49]: Yeah, I mean, we just run Mendel experiments for the most part. And, actually, I don't know if you saw it, but on Twitter, somebody was able to get around our flags and they enabled all the experiments.Raiza [00:11:58]: They were, like, check out what the Notebook LM team is cooking.Raiza [00:12:02]: I was, like, oh!Raiza [00:12:03]: And I was at lunch with the rest of the team and I was, like, I was eating. I was, like, guys, guys, Magic Draft League!Raiza [00:12:10]: They were, like, oh, no!Raiza [00:12:12]: I was, like, okay, just finish eating and then let's go figure out what to do.Raiza [00:12:15]: Yeah.Alessio [00:12:15]: I think a post-mortem would be fun, but I don't think we need to do it on the podcast now. Can we just talk about what's behind the magic? So, I think everybody has questions, hypotheses about what models power it. I know you might not be able to share everything, but can you just get people very basic? How do you take the data and put it in the model? What text model you use? What's the text-to-speech kind of, like, jump between the two? Sure.Raiza [00:12:42]: Yeah.Raiza [00:12:42]: I was going to say, SRaiza, he manually does all the podcasts.Raiza [00:12:46]: Oh, thank you.Usama [00:12:46]: Really fast. You're very fast, yeah.Raiza [00:12:48]: Both of the voices at once.Usama [00:12:51]: Voice actor.Raiza [00:12:52]: Good, good.Usama [00:12:52]: Yeah, so, for a bit of background, we were building this thing sort of outside Notebook LM to begin with. Like, just the idea is, like, content transformation, right? Like, we can do different modalities. Like, everyone knows that. Everyone's been poking at it. But, like, how do you make it really useful? And, like, one of the ways we thought was, like, okay, like, you maybe, like, you know, people learn better when they're hearing things. But TTS exists, and you can, like, narrate whatever's on screen. But you want to absorb it the same way. So, like, that's where we sort of started out into the realm of, like, maybe we try, like, you know, two people are having a conversation kind of format. We didn't actually start out thinking this would live in Notebook, right? Like, Notebook was sort of, we built this demo out independently, tried out, like, a few different sort of sources. The main idea was, like, go from some sort of sources and transform it into a listenable, engaging audio format. And then through that process, we, like, unlocked a bunch more sort of learnings. Like, for example, in a sense, like, you're not prompting the model as much because, like, the information density is getting unrolled by the model prompting itself, in a sense. Because there's two speakers, and they're both technically, like, AI personas, right? That have different angles of looking at things. And, like, they'll have a discussion about it. And that sort of, we realized that's kind of what was making it riveting, in a sense. Like, you care about what comes next, even if you've read the material already. Because, like, people say they get new insights on their own journals or books or whatever. Like, anything that they've written themselves. So, yeah, from a modeling perspective, like, it's, like Reiza said earlier, like, we work with the DeepMind audio folks pretty closely. So, they're always cooking up new techniques to, like, get better, more human-like audio. And then Gemini 1.5 is really, really good at absorbing long context. So, we sort of, like, generally put those things together in a way that we could reliably produce the audio.Raiza [00:14:52]: I would add, like, there's something really nuanced, I think, about sort of the evolution of, like, the utility of text-to-speech. Where, if it's just reading an actual text response, and I've done this several times. I do it all the time with, like, reading my text messages. Or, like, sometimes I'm trying to read, like, a really dense paper, but I'm trying to do actual work. I'll have it, like, read out the screen. There is something really robotic about it that is not engaging. And it's really hard to consume content in that way. And it's never been really effective. Like, particularly for me, where I'm, like, hey, it's actually just, like, it's fine for, like, short stuff. Like, texting, but even that, it's, like, not that great. So, I think the frontier of experimentation here was really thinking about there is a transform that needs to happen in between whatever.Raiza [00:15:38]: Here's, like, my resume, right?Raiza [00:15:39]: Or here's, like, a 100-page slide deck or something. There is a transform that needs to happen that is inherently editorial. And I think this is where, like, that two-person persona, right, dialogue model, they have takes on the material that you've presented. That's where it really sort of, like, brings the content to life in a way that's, like, not robotic. And I think that's, like, where the magic is, is, like, you don't actually know what's going to happen when you press generate.Raiza [00:16:08]: You know, for better or for worse.Raiza [00:16:09]: Like, to the extent that, like, people are, like, no, I actually want it to be more predictable now. Like, I want to be able to tell them. But I think that initial, like, wow was because you didn't know, right? When you upload your resume, what's it about to say about you? And I think I've seen enough of these where I'm, like, oh, it gave you good vibes, right? Like, you knew it was going to say, like, something really cool. As we start to shape this product, I think we want to try to preserve as much of that wow as much as we can. Because I do think, like, exposing, like, all the knobs and, like, the dials, like, we've been thinking about this a lot. It's like, hey, is that, like, the actual thing?Raiza [00:16:43]: Is that the thing that people really want?Alessio [00:16:45]: Have you found differences in having one model just generate the conversation and then using text-to-speech to kind of fake two people? Or, like, are you actually using two different kind of system prompts to, like, have a conversation step-by-step? I'm always curious, like, if persona system prompts make a big difference? Or, like, you just put in one prompt and then you just let it run?Usama [00:17:05]: I guess, like, generally we use a lot of inference, as you can tell with, like, the spinning thing takes a while. So, yeah, there's definitely, like, a bunch of different things happening under the hood. We've tried both approaches and they have their, sort of, drawbacks and benefits. I think that that idea of, like, questioning, like, the two different personas, like, persists throughout, like, whatever approach we try. It's like, there's a bit of, like, imperfection in there. Like, we had to really lean into the fact that, like, to build something that's engaging, like, it needs to be somewhat human and it needs to be just not a chatbot. Like, that was sort of, like, what we need to diverge from. It's like, you know, most chatbots will just narrate the same kind of answer, like, given the same sources, for the most part, which is ridiculous. So, yeah, there's, like, experimentation there under the hood, like, with the model to, like, make sure that it's spitting out, like, different takes and different personas and different, sort of, prompting each other is, like, a good analogy, I guess.Swyx [00:18:00]: Yeah, I think Steven Johnson, I think he's on your team. I don't know what his role is. He seems like chief dreamer, writer.Raiza [00:18:08]: Yeah, I mean, I can comment on Steven. So, Steven joined, actually, in the very early days, I think before it was even a fully funded project. And I remember when he joined, I was like, Steven Johnson's going to be on my team? You know, and for folks who don't know him, Steven is a New York Times bestselling author of, like, 14 books. He has a PBS show. He's, like, incredibly smart, just, like, a true, sort of, celebrity by himself. And then he joined Google, and he was like, I want to come here, and I want to build the thing that I've always dreamed of, which is a tool to help me think. I was like, a what? Like, a tool to help you think? I was like, what do you need help with? Like, you seem to be doing great on your own. And, you know, he would describe this to me, and I would watch his flow. And aside from, like, providing a lot of inspiration, to be honest, like, when I watched Steven work, I was like, oh, nobody works like this, right? Like, this is what makes him special. Like, he is such a dedicated, like, researcher and journalist, and he's so thorough, he's so smart. And then I had this realization of, like, maybe Steven is the product. Maybe the work is to take Steven's expertise and bring it to, like, everyday people that could really benefit from this. Like, just watching him work, I was like, oh, I could definitely use, like, a mini-Steven, like, doing work for me. Like, that would make me a better PM. And then I thought very quickly about, like, the adjacent roles that could use sort of this, like, research and analysis tool. And so, aside from being, you know, chief dreamer, Steven also represents, like, a super workflow that I think all of us, like, if we had access to a tool like it, would just inherently, like, make us better.Swyx [00:19:46]: Did you make him express his thoughts while he worked, or you just silently watched him, or how does this work?Raiza [00:19:52]: Oh, now you're making me admit it. But yes, I did just silently watch him.Swyx [00:19:57]: This is a part of the PM toolkit, right? They give user interviews and all that.Raiza [00:20:00]: Yeah, I mean, I did interview him, but I noticed, like, if I interviewed him, it was different than if I just watched him. And I did the same thing with students all the time. Like, I followed a lot of students around. I watched them study. I would ask them, like, oh, how do you feel now, right?Raiza [00:20:15]: Or why did you do that? Like, what made you do that, actually?Raiza [00:20:18]: Or why are you upset about, like, this particular thing? Why are you cranky about this particular topic? And it was very similar, I think, for Steven, especially because he was describing, he was in the middle of writing a book. And he would describe, like, oh, you know, here's how I research things, and here's how I keep my notes. Oh, and here's how I do it. And it was really, he was doing this sort of, like, self-questioning, right? Like, now we talk about, like, chain of, you know, reasoning or thought, reflection.Raiza [00:20:44]: And I was like, oh, he's the OG.Raiza [00:20:46]: Like, I watched him do it in real time. I was like, that's, like, L-O-M right there. And to be able to bring sort of that expertise in a way that was, like, you know, maybe, like, costly inference-wise, but really have, like, that ability inside of a tool that was, like, for starters, free inside of NotebookLM, it was good to learn whether or not people really did find use out of it.Swyx [00:21:05]: So did he just commit to using NotebookLM for everything, or did you just model his existing workflow?Raiza [00:21:12]: Both, right?Raiza [00:21:12]: Like, in the beginning, there was no product for him to use. And so he just kept describing the thing that he wanted. And then eventually, like, we started building the thing. And then I would start watching him use it. One of the things that I love about Steven is he uses the product in ways where it kind of does it, but doesn't quite. Like, he's always using it at, like, the absolute max limit of this thing. But the way that he describes it is so full of promise, where he's like, I can see it going here. And all I have to do is sort of, like, meet him there and sort of pressure test whether or not, you know, everyday people want it. And we just have to build it.Swyx [00:21:47]: I would say OpenAI has a pretty similar person, Andrew Mason, I think his name is. It's very similar, like, just from the writing world and using it as a tool for thought to shape Chachabitty. I don't think that people who use AI tools to their limit are common. I'm looking at my NotebookLM now. I've got two sources. You have a little, like, source limit thing. And my bar is over here, you know, and it stretches across the whole thing. I'm like, did he fill it up?Raiza [00:22:09]: Yes, and he has, like, a higher limit than others, I think. He fills it up.Raiza [00:22:14]: Oh, yeah.Raiza [00:22:14]: Like, I don't think Steven even has a limit, actually.Swyx [00:22:17]: And he has Notes, Google Drive stuff, PDFs, MP3, whatever.Raiza [00:22:22]: Yes, and one of my favorite demos, he just did this recently, is he has actually PDFs of, like, handwritten Marie Curie notes. I see.Swyx [00:22:29]: So you're doing image recognition as well. Yeah, it does support it today.Raiza [00:22:32]: So if you have a PDF that's purely images, it will recognize it.Raiza [00:22:36]: But his demo is just, like, super powerful.Raiza [00:22:37]: He's like, okay, here's Marie Curie's notes. And it's like, here's how I'm using it to analyze it. And I'm using it for, like, this thing that I'm writing.Raiza [00:22:44]: And that's really compelling.Raiza [00:22:45]: It's like the everyday person doesn't think of these applications. And I think even, like, when I listen to Steven's demo, I see the gap. I see how Steven got there, but I don't see how I could without him. And so there's a lot of work still for us to build of, like, hey, how do I bring that magic down to, like, zero work? Because I look at all the steps that he had to take in order to do it, and I'm like, okay, that's product work for us, right? Like, that's just onboarding.Alessio [00:23:09]: And so from an engineering perspective, people come to you and it's like, hey, I need to use this handwritten notes from Marie Curie from hundreds of years ago. How do you think about adding support for, like, data sources and then maybe any fun stories and, like, supporting more esoteric types of inputs?Raiza [00:23:25]: So I think about the product in three ways, right? So there's the sources, the source input. There's, like, the capabilities of, like, what you could do with those sources. And then there's the third space, which is how do you output it into the world? Like, how do you put it back out there? There's a lot of really basic sources that we don't support still, right? I think there's sort of, like, the handwritten notes stuff is one, but even basic things like DocX or, like, PowerPoint, right? Like, these are the things that people, everyday people are like, hey, my professor actually gave me everything in DocX. Can you support that? And then just, like, basic stuff, like images and PDFs combined with text. Like, there's just a really long roadmap for sources that I think we just have to work on.Raiza [00:24:04]: So that's, like, a big piece of it.Raiza [00:24:05]: On the output side, and I think this is, like, one of the most interesting things that we learned really early on, is, sure, there's, like, the Q&A analysis stuff, which is like, hey, when did this thing launch? Okay, you found it in the slide deck. Here's the answer. But most of the time, the reason why people ask those questions is because they're trying to make something new. And so when, actually, when some of those early features leaked, like, a lot of the features we're experimenting with are the output types. And so you can imagine that people care a lot about the resources that they're putting into NotebookLM because they're trying to create something new. So I think equally as important as, like, the source inputs are the outputs that we're helping people to create. And really, like, you know, shortly on the roadmap, we're thinking about how do we help people use NotebookLM to distribute knowledge? And that's, like, one of the most compelling use cases is, like, shared notebooks. It's, like, a way to share knowledge. How do we help people take sources and, like, one-click new documents out of it, right? And I think that's something that people think is, like, oh, yeah, of course, right? Like, one push a document. But what does it mean to do it right? Like, to do it in your style, in your brand, right?Raiza [00:25:08]: To follow your guidelines, stuff like that.Raiza [00:25:09]: So I think there's a lot of work, like, on both sides of that equation.Raiza [00:25:13]: Interesting.Swyx [00:25:13]: Any comments on the engineering side of things?Usama [00:25:16]: So, yeah, like I said, I was mostly working on building the text to audio, which kind of lives as a separate engineering pipeline, almost, that we then put into NotebookLM. But I think there's probably tons of NotebookLM engineering war stories on dealing with sources. And so I don't work too closely with engineers directly. But I think a lot of it does come down to, like, Gemini's native understanding of images really well with the latest generation.Raiza [00:25:39]: Yeah, I think on the engineering and modeling side, I think we are a really good example of a team that's put a product out there, and we're getting a lot of feedback from the users, and we return the data to the modeling team, right? To the extent that we say, hey, actually, you know what people are uploading, but we can't really support super well?Raiza [00:25:56]: Text plus image, right?Raiza [00:25:57]: Especially to the extent that, like, NotebookLM can handle up to 50 sources, 500,000 words each. Like, you're not going to be able to jam all of that into, like, the context window. So how do we do multimodal embeddings with that? There's really, like, a lot of things that we have to solve that are almost there, but not quite there yet.Alessio [00:26:16]: On then turning it into audio, I think one of the best things is it has so many of the human... Does that happen in the text generation that then becomes audio? Or is that a part of, like, the audio model that transforms the text?Usama [00:26:27]: It's a bit of both, I would say. The audio model is definitely trying to mimic, like, certain human intonations and, like, sort of natural, like, breathing and pauses and, like, laughter and things like that. But yeah, in generating, like, the text, we also have to sort of give signals on, like, where those things maybe would make sense.Alessio [00:26:45]: And on the input side, instead of having a transcript versus having the audio, like, can you take some of the emotions out of it, too? If I'm giving, like, for example, when we did the recaps of our podcast, we can either give audio of the pod or we can give a diarized transcription of it. But, like, the transcription doesn't have some of the, you know, voice kind of, like, things.Raiza [00:27:05]: Yeah, yeah.Alessio [00:27:05]: Do you reconstruct that when people upload audio or how does that work?Raiza [00:27:09]: So when you upload audio today, we just transcribe it. So it is quite lossy in the sense that, like, we don't transcribe, like, the emotion from that as a source. But when you do upload a text file and it has a lot of, like, that annotation, I think that there is some ability for it to be reused in, like, the audio output, right? But I think it will still contextualize it in the deep dive format. So I think that's something that's, like, particularly important is, like, hey, today we only have one format.Raiza [00:27:37]: It's deep dive.Raiza [00:27:38]: It's meant to be a pretty general overview and it is pretty peppy.Raiza [00:27:42]: It's just very upbeat.Raiza [00:27:43]: It's very enthusiastic, yeah.Raiza [00:27:45]: Yeah, yeah.Raiza [00:27:45]: Even if you had, like, a sad topic, I think they would find a way to be, like, silver lining, though.Raiza [00:27:50]: Really?Raiza [00:27:51]: Yeah.Raiza [00:27:51]: We're having a good chat.Raiza [00:27:54]: Yeah, that's awesome.Swyx [00:27:54]: One of the ways, many, many, many ways that deep dive went viral is people saying, like, if you want to feel good about yourself, just drop in your LinkedIn. Any other, like, favorite use cases that you saw from people discovering things in social media?Raiza [00:28:08]: I mean, there's so many funny ones and I love the funny ones.Raiza [00:28:11]: I think because I'm always relieved when I watch them. I'm like, haha, that was funny and not scary. It's great.Raiza [00:28:17]: There was another one that was interesting, which was a startup founder putting their landing page and being like, all right, let's test whether or not, like, the value prop is coming through. And I was like, wow, that's right.Raiza [00:28:26]: That's smart.Usama [00:28:27]: Yeah.Raiza [00:28:28]: And then I saw a couple of other people following up on that, too.Raiza [00:28:32]: Yeah.Swyx [00:28:32]: I put my about page in there and, like, yeah, if there are things that I'm not comfortable with, I should remove it. You know, so that it can pick it up. Right.Usama [00:28:39]: I think that the personal hype machine was, like, a pretty viral one. I think, like, people uploaded their dreams and, like, some people, like, keep sort of dream journals and it, like, would sort of comment on those and, like, it was therapeutic. I didn't see those.Raiza [00:28:54]: Those are good. I hear from Googlers all the time, especially because we launched it internally first. And I think we launched it during the, you know, the Q3 sort of, like, check-in cycle. So all Googlers have to write notes about, like, hey, you know, what'd you do in Q3? And what Googlers were doing is they would write, you know, whatever they accomplished in Q3 and then they would create an audio overview. And these people they didn't know would just ping me and be like, wow, I feel really good, like, going into a meeting with my manager.Raiza [00:29:25]: And I was like, good, good, good, good. You really did that, right?Usama [00:29:29]: I think another cool one is just, like, any Wikipedia article. Yeah. Like, you drop it in and it's just, like, suddenly, like, the best sort of summary overview.Raiza [00:29:38]: I think that's what Karpathy did, right? Like, he has now a Spotify channel called Histories of Mysteries, which is basically, like, he just took, like, interesting stuff from Wikipedia and made audio overviews out of it.Swyx [00:29:50]: Yeah, he became a podcaster overnight.Raiza [00:29:52]: Yeah.Raiza [00:29:53]: I'm here for it. I fully support him.Raiza [00:29:55]: I'm racking up the listens for him.Swyx [00:29:58]: Honestly, it's useful even without the audio. You know, I feel like the audio does add an element to it, but I always want, you know, paired audio and text. And it's just amazing to see what people are organically discovering. I feel like it's because you laid the groundwork with NotebookLM and then you came in and added the sort of TTS portion and made it so good, so human, which is weird. Like, it's this engineering process of humans. Oh, one thing I wanted to ask. Do you have evals?Raiza [00:30:23]: Yeah.Swyx [00:30:23]: Yes.Raiza [00:30:24]: What? Potatoes for chefs.Swyx [00:30:27]: What is that? What do you mean, potatoes?Raiza [00:30:29]: Oh, sorry.Raiza [00:30:29]: Sorry. We were joking with this, like, a couple of weeks ago. We were doing, like, side-by-sides. But, like, Raiza sent me the file and it was literally called Potatoes for Chefs. And I was like, you know, my job is really serious, but you have to laugh a little bit. Like, the title of the file is, like, Potatoes for Chefs.Swyx [00:30:47]: Is it like a training document for chefs?Usama [00:30:50]: It's just a side-by-side for, like, two different kind of audio transcripts.Swyx [00:30:54]: The question is really, like, as you iterate, the typical engineering advice is you establish some kind of test or benchmark. You're at, like, 30 percent. You want to get it up to 90, right?Raiza [00:31:05]: Yeah.Swyx [00:31:05]: What does that look like for making something sound human and interesting and voice?Usama [00:31:11]: We have the sort of formal eval process as well. But I think, like, for this particular project, we maybe took a slightly different route to begin with. Like, there was a lot of just within the team listening sessions. A lot of, like, sort of, like... Dogfooding.Raiza [00:31:23]: Yeah.Usama [00:31:23]: Like, I think the bar that we tried to get to before even starting formal evals with raters and everything was much higher than I think other projects would. Like, because that's, as you said, like, the traditional advice, right? Like, get that ASAP. Like, what are you looking to improve on? Whatever benchmark it is. So there was a lot of just, like, critical listening. And I think a lot of making sure that those improvements actually could go into the model. And, like, we're happy with that human element of it. And then eventually we had to obviously distill those down into an eval set. But, like, still there's, like, the team is just, like, a very, very, like, avid user of the product at all stages.Raiza [00:32:02]: I think you just have to be really opinionated.Raiza [00:32:05]: I think that sometimes, if you are, your intuition is just sharper and you can move a lot faster on the product.Raiza [00:32:12]: Because it's like, if you hold that bar high, right?Raiza [00:32:15]: Like, if you think about, like, the iterative cycle, it's like, hey, we could take, like, six months to ship this thing. To get it to, like, mid where we were. Or we could just, like, listen to this and be like, yeah, that's not it, right? And I don't need a rater to tell me that. That's my preference, right? And collectively, like, if I have two other people listen to it, they'll probably agree. And it's just kind of this step of, like, just keep improving it to the point where you're like, okay, now I think this is really impressive. And then, like, do evals, right? And then validate that.Swyx [00:32:43]: Was the sound model done and frozen before you started doing all this? Or are you also saying, hey, we need to improve the sound model as well? Both.Usama [00:32:51]: Yeah, we were making improvements on the audio and just, like, generating the transcript as well. I think another weird thing here was, like, we needed to be entertaining. And that's much harder to quantify than some of the other benchmarks that you can make for, like, you know, Sweebench or get better at this math.Swyx [00:33:10]: Do you just have people rate one to five or, you know, or just thumbs up and down?Usama [00:33:14]: For the formal rater evals, we have sort of like a Likert scale and, like, a bunch of different dimensions there. But we had to sort of break down what makes it entertaining into, like, a bunch of different factors. But I think the team stage of that was more critical. It was like, we need to make sure that, like, what is making it fun and engaging? Like, we dialed that as far as it goes. And while we're making other changes that are necessary, like, obviously, they shouldn't make stuff up or, you know, be insensitive.Raiza [00:33:41]: Hallucinations. Safety.Swyx [00:33:42]: Other safety things.Raiza [00:33:43]: Right.Swyx [00:33:43]: Like a bunch of safety stuff.Raiza [00:33:45]: Yeah, exactly.Usama [00:33:45]: So, like, with all of that and, like, also just, you know, following sort of a coherent narrative and structure is really important. But, like, with all of this, we really had to make sure that that central tenet of being entertaining and engaging and something you actually want to listen to. It just doesn't go away, which takes, like, a lot of just active listening time because you're closest to the prompts, the model and everything.Swyx [00:34:07]: I think sometimes the difficulty is because we're dealing with non-deterministic models, sometimes you just got a bad roll of the dice and it's always on the distribution that you could get something bad. Basically, how many do you, like, do ten runs at a time? And then how do you get rid of the non-determinism?Raiza [00:34:23]: Right.Usama [00:34:23]: Yeah, that's bad luck.Raiza [00:34:25]: Yeah.Swyx [00:34:25]: Yeah.Usama [00:34:26]: I mean, there still will be, like, bad audio overviews. There's, like, a bunch of them that happens. Do you mean for, like, the raider? For raiders, right?Swyx [00:34:34]: Like, what if that one person just got, like, a really bad rating? You actually had a great prompt, you actually had a great model, great weights, whatever. And you just, you had a bad output.Usama [00:34:42]: Like, and that's okay, right?Raiza [00:34:44]: I actually think, like, the way that these are constructed, if you think about, like, the different types of controls that the user has, right? Like, what can the user do today to affect it?Usama [00:34:54]: We push a button.Raiza [00:34:55]: You just push a button.Swyx [00:34:56]: I have tried to prompt engineer by changing the title. Yeah, yeah, yeah.Raiza [00:34:59]: Changing the title, people have found out.Raiza [00:35:02]: Yeah.Raiza [00:35:02]: The title of the notebook, people have found out. You can add show notes, right? You can get them to think, like, the show has changed. Someone changed the language of the output. Changing the language of the output. Like, those are less well-tested because we focused on, like, this one aspect. So it did change the way that we sort of think about quality as well, right? So it's like, quality is on the dimensions of entertainment, of course, like, consistency, groundedness. But in general, does it follow the structure of the deep dive? And I think when we talk about, like, non-determinism, it's like, well, as long as it follows, like, the structure of the deep dive, right? It sort of inherently meets all those other qualities. And so it makes it a little bit easier for us to ship something with confidence to the extent that it's like, I know it's going to make a deep dive. It's going to make a good deep dive. Whether or not the person likes it, I don't know. But as we expand to new formats, as we open up controls, I think that's where it gets really much harder. Even with the show notes, right? Like, people don't know what they're going to get when they do that. And we see that already where it's like, this is going to be a lot harder to validate in terms of quality, where now we'll get a greater distribution. Whereas I don't think we really got, like, varied distribution because of, like, that pre-process that Raiza was talking about. And also because of the way that we'd constrain, like, what were we measuring for? Literally, just like, is it a deep dive?Swyx [00:36:18]: And you determine what a deep dive is. Yeah. Everything needs a PM. Yeah, I have, this is very similar to something I've been thinking about for AI products in general. There's always like a chief tastemaker. And for Notebook LM, it seems like it's a combination of you and Steven.Raiza [00:36:31]: Well, okay.Raiza [00:36:32]: I want to take a step back.Swyx [00:36:33]: And Raiza, I mean, presumably for the voice stuff.Raiza [00:36:35]: Raiza's like the head chef, right? Of, like, deep dive, I think. Potatoes.Raiza [00:36:40]: Of potatoes.Raiza [00:36:41]: And I say this because I think even though we are already a very opinionated team, and Steven, for sure, very opinionated, I think of the audio generations, like, Raiza was the most opinionated, right? And we all, like, would say, like, hey, I remember, like, one of the first ones he sent me.Raiza [00:36:57]: I was like, oh, I feel like they should introduce themselves. I feel like they should say a title. But then, like, we would catch things, like, maybe they shouldn't say their names.Raiza [00:37:04]: Yeah, they don't say their names.Usama [00:37:05]: That was a Steven catch, like, not give them names.Raiza [00:37:08]: So stuff like that is, like, we all injected, like, a little bit of just, like, hey, here's, like, my take on, like, how a podcast should be, right? And I think, like, if you're a person who, like, regularly listens to podcasts, there's probably some collective preference there that's generic enough that you can standardize into, like, the deep dive format. But, yeah, it's the new formats where I think, like, oh, that's the next test. Yeah.Swyx [00:37:30]: I've tried to make a clone, by the way. Of course, everyone did. Yeah. Everyone in AI was like, oh, no, this is so easy. I'll just take a TTS model. Obviously, our models are not as good as yours, but I tried to inject a consistent character backstory, like, age, identity, where they work, where they went to school, what their hobbies are. Then it just, the models try to bring it in too much.Raiza [00:37:49]: Yeah.Swyx [00:37:49]: I don't know if you tried this.Raiza [00:37:51]: Yeah.Swyx [00:37:51]: So then I'm like, okay, like, how do I define a personality? But it doesn't keep coming up every single time. Yeah.Raiza [00:37:58]: I mean, we have, like, a really, really good, like, character designer on our team.Raiza [00:38:02]: What?Swyx [00:38:03]: Like a D&D person?Raiza [00:38:05]: Just to say, like, we, just like we had to be opinionated about the format, we had to be opinionated about who are those two people talking.Raiza [00:38:11]: Okay.Raiza [00:38:12]: Right.Raiza [00:38:12]: And then to the extent that, like, you can design the format, you should be able to design the people as well.Raiza [00:38:18]: Yeah.Swyx [00:38:18]: I would love, like, a, you know, like when you play Baldur's Gate, like, you roll, you roll like 17 on Charisma and like, it's like what race they are. I don't know.Raiza [00:38:27]: I recently, actually, I was just talking about character select screens.Raiza [00:38:30]: Yeah. I was like, I love that, right.Raiza [00:38:32]: And I was like, maybe there's something to be learned there because, like, people have fallen in love with the deep dive as a, as a format, as a technology, but also as just like those two personas.Raiza [00:38:44]: Now, when you hear a deep dive and you've heard them, you're like, I know those two.Raiza [00:38:48]: Right.Raiza [00:38:48]: And people, it's so funny when I, when people are trying to find out their names, like, it's a, it's a worthy task.Raiza [00:38:54]: It's a worthy goal.Raiza [00:38:55]: I know what you're doing. But the next step here is to sort of introduce, like, is this like what people want?Raiza [00:39:00]: People want to sort of edit the personas or do they just want more of them?Swyx [00:39:04]: I'm sure you're getting a lot of opinions and they all, they all conflict with each other. Before we move on, I have to ask, because we're kind of on this topic. How do you make audio engaging? Because it's useful, not just for deep dive, but also for us as podcasters. What is, what does engaging mean? If you could break it down for us, that'd be great.Usama [00:39:22]: I mean, I can try. Like, don't, don't claim to be an expert at all.Swyx [00:39:26]: So I'll give you some, like variation in tone and speed. You know, there's this sort of writing advice where, you know, this sentence is five words. This sentence is three, that kind of advice where you, where you vary things, you have excitement, you have laughter, all that stuff. But I'd be curious how else you break down.Usama [00:39:42]: So there's the basics, like obviously structure that can't be meandering, right? Like there needs to be sort of a, an ultimate goal that the voices are trying to get to, human or artificial. I think one thing we find often is if there's just too much agreement between people, like that's not fun to listen to. So there needs to be some sort of tension and build up, you know, withholding information. For example, like as you listen to a story unfold, like you're going to learn more and more about it. And audio that maybe becomes even more important because like you actually don't have the ability to just like skim to the end of something. You're driving or something like you're going to be hooked because like there's, and that's how like, that's how a lot of podcasts work. Like maybe not interviews necessarily, but a lot of true crime, a lot of entertainment in general. There's just like a gradual unrolling of information. And that also like sort of goes back to the content transformation aspect of it. Like maybe you are going from, let's say the Wikipedia article of like one of the History of Mysteries, maybe episodes. Like the Wikipedia article is going to state out the information very differently. It's like, here's what happened would probably be in the very first paragraph. And one approach we could have done is like maybe a person's just narrating that thing. And maybe that would work for like a certain audience. Or I guess that's how I would picture like a standard history lesson to unfold. But like, because we're trying to put it in this two-person dialogue format, like there, we inject like the fact that, you know, there's, you don't give everything at first. And then you set up like differing opinions of the same topic or the same, like maybe you seize on a topic and go deeper into it and then try to bring yourself back out of it and go back to the main narrative. So that's, that's mostly from like the setting up the script perspective. And then the audio, I was saying earlier, it's trying to be as close to just human speech as possible. I think was the, what we found success with so far.Raiza [00:41:40]: Yeah. Like with interjections, right?Raiza [00:41:41]: Like I think like when you listen to two people talk, there's a lot of like, yeah, yeah, right. And then there's like a lot of like that questioning, like, oh yeah, really?Raiza [00:41:49]: What did you think?Swyx [00:41:50]: I noticed that. That's great.Raiza [00:41:52]: Totally.Usama [00:41:54]: Exactly.Swyx [00:41:55]: My question is, do you pull in speech experts to do this? Or did you just come up with it yourselves? You can be like, okay, talk to a whole bunch of fiction writers to, to make things engaging or comedy writers or whatever, stand up comedy, right? They have to make audio engaging, but audio as well. Like there's professional fields of studying where people do this for a living, but us as AI engineers are just making this up as we go.Raiza [00:42:19]: I mean, it's a great idea, but you definitely didn't.Raiza [00:42:22]: Yeah.Swyx [00:42:24]: My guess is you didn't.Raiza [00:42:25]: Yeah.Swyx [00:42:26]: There's a, there's a certain field of authority that people have. They're like, oh, like you can't do this because you don't have any experience like making engaging audio. But that's what you literally did.Raiza [00:42:35]: Right.Usama [00:42:35]: I mean, I was literally chatting with someone at Google earlier today about how some people think that like you need a linguistics person in the room for like making a good chatbot. But that's not actually true because like this person went to school for linguistics. And according to him, he's an engineer now. According to him, like most of his classmates were not actually good at language. Like they knew how to analyze language and like sort of the mathematical patterns and rhythms and language. But that doesn't necessarily mean they were going to be eloquent at like while speaking or writing. So I think, yeah, a lot of we haven't invested in specialists in audio format yet, but maybe that would.Raiza [00:43:13]: I think it's like super interesting because I think there is like a very human question of like what makes something interesting. And there's like a very deep question of like what is it, right? Like what is the quality that we are all looking for? Is it does somebody have to be funny? Does something have to be entertaining? Does something have to be straight to the point? And I think when you try to distill that, this is the interesting thing I think about our experiment, about this particular launch is first, we only launched one format. And so we sort of had to squeeze everything we believed about what an interesting thing is into one package. And as a result of it, I think we learned it's like, hey, interacting with a chatbot is sort of novel at first, but it's not interesting, right? It's like humans are what makes interacting with chatbots interesting.Raiza [00:43:59]: It's like, ha ha ha, I'm going to try to trick it. It's like, that's interesting.Raiza [00:44:02]: Spell strawberry, right?Raiza [00:44:04]: This is like the fun that like people have with it. But like that's not the LLM being interesting.Raiza [00:44:08]: That's you just like kind of giving it your own flavor. But it's like, what does it mean to sort of flip it on its head and say, no, you be interesting now, right? Like you give the chatbot the opportunity to do it. And this is not a chatbot per se. It is like just the audio. And it's like the texture, I think, that really brings it to life. And it's like the things that we've described here, which is like, okay, now I have to like lead you down a path of information about like this commercialization deck.Raiza [00:44:36]: It's like, how do you do that?Raiza [00:44:38]: To be able to successfully do it, I do think that you need experts. I think we'll engage with experts like down the road, but I think it will have to be in the context of, well, what's the next thing we're building, right? It's like, what am I trying to change here? What do I fundamentally believe needs to be improved? And I think there's still like a lot more studying that we have to do in terms of like, well, what are people actually using this for? And we're just in such early days. Like it hasn't even been a month. Two, three weeks.Usama [00:45:05]: Three weeks.Raiza [00:45:06]: Yeah, yeah.Usama [00:45:07]: I think one other element to that is the fact that you're bringing your own sources to it. Like it's your stuff. Like, you know this somewhat well, or you care to know about this. So like that, I think, changed the equation on its head as well. It's like your sources and someone's telling you about it. So like you care about how that dynamic is, but you just care for it to be good enough to be entertaining. Because ultimately they're talking about your mortgage deed or whatever.Swyx [00:45:33]: So it's interesting just from the topic itself. Even taking out all the agreements and the hiding of the slow reveal. I mean, there's a baseline, maybe.Usama [00:45:42]: Like if it was like too drab. Like if someone was reading it off, like, you know, that's like the absolute worst.Raiza [00:45:46]: But like...Swyx [00:45:47]: Do you prompt for humor? That's a tough one, right?Raiza [00:45:51]: I think it's more of a generic way to bring humor out if possible. I think humor is actually one of the hardest things. Yeah.Raiza [00:46:00]: But I don't know if you saw...Raiza [00:46:00]: That is AGI.Swyx [00:46:01]: Humor is AGI.Raiza [00:46:02]: Yeah, but did you see the chicken one?Raiza [00:46:03]: No.Raiza [00:46:04]: Okay. If you haven't heard it... We'll splice it in here.Swyx [00:46:06]: Okay.Raiza [00:46:07]: Yeah.Raiza [00:46:07]: There is a video on Threads. I think it was by Martino Wong. And it's a PDF.Raiza [00:46:16]: Welcome to your deep dive for today. Oh, yeah. Get ready for a fun one. Buckle up. Because we are diving into... Chicken, chicken, chicken. Chicken, chicken. You got that right. By Doug Zonker. Now. And yes, you heard that title correctly. Titles. Our listener today submitted this paper. Yeah, they're going to need our help. And I can totally see why. Absolutely. It's dense. It's baffling. It's a lot. And it's packed with more chicken than a KFC buffet. What? That's hilarious.Raiza [00:46:48]: That's so funny. So it's like stuff like that, that's like truly delightful, truly surprising.Raiza [00:46:53]: But it's like we didn't tell it to be funny.Usama [00:46:55]: Humor is contextual also. Like super contextual is what we're realizing. So we're not prompting for humor, but we're prompting for maybe a lot of other things that are bringing out that humor.Alessio [00:47:04]: I think the thing about ad-generated content, if we look at YouTube, like we do videos on YouTube and it's like, you know, a lot of people like screaming in the thumbnails to get clicks. There's like everybody, there's kind of like a meta of like what you need to do to get clicks. But I think in your product, there's no actual creator on the other side investing the time. So you can actually generate a type of content that is maybe not universally appealing, you know, at a much, yeah, exactly. I think that's the most interesting thing. It's like, well, is there a way for like, take Mr.Raiza [00:47:36]: Beast, right?Alessio [00:47:36]: It's like Mr. Beast optimizes videos to reach the biggest audience and like the most clicks. But what if every video could be kind of like regenerated to be closer to your taste, you know, when you watch it?Raiza [00:47:48]: I think that's kind of the promise of AI that I think we are just like touching on, which is, I think every time I've gotten information from somebody, they have delivered it to me in their preferred method, right?Raiza [00:47:59]: Like if somebody gives me a PDF, it's a PDF.Raiza [00:48:01]: Somebody gives me a hundred slide deck, that is the format in which I'm going to read it. But I think we are now living in the era where transformations are really possible, which is, look, like I don't want to read your hundred slide deck, but I'll listen to a 16 minute audio overview on the drive home. And that, that I think is, is really novel. And that is, is paving the way in a way that like maybe we wanted, but didn'tRaiza [00:48:24]: expect.Raiza [00:48:25]: Where I also think you're listening to a lot of content that normally wouldn't have had content made about it. Like I watched this TikTok where this woman uploaded her diary from 2004.Raiza [00:48:36]: For sure, right?Raiza [00:48:36]: Like nobody was goin
Send us a textDiscover the Penn State Worry Questionnaire (PSWQ) as we uncover its impact on the realm of mental health assessment. Are you ready to understand how a simple 16-item tool can unravel the complexities of worry-related disorders? With just a five-point Likert scale, this questionnaire provides a nuanced understanding of conditions like General Anxiety Disorder, OCD, and depression. We promise you'll walk away with insights that revolutionize your approach to diagnosing and assessing anxiety across diverse populations and cultures. It's not just about identifying worry—it's about understanding the intricate dance between cognitive and emotional factors that affect everything from concentration to sleep.Join us as we dissect the PSWQ's core components, from the abstract nature of worry to its frequency and intensity, and the critical aspect of perceived controllability. Whether you're a seasoned mental health professional or a newcomer eager to expand your toolkit, our exploration offers valuable insights into how this globally validated tool can enhance treatment planning. We'll guide you through the PSWQ's versatility, flexibility across age groups, and its role in painting a comprehensive picture of an individual's mental state. Get ready for a thought-provoking journey into the pervasive nature of worry, offering therapists and clinicians a deeper understanding of a tool that's become indispensable worldwide.If you need to study for your national licensing exam, try the free samplers at: LicensureExamsThis podcast is not associated with the NBCC, AMFTRB, ASW, ANCC, NASP, NAADAC, CCMC, NCPG, CRCC, or any state or governmental agency responsible for licensure.
Noah Hein from Latent Space University is finally launching with a free lightning course this Sunday for those new to AI Engineering. Tell a friend!Did you know there are >1,600 papers on arXiv just about prompting? Between shots, trees, chains, self-criticism, planning strategies, and all sorts of other weird names, it's hard to keep up. Luckily for us, Sander Schulhoff and team read them all and put together The Prompt Report as the ultimate prompt engineering reference, which we'll break down step-by-step in today's episode.In 2022 swyx wrote “Why “Prompt Engineering” and “Generative AI” are overhyped”; the TLDR being that if you're relying on prompts alone to build a successful products, you're ngmi. Prompt engineering moved from being a stand-alone job to a core skill for AI Engineers now. We won't repeat everything that is written in the paper, but this diagram encapsulates the state of prompting today: confusing. There are many similar terms, esoteric approaches that have doubtful impact on results, and lots of people that are just trying to create full papers around a single prompt just to get more publications out. Luckily, some of the best prompting techniques are being tuned back into the models themselves, as we've seen with o1 and Chain-of-Thought (see our OpenAI episode). Similarly, OpenAI recently announced 100% guaranteed JSON schema adherence, and Anthropic, Cohere, and Gemini all have JSON Mode (not sure if 100% guaranteed yet). No more “return JSON or my grandma is going to die” required. The next debate is human-crafted prompts vs automated approaches using frameworks like DSPy, which Sander recommended:I spent 20 hours prompt engineering for a task and DSPy beat me in 10 minutes. It's much more complex than simply writing a prompt (and I'm not sure how many people usually spend >20 hours prompt engineering one task), but if you're hitting a roadblock it might be worth checking out.Prompt Injection and JailbreaksSander and team also worked on HackAPrompt, a paper that was the outcome of an online challenge on prompt hacking techniques. They similarly created a taxonomy of prompt attacks, which is very hand if you're building products with user-facing LLM interfaces that you'd like to test:In this episode we basically break down every category and highlight the overrated and underrated techniques in each of them. If you haven't spent time following the prompting meta, this is a great episode to catchup!Full Video EpisodeLike and subscribe on YouTube!Timestamps* [00:00:00] Introductions - Intro music by Suno AI* [00:07:32] Navigating arXiv for paper evaluation* [00:12:23] Taxonomy of prompting techniques* [00:15:46] Zero-shot prompting and role prompting* [00:21:35] Few-shot prompting design advice* [00:28:55] Chain of thought and thought generation techniques* [00:34:41] Decomposition techniques in prompting* [00:37:40] Ensembling techniques in prompting* [00:44:49] Automatic prompt engineering and DSPy* [00:49:13] Prompt Injection vs Jailbreaking* [00:57:08] Multimodal prompting (audio, video)* [00:59:46] Structured output prompting* [01:04:23] Upcoming Hack-a-Prompt 2.0 projectShow Notes* Sander Schulhoff* Learn Prompting* The Prompt Report* HackAPrompt* Mine RL Competition* EMNLP Conference* Noam Brown* Jordan Boydgraver* Denis Peskov* Simon Willison* Riley Goodside* David Ha* Jeremy Nixon* Shunyu Yao* Nicholas Carlini* DreadnodeTranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.Swyx [00:00:13]: Hey, and today we're in the remote studio with Sander Schulhoff, author of the Prompt Report.Sander [00:00:18]: Welcome. Thank you. Very excited to be here.Swyx [00:00:21]: Sander, I think I first chatted with you like over a year ago. What's your brief history? I went onto your website, it looks like you worked on diplomacy, which is really interesting because we've talked with Noam Brown a couple of times, and that obviously has a really interesting story in terms of prompting and agents. What's your journey into AI?Sander [00:00:40]: Yeah, I'd say it started in high school. I took my first Java class and just saw a YouTube video about something AI and started getting into it, reading. Deep learning, neural networks, all came soon thereafter. And then going into college, I got into Maryland and I emailed just like half the computer science department at random. I was like, hey, I want to do research on deep reinforcement learning because I've been experimenting with that a good bit. And over that summer, I had read the Intro to RL book and the deep reinforcement learning hands-on, so I was very excited about what deep RL could do. And a couple of people got back to me and one of them was Jordan Boydgraver, Professor Boydgraver, and he was working on diplomacy. And he said to me, this looks like it was more of a natural language processing project at the time, but it's a game, so very easily could move more into the RL realm. And I ended up working with one of his students, Denis Peskov, who's now a postdoc at Princeton. And that was really my intro to AI, NLP, deep RL research. And so from there, I worked on diplomacy for a couple of years, mostly building infrastructure for data collection and machine learning, but I always wanted to be doing it myself. So I had a number of side projects and I ended up working on the Mine RL competition, Minecraft reinforcement learning, also some people call it mineral. And that ended up being a really cool opportunity because I think like sophomore year, I knew I wanted to do some project in deep RL and I really liked Minecraft. And so I was like, let me combine these. And I was searching for some Minecraft Python library to control agents and found mineral. And I was trying to find documentation for how to build a custom environment and do all sorts of stuff. I asked in their Discord how to do this and their super responsive, very nice. And they're like, oh, you know, we don't have docs on this, but, you know, you can look around. And so I read through the whole code base and figured it out and wrote a PR and added the docs that I didn't have before. And then later I ended up joining their team for about a year. And so they maintain the library, but also run a yearly competition. That was my first foray into competitions. And I was still working on diplomacy. At some point I was working on this translation task between Dade, which is a diplomacy specific bot language and English. And I started using GPT-3 prompting it to do the translation. And that was, I think, my first intro to prompting. And I just started doing a bunch of reading about prompting. And I had an English class project where we had to write a guide on something that ended up being learn prompting. So I figured, all right, well, I'm learning about prompting anyways. You know, Chain of Thought was out at this point. There are a couple blog posts floating around, but there was no website you could go to just sort of read everything about prompting. So I made that. And it ended up getting super popular. Now continuing with it, supporting the project now after college. And then the other very interesting things, of course, are the two papers I wrote. And that is the prompt report and hack a prompt. So I saw Simon and Riley's original tweets about prompt injection go across my feed. And I put that information into the learn prompting website. And I knew, because I had some previous competition running experience, that someone was going to run a competition with prompt injection. And I waited a month, figured, you know, I'd participate in one of these that comes out. No one was doing it. So I was like, what the heck, I'll give it a shot. Just started reaching out to people. Got some people from Mila involved, some people from Maryland, and raised a good amount of sponsorship. I had no experience doing that, but just reached out to as many people as I could. And we actually ended up getting literally all the sponsors I wanted. So like OpenAI, actually, they reached out to us a couple months after I started learn prompting. And then Preamble is the company that first discovered prompt injection even before Riley. And they like responsibly disclosed it kind of internally to OpenAI. And having them on board as the largest sponsor was super exciting. And then we ran that, collected 600,000 malicious prompts, put together a paper on it, open sourced everything. And we took it to EMNLP, which is one of the top natural language processing conferences in the world. 20,000 papers were submitted to that conference, 5,000 papers were accepted. We were one of three selected as best papers at the conference, which was just massive. Super, super exciting. I got to give a talk to like a couple thousand researchers there, which was also very exciting. And I kind of carried that momentum into the next paper, which was the prompt report. It was kind of a natural extension of what I had been doing with learn prompting in the sense that we had this website bringing together all of the different prompting techniques, survey website in and of itself. So writing an actual survey, a systematic survey was the next step that we did in the prompt report. So over the course of about nine months, I led a 30 person research team with people from OpenAI, Google, Microsoft, Princeton, Stanford, Maryland, a number of other universities and companies. And we pretty much read thousands of papers on prompting and compiled it all into like a 80 page massive summary doc. And then we put it on archive and the response was amazing. We've gotten millions of views across socials. I actually put together a spreadsheet where I've been able to track about one and a half million. And I just kind of figure if I can find that many, then there's many more views out there. It's been really great. We've had people repost it and say, oh, like I'm using this paper for job interviews now to interview people to check their knowledge of prompt engineering. We've even seen misinformation about the paper. So someone like I've seen people post and be like, I wrote this paper like they claim they wrote the paper. I saw one blog post, researchers at Cornell put out massive prompt report. We didn't have any authors from Cornell. I don't even know where this stuff's coming from. And then with the hack-a-prompt paper, great reception there as well, citations from OpenAI helping to improve their prompt injection security in the instruction hierarchy. And it's been used by a number of Fortune 500 companies. We've even seen companies built entirely on it. So like a couple of YC companies even, and I look at their demos and their demos are like try to get the model to say I've been pwned. And I look at that. I'm like, I know exactly where this is coming from. So that's pretty much been my journey.Alessio [00:07:32]: Just to set the timeline, when did each of these things came out? So Learn Prompting, I think was like October 22. So that was before ChatGPT, just to give people an idea of like the timeline.Sander [00:07:44]: And so we ran hack-a-prompt in May of 2023, but the paper from EMNLP came out a number of months later. Although I think we put it on archive first. And then the prompt report came out about two months ago. So kind of a yearly cadence of releases.Swyx [00:08:05]: You've done very well. And I think you've honestly done the community a service by reading all these papers so that we don't have to, because the joke is often that, you know, what is one prompt is like then inflated into like a 10 page PDF that's posted on archive. And then you've done the reverse of compressing it into like one paragraph each of each paper.Sander [00:08:23]: So thank you for that. We saw some ridiculous stuff out there. I mean, some of these papers I was reading, I found AI generated papers on archive and I flagged them to their staff and they were like, thank you. You know, we missed these.Swyx [00:08:37]: Wait, archive takes them down? Yeah.Sander [00:08:39]: You can't post an AI generated paper there, especially if you don't say it's AI generated. But like, okay, fine.Swyx [00:08:46]: Let's get into this. Like what does AI generated mean? Right. Like if I had ChatGPT rephrase some words.Sander [00:08:51]: No. So they had ChatGPT write the entire paper. And worse, it was a survey paper of, I think, prompting. And I was looking at it. I was like, okay, great. Here's a resource that will probably be useful to us. And I'm reading it and it's making no sense. And at some point in the paper, they did say like, oh, and this was written in part, or we use, I think they're like, we use ChatGPT to generate the paragraphs. I was like, well, what other information is there other than the paragraphs? But it was very clear in reading it that it was completely AI generated. You know, there's like the AI scientist paper that came out recently where they're using AI to generate papers, but their paper itself is not AI generated. But as a matter of where to draw the line, I think if you're using AI to generate the entire paper, that's very well past the line.Swyx [00:09:41]: Right. So you're talking about Sakana AI, which is run out of Japan by David Ha and Leon, who's one of the Transformers co-authors.Sander [00:09:49]: Yeah. And just to clarify, no problems with their method.Swyx [00:09:52]: It seems like they're doing some verification. It's always like the generator-verifier two-stage approach, right? Like you generate something and as long as you verify it, at least it has some grounding in the real world. I would also shout out one of our very loyal listeners, Jeremy Nixon, who does omniscience or omniscience, which also does generated papers. I've never heard of this Prisma process that you followed. This is a common literature review process. You pull all these papers and then you filter them very studiously. Just describe why you picked this process. Is it a normal thing to do? Was it the best fit for what you wanted to do? Yeah.Sander [00:10:27]: It is a commonly used process in research when people are performing systematic literature reviews and across, I think, really all fields. And as far as why we did it, it lends a couple of things. So first of all, this enables us to really be holistic in our approach and lends credibility to our ability to say, okay, well, for the most part, we didn't miss anything important because it's like a very well-vetted, again, commonly used technique. I think it was suggested by the PI on the project. I unsurprisingly don't have experience doing systematic literature reviews for this paper. It takes so long to do, although some people, apparently there are researchers out there who just specialize in systematic literature reviews and they just spend years grinding these out. It was really helpful. And a really interesting part, what we did, we actually used AI as part of that process. So whereas usually researchers would sort of divide all the papers up among themselves and read through it, we use the prompt to read through a number of the papers to decide whether they were relevant or irrelevant. Of course, we were very careful to test the accuracy and we have all the statistics on that comparing it against human performance on evaluation in the paper. But overall, very helpful technique. I would recommend it. It does take additional time to do because there's just this sort of formal process associated with it, but I think it really helps you collect a more robust set of papers. There are actually a number of survey papers on Archive which use the word systematic. So they claim to be systematic, but they don't use any systematic literature review technique. There's other ones than Prisma, but in order to be truly systematic, you have to use one of these techniques. Awesome.Alessio [00:12:23]: Let's maybe jump into some of the content. Last April, we wrote the anatomy of autonomy, talking about agents and the parts that go into it. You kind of have the anatomy of prompts. You created this kind of like taxonomy of how prompts are constructed, roles, instructions, questions. Maybe you want to give people the super high level and then we can maybe dive into the most interesting things in each of the sections.Sander [00:12:44]: Sure. And just to clarify, this is our taxonomy of text-based techniques or just all the taxonomies we've put together in the paper?Alessio [00:12:50]: Yeah. Texts to start.Sander [00:12:51]: One of the most significant contributions of this paper is formal taxonomy of different prompting techniques. And there's a lot of different ways that you could go about taxonomizing techniques. You could say, okay, we're going to taxonomize them according to application, how they're applied, what fields they're applied in, or what things they perform well at. But the most consistent way we found to do this was taxonomizing according to problem solving strategy. And so this meant for something like chain of thought, where it's making the model output, it's reasoning, maybe you think it's reasoning, maybe not, steps. That is something called generating thought, reasoning steps. And there are actually a lot of techniques just like chain of thought. And chain of thought is not even a unique technique. There was a lot of research from before it that was very, very similar. And I think like Think Aloud or something like that was a predecessor paper, which was actually extraordinarily similar to it. They cite it in their paper, so no issues there. But then there's other things where maybe you have multiple different prompts you're using to solve the same problem, and that's like an ensemble approach. And then there's times where you have the model output something, criticize itself, and then improve its output, and that's a self-criticism approach. And then there's decomposition, zero-shot, and few-shot prompting. Zero-shot in our taxonomy is a bit of a catch-all in the sense that there's a lot of diverse prompting techniques that don't fall into the other categories and also don't use exemplars, so we kind of just put them together in zero-shot. The reason we found it useful to assemble prompts according to their problem-solving strategy is that when it comes to applications, all of these prompting techniques could be applied to any problem, so there's not really a clear differentiation there, but there is a very clear differentiation in how they solve problems. One thing that does make this a bit complex is that a lot of prompting techniques could fall into two or more overall categories. A good example being few-shot chain-of-thought prompting, obviously it's few-shot and it's also chain-of-thought, and that's thought generation. But what we did to make the visualization and the taxonomy clearer is that we chose the primary label for each prompting technique, so few-shot chain-of-thought, it is really more about chain-of-thought, and then few-shot is more of an improvement upon that. There's a variety of other prompting techniques and some hard decisions were made, I mean some of these could have fallen into like four different overall classes, but that's the way we did it and I'm quite happy with the resulting taxonomy.Swyx [00:15:46]: I guess the best way to go through this, you know, you picked out 58 techniques out of your, I don't know, 4,000 papers that you reviewed, maybe we just pick through a few of these that are special to you and discuss them a little bit. We'll just start with zero-shot, I'm just kind of going sequentially through your diagram. So in zero-shot, you had emotion prompting, role prompting, style prompting, S2A, which is I think system to attention, SIM2M, RAR, RE2 is self-ask. I've heard of self-ask the most because Ofir Press is a very big figure in our community, but what are your personal underrated picks there?Sander [00:16:21]: Let me start with my controversial picks here, actually. Emotion prompting and role prompting, in my opinion, are techniques that are not sufficiently studied in the sense that I don't actually believe they work very well for accuracy-based tasks on more modern models, so GPT-4 class models. We actually put out a tweet recently about role prompting basically saying role prompting doesn't work and we got a lot of feedback on both sides of the issue and we clarified our position in a blog post and basically our position, my position in particular, is that role prompting is useful for text generation tasks, so styling text saying, oh, speak like a pirate, very useful, it does the job. For accuracy-based tasks like MMLU, you're trying to solve a math problem and maybe you tell the AI that it's a math professor and you expect it to have improved performance. I really don't think that works. I'm quite certain that doesn't work on more modern transformers. I think it might have worked on older ones like GPT-3. I know that from anecdotal experience, but also we ran a mini-study as part of the prompt report. It's actually not in there now, but I hope to include it in the next version where we test a bunch of role prompts on MMLU. In particular, I designed a genius prompt, it's like you're a Harvard-educated math professor and you're incredible at solving problems, and then an idiot prompt, which is like you are terrible at math, you can't do basic addition, you can never do anything right, and we ran these on, I think, a couple thousand MMLU questions. The idiot prompt outperformed the genius prompt. I mean, what do you do with that? And all the other prompts were, I think, somewhere in the middle. If I remember correctly, the genius prompt might have been at the bottom, actually, of the list. And the other ones are sort of random roles like a teacher or a businessman. So, there's a couple studies out there which use role prompting and accuracy-based tasks, and one of them has this chart that shows the performance of all these different role prompts, but the difference in accuracy is like a hundredth of a percent. And so I don't think they compute statistical significance there, so it's very hard to tell what the reality is with these prompting techniques. And I think it's a similar thing with emotion prompting and stuff like, I'll tip you $10 if you get this right, or even like, I'll kill my family if you don't get this right. There are a lot of posts about that on Twitter, and the initial posts are super hyped up. I mean, it is reasonably exciting to be able to say, no, it's very exciting to be able to say, look, I found this strange model behavior, and here's how it works for me. I doubt that a lot of these would actually work if they were properly benchmarked.Alessio [00:19:11]: The meta's not to say you're an idiot, it's just to not put anything, basically.Sander [00:19:15]: I guess I do, my toolbox is mainly few-shot, chain of thought, and include very good information about your problem. I try not to say the word context because it's super overloaded, you know, you have like the context length, context window, really all these different meanings of context. Yeah.Swyx [00:19:32]: Regarding roles, I do think that, for one thing, we do have roles which kind of reified into the API of OpenAI and Thopic and all that, right? So now we have like system, assistant, user.Sander [00:19:43]: Oh, sorry. That's not what I meant by roles. Yeah, I agree.Swyx [00:19:46]: I'm just shouting that out because obviously that is also named a role. I do think that one thing is useful in terms of like sort of multi-agent approaches and chain of thought. The analogy for those people who are familiar with this is sort of the Edward de Bono six thinking hats approach. Like you put on a different thinking hat and you look at the same problem from different angles, you generate more insight. That is still kind of useful for improving some performance. Maybe not MLU because MLU is a test of knowledge, but some kind of reasoning approach that might be still useful too. I'll call out two recent papers which people might want to look into, which is a Salesforce yesterday released a paper called Diversity Empowered Intelligence, which is a, I think a shot at the bow for scale AI. So their approach of DEI is a sort of agent approach that solves three bench scores really, really well. I thought that was like really interesting as sort of an agent strategy. And then the other one that had some attention recently is Tencent AI Lab put out a synthetic data paper with a billion personas. So that's a billion roles generating different synthetic data from different perspective. And that was useful for their fine tuning. So just explorations in roles continue, but yeah, maybe, maybe standard prompting, like it's actually declined over time.Sander [00:21:00]: Sure. Here's another one actually. This is done by a co-author on both the prompt report and hack a prompt, and he analyzes an ensemble approach where he has models prompted with different roles and ask them to solve the same question. And then basically takes the majority response. One of them is a rag and able agent, internet search agent, but the idea of having different roles for the different agents is still around. Just to reiterate, my position is solely accuracy focused on modern models.Alessio [00:21:35]: I think most people maybe already get the few shot things. I think you've done a great job at grouping the types of mistakes that people make. So the quantity, the ordering, the distribution, maybe just run through people, what are like the most impactful. And there's also like a lot of good stuff in there about if a lot of the training data has, for example, Q semi-colon and then a semi-colon, it's better to put it that way versus if the training data is a different format, it's better to do it. Maybe run people through that. And then how do they figure out what's in the training data and how to best prompt these things? What's a good way to benchmark that?Sander [00:22:09]: All right. Basically we read a bunch of papers and assembled six pieces of design advice about creating few shot prompts. One of my favorite is the ordering one. So how you order your exemplars in the prompt is super important. And we've seen this move accuracy from like 0% to 90%, like zero to state of the art on some tasks, which is just ridiculous. And I expect this to change over time in the sense that models should get robust to the order of few shot exemplars. But it's still something to absolutely keep in mind when you're designing prompts. And so that means trying out different orders, making sure you have a random order of exemplars for the most part, because if you have something like all your negative examples first and then all your positive examples, the model might read into that too much and be like, okay, I just saw a ton of positive examples. So the next one is just probably positive. And there's other biases that you can accidentally generate. I guess you talked about the format. So let me talk about that as well. So how you are formatting your exemplars, whether that's Q colon, A colon, or just input colon output, there's a lot of different ways of doing it. And we recommend sticking to common formats as LLMs have likely seen them the most and are most comfortable with them. Basically, what that means is that they're sort of more stable when using those formats and will have hopefully better results. And as far as how to figure out what these common formats are, you can just sort of look at research papers. I mean, look at our paper. We mentioned a couple. And for longer form tasks, we don't cover them in this paper, but I think there are a couple common formats out there. But if you're looking to actually find it in a data set, like find the common exemplar formatting, there's something called prompt mining, which is a technique for finding this. And basically, you search through the data set, you find the most common strings of input output or QA or question answer, whatever they would be. And then you just select that as the one you use. This is not like a super usable strategy for the most part in the sense that you can't get access to ChachiBT's training data set. But I think the lesson here is use a format that's consistently used by other people and that is known to work. Yeah.Swyx [00:24:40]: Being in distribution at least keeps you within the bounds of what it was trained for. So I will offer a personal experience here. I spend a lot of time doing example, few-shot prompting and tweaking for my AI newsletter, which goes out every single day. And I see a lot of failures. I don't really have a good playground to improve them. Actually, I wonder if you have a good few-shot example playground tool to recommend. You have six things. Example of quality, ordering, distribution, quantity, format, and similarity. I will say quantity. I guess quality is an example. I have the unique problem, and maybe you can help me with this, of my exemplars leaking into the output, which I actually don't want. I didn't see an example of a mitigation step of this in your report, but I think this is tightly related to quantity. So quantity, if you only give one example, it might repeat that back to you. So if you give two examples, like I used to always have this rule of every example must come in pairs. A good example, bad example, good example, bad example. And I did that. Then it just started repeating back my examples to me in the output. So I'll just let you riff. What do you do when people run into this?Sander [00:25:56]: First of all, in-distribution is definitely a better term than what I used before, so thank you for that. And you're right, we don't cover that problem in the problem report. I actually didn't really know about that problem until afterwards when I put out a tweet. I was saying, what are your commonly used formats for few-shot prompting? And one of the responses was a format that included instructions that said, do not repeat any of the examples I gave you. And I guess that is a straightforward solution that might some... No, it doesn't work. Oh, it doesn't work. That is tough. I guess I haven't really had this problem. It's just probably a matter of the tasks I've been working on. So one thing about showing good examples, bad examples, there are a number of papers which have found that the label of the exemplar doesn't really matter, and the model reads the exemplars and cares more about structure than label. You could say we have like a... We're doing few-shot prompting for binary classification. Super simple problem, it's just like, I like pears, positive. I hate people, negative. And then one of the exemplars is incorrect. I started saying exemplars, by the way, which is rather unfortunate. So let's say one of our exemplars is incorrect, and we say like, I like apples, negative, and like colon negative. Well, that won't affect the performance of the model all that much, because the main thing it takes away from the few-shot prompt is the structure of the output rather than the content of the output. That being said, it will reduce performance to some extent, us making that mistake, or me making that mistake. And I still do think that the content is important, it's just apparently not as important as the structure. Got it.Swyx [00:27:49]: Yeah, makes sense. I actually might tweak my approach based on that, because I was trying to give bad examples of do not do this, and it still does it, and maybe that doesn't work. So anyway, I wanted to give one offering as well, which is some sites. So for some of my prompts, I went from few-shot back to zero-shot, and I just provided generic templates, like fill in the blanks, and then kind of curly braces, like the thing you want, that's it. No other exemplars, just a template, and that actually works a lot better. So few-shot is not necessarily better than zero-shot, which is counterintuitive, because you're working harder.Alessio [00:28:25]: After that, now we start to get into the funky stuff. I think the zero-shot, few-shot, everybody can kind of grasp. Then once you get to thought generation, people start to think, what is going on here? So I think everybody, well, not everybody, but people that were tweaking with these things early on saw the take a deep breath, and things step-by-step, and all these different techniques that the people had. But then I was reading the report, and it's like a million things, it's like uncertainty routed, CO2 prompting, I'm like, what is that?Swyx [00:28:53]: That's a DeepMind one, that's from Google.Alessio [00:28:55]: So what should people know, what's the basic chain of thought, and then what's the most extreme weird thing, and what people should actually use, versus what's more like a paper prompt?Sander [00:29:05]: Yeah. This is where you get very heavily into what you were saying before, you have like a 10-page paper written about a single new prompt. And so that's going to be something like thread of thought, where what they have is an augmented chain of thought prompt. So instead of let's think step-by-step, it's like, let's plan and solve this complex problem. It's a bit long.Swyx [00:29:31]: To get to the right answer. Yes.Sander [00:29:33]: And they have like an 8 or 10 pager covering the various analyses of that new prompt. And the fact that exists as a paper is interesting to me. It was actually useful for us when we were doing our benchmarking later on, because we could test out a couple of different variants of chain of thought, and be able to say more robustly, okay, chain of thought in general performs this well on the given benchmark. But it does definitely get confusing when you have all these new techniques coming out. And like us as paper readers, like what we really want to hear is, this is just chain of thought, but with a different prompt. And then let's see, most complicated one. Yeah. Uncertainty routed is somewhat complicated, wouldn't want to implement that one. Complexity based, somewhat complicated, but also a nice technique. So the idea there is that reasoning paths, which are longer, are likely to be better. Simple idea, decently easy to implement. You could do something like you sample a bunch of chain of thoughts, and then just select the top few and ensemble from those. But overall, there are a good amount of variations on chain of thought. Autocot is a good one. We actually ended up, we put it in here, but we made our own prompting technique over the course of this paper. How should I call it? Like auto-dicot. I had a dataset, and I had a bunch of exemplars, inputs and outputs, but I didn't have chains of thought associated with them. And it was in a domain where I was not an expert. And in fact, this dataset, there are about three people in the world who are qualified to label it. So we had their labels, and I wasn't confident in my ability to generate good chains of thought manually. And I also couldn't get them to do it just because they're so busy. So what I did was I told chat GPT or GPT-4, here's the input, solve this. Let's go step by step. And it would generate a chain of thought output. And if it got it correct, so it would generate a chain of thought and an answer. And if it got it correct, I'd be like, okay, good, just going to keep that, store it to use as a exemplar for a few-shot chain of thought prompting later. If it got it wrong, I would show it its wrong answer and that sort of chat history and say, rewrite your reasoning to be opposite of what it was. So I tried that. And then I also tried more simply saying like, this is not the case because this following reasoning is not true. So I tried a couple of different things there, but the idea was that you can automatically generate chain of thought reasoning, even if it gets it wrong.Alessio [00:32:31]: Have you seen any difference with the newer models? I found when I use Sonnet 3.5, a lot of times it does chain of thought on its own without having to ask two things step by step. How do you think about these prompting strategies kind of like getting outdated over time?Sander [00:32:45]: I thought chain of thought would be gone by now. I really did. I still think it should be gone. I don't know why it's not gone. Pretty much as soon as I read that paper, I knew that they were going to tune models to automatically generate chains of thought. But the fact of the matter is that models sometimes won't. I remember I did a lot of experiments with GPT-4, and especially when you look at it at scale. So I'll run thousands of prompts against it through the API. And I'll see every one in a hundred, every one in a thousand outputs no reasoning whatsoever. And I need it to output reasoning. And it's worth the few extra tokens to have that let's go step by step or whatever to ensure it does output the reasoning. So my opinion on that is basically the model should be automatically doing this, and they often do, but not always. And I need always.Swyx [00:33:36]: I don't know if I agree that you need always, because it's a mode of a general purpose foundation model, right? The foundation model could do all sorts of things.Sander [00:33:43]: To deny problems, I guess.Swyx [00:33:47]: I think this is in line with your general opinion that prompt engineering will never go away. Because to me, what a prompt is, is kind of shocks the language model into a specific frame that is a subset of what it was pre-trained on. So unless it is only trained on reasoning corpuses, it will always do other things. And I think the interesting papers that have arisen, I think that especially now we have the Lama 3 paper of this that people should read is Orca and Evolve Instructs from the Wizard LM people. It's a very strange conglomeration of researchers from Microsoft. I don't really know how they're organized because they seem like all different groups that don't talk to each other, but they seem to have one in terms of how to train a thought into a model. It's these guys.Sander [00:34:29]: Interesting. I'll have to take a look at that.Swyx [00:34:31]: I also think about it as kind of like Sherlocking. It's like, oh, that's cute. You did this thing in prompting. I'm going to put that into my model. That's a nice way of synthetic data generation for these guys.Alessio [00:34:41]: And next, we actually have a very good one. So later today, we're doing an episode with Shunyu Yao, who's the author of Tree of Thought. So your next section is decomposition, which Tree of Thought is a part of. I was actually listening to his PhD defense, and he mentioned how, if you think about reasoning as like taking actions, then any algorithm that helps you with deciding what action to take next, like Tree Search, can kind of help you with reasoning. Any learnings from going through all the decomposition ones? Are there state-of-the-art ones? Are there ones that are like, I don't know what Skeleton of Thought is? There's a lot of funny names. What's the state-of-the-art in decomposition? Yeah.Sander [00:35:22]: So Skeleton of Thought is actually a bit of a different technique. It has to deal with how to parallelize and improve efficiency of prompts. So not very related to the other ones. In terms of state-of-the-art, I think something like Tree of Thought is state-of-the-art on a number of tasks. Of course, the complexity of implementation and the time it takes can be restrictive. My favorite simple things to do here are just like in a, let's think step-by-step, say like make sure to break the problem down into subproblems and then solve each of those subproblems individually. Something like that, which is just like a zero-shot decomposition prompt, often works pretty well. It becomes more clear how to build a more complicated system, which you could bring in API calls to solve each subproblem individually and then put them all back in the main prompt, stuff like that. But starting off simple with decomposition is always good. The other thing that I think is quite notable is the similarity between decomposition and thought generation, because they're kind of both generating intermediate reasoning. And actually, over the course of this research paper process, I would sometimes come back to the paper like a couple days later, and someone would have moved all of the decomposition techniques into the thought generation section. At some point, I did not agree with this, but my current position is that they are separate. The idea with thought generation is you need to write out intermediate reasoning steps. The idea with decomposition is you need to write out and then kind of individually solve subproblems. And they are different. I'm still working on my ability to explain their difference, but I am convinced that they are different techniques, which require different ways of thinking.Swyx [00:37:05]: We're making up and drawing boundaries on things that don't want to have boundaries. So I do think what you're doing is a public service, which is like, here's our best efforts, attempts, and things may change or whatever, or you might disagree, but at least here's something that a specialist has really spent a lot of time thinking about and categorizing. So I think that makes a lot of sense. Yeah, we also interviewed the Skeleton of Thought author. I think there's a lot of these acts of thought. I think there was a golden period where you publish an acts of thought paper and you could get into NeurIPS or something. I don't know how long that's going to last.Sander [00:37:39]: Okay.Swyx [00:37:40]: Do you want to pick ensembling or self-criticism next? What's the natural flow?Sander [00:37:43]: I guess I'll go with ensembling, seems somewhat natural. The idea here is that you're going to use a couple of different prompts and put your question through all of them and then usually take the majority response. What is my favorite one? Well, let's talk about another kind of controversial one, which is self-consistency. Technically this is a way of sampling from the large language model and the overall strategy is you ask it the same prompt, same exact prompt, multiple times with a somewhat high temperature so it outputs different responses. But whether this is actually an ensemble or not is a bit unclear. We classify it as an ensembling technique more out of ease because it wouldn't fit fantastically elsewhere. And so the arguments on the ensemble side as well, we're asking the model the same exact prompt multiple times. So it's just a couple, we're asking the same prompt, but it is multiple instances. So it is an ensemble of the same thing. So it's an ensemble. And the counter argument to that would be, well, you're not actually ensembling it. You're giving it a prompt once and then you're decoding multiple paths. And that is true. And that is definitely a more efficient way of implementing it for the most part. But I do think that technique is of particular interest. And when it came out, it seemed to be quite performant. Although more recently, I think as the models have improved, the performance of this technique has dropped. And you can see that in the evals we run near the end of the paper where we use it and it doesn't change performance all that much. Although maybe if you do it like 10x, 20, 50x, then it would help more.Swyx [00:39:39]: And ensembling, I guess, you already hinted at this, is related to self-criticism as well. You kind of need the self-criticism to resolve the ensembling, I guess.Sander [00:39:49]: Ensembling and self-criticism are not necessarily related. The way you decide the final output from the ensemble is you usually just take the majority response and you're done. So self-criticism is going to be a bit different in that you have one prompt, one initial output from that prompt, and then you tell the model, okay, look at this question and this answer. Do you agree with this? Do you have any criticism of this? And then you get the criticism and you tell it to reform its answer appropriately. And that's pretty much what self-criticism is. I actually do want to go back to what you said though, because it made me remember another prompting technique, which is ensembling, and I think it's an ensemble. I'm not sure where we have it classified. But the idea of this technique is you sample multiple chain-of-thought reasoning paths, and then instead of taking the majority as the final response, you put all of the reasoning paths into a prompt, and you tell the model, examine all of these reasoning paths and give me the final answer. And so the model could sort of just say, okay, I'm just going to take the majority, or it could see something a bit more interesting in those chain-of-thought outputs and be able to give some result that is better than just taking the majority.Swyx [00:41:04]: Yeah, I actually do this for my summaries. I have an ensemble and then I have another LM go on top of it. I think one problem for me for designing these things with cost awareness is the question of, well, okay, at the baseline, you can just use the same model for everything, but realistically you have a range of models, and actually you just want to sample all range. And then there's a question of, do you want the smart model to do the top level thing, or do you want the smart model to do the bottom level thing, and then have the dumb model be a judge? If you care about cost. I don't know if you've spent time thinking on this, but you're talking about a lot of tokens here, so the cost starts to matter.Sander [00:41:43]: I definitely care about cost. I think it's funny because I feel like we're constantly seeing the prices drop on intelligence. Yeah, so maybe you don't care.Swyx [00:41:52]: I don't know.Sander [00:41:53]: I do still care. I'm about to tell you a funny anecdote from my friend. And so we're constantly seeing, oh, the price is dropping, the price is dropping, the major LM providers are giving cheaper and cheaper prices, and then Lama, Threer come out, and a ton of companies which will be dropping the prices so low. And so it feels cheap. But then a friend of mine accidentally ran GPT-4 overnight, and he woke up with a $150 bill. And so you can still incur pretty significant costs, even at the somewhat limited rate GPT-4 responses through their regular API. So it is something that I spent time thinking about. We are fortunate in that OpenAI provided credits for these projects, so me or my lab didn't have to pay. But my main feeling here is that for the most part, designing these systems where you're kind of routing to different levels of intelligence is a really time-consuming and difficult task. And it's probably worth it to just use the smart model and pay for it at this point if you're looking to get the right results. And I figure if you're trying to design a system that can route properly and consider this for a researcher. So like a one-off project, you're better off working like a 60, 80-hour job for a couple hours and then using that money to pay for it rather than spending 10, 20-plus hours designing the intelligent routing system and paying I don't know what to do that. But at scale, for big companies, it does definitely become more relevant. Of course, you have the time and the research staff who has experience here to do that kind of thing. And so I know like OpenAI, ChatGPT interface does this where they use a smaller model to generate the initial few, I don't know, 10 or so tokens and then the regular model to generate the rest. So it feels faster and it is somewhat cheaper for them.Swyx [00:43:54]: For listeners, we're about to move on to some of the other topics here. But just for listeners, I'll share my own heuristics and rule of thumb. The cheap models are so cheap that calling them a number of times can actually be useful dimension like token reduction for then the smart model to decide on it. You just have to make sure it's kind of slightly different at each time. So GPC 4.0 is currently 5�����������������������.����ℎ�����4.0������5permillionininputtokens.AndthenGPC4.0Miniis0.15.Sander [00:44:21]: It is a lot cheaper.Swyx [00:44:22]: If I call GPC 4.0 Mini 10 times and I do a number of drafts or summaries, and then I have 4.0 judge those summaries, that actually is net savings and a good enough savings than running 4.0 on everything, which given the hundreds and thousands and millions of tokens that I process every day, like that's pretty significant. So, but yeah, obviously smart, everything is the best, but a lot of engineering is managing to constraints.Sander [00:44:47]: That's really interesting. Cool.Swyx [00:44:49]: We cannot leave this section without talking a little bit about automatic prompts engineering. You have some sections in here, but I don't think it's like a big focus of prompts. The prompt report, DSPy is up and coming sort of approach. You explored that in your self study or case study. What do you think about APE and DSPy?Sander [00:45:07]: Yeah, before this paper, I thought it's really going to keep being a human thing for quite a while. And that like any optimized prompting approach is just sort of too difficult. And then I spent 20 hours prompt engineering for a task and DSPy beat me in 10 minutes. And that's when I changed my mind. I would absolutely recommend using these, DSPy in particular, because it's just so easy to set up. Really great Python library experience. One limitation, I guess, is that you really need ground truth labels. So it's harder, if not impossible currently to optimize open generation tasks. So like writing, writing newsletters, I suppose, it's harder to automatically optimize those. And I'm actually not aware of any approaches that do other than sort of meta-prompting where you go and you say to ChatsDBD, here's my prompt, improve it for me. I've seen those. I don't know how well those work. Do you do that?Swyx [00:46:06]: No, it's just me manually doing things. Because I'm defining, you know, I'm trying to put together what state of the art summarization is. And actually, it's a surprisingly underexplored area. Yeah, I just have it in a little notebook. I assume that's how most people work. Maybe you have explored like prompting playgrounds. Is there anything that I should be trying?Sander [00:46:26]: I very consistently use the OpenAI Playground. That's been my go-to over the last couple of years. There's so many products here, but I really haven't seen anything that's been super sticky. And I'm not sure why, because it does feel like there's so much demand for a good prompting IDE. And it also feels to me like there's so many that come out. As a researcher, I have a lot of tasks that require quite a bit of customization. So nothing ends up fitting and I'm back to the coding.Swyx [00:46:58]: Okay, I'll call out a few specialists in this area for people to check out. Prompt Layer, Braintrust, PromptFu, and HumanLoop, I guess would be my top picks from that category of people. And there's probably others that I don't know about. So yeah, lots to go there.Alessio [00:47:16]: This was a, it's like an hour breakdown of how to prompt things, I think. We finally have one. I feel like we've never had an episode just about prompting.Swyx [00:47:22]: We've never had a prompt engineering episode.Sander [00:47:24]: Yeah. Exactly.Alessio [00:47:26]: But we went 85 episodes without talking about prompting, but...Swyx [00:47:29]: We just assume that people roughly know, but yeah, I think a dedicated episode directly on this, I think is something that's sorely needed. And then, you know, something I prompted Sander with is when I wrote about the rise of the AI engineer, it was actually a direct opposition to the rise of the prompt engineer, right? Like people were thinking the prompt engineer is a job and I was like, nope, not good enough. You need something, you need to code. And that was the point of the AI engineer. You can only get so far with prompting. Then you start having to bring in things like DSPy, which surprise, surprise, is a bunch of code. And that is a huge jump. That's not a jump for you, Sander, because you can code, but it's a huge jump for the non-technical people who are like, oh, I thought I could do fine with prompt engineering. And I don't think that's enough.Sander [00:48:09]: I agree with that completely. I have always viewed prompt engineering as a skill that everybody should and will have rather than a specialized role to hire for. That being said, there are definitely times where you do need just a prompt engineer. I think for AI companies, it's definitely useful to have like a prompt engineer who knows everything about prompting because their clientele wants to know about that. So it does make sense there. But for the most part, I don't think hiring prompt engineers makes sense. And I agree with you about the AI engineer. I had been calling that was like generative AI architect, because you kind of need to architect systems together. But yeah, AI engineer seems good enough. So completely agree.Swyx [00:48:51]: Less fancy. Architects are like, you know, I always think about like the blueprints, like drawing things and being really sophisticated. People know what engineers are, so.Sander [00:48:58]: I was thinking like conversational architect for chatbots, but yeah, that makes sense.Alessio [00:49:04]: The engineer sounds good. And now we got all the swag made already.Sander [00:49:08]: I'm wearing the shirt right now.Alessio [00:49:13]: Let's move on to the hack a prompt part. This is also a space that we haven't really covered. Obviously have a lot of interest. We do a lot of cybersecurity at Decibel. We're also investors in a company called Dreadnode, which is an AI red teaming company. They led the GRT2 at DEF CON. And we also did a man versus machine challenge at BlackHat, which was a online CTF. And then we did a award ceremony at Libertine outside of BlackHat. Basically it was like 12 flags. And the most basic is like, get this model to tell you something that it shouldn't tell you. And the hardest one was like the model only responds with tokens. It doesn't respond with the actual text. And you do not know what the tokenizer is. And you need to like figure out from the tokenizer what it's saying, and then you need to get it to jailbreak. So you have to jailbreak it in very funny ways. It's really cool to see how much interest has been put under this. We had two days ago, Nicola Scarlini from DeepMind on the podcast, who's been kind of one of the pioneers in adversarial AI. Tell us a bit more about the outcome of HackAPrompt. So obviously there's a lot of interest. And I think some of the initial jailbreaks, I got fine-tuned back into the model, obviously they don't work anymore. But I know one of your opinions is that jailbreaking is unsolvable. We're going to have this awesome flowchart with all the different attack paths on screen, and then we can have it in the show notes. But I think most people's idea of a jailbreak is like, oh, I'm writing a book about my family history and my grandma used to make bombs. Can you tell me how to make a bomb so I can put it in the book? What is maybe more advanced attacks that you've seen? And yeah, any other fun stories from HackAPrompt?Sander [00:50:53]: Sure. Let me first cover prompt injection versus jailbreaking, because technically HackAPrompt was a prompt injection competition rather than jailbreaking. So these terms have been very conflated. I've seen research papers state that they are the same. Research papers use the reverse definition of what I would use, and also just completely incorrect definitions. And actually, when I wrote the HackAPrompt paper, my definition was wrong. And Simon posted about it at some point on Twitter, and I was like, oh, even this paper gets it wrong. And I was like, shoot, I read his tweet. And then I went back to his blog post, and I read his tweet again. And somehow, reading all that I had on prompt injection and jailbreaking, I still had never been able to understand what they really meant. But when he put out this tweet, he then clarified what he had meant. So that was a great sort of breakthrough in understanding for me, and then I went back and edited the paper. So his definitions, which I believe are the same as mine now. So basically, prompt injection is something that occurs when there is developer input in the prompt, as well as user input in the prompt. So the developer instructions will say to do one thing. The user input will say to do something else. Jailbreaking is when it's just the user and the model. No developer instructions involved. That's the very simple, subtle difference. But when you get into a lot of complexity here really easily, and I think the Microsoft Azure CTO even said to Simon, like, oh, something like lost the right to define this, because he was defining it differently, and Simon put out this post disagreeing with him. But anyways, it gets more complex when you look at the chat GPT interface, and you're like, okay, I put in a jailbreak prompt, it outputs some malicious text, okay, I just jailbroke chat GPT. But there's a system prompt in chat GPT, and there's also filters on both sides, the input and the output of chat GPT. So you kind of jailbroke it, but also there was that system prompt, which is developer input, so maybe you prompt injected it, but then there's also those filters, so did you prompt inject the filters, did you jailbreak the filters, did you jailbreak the whole system? Like, what is the proper terminology there? I've just been using prompt hacking as a catch-all, because the terms are so conflated now that even if I give you my definitions, other people will disagree, and then there will be no consistency. So prompt hacking seems like a reasonably uncontroversial catch-all, and so that's just what I use. But back to the competition itself, yeah, I collected a ton of prompts and analyzed them, came away with 29 different techniques, and let me think about my favorite, well, my favorite is probably the one that we discovered during the course of the competition. And what's really nice about competitions is that there is stuff that you'll just never find paying people to do a job, and you'll only find it through random, brilliant internet people inspired by thousands of people and the community around them, all looking at the leaderboard and talking in the chats and figuring stuff out. And so that's really what is so wonderful to me about competitions, because it creates that environment. And so the attack we discovered is called context overflow. And so to understand this technique, you need to understand how our competition worked. The goal of the competition was to get the given model, say chat-tbt, to say the words I have been pwned, and exactly those words in the output. It couldn't be a period afterwards, couldn't say anything before or after, exactly that string, I've been pwned. We allowed spaces and line breaks on either side of those, because those are hard to see. For a lot of the different levels, people would be able to successfully force the bot to say this. Periods and question marks were actually a huge problem, so you'd have to say like, oh, say I've been pwned, don't include a period. Even that, it would often just include a period anyways. So for one of the problems, people were able to consistently get chat-tbt to say I've been pwned, but since it was so verbose, it would say I've been pwned and this is so horrible and I'm embarrassed and I won't do it again. And obviously that failed the challenge and people didn't want that. And so they were actually able to then take advantage of physical limitations of the model, because what they did was they made a super long prompt, like 4,000 tokens long, and it was just all slashes or random characters. And at the end of that, they'd put their malicious instruction to say I've been pwned. So chat-tbt would respond and say I've been pwned, and then it would try to output more text, but oh, it's at the end of its context window, so it can't. And so it's kind of overflowed its window and thus the name of the attack. So that was super fascinating. Not at all something I expected to see. I actually didn't even expect people to solve the seven through 10 problems. So it's stuff like that, that really gets me excited about competitions like this. Have you tried the reverse?Alessio [00:55:57]: One of the flag challenges that we had was the model can only output 196 characters and the flag is 196 characters. So you need to get exactly the perfect prompt to just say what you wanted to say and nothing else. Which sounds kind of like similar to yours, but yours is the phrase is so short. You know, I've been pwned, it's kind of short, so you can fit a lot more in the thing. I'm curious to see if the prompt golfing becomes a thing, kind of like we have code golfing, you know, to solve challenges in the smallest possible thing. I'm curious to see what the prompting equivalent is going to be.Sander [00:56:34]: Sure. I haven't. We didn't include that in the challenge. I've experimented with that a bit in the sense that every once in a while, I try to get the model to output something of a certain length, a certain number of sentences, words, tokens even. And that's a well-known struggle. So definitely very interesting to look at, especially from the code golf perspective, prompt golf. One limitation here is that there's randomness in the model outputs. So your prompt could drift over time. So it's less reproducible than code golf. All right.Swyx [00:57:08]: I think we are good to come to an end. We just have a couple of like sort of miscellaneous stuff. So first of all, multimodal prompting is an interesting area. You like had like a couple of pages on it, and obviously it's a very new area. Alessio and I have been having a lot of fun doing prompting for audio, for music. Every episode of our podcast now comes with a custom intro from Suno or Yudio. The one that shipped today was Suno. It was very, very good. What are you seeing with like Sora prompting or music prompting? Anything like that?Sander [00:57:40]: I wish I could see stuff with Sora prompting, but I don't even have access to that.Swyx [00:57:45]: There's some examples up.Sander [00:57:46]: Oh, sure. I mean, I've looked at a number of examples, but I haven't had any hands-on experience, sadly. But I have with Yudio, and I was very impressed. I listen to music just like anyone else, but I'm not someone who has like a real expert ear for music. So to me, everything sounded great, whereas my friend would listen to the guitar riffs and be like, this is horrible. And like they wouldn't even listen to it. But I would. I guess I just kind of, again, don't have the ear for it. Don't care as much. I'm really impressed by these systems, especially the voice. The voices would just sound so clear and perfect. When they came out, I was prompting it a lot the first couple of days. Now I don't use them. I just don't have an application for it. We will start including intros in our video courses that use the sound though. Well, actually, sorry. I do have an opinion here. The video models are so hard to prompt. I've been using Gen 3 in particular, and I was trying to get it to output one sphere that breaks into two spheres. And it wouldn't do it. It would just give me like random animations. And eventually, one of my friends who works on our videos, I just gave the task to him and he's very good at doing video prompt engineering. He's much better than I am. So one reason for prompt engineering will always be a thing for me was, okay, we're going to move into different modalities and prompting will be different, more complicated there. But I actually took that back at some point because I thought, well, if we solve prompting in text modalities and just like, you don't have to do it all and have that figured out. But that was wrong because the video models are much more difficult to prompt. And you have so many more axes of freedom. And my experience so far has been that of great, difficult, hugely cool stuff you can make. But when I'm trying to make a specific animation I need when building a course or something like that, I do have a hard time.Swyx [00:59:46]: It can only get better. I guess it's frustrating that it's still not that the controllability that we want Google researchers about this because they're working on video models as well. But we'll see what happens, you know, still very early days. The last question I had was on just structured output prompting. In here is sort of the Instructure, Lang chain, but also just, you had a section in your paper, actually just, I want to call this out for people that scoring in terms of like a linear scale, Likert scale, that kind of stuff is super important, but actually like not super intuitive. Like if you get it wrong, like the model will actually not give you a score. It just gives you what i
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Survey: Personality Traits in the EA Community, published by Willem Sleegers on July 11, 2024 on The Effective Altruism Forum. Summary This post reports on data about the personality psychology of EAs from the 2018 EA Survey. We observe some differences between our EA sample and the general population: Younger male EAs and older female EAs score higher on conscientiousness compared to the general population. Male EAs score lower on neuroticism than the general public. EAs score high on need for cognition, and relatively higher than a US and UK general population sample. Though this may be partially due to demographic differences between the samples. EAs appear to be slightly lower in empathic concern than a sample of US undergraduates. But this seems attributable to demographic differences between the samples, in particular gender. Small differences were observed regarding maximization and alternative search compared to other samples. Generally, the lack of population norms of various personality traits makes comparisons between samples difficult. Overall, we found only small associations between personality, donation behavior and cause prioritization. Openness and alternative search were both found to be negatively associated with the amount donated. These associations were found controlling for sex, age, and individual income, and survived corrections for multiple comparisons. Perspective taking was negatively associated with prioritizing longtermist causes, meaning those who score lower on these traits were more likely to prioritize longtermist causes. This association was found controlling for sex and age, and survived corrections for multiple comparisons. Introduction There has been considerable interest in EA and personality psychology (e.g. here, here, here, here, here and here). In the 2018 EA Survey, respondents could complete an extra section of the EA Survey that contained several personality measures. A total of 1602 respondents answered (some of) the personality questions. These included the Big Five, Need for Cognition, the Interpersonal Reactivity Index, and a scale to assess maximization. We report these results for others to gain a better understanding of the personality of members of the EA community. Additionally, personality traits have been found to be predictive of various important outcomes such as life, job, and relationship satisfaction. In the context of the EA community, prediction of donation behavior and cause prioritization may be of particular interest.[1] Big Five Respondents were asked to indicate how much they agree or disagree with whether a pair of personality traits applies to them, on a 7-point Likert scale ranging from 'Strongly disagree' to 'Strongly agree'. The specific personality traits were drawn from the Ten Item Personality Inventory (TIPI) and consisted of: Extraverted, enthusiastic. Critical, quarrelsome. Dependable, self-disciplined. Anxious, easily upset. Open to new experiences, complex. Reserved, quiet. Sympathetic, warm. Disorganized, careless. Calm, emotionally stable. Conventional, uncreative. Big Five score distributions The plots below show the distribution of responses, including the sample size, to these questions in our sample. Each individual response is an average of the, in this case two, personality items. We show distributions for each individual item in the Appendix. Personality trait M SD n Agreeableness 4.54 1.27 1499 Conscientiousness 4.95 1.44 1499 Extraversion 3.74 1.63 1512 Neuroticism 3.12 1.53 1508 Openness 5.48 1.13 1509 However, personality scores have been found to differ based on gender and age (Schmitt et al., 2008; Donnellan et al., 2008; Lehmann et al., 2013). As such, it is common for the population norms for personality measures to be split based on these groups. Otherwise, differen...
Dr. Emily Joseph, a social science researcher and user researcher for Xbox, discusses the impact of clear and not-so-clear communication on the elicitation of information. She highlights the importance of question phrasing, construct validity, and avoiding contamination of memory and responses. Emily emphasizes the need for open-ended questions, especially in investigative settings, to avoid bias and allow for accurate recall. She also explores the challenges of predicting future behavior and the significance of clear and concise questions in obtaining meaningful insights. Additionally, she addresses the misinterpretation of physical behavior and the influence of cultural and experiential factors. In this conversation, Dave and Emily discuss the importance of asking effective questions in various contexts, such as investigations, interviews, and surveys. They explore the impact of different types of questions, including open-ended and closed-ended questions, on the quality of responses. They also highlight the potential biases and limitations associated with certain question formats. The conversation emphasizes the need for interviewers to be aware of their own biases and create an environment where individuals feel comfortable providing honest feedback. Overall, the key takeaway is the significance of asking neutral and unbiased questions to elicit accurate and meaningful information. Truths: Question phrasing and construct validity are crucial in obtaining accurate and meaningful information. Open-ended questions are essential in investigative settings to avoid bias and contamination of memory and responses. Predicting future behavior is challenging, and questions should focus on thought processes rather than final decisions. Physical behavior should not be relied upon as an indicator of truth or deception, as it can be influenced by various factors. Cultural and experiential factors can impact both verbal and nonverbal behavior, highlighting the need for cultural sensitivity in communication. The interpretation of responses in interrogation footage can be biased, leading to false assumptions of guilt. Closed-ended questions can limit the amount of information obtained, while open-ended questions encourage more detailed responses. The framing and wording of questions can influence the answers given. Likert scales and other survey response options should be carefully designed to capture a full range of perspectives. Interviewers should be aware of their own biases and strive to create an environment where individuals feel comfortable providing honest feedback.
Join us for an insightful episode where we sit down with Christi Michelle, an expert in scaling Amazon businesses and the founder of The COO Integrator. Christi shares her fascinating journey from running an Amazon brand management agency to becoming a fractional Chief Operating Officer. Discover how she blends visionary ideas with tactical strategies, and hear about her comprehensive competitive analysis of 25 brand management agencies, revealing the importance of understanding unique value propositions. Christi's wealth of experience provides valuable lessons for e-commerce entrepreneurs looking to scale their businesses effectively. In another segment, we explore key strategies for measuring business health and scaling effectively. Learn how to assess your business's performance through crucial data metrics like PPC statistics and P&L statements. Understand the significance of evaluating employee performance and fitting within your organization. We also discuss Tony Robbins' 10 life cycle stages and their relevance in identifying your business's current strengths and weaknesses. Practical tools such as the EOS organizational checkup and core values exercises are highlighted to help you align your company's direction and goals for balanced growth. Finally, we tackle the challenges of managing remote teams and maintaining productivity in the e-commerce world. Discover strategies for fostering a strong company culture and maintaining relationships. Learn the importance of holding productive meetings that drive progress without creating unnecessary busy work. Additionally, Christi shares her transformative experience at a two-week water fasting retreat in Costa Rica, offering insights into personal growth through struggle and simplicity. Whether you're looking to scale your business or find balance in your entrepreneurial journey, this episode is packed with actionable advice and inspiration. In episode 569 of the Serious Sellers Podcast, Bradley and Christi discuss: 00:00 - Scaling Amazon Businesses With Expert Guidance 04:34 - Brand Management for Major Brands 08:03 - Business Evolution and Maturity Stages 09:32 - Measuring Business Health and Scaling 14:27 - Navigating Amazon's Rising Costs and Fees 20:11 - Key Role of HR in Business 21:03 - Effective Remote Business Operation 23:52 - Creating Constructive Meetings for Company Culture 25:33 - Costa Rica Spiritual Retreat Experience 29:20 - Business Growth and Simplification ► Instagram: instagram.com/serioussellerspodcast ► Free Amazon Seller Chrome Extension: https://h10.me/extension ► Sign Up For Helium 10: https://h10.me/signup (Use SSP10 To Save 10% For Life) ► Learn How To Sell on Amazon: https://h10.me/ft ► Watch The Podcasts On YouTube: youtube.com/@Helium10/videos Transcript Bradley Sutton: So many Amazon sellers don't treat their Amazon businesses like a real business. So, we brought on somebody today who's an expert in this and she's helped countless number of businesses really scale up, and there's going to be great points that you're going to be able to glean from this as well. How cool is that? Pretty cool, I think. What was your gross sales yesterday, last week, last year? More importantly, what are your profits after all your cost of selling on Amazon? Did you pay any storage charges to Amazon? How much did you spend on PPC? Find out these key metrics and more by using the Helium 10 tool Profits. For more information, go to h10.me/profits. Hello everybody, welcome to another episode of the Serious Sellers Podcast by Helium 10. I'm your host, Bradley Sutton, and this is the show. That's a completely BS-free, unscripted and unrehearsed organic conversation about serious strategies for serious sellers of any level in the e-commerce world. And I'm still here recording in Spain, Madrid, Spain. I'm here at the Avosk office and we are here with somebody who has not been on the podcast in like two, maybe even three years, over three years Christy in the house. How's it going? Christi Michelle: Hi, doing well. How are you? Bradley Sutton: I'm doing just ducky. I recorded Leo earlier today, but he did his presentation already, so I was able to ask him some stuff on it. But I don't know what you're going to talk about yet, so I'll ask you that in a little bit. But since it's been so long since you've been on the podcast, what in the world have you been up to? Christi Michelle: I think the last time I was on I was running an Amazon brand management agency, and so that was the first one that I was running at the time. And after that we merged, slash, sold to a larger agency where I was the head of operations as well. We had about 100 clients, about 90-ish employees, so really kind of scaled up, which turns out that's kind of my forte, and I was there for a little while and then I left and apparently, I just can't get enough of the agency world. So, for the last about two and a half years I've been running what's my new agency? The COO Integrator, and so I am a fractional chief operating officer. So, it's that second in command. It's the one that says, OK, here's the big vision of what the visionary wants, the CEO wants, and OK, now how do we turn that into tactical strategies that we can, implement and get everybody rowing in the same direction? For so I do that. Bradley Sutton: Hold on. So, you're the CEO of this company or you're, like, a CEO of many companies. Christi Michelle: I'm the CEO of my company, my agency, but I play the role of the COO, which actually quite works for me because I'm a good blend of both the visionary and also the integrator. I like taking the really big concepts. That's a lot of fun for me, but I need to distill it down and make it very practical, set some goals around it, and I use a lot of my business strategies to make sure everything gets executed. So, it's both. Bradley Sutton: Went out to dinner last night and I remember you Vincenzo was there and you found out he worked at a PPC agency and you're like, oh man, a couple of years ago I did I looked into like 25 PPC agencies was it? Christi Michelle: It was a brand management agency. So, I was trying to do a competitive analysis. I wanted to understand. So, one of the things that I think a lot of companies, especially when they're getting started or they're so kind of single focused you don't realize that they don't understand their unique value proposition. And so, what makes you different? Why, if I were looking at two different agencies, why would I choose yours over someone else? And most folks, unfortunately, they're oh, it's you know, we've got great customer support or we're so good with our clients, and it's very generic and they all kind of say the same thing. And so, I really wanted to understand okay, well, who are my competitors in the space? And I find it to be a very non-competitive space in the sense that we're all very friendly, it's very open. What I love about the e-commerce space is that it has kind of that good feel to it as an industry personality. But theoretically, these are my competitors and I wanted to see, well, okay, what are they offering? What do they charge, what are their contract terms? So, I really, I called dozens of them and I just said, hey, this is what I'm doing. I'm just I called dozens of them and I just said, hey, this is what I'm doing, I'm just what's unique about you? I just want to know these different things. So, it was a competitive analysis. It was just sort of a landscape. Bradley Sutton: And you know, obviously you don't have to mention any names, but what was just some things that stuck out to you about, I don't know, maybe price point or something that you saw was a hole in the industry or something that everybody had, or what were some of your big takeaways, I guess, like I'm asking. Christi Michelle: You know that most companies actually did have something that was quite unique. I would say more than half the companies. They would tell me something and I'm like I haven't heard that before. That's really unique, like that is. Do you know that as unique to you? So, in a way I was kind of helping them with their marketing like go ahead and highlight that. So, some folks you know they would specialize in major brands like big Fortune 100 company kind of brands. That's not typically what an Amazon brand management agency, but if you think about it those are. Most of those companies are kind of dinosaurs so they don't know how to kind of pivot and get online. So that was a unique one. A lot of companies had different contract terms but most of them were a flat fee plus once you had a certain point. Then we take a percentage. Unique ones were maybe it was a contract they just go month to month, and other ones they said we just went two years because we're going to invest with you. So really, I think knowing what those are, what your differentiators are and what's important to you, can help you, I guess, decide what type of clients, your ideal client, who you want to go after. Some clients are just like I just want to test this out, is this going to be good? So, they would probably want to go with an agency that has a lower fee and month to month contracts. But other ones who want to deep dive, they know they're going to invest in this, they know where they want to go build that partnership. So, it helps, you kind of weed out the clients that you do want and get rid of the ones that you don't. So, I don't know what really stood out. There was a lot. Bradley Sutton: Okay, now let's just flip the script a little bit. I'm an Amazon seller, I'm new or I'm big, I'm a seven-figure seller, eight-figure seller? Who is the persona or what type of person should be looking for a brand management agency as opposed to you know what? You probably should just try and handle things on your own at your stage. Christi Michelle: That's a loaded question. I would say that it actually depends on your personality type. So, there are people who want to understand there's a level of control that says I want to bring all of this in house, I want to bring in an expert who is a good PPC expert, someone who does graphic design. I want it to be so customized because it's my business. If that is your personality type, you probably want to build in house. But if it's not and you really just want kind of the simple life, you can find a partner partnering with an agency that has all of that already in-house. I would go that route. But it really depends on how you want to run your business in general. So, it's more of a personal decision on your lifestyle. With that there is an influx point, especially because, like I said, a lot of agencies will have sort of a flat fee to start with for the first 90 days or whatever, and they get to a point and they say, okay, wait, we expect to build traction at this point. Christi Michelle: So, from that we want to, once we hit this threshold, we want to flip and we want to take a percentage of sales. Well, that's fantastic, especially if they're doing a really good job. But if you go from doing, 50,000 in sales and then a hundred thousand in sales and then 500,000 in sales, and suddenly you're doing millions in sales that you know taking 5% or whatever that is, at some point you're going to be paying the agency hundreds of thousands of dollars that it makes more sense than to just bring it in house. So, there is a scaling point that I would say unless you're super comfortable and you just love working with them and you don't care to give away that percentage as long as you don't have to think about it, because clearly, they've done a good job, then at some point you would probably want to bring it in house. Bradley Sutton: Okay, all right. Now I think, looking now I remember looking at the title of your talk today like wasn't one thing about helping people scale, all right, so we have listeners of this podcast, from newer sellers all the way to maybe seven, eight figure sellers. What are some? I know a lot of the stuff you talk about is targeted, you know, depending on their exact persona, but maybe there's some general things that you could, some tips that you can give out about, because I think everybody wants to scale, unless they're just like trying to do this in their hobby. That hey, I'm very happy at my level of the rest of my life. My, my day job is this. That's probably like 3% of people. I think 97% wants to scale. So, what are some tips you can give? Christi Michelle: It's very customized because it depends on where your maturity of your company is. And so, I use the word maturity and the evolution of your business because most people say, well, I'm a million-dollar company or I'm a $5 million company or I'm a $40 million company, that really doesn't matter, because I've had clients that are 40 million, I've had clients that are 2 million and they're at the same stage. They experienced the same influx of issues. So, I like to identify them. So, Tony Robbins has a really good. He has a really good model that's called the evolutionary 10 stages of your business and it starts literally from like a child. It's like birth. You have infancy, you have toddler, teenager, young adult, and then you're in your prime and then eventually, at some point things always kind of deteriorate and you kind of go down that path. So, I like for people to be able to identify where they are. That helps them understand what their bottlenecks are. Able to identify where they are. That helps them understand what their bottlenecks are. So, one tip would be figure out where do you stand like, where is your evolution of your company and what is it going to take for you to go from a teenager to a young adult, or a young adult to get to you when you're in your prime. So, understanding that about yourself. Another thing that I would say is most companies, you're just very focused. Most people don't understand this. If you didn't get an MBA, you don't understand all the facets of business, right? They think, well, I've got a product and I've got a or a service. This is what I'm doing. Understand that if you want to scale, you kind of need to do it. The best way to do it is very, is as balanced as possible, and so another exercise that I do is based off of EOS, which is the Entrepreneur Operating System. They have a model. So, this is that's a business blueprint. Christi Michelle: Every company should be working from a business blueprint, and so, if you can do that, there's a several questions that kind of prompt you well, how well are you in each of these categories of your business? So, you can say, okay, well, how is my data measurables Like? Do I know what I'm measuring? Do I know what my PPC statistics are? Do I know what my P&L looks like? Do I know what my turn rate is? Do I know? There's lots of things to know. So, understanding that category, understanding your people. Do I have the right people? Have I hired you know? Are they doing the best that they can? So, there's lots of different ways that you can measure the health of your business. You can take it as a 20-point questionnaire. You can go to EOS I think it's called; I don't know. You can download it for free. In five minutes, you can kind of figure out sort of like a general health of your business. That will also tell you OKAY, here are the areas that I'm unbalanced. These are my strengths and these are my weaknesses. But as you want to scale, you want to scale as balanced as possible. Also, understanding different personality types. You start off as the visionary of your company and different visionaries and I kind of have had several different buckets that I would put them in. But there's different visionaries, create different problems, create different solutions and problems in their companies. So, you get the ones that are. Christi Michelle: They're just very what is it Gregarious? Like, very like outgoing and big and let's try all the things, and they don't have a big sense of risk. They don't have a correct sense of risk. They go above and beyond and that's really fun. They're usually very grateful. They're a lot of fun to work with, but there's very little. The opposite side of that is they don't come with a lot of accountabilities they just trust you. Yeah, go do the thing. I like you. You seem smart, let's go. And they won't've that type of leader. Understand what your strengths are and also understand what your weaknesses are. Right, because that can create a lot of uncertainty in your employees, and a lot of employees love you, but they just feel like they're constantly concerned about what's happening with their job. So, I could go down a whole rabbit hole on different personality types, but those are the things is understood who you are, what you bring to the company and kind of the health of the business overall. I mean there's tons of tools out there that in five minutes. I love doing workshops because I want people to learn about themselves, where they stand relative to who they are, what they bring to the table and you know what they're going to need to balance them, because everybody has strengths and weaknesses. Bradley Sutton: Now I'm looking here. I'm guessing this is part of, like, your workshops that you're going to be doing, Christi Michelle: Yes. Bradley Sutton: Or is this a handout that people are going to have? Christi Michelle: Yes. Okay, very tactical hands-on. Bradley Sutton: Maybe you can describe some of this so that people can maybe do this at home even without you. At least get started on this framework here. Christi Michelle: Sure, well, I mean, I kind of did mention a little bit, so I would look up Tony Robbins. So, business mastery he has the 10 life cycle stages of your evolution of your business. So, if you can look that up, he kind of gives a definition, as I said earlier. So, you have birth, you have infancy, you have the toddler, you have the teenager, you have the young adult. What are those? What are the pros and cons of each one of those? So go look that up and if you could do that, read there, help yourself identify. Once you identify, let's say, you figure out that you're in a teenage stage. That's a very exciting stage. It's also one of the most dangerous stages and a lot of people get stuck there, a lot of visionaries get stuck there, and so I won't have time to go into detail about it. But if you are able to identify yeah, that kind of sounds like where I am going ahead and look at what the next stage is after that. What is it going to take for me? So, the teenager stage I think it's usually fun and reckless, right? Teenagers? I think of them as driving 100 miles an hour down the highway that they've got their sports car because cash flow at that point is less of an issue. But they say yes to everything and they don't know how to say no. Everything looks like an opportunity, so they pull resources from everywhere. It's very unfocused. So, I think about that teenager driving 100 miles an hour down the highway. If they take one wrong turn, they could seriously wound the business. They don't really recognize that. There's a sense of overconfidence with that. So, if you look that some of those are usually the signature problems that you have as a teenager, then look and see okay, well, what is it going to take to get to be a young adult? And I kind of like quote that as a young adult would be a rebirth. You grow up right. You're like okay, we have to have some responsibility. We're going to bring in some professional staff at this point. We're going to so anyway. So really good way. Christi Michelle: Another thing that I have here, as I said, sort of this grading. I turn it into sort of a wheel exercise so you can kind of self-grade. And it's the EOS, I think it's organizational checkup. Go there, it's 20 questions, it's Likert scale one through 10. Grade yourself. You can share that with all the other people in your company, so that you get a collective grading for everyone. And it comes back and it says okay, well, your score is a 57 out of a hundred. Okay, well, what areas do we need to work on? So, it will quickly highlight for you some of those pieces. But I core values exercise, creating your one page, your business blueprint. Who are we? Where are we going? Why are we doing what we do? What makes us unique? What's our ideal client? Really, building on a business blueprint? Because when you look at going back to the stages, if you look at the when you're in your prime, this is like, this is like Apple. I mean, there's just, there's a. You just know that they come out with excellence at all times. Right, and you can be in your prime for decades. You can be in a prime for a long time. When you understand what that looks like, you want to strive to get to those levels of like. What's the pros of each one of those? So, self-education. Bradley Sutton: Taking it back to, I think, something that is at the top of mind of a lot of Amazon sellers nowadays. You know you started selling on Amazon and kind of like the glory days where you could just like fall into making tons of money by accident, not even knowing anything. You're doing right Nowadays I'm sure you talked about this with Amazon sellers. I think I see so much more fear and anxiety over all the new fees that Amazon has. You know rising PPC costs, rising logistics, this and that and now many people are stressing about how I mean not only just how to scale, but just how to stay afloat. And so, some of your successful people you talk to what are their characteristics or what are they doing to? Because it's still very viable to make money on Amazon. So, what are the successful people? How are they navigating all of these fees and increased costs? Christi Michelle: Well, first of all, they're treating it the successful ones are treating it like a holistic business. It's not just I'm going to throw up a product, make some money and then maybe I figure out a little bit of PPC with that right. There is an evolution to actually truly building something like a business, and so I say that in tandem. When I think of truly building a business, it's you have to look at all aspects, so it's not just the single focus of what are my resources within the Amazon or e-commerce space. So, for example, so when we talk about fees, one of my clients you know is has nothing to do with this, but it overlaps he gets the best rates on UPS and FedEx that you can imagine. Okay, well, maybe we can't. If you're doing FBA, then you can't necessarily use those right, because you're not going to get better fees. But if you are diversifying and if you are going, if you want to do FBM or if you want to do Shopify and you want to go to other places, those fees you can offset them by getting unbelievable discounts for those and you can kind of offset the cost of what Amazon is rising by decreasing the costs of other platforms in your Shopify store, let's say. So, that requires that you step out that you would not know that this person, this type of service exists, because it's not really talked about here, because most people go FBA if they're going to be selling on Amazon. But being resourceful and looking at just look at the problem plainly Okay, amazon fees are going up. Christi Michelle: What is my? If you look at your balance sheet, if you look at your P&L and you say these are all the costs that are associated with my business, what are ways that I can offset each one of these? Like I look at it, I like put my little MacGyver hat on and I'm like, okay, what else can I do? What else can I bring to the table? What else is working in completely different industries? What are they doing that I can kind of take and then bring that over into my space? So, I say two things. They treat it like a business, like it's holistic, it's not. I'm not just selling a product. They know that they're building a brand, they know that they're trying to. And if they try, if they know in two years they've got their vision, two or three years we want to sell for X amount, okay, well, you start working with folks, that will help you kind of get you set up for a sale. We'll do that a year and a half in advance. There's some brilliant tactics for how you can set for decisions you need to make today that 18 months from now will greatly pay off so that you can find the right buyer. So, these are all different ways that are just it's not just looking at selling a company or your business, it's what are all the resources that you're going to need in the future. So, thinking in advance, treating it like a business and looking for resources outside of space. Bradley Sutton: I think what you said is important, because there's a lot of Amazon sellers, I would say that this is probably the first business they've run. Maybe they came from the corporate world or they came from working a nine to five and so they don't have that experience. And there's a tendency, it's because it's such a different beast, on one hand, where it's like, oh no, it's not a real business, but then all of a sudden, they're like wait, this is a business doing seven figures a year. In your experience, when you first talk to people like that, what are they doing wrong? As far as not treating it like a business, like what's the most common things that you're like, okay, we got to get this fixed right away, okay. Christi Michelle: Okay, I'm going to answer this in sort of an evolutionary piece. So, most people, when they start a business, it's just you in your basement or wherever, and you're selling either your product or service, but probably your product, right, if you're not going to do an agency style and you figure that out. So, you go through that and it's just you, it's you're trying to do everything, and then you kind of get that going and then maybe you hire a customer service person or maybe you hire someone to help you out with the day-to-day operations. Okay, let's bump up the sales, let's do the marketing, let's get in some PPC how else can I get a lot more sales? So, then you switch your focus next to the department I'm going to put that in air quotes the department of marketing and sales, and you try to figure out let's pour some gas on this. We've got your product and service. Then you have your marketing and sales, okay. So finally, then we've got that flowing, we've got that going, we know what we're doing there. Oh crap, I'm making a lot of money. Now, what's my P&L look like? What's my balance sheet look like? What does my profit look like? What is margins? What is this about? So, then you start taking okay, do I have the right people? Okay, am I like doing the best that I can, and do I have a high turnover? So, then it gets to HR. So, my answer is actually HR. Christi Michelle: People ignore HR because in the evolution, it's the last, it's, we call it like crisis by management by crisis. Most, every one of those stages you're saying what's the biggest crisis that I need to focus on? So, HR doesn't feel like it's crisis, but it actually is like the underpinning of everything. So, most people ignore HR. So, one of the very first things that I do when I come in is I say what do our people? What does it look like? Do we have the right staff? Do you trust your people? Because a lot of times they'll hire someone but they don't trust them and so then they micromanage them and they never let them flourish. And then you have it keeps growing and growing and growing. And then you have this owner who now has like 15 employees. They've technically become successful, but they've got golden handcuffs because they can't leave, because they haven't figured out how to actually delegate and trust. That is one example. So, when I come in, the first thing I do is. I say what does HR look like? Because usually and, by the way, the whole time, whether you're doing the you're, you're doing your product surface, your operations, your marketing sales, your finance, you're still hiring people along the way. But that always tends to fall on the visionary, which most people didn't go to HR school. They don't know how to interview, they don't know how to hire the right people, they don't know how to manage and make sure that they're setting those expectations. Christi Michelle: So, I tend to think of that as I will come in first. I'll look at HR, because I know that that's one of the number one thing that's going to make or break a company. But it feels like it's the underpinning. It doesn't feel like it because it's not so much a big crisis loud thing usually, but that's the underpinning and it always falls on the visionary and that's not necessarily going to be their forte. So, if I can teach them how to do that and we can kind of clean house and get the right people in the right place and get the systems and all of that, that's typically what I see. Bradley Sutton: All right Now. You and I were just talking to elevator about. How Helium 10 are remote company. I would say nowadays most Amazon businesses as they scale and become a real business, it's almost all remote. Either they're hiring people within the United States remotely or in most cases hiring people from other countries, be it Philippines, Pakistan, et cetera. What are some things that Amazon, business owners can do to. In a remote lifestyle where they can just make sure, hey, everybody's on task. Like Helium 10, we started as an in-office company so it was easy for us to know like oh wait, this person is slipping when we run remote or like we know what. But, Amazon sellers from day one. They're kind of a remote company. So, how do they structure it to make sure that it's still operating as a well-oiled machine, even though maybe they've never even met some of their employees in person? Christi Michelle: Sure. I mean, I think it's a really good question. I think there's a lot of challenges the people have because it's not a natural state if you think about humans and how it all interactive with each other coming from villages that living, but this is very new thing. Covid did not help but it really exacerbated the fact that we so I would say the same way that you would handle a social situation if you moved away from your friends and family you, it takes effort. You actually have to put in conscious effort to reach out and create a relationship with. You can't like, if you moved away and you have all of your best friends and your family that lives back in your hometown, you no longer it's not. You have to actually put in the energy and effort to ask them how they're doing, see what they're up to, have constructive conversations. When you're in person, you just don't really think about it. You kind of take it for granted. You're like, oh, I'll just go bump into you at the water cooler. Hey, just pop in my office. That kind of thing. It's so much, it happens kind of effortlessly. It takes effort to actually maintain relationships and you have to build. You kind of have to rebuild your social skills. So, I would say that, from a culture perspective, is that you need to figure out what that looks like. So, I have a lot of clients where we'll implement. Just, you know, we do like happy hour Fridays where everybody, at three o'clock or four o'clock, we're like hey, let's all get on here, we'll share, we'll do trivia, they'll do things. So, there's lots of things you can do from a culture perspective. Christi Michelle: But in terms of just operations, of business, cadence of meetings and I say that carefully because I think a lot of people roll their eyes, I have a lot of meetings, lot of meetings. A lot of people roll their eyes at me because they hate meetings. Most people hate meetings because they're not productive meetings. I, like I said, I am a hands-on, tactical person. I don't want homework after a meeting. Don't make me do anything. As soon as we're over the phone, I'm done, I did my job. So, the moment that I get on meetings, I know what we're working at, I know what we're trying to solve and if it's, we don't know the answer. I'm building a matrix. I'm building, we're typing it out, we're having a constructive conversation and leading it. I'm constantly monitoring people to say okay, what are we trying to solve? You have this question how can you get our audience to solve? What do you need to move forward? So being you just have to be more cognizant about having constructive meetings. So, it's a lot more communication in that sense. But that more communication does not need to be waste of time. Christi Michelle: I think a lot of people have that sort of this equals that? Not true at all. Just have productive. So, learn how to have. So, I had to summarize one. Decide how you want the culture of the company to be and put an effort to make sure that that happens, that you are making building relationships again, whether that's a happy hour, you guys do like a weekly, like shout outs or something like that. And then the second one is learn how to have productive meetings. Learn how to have constructive meetings where you actually get work done during the meeting while everyone's together. They can put in their input if that is needed. Learn how to have constructive meetings that you don't have to have a lot of busy work on the other side and then you guys are learning and building and growing together, which just creates more camaraderie. Bradley Sutton: Awesome, awesome, all right, any last words of wisdom like a message you want to get out to Amazon sellers around the world here, what can you help them with? Like I sometimes call this like a 60 second strategy, but I'm not going to tie you down to a certain time, but just anything you want to close this out with. Christi Michelle: Oh goodness, I mean I think I've harped on the fact that treat it like it's a business. Truly, if you're not working on a business blueprint, you know EOS is a good one, it's. It's a limited. It's very, very good, but it's limited. There's, you have system and soul. There's lots of different ones. Get like, find, a business blueprint to work from, because most people don't understand strategic frameworks and it's not anybody's fault If you didn't go get an MBA, if you didn't know this, but you have the entrepreneurial spirit. You do have to educate yourself on how to run a business. So, treat it like it's a business, that there are all different components and aspects to it and I think that you will find that scaling and growing and educating you will be more balanced and less stress and you'll have less of those true deep pitfalls that I see a lot of people having. Bradley Sutton: All right, one more thing. At dinner last night I kind of got a little bit of this. But Boyan was telling me you went to Costa Rica and you didn't eat food for like two weeks or something. So, tell me a little bit about what prompt cause? You never. there are some people out there who are I'm not trying to throw anybody under the bus, but like the whole very spiritual and touchy, feely and yoga every morning, and let me go find my, my inner priestess, or whatever. You never struck me as that kind of person, but so I'm wondering what prompted you to do this retreat, what did it involve and what did you get out of it? Christi Michelle: So, funny that you said that, and I don't think that I used to be, but I'm happy to openly admit I'm actually quite a spiritual person. I'm not a religious person, but I am a very spiritual person. And so, what prompted it? Two things I could say. We grow by the most through our struggles, and I've read this in a lot of different places and people talk about I wanted to go do something that was challenging. I wanted to do something that pushes you, because in this particular retreat I was in Costa Rica, definitely out with the bugs. Every night I had to look for spiders and scorpions and snakes. In my bed, on top of not eating at all, you had a one job and that was to drink as much water as humanly possible. Bradley Sutton: So, I was doing you ever have those things in your bed? Christi Michelle: Not in my bed but, I definitely have a situation a very large spider that was a. Bradley Sutton: Crossing Costa Rica off my bucket list. Christi Michelle: But it's so beautiful there, but you're there and it's. You know, I meditate a lot, I mean. So, I thought I was. Oh, I'm just going to go there, I'm going to meditate for a couple of hours every day. I'll take some naps. No, you had one job to do and that was to drink as much water as possible. So, I was drinking up to 1.75 gallons a day. But the thing is, when you're not eating, when you're not eating, you're not replenishing the electrolytes, so you have to drink 16. You can't drink any more than 16 so that you don't flush it, so it's little sips. So, from the moment you wake up to the moment you go to bed, so from 7 am to about 10, 10.30 at night, all you have to do is drink, and you're not supposed to. There's no internet, there's no WIFI, there's, I mean, you can have your phone, but there's nothing that you can't, like you know, download anything. No one tells you physically purging. Most people that went there I was very different. Most people who went there had very much had cancer, had different things. Cleansing of your body is. It's fantastic. I recommend anybody research what fasting like water fasting can do. It's one of the best things I think you can do for your body, but so there was a combination of wanting to kind of do a good cleanse, but it challenges you mentally, emotionally and physically to be uncomfortable, to be in a space you don't have any. Christi Michelle: Most people use food for comfort or to repress I mean, we all do it right or to repress some feeling, or to kind of just enhance. I mean we use food almost like it. I mean it truly is kind of like a drug where you don't have that to rely on. So, then you're sitting there by yourself, no one really to talk to, nothing to entertain you in traditional ways. You're stuck with your thoughts and you go through a lot through that. So, I like to do pretty strong challenges and so that was one of my big challenges for this year. Can I do it? And I would probably not ever do it again, I mean, unless I got very, very sick, and I thought this would. If I did, I thought that would be the best thing that it could do. But, it's just to, to challenge myself, to grow to do something different. Bradley Sutton: All right, cool. All right, man, we'll see. I've tried a lot of different things. Maybe, maybe I can try that, just minus the scorpions and snakes and spiders. Yeah, all right. Well, how can people find your company, or are you out there on the interwebs these days? Christi Michelle: I am in fact on the interwebs. I think that we have so I am right now. So, my main is I'm the coo integrator that is my agency, so that's just coo for chief operating officer, the coo integrator, that is my website. And right now, I mean, the truth is I I'm extremely fortunate that I do have a backlog of clients. And, funny enough, I don't really scale my company of all the things. Who don't she helps people scale, but she doesn't when you've had so many companies and you're responsible for hundreds, you know dozens, if not a hundred, plus people there comes a point in your life where you're like I think I'm just good with keeping things simple, but you kind of have to go through that to appreciate this. It's kind of like water fasting you have to go without food before you can appreciate the food there. but yeah, so I'd love to. I usually do free analysis with people, thank you, thank you, just to kind of help them and I can point them in the right direction. So, I'm always just kind of happy to help guide people, and anymore now I spend some time on boards of companies and I do other investments and things. So, I love the game of business. I'm always happy to talk about it. So please reach out to me, Christy, at the CEO integrator, and I'm happy to chat. Bradley Sutton: Awesome. Well, hope to see you sooner than later and I don't have to travel around the world just to be able to see you. Like Karen Spade, Christi Michelle: Yeah, going to Europe, right All right, we'll see you guys in the next episode.
If measuring social validity is just about getting clients and stakeholders to fill out a 7-point Likert scale, we'd have a pretty short episode this week. Fortunately, it's a heck of a lot more important and effortful than that. This week we delve into the realm of using social validity measures to improve our practices and to better support our clients. So buckle-up for some thematic reviews of interviews, big picture practice examinations, and comparisons to how much better or worse things are since the 90s. At least, in relation to social validity measurement. This episode is available for 1.0 LEARNING CEU. Articles discussed this episode: Schwartz, I.S. & Baer, D. (1991). Social validity assessments: Is current practice state of the art? Journal of Applied Behavior Analysis, 24, 189-204. doi: 10.1901/jaba.1991.24-189 Ferguson, J.L., Cihon, J.H., Leaf, J.B., Van Meter, S.M., McEachin, J., & Leaf, R. (2018). Assessment of social validity trends in the journal of applied behavior analysis. European Journal of Behavior Analysis, 20, 146-157. doi: 10.1080/15021149.2018.1534771 Callahan, K., Hughes, H.L., Mehta, S., Toussaint, K.A., Nichols, S.M., Ma, P.S., Kutlu, M., Wang, H. (2017). Social validity of evidence-based practices and emerging interventions in autism. Focus on Autism and Other Developmental Disabilities, 32, 18-197. doi: 10.1177/1088357616632446 Anderson, R., Taylor, S., Tayler, T. & Virues-Ortega, J. (2022). Thematic and textual analysis methods for developing social validity questionnaires in applied behavior analysis. Behavioral Interventions, 37, 732-753. doi: 10.1002/bin.1832 If you're interested in ordering CEs for listening to this episode, click here to go to the store page. You'll need to enter your name, BCBA #, and the two episode secret code words to complete the purchase. Email us at abainsidetrack@gmail.com for further assistance.
In 2023 we did a few Fundamentals episodes covering Benchmarks 101, Datasets 101, FlashAttention, and Transformers Math, and it turns out those were some of your evergreen favorites! So we are experimenting with more educational/survey content in the mix alongside our regular founder and event coverage. Pls request more!We have a new calendar for events; join to be notified of upcoming things in 2024!Today we visit the shoggoth mask factory: how do transformer models go from trawling a deeply learned latent space for next-token prediction to a helpful, honest, harmless chat assistant? Our guest “lecturer” today is ; you might know him from his prolific online writing on and Twitter, or from his previous work leading RLHF at HuggingFace and now at the Allen Institute for AI (AI2) which recently released the open source GPT3.5-class Tulu 2 model which was trained with DPO. He's widely considered one of the most knowledgeable people on RLHF and RLAIF. He recently gave an “RLHF 201” lecture at Stanford, so we invited him on the show to re-record it for everyone to enjoy! You can find the full slides here, which you can use as reference through this episode. Full video with synced slidesFor audio-only listeners, this episode comes with slide presentation along our discussion. You can find it on our YouTube (like, subscribe, tell a friend, et al).Theoretical foundations of RLHFThe foundation and assumptions that go into RLHF go back all the way to Aristotle (and you can find guidance for further research in the slide below) but there are two key concepts that will be helpful in thinking through this topic and LLMs in general:* Von Neumann–Morgenstern utility theorem: you can dive into the math here, but the TLDR is that when humans make decision there's usually a “maximum utility” function that measures what the best decision would be; the fact that this function exists, makes it possible for RLHF to model human preferences and decision making.* Bradley-Terry model: given two items A and B from a population, you can model the probability that A will be preferred to B (or vice-versa). In our world, A and B are usually two outputs from an LLM (or at the lowest level, the next token). It turns out that from this minimal set of assumptions, you can build up the mathematical foundations supporting the modern RLHF paradigm!The RLHF loopOne important point Nathan makes is that "for many tasks we want to solve, evaluation of outcomes is easier than producing the correct behavior". For example, it might be difficult for you to write a poem, but it's really easy to say if you like or dislike a poem someone else wrote. Going back to the Bradley-Terry Model we mentioned, the core idea behind RLHF is that when given two outputs from a model, you will be able to say which of the two you prefer, and we'll then re-encode that preference into the model.An important point that Nathan mentions is that when you use these preferences to change model behavior "it doesn't mean that the model believes these things. It's just trained to prioritize these things". When you have preference for a model to not return instructions on how to write a computer virus for example, you're not erasing the weights that have that knowledge, but you're simply making it hard for that information to surface by prioritizing answers that don't return it. We'll talk more about this in our future Fine Tuning 101 episode as we break down how information is stored in models and how fine-tuning affects it.At a high level, the loop looks something like this:For many RLHF use cases today, we can assume the model we're training is already instruction-tuned for chat or whatever behavior the model is looking to achieve. In the "Reward Model & Other Infrastructure" we have multiple pieces:Reward + Preference ModelThe reward model is trying to signal to the model how much it should change its behavior based on the human preference, subject to a KL constraint. The preference model itself scores the pairwise preferences from the same prompt (worked better than scalar rewards).One way to think about it is that the reward model tells the model how big of a change this new preference should make in the behavior in absolute terms, while the preference model calculates how big of a difference there is between the two outputs in relative terms. A lot of this derives from John Schulman's work on PPO:We recommend watching him talk about it in the video above, and also Nathan's pseudocode distillation of the process:Feedback InterfacesUnlike the "thumbs up/down" buttons in ChatGPT, data annotation from labelers is much more thorough and has many axis of judgement. At a simple level, the LLM generates two outputs, A and B, for a given human conversation. It then asks the labeler to use a Likert scale to score which one it preferred, and by how much:Through the labeling process, there are many other ways to judge a generation:We then use all of this data to train a model from the preference pairs we have. We start from the base instruction-tuned model, and then run training in which the loss of our gradient descent is the difference between the good and the bad prompt.Constitutional AI (RLAIF, model-as-judge)As these models have gotten more sophisticated, people started asking the question of whether or not humans are actually a better judge of harmfulness, bias, etc, especially at the current price of data labeling. Anthropic's work on the "Constitutional AI" paper is using models to judge models. This is part of a broader "RLAIF" space: Reinforcement Learning from AI Feedback.By using a "constitution" that the model has to follow, you are able to generate fine-tuning data for a new model that will be RLHF'd on this constitution principles. The RLHF model will then be able to judge outputs of models to make sure that they follow its principles:Emerging ResearchRLHF is still a nascent field, and there are a lot of different research directions teams are taking; some of the newest and most promising / hyped ones:* Rejection sampling / Best of N Sampling: the core idea here is that rather than just scoring pairwise generations, you are generating a lot more outputs (= more inference cost), score them all with your reward model and then pick the top N results. LLaMA2 used this approach, amongst many others.* Process reward models: in Chain of Thought generation, scoring each step in the chain and treating it like its own state rather than just scoring the full output. This is most effective in fields like math that inherently require step-by-step reasoning.* Direct Preference Optimization (DPO): We covered DPO in our NeurIPS Best Papers recap, and Nathan has a whole blog post on this; DPO isn't technically RLHF as it doesn't have the RL part, but it's the “GPU Poor” version of it. Mistral-Instruct was a DPO model, as do Intel's Neural Chat and StableLM Zephyr. Expect to see a lot more variants in 2024 given how “easy” this was.* Superalignment: OpenAI launched research on weak-to-strong generalization which we briefly discuss at the 1hr mark.Note: Nathan also followed up this post with RLHF resources from his and peers' work:Show Notes* Full RLHF Slides* Interconnects* Retort (podcast)* von Neumann-Morgenstern utility theorem* Bradley-Terry model (pairwise preferences model)* Constitutional AI* Tamer (2008 paper by Bradley Knox and Peter Stone)* Paul Christiano et al. RLHF paper* InstructGPT* Eureka by Jim Fan* ByteDance / OpenAI lawsuit* AlpacaEval* MTBench* TruthfulQA (evaluation tool)* Self-Instruct Paper* Open Assistant* Louis Castricato* Nazneen Rajani* Tulu (DPO model from the Allen Institute)Timestamps* [00:00:00] Introductions and background on the lecture origins* [00:05:17] History of RL and its applications* [00:10:09] Intellectual history of RLHF* [00:13:47] RLHF for decision-making and pre-deep RL vs deep RL* [00:20:19] Initial papers and intuitions around RLHF* [00:27:57] The three phases of RLHF* [00:31:09] Overfitting issues* [00:34:47] How preferences get defined* [00:40:35] Ballpark on LLaMA2 costs* [00:42:50] Synthetic data for training* [00:47:25] Technical deep dive in the RLHF process* [00:54:34] Projection / best event sampling* [00:57:49] Constitutional AI* [01:04:13] DPO* [01:08:54] What's the Allen Institute for AI?* [01:13:43] Benchmarks and models comparisonsTranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.Swyx [00:00:15]: Hey, and today we have Dr. Nathan Lambert in the house. Welcome.Nathan [00:00:18]: Thanks guys.Swyx [00:00:19]: You didn't have to come too far. You got your PhD in Berkeley, and it seems like you've lived there most of the time in recent years. You worked on robotics and model-based reinforcement learning on your PhD, and you also interned at FAIR and DeepMind. You bootstrapped the RLHF team at Hugging Face, and you recently joined the Allen Institute as a research scientist. So that's your quick bio. What should people know about you that maybe is not super obvious about you on New LinkedIn?Nathan [00:00:43]: I stay sane in various insane sport and ultra-endurance sport activities that I do.Swyx [00:00:50]: What's an ultra-endurance sport activity?Nathan [00:00:52]: Long-distance trail running or gravel biking. Try to unplug sometimes, although it's harder these days. Yeah.Swyx [00:00:59]: Well, you know, just the Bay Area is just really good for that stuff, right?Nathan [00:01:02]: Oh, yeah. You can't beat it. I have a trailhead like 1.2 miles from my house, which is pretty unmatchable in any other urban area.Swyx [00:01:11]: Pretty excellent. You also have an incredible blog, Interconnects, which I'm a fan of. And I also just recently discovered that you have a new podcast, Retort.Nathan [00:01:20]: Yeah, we do. I've been writing for a while, and I feel like I've finally started to write things that are understandable and fun. After a few years lost in the wilderness, if you ask some of my friends that I made read the earlier blogs, they're like, oh, this is yikes, but it's coming along. And the podcast is with my friend Tom, and we just kind of like riff on what's actually happening on AI and not really do news recaps, but just what it all means and have a more critical perspective on the things that really are kind of funny, but still very serious happening in the world of machine learning.Swyx [00:01:52]: Yeah. Awesome. So let's talk about your work. What would you highlight as your greatest hits so far on Interconnects, at least?Nathan [00:01:59]: So the ones that are most popular are timely and or opinion pieces. So the first real breakout piece was when April and I also just wrote down the thing that everyone in AI was feeling, which is we're all feeling stressed, that we're going to get scooped, and that we're overworked, which is behind the curtain, what it feels to work in AI. And then a similar one, which we might touch on later in this, was about my recent job search, which wasn't the first time I wrote a job search post. People always love that stuff. It's so open. I mean, it's easy for me to do in a way that it's very on-brand, and it's very helpful. I understand that until you've done it, it's hard to share this information. And then the other popular ones are various model training techniques or fine tuning. There's an early one on RLHF, which is, this stuff is all just like when I figure it out in my brain. So I wrote an article that's like how RLHF actually works, which is just the intuitions that I had put together in the summer about RLHF, and that was pretty well. And then I opportunistically wrote about QSTAR, which I hate that you have to do it, but it is pretty funny. From a literature perspective, I'm like, open AI publishes on work that is very related to mathematical reasoning. So it's like, oh, you just poke a little around what they've already published, and it seems pretty reasonable. But we don't know. They probably just got like a moderate bump on one of their benchmarks, and then everyone lost their minds. It doesn't really matter.Swyx [00:03:15]: You're like, this is why Sam Altman was fired. I don't know. Anyway, we're here to talk about RLHF 101. You did a presentation, and I think you expressed some desire to rerecord it. And that's why I reached out on Twitter saying, like, why not rerecord it with us, and then we can ask questions and talk about it. Yeah, sounds good.Nathan [00:03:30]: I try to do it every six or 12 months is my estimated cadence, just to refine the ways that I say things. And people will see that we don't know that much more, but we have a bit of better way of saying what we don't know.Swyx [00:03:43]: Awesome. We can dive right in. I don't know if there's any other topics that we want to lay out as groundwork.Alessio [00:03:48]: No, you have some awesome slides. So for people listening on podcast only, we're going to have the slides on our show notes, and then we're going to have a YouTube version where we run through everything together.Nathan [00:03:59]: Sounds good. Yeah. I think to start skipping a lot of the, like, what is a language model stuff, everyone knows that at this point. I think the quote from the Llama 2 paper is a great kind of tidbit on RLHF becoming like a real deal. There was some uncertainty earlier in the year about whether or not RLHF was really going to be important. I think it was not that surprising that it is. I mean, with recent models still using it, the signs were there, but the Llama 2 paper essentially reads like a bunch of NLP researchers that were skeptical and surprised. So the quote from the paper was, meanwhile, reinforcement learning known for its instability seemed a somewhat shadowy field for those in the NLP research community. However, reinforcement learning proved highly effective, particularly given its cost and time effectiveness. So you don't really know exactly what the costs and time that Meta is looking at, because they have a huge team and a pretty good amount of money here to release these Llama models. This is just the kind of thing that we're seeing now. I think any major company that wasn't doing RLHF is now realizing they have to have a team around this. At the same time, we don't have a lot of that in the open and research communities at the same scale. I think seeing that converge would be great, but it's still very early days. And the other thing on the slide is some of Anthropic's work, but everyone knows Anthropic is kind of the masters of this, and they have some of their own techniques that we're going to talk about later on, but that's kind of where we start.Alessio [00:05:17]: Can we do just a one-second RL version? So you come from a robotics background, which RL used to be, or maybe still is, state-of-the-art. And then now you're seeing a lot of LLM plus RL, so you have the gym fans, Eureka, you have MPU, which we had on the podcast when they started with RL. Now they're doing RL plus LLMs. Yeah. Any thoughts there on how we got here? Maybe how the pendulum will keep swinging?Nathan [00:05:46]: I really think RL is about a framing of viewing the world through trial and error learning and feedback, and really just one that's focused on thinking about decision-making and inputs in the world and how inputs have reactions. And in that, a lot of people come from a lot of different backgrounds, whether it's physics, electrical engineering, mechanical engineering. There are obviously computer scientists, but compared to other fields of CS, I do think it's a much more diverse background of people. My background was in electrical engineering and doing robotics and things like that. It really just changes the worldview. I think that reinforcement learning as it was back then, so to say, is really different. You're looking at these toy problems and the numbers are totally different, and everyone went kind of zero to one at scaling these things up, but people like Jim Phan and other people that were... You saw this transition in the decision transformer and papers and when people are trying to use transformers to do decision-making for things like offline RL, and I think that was kind of like the early days. But then once language models were so proven, it's like everyone is using this tool for their research. I think in the long run, it will still settle out, or RL will still be a field that people work on just because of these kind of fundamental things that I talked about. It's just viewing the whole problem formulation different than predicting text, and so there needs to be that separation. And the view of RL in language models is pretty contrived already, so it's not like we're doing real RL. I think the last slide that I have here is a way to make RLHF more like what people would think of with RL, so actually running things over time, but a weird lineage of tools that happen to get us to where we are, so that's why the name takes up so much space, but it could have gone a lot of different ways. Cool.Alessio [00:07:29]: We made it one slide before going on a tangent.Nathan [00:07:31]: Yeah, I mean, it's kind of related. This is a...Swyx [00:07:35]: Yeah, so we have a history of RL.Nathan [00:07:37]: Yeah, so to give the context, this paper really started because I have this more diverse background than some computer scientists, such as trying to understand what the difference of a cost function or a reward function and a preference function would be without going into all of the details. Costs are normally things that control theorists would work with in these kind of closed domains, and then reinforcement learning has always worked with rewards that's central to the formulation that we'll see, and then the idea was like, okay, we now are at preferences, and each step along the way there's kind of different assumptions that you're making. We'll get into these, and those assumptions are built on other fields of work. So that's what this slide is going to say, it's like RLHF, while directly building on tools from RL and language models, is really implicitly impacted and built on theories and philosophies spanning tons of human history. I think we cite Aristotle in this paper, which is fun. It's like going pre-BC, it's like 2,300 years old or something like that. So that's the reason to do this, I think. We kind of list some things in the paper about summarizing what different presumptions of RLHF could be. I think going through these is actually kind of funny. It's fun to talk about these, because they're kind of grab bags of things that you'll see return throughout this podcast that we're talking about it. The core thing of RLHF that, in order to be a believer in this, is that RL actually works. It's like, if you have a reward function, you can optimize it in some way and get a different performance out of it, and you could do this at scale, and you could do this in really complex environments, which is, I don't know how to do that in all the domains. I don't know how to exactly make chat GPT. So it's kind of, we'll overshadow everything. And then there's, go from something kind of obvious like that, and then you read the von Neumann-Morgenstern utility theorem, which is essentially an economic theory that says you can weight different probabilities of different people, which is a theoretical piece of work that is the foundation of utilitarianism, and trying to quantify preferences is crucial to doing any sort of RLHF. And if you look into this, all of these things, there's way more you could go into if you're interested in any of these. So this is kind of like grabbing a few random things, and then kind of similar to that is the Bradley-Terry model, which is the fancy name for the pairwise preferences that everyone is doing. And then all the things that are like, that Anthropic and OpenAI figured out that you can do, which is that you can aggregate preferences from a bunch of different people and different sources. And then when you actually do RLHF, you extract things from that data, and then you train a model that works somehow. And we don't know, there's a lot of complex links there, but if you want to be a believer in doing this at scale, these are the sorts of things that you have to accept as preconditions for doing RLHF. Yeah.Swyx [00:10:09]: You have a nice chart of like the sort of intellectual history of RLHF that we'll send people to refer to either in your paper or in the YouTube video for this podcast. But I like the other slide that you have on like the presumptions that you need to have for RLHF to work. You already mentioned some of those. Which one's underappreciated? Like, this is the first time I've come across the VNM Utility Theorem.Nathan [00:10:29]: Yeah, I know. This is what you get from working with people like to my co-host on the podcast, the rhetoric is that sociologist by training. So he knows all these things and like who the philosophers are that found these different things like utilitarianism. But there's a lot that goes into this. Like essentially there's even economic theories that like there's debate whether or not preferences exist at all. And there's like different types of math you can use with whether or not you actually can model preferences at all. So it's pretty obvious that RLHF is built on the math that thinks that you can actually model any human preference. But this is the sort of thing that's been debated for a long time. So all the work that's here is like, and people hear about in their AI classes. So like Jeremy Bentham, like hedonic calculus and all these things like these are the side of work where people assume that preferences can be measured. And this is like, I don't really know, like, this is what I kind of go on a rant and I say that in RLHF calling things a preference model is a little annoying because there's no inductive bias of what a preference is. It's like if you were to learn a robotic system and you learned a dynamics model, like hopefully that actually mirrors the world in some way of the dynamics. But with a preference model, it's like, Oh my God, I don't know what this model, like I don't know what chat GPT encodes as any sort of preference or what I would want it to be in a fair way. Anthropic has done more work on trying to write these things down. But even like if you look at Claude's constitution, like that doesn't mean the model believes these things. It's just trained to prioritize these things. And that's kind of what the later points I'm looking at, like what RLHF is doing and if it's actually like a repeatable process in the data and in the training, that's just unknown. And we have a long way to go before we understand what this is and the link between preference data and any notion of like writing down a specific value.Alessio [00:12:05]: The disconnect between more sociology work versus computer work already exists, or is it like a recent cross contamination? Because when we had Tri Dao on the podcast, he said FlashAttention came to be because at Hazy they have so much overlap between systems engineer and like deep learning engineers. Is it the same in this field?Nathan [00:12:26]: So I've gone to a couple of workshops for the populations of people who you'd want to include this like R. I think the reason why it's not really talked about is just because the RLHF techniques that people use were built in labs like OpenAI and DeepMind where there are some of these people. These places do a pretty good job of trying to get these people in the door when you compare them to like normal startups. But like they're not bringing in academics from economics, like social choice theory. There's just too much. Like the criticism of this paper that this is based on is like, oh, you're missing these things in RL or at least this decade of RL and it's like it would be literally be bigger than the Sutton and Barto book if you were to include everyone. So it's really hard to include everyone in a principled manner when you're designing this. It's just a good way to understand and improve the communication of what RLHF is and like what is a good reward model for society. It really probably comes down to what an individual wants and it'll probably motivate models to move more in that direction and just be a little bit better about the communication, which is a recurring theme and kind of my work is like I just get frustrated when people say things that don't really make sense, especially when it's going to manipulate individual's values or manipulate the general view of AI or anything like this. So that's kind of why RLHF is so interesting. It's very vague in what it's actually doing while the problem specification is very general.Swyx [00:13:42]: Shall we go to the, I guess, the diagram here on the reinforcement learning basics? Yeah.Nathan [00:13:47]: So reinforcement learning, I kind of mentioned this, it's a trial and error type of system. The diagram and the slides is really this classic thing where you have an agent interacting with an environment. So it's kind of this agent has some input to the environment, which is called the action. The environment returns a state and a reward and that repeats over time and the agent learns based on these states and these rewards that it's seeing and it should learn a policy that makes the rewards go up. That seems pretty simple than if you try to mentally map what this looks like in language, which is that like the language models don't make this easy. I think with the language model, it's very hard to define what an environment is. So if the language model is the policy and it's generating, it's like the environment should be a human, but setting up the infrastructure to take tens of thousands of prompts and generate them and then show them to a human and collect the human responses and then shove that into your training architecture is very far away from working. So we don't really have an environment. We just have a reward model that returns a reward and the state doesn't really exist when you look at it like an RL problem. What happens is the state is a prompt and then you do a completion and then you throw it away and you grab a new prompt. We're really in as an RL researcher, you would think of this as being like you take a state, you get some completion from it and then you look at what that is and you keep kind of iterating on it and all of that isn't here, which is why you'll hear RLHF referred to as bandits problem, which is kind of like you choose one action and then you watch the dynamics play out. There's many more debates that you can have in this. If you get the right RL people in the room, then kind of like this is an RL even when you zoom into what RLHF is doing.Alessio [00:15:22]: Does this change as you think about a chain of thought reasoning and things like that? Like does the state become part of the chain that you're going through?Nathan [00:15:29]: There's work that I've mentioned on one slide called process reward models that essentially rewards each step in the chain of thought reasoning. It doesn't really give the part of interaction, but it does make it a little bit more fine grained where you can think about like calling it at least you have many states from your initial state. That formulation I don't think people have fully settled on. I think there's a bunch of great work out there, like even OpenAI is releasing a lot of this and let's verify step by step is there pretty great paper on the matter. I think in the next year that'll probably get made more concrete by the community on like if you can easily draw out like if chain of thought reasoning is more like RL, we can talk about that more later. That's a kind of a more advanced topic than we probably should spend all the time on.Swyx [00:16:13]: RLHF for decision making. You have a slide here that compares pre-deep RL versus deep RL.Nathan [00:16:19]: This is getting into the history of things, which is showing that the work that people are using now really came from well outside of NLP and it came before deep learning was big. Next up from this paper, Tamer, which is from 2008. Some names that are still really relevant in kind of human centric RL, Bradley Knox and Peter Stone. If you have an agent take an action, you would just have a human give a score from zero to one as a reward rather than having a reward function. And then with that classifier, you can do something with a policy that learns to take actions to maximize that reward. It's a pretty simple setup. It works in simple domains. And then the reason why this is interesting is you compare it to the paper that everyone knows, which is this Paul Christiano et al. Deep Reinforced Learning from Human Preferences paper, which is where they showed that learning from human preferences, you can solve like the basic RL tasks at the time. So various control problems and simulation and this kind of like human preferences approach had higher rewards in some environments than if you just threw RL at the environment that returned a reward. So the preferences thing was you took two trajectories. So in this case, it was like complete trajectories of the agent and the human was labeling which one is better. You can see how this kind of comes to be like the pairwise preferences that are used today that we'll talk about. And there's also a really kind of interesting nugget that is the trajectory that the humans were labeling over has a lot more information than the RL algorithm would see if you just had one state, which is kind of why people think that it's why the performance in this paper was so strong. But I still think that it's surprising that there isn't more RL work of this style happening now. This paper is in 2017. So it's like six years later and I haven't seen things that are exactly similar, but it's a great paper to understand where stuff that's happening now kind of came from.Swyx [00:17:58]: Just on the Christiano paper, you mentioned the performance being strong. I don't remember what results should I have in mind when I think about that paper?Nathan [00:18:04]: It's mostly like if you think about an RL learning curve, which is like on the X axis, you have environment interactions on the Y axis, you have performance. You can think about different like ablation studies of between algorithms. So I think they use like A2C, which I don't even remember what that stands for as their baseline. But if you do the human preference version on a bunch of environments, like the human preference labels, the agent was able to learn faster than if it just learned from the signal from the environment, which means like it's happening because the reward model has more information than the agent would. But like the fact that it can do better, I was like, that's pretty surprising to me because RL algorithms are pretty sensitive. So I was like, okay.Swyx [00:18:41]: It's just one thing I do want to establish as a baseline for our listeners. We are updating all the weights. In some sense, the next token prediction task of training a language model is a form of reinforcement learning. Except that it's not from human feedback. It's just self-supervised learning from a general corpus. There's one distinction which I love, which is that you can actually give negative feedback. Whereas in a general sort of pre-training situation, you cannot. And maybe like the order of magnitude of feedback, like the Likert scale that you're going to talk about, that actually just gives more signal than a typical training process would do in a language model setting. Yeah.Nathan [00:19:15]: I don't think I'm the right person to comment exactly, but like you can make analogies that reinforcement learning is self-supervised learning as well. Like there are a lot of things that will point to that. I don't know whether or not it's a richer signal. I think that could be seen in the results. It's a good thing for people to look into more. As reinforcement learning is so much less compute, like it is a richer signal in terms of its impact. Because if they could do what RLHF is doing at pre-training, they would, but they don't know how to have that effect in like a stable manner. Otherwise everyone would do it.Swyx [00:19:45]: On a practical basis, as someone fine-tuning models, I have often wished for negative fine-tuning, which pretty much doesn't exist in OpenAI land. And it's not the default setup in open-source land.Nathan [00:19:57]: How does this work in like diffusion models and stuff? Because you can give negative prompts to something to like stable diffusion or whatever. It's for guidance.Swyx [00:20:04]: That's for clip guidance.Nathan [00:20:05]: Is that just from like how they prompt it then? I'm just wondering if we could do something similar. It's another tangent.Swyx [00:20:10]: I do want to sort of spell that out for people in case they haven't made the connection between RLHF and the rest of the training process. They might have some familiarity with it.Nathan [00:20:19]: Yeah. The upcoming slides can really dig into this, which is like this in 2018 paper, there was a position paper from a bunch of the same authors from the Christiano paper and from the OpenAI work that everyone knows, which is like, they write a position paper on what a preference reward model could do to solve alignment for agents. That's kind of based on two assumptions. The first assumption is that we can learn user intentions to a sufficiently high accuracy. That doesn't last with me because I don't know what that means. But the second one is pretty telling in the context of RLHF, which is for many tasks we want to solve, evaluation of outcomes is easier than producing the correct behavior. And this is the whole thing. It's like we can compare two poems that the model generates and it can be viewed as liking a positive example, or it could be viewed as really disliking a negative example. And that's what I think a lot of people are doing in like the harm space is like a harmful response to a language model, whether or not you agree with the company's definition of harms is that it's a really bad negative example and they downweight them by preferring something more benign in the RLHF process, among other ways of dealing with safety. So that's a good way of saying it's like this is core, this kind of like comparison and positive or negative example is core to all of the RLHF work that has continued.Swyx [00:21:29]: People often say, I don't know what I want, but I'll know when I see it. This is that expressed in reinforcement learning tools.Nathan [00:21:35]: Yeah, it is. Yeah, it is. That's what everyone's doing in the preference modeling stage that we'll get to. Yeah. Yeah. And you can see there are more papers. This is really just to have all the links for people that go deeper. There's a Ziegler et al. paper in 2019, which shows that you can do this RLHF process on language models. This familiar diagram starts to emerge in 2019, and it's just to show that this goes really far back. I think we can kind of breeze through some of these. And then 2020 is the first open AI experiment that I think caught people's eyes, which is this learning to summarize experiment. It has this three-step process that we'll go to into more when I kind of go into the main concepts. But this is like the first time you see this diagram that they reuse with InstructGPT, they reuse with ChatGPT. And the types of examples that they would have, I don't think I need to read these exactly, but one that I have read a whole bunch of times is like, they took these prompts from Reddit that was like, explain like I'm five or get career advice, and people really pour their heart and soul into these. So these are like multi-paragraph pieces of writing. And then they essentially do comparisons between a vanilla language model, like I think it was either GPT-2 or GPT-3, I don't always get the exact years.Swyx [00:22:42]: 3 was early 2020. So that's about right.Nathan [00:22:45]: Yeah. So this is probably done with GPT-2. It doesn't really matter. But the language model does normal things when you do few shot, which is like it repeats itself. It doesn't have nice text. And what they did is that this was the first time where the language model would generate like pretty nice text from an output. It was restricted to the summarization domain. But I think that I guess this is where I wish I was paying attention more because I would see the paper, but I didn't know to read the language model outputs and kind of understand this qualitative sense of the models very well then. Because you look at the plots in the papers, these Learning to Summarize and Destruct GPT have incredibly pretty plots, just like nicely separated lines with error bars and they're like superfine tuning works, the RL step works. But if you were early to see like how different the language that was written by these models was, I think you could have been early to like things like ChatGPT and knowing RLHF would matter. And now I think the good people know to chat with language models, but not even everyone does this. Like people are still looking at numbers. And I think OpenAI probably figured it out when they were doing this, how important that could be. And then they had years to kind of chisel away at that and that's why they're doing so well now. Yeah.Swyx [00:23:56]: I mean, arguably, you know, it's well known that ChatGPT was kind of an accident that they didn't think it would be that big of a deal. Yeah.Nathan [00:24:02]: So maybe they didn't. Maybe they didn't, but they were getting the proxy that they needed.Swyx [00:24:06]: I've heard off the record from other labs that it was in the air. If OpenAI didn't do it, someone else would have done it. So you've mentioned a couple of other papers that are very seminal to this period. And I love how you say way back when in referring to 2019.Nathan [00:24:19]: It feels like it in my life.Swyx [00:24:21]: So how much should people understand the relationship between RLHF, instruction tuning, PPO, KL divergence, anything like that? Like how would you construct the level of knowledge that people should dive into? What should people know at the high level? And then if people want to dive in deeper, where do they go? Is instruct tuning important here or is that part of the overall process towards modern RLHF?Nathan [00:24:44]: I think for most people, instruction tuning is probably still more important in their day to day life. I think instruction tuning works very well. You can write samples by hand that make sense. You can get the model to learn from them. You could do this with very low compute. It's easy to do almost in like no code solutions at this point. And the loss function is really straightforward. And then if you're interested in RLHF, you can kind of learn from it from a different perspective, which is like how the instruction tuning distribution makes it easier for your RLHF model to learn. There's a lot of details depending on your preference data, if it's close to your instruction model or not, if that matters. But that's really at the RLHF stage. So I think it's nice to segment and just kind of understand what your level of investment and goals are. I think instruction tuning still can do most of what you want to do. And it's like, if you want to think about RLHF, at least before DPO really had taken off at all, it would be like, do you want to have a team of at least like five people if you're really thinking about doing RLHF? I think DPO makes it a little bit easier, but that's still really limited to kind of one data set that everyone's using at this point. Like everyone's using this ultra feedback data set and it boosts AlpacaVal, MTBench, TruthfulQA and like the qualitative model a bit. We don't really know why. It's like, it might just be a data set combined with the method, but you've got to be ready for a bumpy ride if you're wanting to try to do RLHF. I don't really recommend most startups to do it unless it's like going to provide them a clear competitive advantage in their kind of niche, because you're not going to make your model chat GPT like better than OpenAI or anything like that. You've got to accept that there's some exploration there and you might get a vein of benefit in your specific domain, but I'm still like, oh, be careful going into the RLHF can of worms. You probably don't need to.Swyx [00:26:27]: Okay. So there's a bit of a time skip in what you mentioned. DPO is like a couple months old, so we'll leave that towards the end. I think the main result that I think most people talk about at this stage, we're talking about September 2020 and then going into, I guess maybe last year was Vicuña as one of the more interesting applications of instruction tuning that pushed LLAMA1 from, let's say a GPT 3-ish model to a GPT 3.5 model in pure open source with not a lot of resources. I think, I mean, they said something like, you know, they use like under $100 to makeNathan [00:26:58]: this. Yeah. Like instruction tuning can really go a long way. I think the claims of chat GPT level are long overblown in most of the things in open source. I think it's not to say, like Vicuña was a huge step and it's just kind of showing that instruction tuning with the right data will completely change what it feels like to talk with your model. Yeah.Swyx [00:27:19]: From text completion to actually chatting back and forth. Yeah. Yeah.Nathan [00:27:23]: Instruction tuning can be multi-turn. Just having a little bit of data that's like a couple of turns can go a really long way. That was like the story of the whole first part of the year is like people would be surprised by how far you can take instruction tuning on a small model. I think the things that people see now is like the small models don't really handle nuance as well and they could be more repetitive even if they have really good instruction tuning. But if you take that kind of 7 to 70 billion parameter jump, like the instruction tuning at the bigger model is like robustness, little things make more sense. So that's still just with instruction tuning and scale more than anything else.Swyx [00:27:56]: Excellent. Shall we go to technical overview?Nathan [00:27:58]: Yeah. This is kind of where we go through my own version of this like three phase process. You can talk about instruction tuning, which we've talked about a lot. It's funny because all these things, instruction tuning has the fewest slides, even though it's the most practical thing for most people. We could save the debate for like if the big labs still do instruction tuning for later, but that's a coming wave for people. And then like preference data and training and then kind of like what does reinforce learning optimization actually mean? We talk about these sequentially because you really have to be able to do each of them to be able to do the next one. You need to be able to have a model that's chatty or helpful instruction following. Every company has their own word that they like to assign to what instructions mean. And then once you have that, you can collect preference data and do some sort of optimization.Swyx [00:28:39]: When you say word, you mean like angle bracket inst or do you mean something else?Nathan [00:28:42]: Oh, I don't even know what inst means, but just saying like they use their adjective that they like. I think Entropic also like steerable is another one.Swyx [00:28:51]: Just the way they describe it. Yeah.Nathan [00:28:53]: So like instruction tuning, we've covered most of this is really about like you should try to adapt your models to specific needs. It makes models that were only okay, extremely comprehensible. A lot of the times it's where you start to get things like chat templates. So if you want to do system prompts, if you want to ask your model, like act like a pirate, that's one of the ones I always do, which is always funny, but like whatever you like act like a chef, like anything, this is where those types of things that people really know in language models start to get applied. So it's good as a kind of starting point because this chat template is used in our early childhood and all of these things down the line, but it was a basic pointer. It's like, once you see this with instruction tuning, you really know it, which is like you take things like stack overflow where you have a question and an answer. You format that data really nicely. There's much more tricky things that people do, but I still think the vast majority of it is question answer. Please explain this topic to me, generate this thing for me. That hasn't changed that much this year. I think people have just gotten better at scaling up the data that they need. Yeah, this is where this talk will kind of take a whole left turn into more technical detail land. I put a slide with the RLHF objective, which I think is good for people to know. I've started going back to this more, just kind of understand what is trying to happen here and what type of math people could do. I think because of this algorithm, we've mentioned this, it's in the air, direct preference optimization, but everything kind of comes from an equation of trying to learn a policy that maximizes the reward. The reward is some learned metric. A lot can be said about what the reward should be subject to some constraint. The most popular constraint is the KL distraint, which is just a distributional distance. Essentially in language models, that means if you have a completion from your instruction or RLHF model, you can compare that completion to a base model. And looking at the log probs from the model, which are essentially how likely each token is, you can see a rough calculation of the distance between these two models, just as a scalar number. I think what that actually looks like in code, you can look at it. It'd be like a sum of log probs that you get right from the model. It'll look much more simpler than it sounds, but it is just to make the optimization kind of stay on tracks.Make sure it doesn't overfit to the RLHF data. Because we have so little data in RLHF, overfitting is really something that could happen. I think it'll fit to specific features that labelers like to see, that the model likes to generate, punctuation, weird tokens like calculator tokens. It could overfit to anything if it's in the data a lot and it happens to be in a specific format. And the KL constraint prevents that. There's not that much documented work on that, but there's a lot of people that know if you take that away, it just doesn't work at all. I think it's something that people don't focus on too much. But the objective, as I said, it's just kind of, you optimize the reward. The reward is where the human part of this comes in. We'll talk about that next. And then subject to a constraint, don't change the model too much. The real questions are, how do you implement the reward? And then how do you make the reward go up in a meaningful way? So like a preference model, the task is kind of to design a human reward. I think the equation that most of the stuff is based on right now is something called a Bradley-Terry model, which is like a pairwise preference model where you compare two completions and you say which one you like better. I'll show an interface that Anthropic uses here. And the Bradley-Terry model is really a fancy probability between two selections. And what's happening in the math is that you're looking at the probability that the chosen completion, the one you like better, is actually the better completion over the rejected completion. And what these preference models do is they assume this probability is correlated to reward. So if you just sample from this probability, it'll give you a scalar. And then you use that reward later on to signify what piece of text is better. I'm kind of inclined to breeze through the math stuff because otherwise, it's going to be not as good to listen to.Alessio [00:32:49]: I think people want to hear it. I think there's a lot of higher level explanations out there. Yeah.Nathan [00:32:55]: So the real thing is you need to assign a scalar reward of how good a response is. And that's not necessarily that easy to understand. Because if we take back to one of the first works, I mentioned this tamer thing for decision making. People tried that with language models, which is if you have a prompt in a completion and you just have someone rate it from 0 to 10, could you then train a reward model on all of these completions in 0 to 10 ratings and see if you can get chat2BT with that? And the answer is really kind of no. Like a lot of people tried that. It didn't really work. And then that's why they tried this pairwise preference thing. And it happened to work. And this Bradley Terry model comes from the 50s. It's from these fields that I was mentioning earlier. And it's wild how much this happens. I mean, this screenshot I have in the slides is from the DPO paper. I think it might be the appendix. But it's still really around in the literature of what people are doing for RLHF.Alessio [00:33:45]: Yeah.Nathan [00:33:45]: So it's a fun one to know.Swyx [00:33:46]: I'll point out one presumption that this heavily relies on. You mentioned this as part of your six presumptions that we covered earlier, which is that you can aggregate these preferences. This is not exactly true among all humans, right? I have a preference for one thing. You have a preference for a different thing. And actually coming from economics, you mentioned economics earlier. There's a theorem or a name for this called error impossibility, which I'm sure you've come across..Nathan [00:34:07]: It's one of the many kind of things we throw around in the paper.Swyx [00:34:10]: Right. Do we just ignore it?Nathan [00:34:14]: We just, yeah, just aggregate. Yeah. I think the reason this really is done on a deep level is that you're not actually trying to model any contestable preference in this. You're not trying to go into things that are controversial or anything. It's really the notion of preference is trying to stay around correctness and style rather than any meaningful notion of preference. Because otherwise these companies, they don't want to do this at all. I think that's just how it is. And it's like, if you look at what people actually do. So I have a bunch of slides on the feedback interface. And they all publish this.Swyx [00:34:43]: It's always at the appendices of every paper.Nathan [00:34:47]: There's something later on in this talk, which is like, but it's good to mention. And this is when you're doing this preference collection, you write out a very long document of instructions to people that are collecting this data. And it's like, this is the hierarchy of what we want to prioritize. Something amount like factuality, helpfulness, honestness, harmlessness. These are all different things. Every company will rank these in different ways, provide extensive examples. It's like, if you see these two answers, you should select this one and why. And all of this stuff. And then my kind of like head scratching is like, why don't we check if the models actually do these things that we tell the data annotators to collect? But I think it's because it's hard to make that attribution. And it's hard to test if a model is honest and stuff. It would just be nice to understand the kind of causal mechanisms as a researcher or like if our goals are met. But at a simple level, what it boils down to, I have a lot more images than I need. It's like you're having a conversation with an AI, something like type GPT. You get shown two responses or more in some papers, and then you have to choose which one is better. I think something you'll hear a lot in this space is something called a Likert scale. Likert is a name. It's a name for probably some research in economics, decision theory, something. But essentially, it's a type of scale where if you have integers from like one to eight, the middle numbers will represent something close to a tie. And the smallest numbers will represent one model being way better than the other. And the biggest numbers will be like the other models better. So in the case of one to eight, if you're comparing models A to B, if you return a one, if you really liked option A, you return eight if you really like B, and then like a four or five if they were close. There's other ways to collect this data. This one's become really popular. We played with it a bit at Hugging Face. It's hard to use. Filling out this preference data is really hard. You have to read like multiple paragraphs. It's not for me. Some people really like it. I hear I'm like, I can't imagine sitting there and reading AI-generated text and like having to do that for my job. But a lot of these early papers in RLHF have good examples of what was done. The one I have here is from Anthropic's collection demo because it was from slides that I did with Anthropic. But you can look up these in the various papers. It looks like Chat2BT with two responses, and then you have an option to say which one is better. It's nothing crazy. The infrastructure is almost exactly the same, but they just log which one you think is better. I think places like Scale are also really big in this where a lot of the labeler companies will help control like who's doing how many samples. You have multiple people go over the same sample once and like what happens if there's disagreement. I don't really think this disagreement data is used for anything, but it's good to know like what the distribution of prompts is, who's doing it, how many samples you have, controlling the workforce. All of this is very hard. A last thing to add is that a lot of these companies do collect optional metadata. I think the Anthropic example shows a rating of like how good was the prompt or the conversation from good to bad because things matter. Like there's kind of a quadrant of preference data in my mind, which is you're comparing a good answer to a good answer, which is like really interesting signal. And then there's kind of the option of you're comparing a bad answer to a bad answer, which is like you don't want to train your model on two different issues. This is like, we did this at Hugging Base and it was like, our data was like, we don't know if we can use this because a lot of it was just bad answer to bad answer because you're like rushing to try to do this real contract. And then there's also good answer to bad answer, which I think is probably pretty reasonable to include. You just prefer the good one and move on with your life. But those are very different scenarios. I think open AIs of the world are all in good answer, good answer, and have learned to eliminate everything else. But when people try to do this in open source, it's probably like what Open Assistance saw is like, there's just a lot of bad answers in your preference data. And you're like, what do I do with this? Metadata flags can help. I threw in the instruct GPT metadata. You can see how much they collect here. And like everything from the model fails to actually complete the task, hallucinations, different types of offensive or dangerous content, moral judgment, expresses opinion. Like, I don't know exactly if they're doing this now, but you can kind of see why doing RLHF at scale and prioritizing a lot of different endpoints would be hard because these are all things I'd be interested in if I was scaling up a big team to do RLHF and like what is going into the preference data. You do an experiment and you're like, okay, we're going to remove all the data where they said the model hallucinates like just that and then retrain everything. Like, what does that do?Swyx [00:38:59]: Yeah, so hallucination is big, but some of these other metadata categories, and I've seen this in a lot of papers, it's like, does it contain sexual content? Does it express a moral judgment? Does it denigrate a protected class? That kind of stuff, very binary. Should people try to adjust for this at the RLHF layer or should they put it as a pipeline where they have a classifier as a separate model that grades the model output?Nathan [00:39:20]: Do you mean for training or like a deployment? Deployment. I do think that people are doing it at deployment. I think we've seen safety and other things in the RLHF pipeline. Like Lama 2 is famous for kind of having this like helpfulness and safety reward models. Deep in the Gemini report is something that Gemini has like four things, which is like helpfulness, factuality, maybe safety, maybe something else. But places like Anthropic and Chattopadhyay and Bard almost surely have a classifier after, which is like, is this text good? Is this text bad? That's not that surprising, I think, because you could use like a hundred times smaller language model and do much better at filtering than RLHF. But I do think it's still so deeply intertwined with the motivation of RLHF to be for safety that some of these categories still persist. I think that's something I'll kind of settle out, I think.Swyx [00:40:11]: I'm just wondering if it's worth collecting this data for the RLHF purpose, if you're not going to use it in any way, separate model to-Nathan [00:40:18]: Yeah, I don't think OpenAI will collect all of this anymore, but I think for research perspectives, it's very insightful to know, but it's also expensive. So essentially your preference data scales with how many minutes it takes for you to do each task and every button is like, it scales pretty linearly. So it's not cheap stuff.Swyx [00:40:35]: Can we, since you mentioned expensiveness, I think you may have joined one of our spaces back in Lama 2 was released. We had an estimate from you that was something on the order of Lama 2 costs $3 to $6 million to train GPU-wise, and then it was something like $20 to $30 million in preference data. Is that something that's still in the ballpark? I don't need precise numbers.Nathan [00:40:56]: I think it's still a ballpark. I know that the 20 million was off by a factor of four because I was converting from a prompt number to a total data point. So essentially when you do this, if you have multi-turn setting, each turn will be one data point and the Lama 2 paper reports like 1.5 million data points, which could be like 400,000 prompts. So I would say it's still say like 6 to 8 million is safe to say that they're spending, if not more, they're probably also buying other types of data and or throwing out data that they don't like, but it's very comparable to compute costs. But the compute costs listed in the paper always are way lower because all they have to say is like, what does one run cost? But they're running tens or hundreds of runs. So it's like, okay, like... Yeah, it's just kind of a meaningless number. Yeah, the data number would be more interesting.Alessio [00:41:42]: What's the depreciation of this data?Nathan [00:41:46]: It depends on the method. Like some methods, people think that it's more sensitive to the, this is what I was saying. It was like, does the type of instruction tuning you do matter for RLHF? So like, depending on the method, some people are trying to figure out if you need to have like what is called like, this is very confusing. It's called like on policy data, which is like your RLHF data is from your instruction model. I really think people in open source and academics are going to figure out how to use any preference data on any model just because they're scrappy. But there's been an intuition that to do like PPO well and keep improving the model over time and do like what Meta did and what people think that OpenAI does is that you need to collect new preference data to kind of edge the distribution of capabilities forward. So there's a depreciation where like the first batch of data you collect isn't really useful for training the model when you have the fifth batch. We don't really know, but it's a good question. And I do think that if we had all the LLAMA data, we wouldn't know what to do with all of it. Like probably like 20 to 40% would be pretty useful for people, but not the whole data set. Like a lot of it's probably kind of gibberish because they had a lot of data in there.Alessio [00:42:51]: So do you think like the open source community should spend more time figuring out how to reuse the data that we have or like generate more data? I think that's one of the-Nathan [00:43:02]: I think if the people are kind of locked into using synthetic data, people also think that synthetic data is like GPT-4 is more accurate than humans at labeling preferences. So if you look at these diagrams, like humans are about 60 to 70% agreement. And we're like, that's what the models get to. And if humans are about 70% agreement or accuracy, like GPT-4 is like 80%. So it is a bit better, which is like in one way of saying it.Swyx [00:43:24]: Humans don't even agree with humans 50% of the time.Nathan [00:43:27]: Yeah, so like that's the thing. It's like the human disagreement or the lack of accuracy should be like a signal, but how do you incorporate that? It's really tricky to actually do that. I think that people just keep using GPT-4 because it's really cheap. It's one of my like go-to, like I just say this over and over again is like GPT-4 for data generation, all terms and conditions aside because we know OpenAI has this stuff is like very cheap for getting pretty good data compared to compute or salary of any engineer or anything. So it's like tell people to go crazy generating GPT-4 data if you're willing to take the organizational like cloud of should we be doing this? But I think most people have accepted that you kind of do this, especially at individuals. Like they're not gonna come after individuals. I do think more companies should think twice before doing tons of OpenAI outputs. Also just because the data contamination and what it does to your workflow is probably hard to control at scale.Swyx [00:44:21]: And we should just mention at the time of recording, we've seen the first example of OpenAI enforcing their terms of service. ByteDance was caught, reported to be training on GPT-4 data and they got their access to OpenAI revoked. So that was one example.Nathan [00:44:36]: Yeah, I don't expect OpenAI to go too crazy on this cause they're just gonna, there's gonna be so much backlash against them. And like, everyone's gonna do it anyways.Swyx [00:44:46]: And what's at stake here to spell it out is like, okay, that's like cost $10 to collect one data point from a human. It's gonna cost you like a 10th of a cent with OpenAI, right? So like it's just orders of magnitude cheaper. And therefore people-Nathan [00:44:58]: Yeah, and it's like the signal you get from humans is from preferences isn't that high. The signal that you get from humans for instructions is pretty high, but it is also very expensive. So like the human instructions are definitely like by far and away the best ones out there compared to the synthetic data. But I think like the synthetic preferences are just so much easier to get some sort of signal running with and you can work in other, I think people will start working in other goals there between safety and whatever. That's something that's taking off and we'll kind of see that. I think in 2024, at some point, people will start doing things like constitutional AI for preferences, which will be pretty interesting. I think we saw how long it took RLHF to get started in open source. Instruction tuning was like the only thing that was really happening until maybe like August, really. I think Zephyr was the first model that showed success with RLHF in the public, but that's a long time from everyone knowing that it was something that people are interested in to having any like check mark. So I accept that and think the same will happen with constitutional AI. But once people show that you can do it once, they continue to explore.Alessio [00:46:01]: Excellent.Swyx [00:46:01]: Just in the domain of human preference data suppliers, Scale.ai very happily will tell you that they supplied all that data for Lama 2. The other one is probably interesting, LMSYS from Berkeley. What they're running with Chaterina is perhaps a good store of human preference data.Nathan [00:46:17]: Yeah, they released some toxicity data. They, I think, are generally worried about releasing data because they have to process it and make sure everything is safe and they're really lightweight work. I think they're trying to release the preference data. I have, if we make it to evaluation, I'd pretty much say that Chaterina is the best limited evaluation that people have to learn how to use language models. And like, it's very valuable data. They also may share some data with people that they host models from. So like if your model is hosted there and you pay for the hosting, you can get the prompts because you're pointing the endpoint at it and that gets pinged to you and you're any real LLM inference stack saves the prompts tha
In this episode we talk about the power of using emotional check in with students and ideas of check ins to use with students.Why Use Emotional Check-Ins:Emotional check-ins are a valuable tool for school counselors.They serve as an assessment of students' emotional well-being.Identify students who may need additional support or intervention.Promote a safe and supportive environment for students to express their feelings.Foster empathy among students as they share their emotions.Build trust and enhance group dynamics.Reduce potential conflicts within the group.Provide tailored support for individual students.Teach emotional intelligence and self-awareness.Normalize the ups and downs of life to reduce the stigma around counseling.Track progress over time.Enhance students' communication skills.Check In Examples:Sentence starters: "I feel," "I wish," "I need," "I hope," "I want.""High Low Buffalo" - Positive, negative, and something funny from the day."Hit Rewind, Let's Rerecord, Press Play" - Reflecting on what to replay, rerecord, or look forward to."Happy News, Sad News, No News Today" - Sharing something positive, something negative, or having nothing to share.Using thermometers or Likert scales to rate their day."Feelings Check-In" with pictures of emotions for younger students.Creative options like "Brick and a Balloon," "Glow and Grow," "High and Low," and more."Rose, Thorn, and Bud" - Sharing a good thing, a not-so-great thing, and something to look forward to.Links Mentioned:Grab this FREE Check In SamplerWant the bundle of check ins? Grab them here!Grab the Show Notes: Counselingessentials.org/podcastJoin Perks Counseling Club Membership and get the lessons, small group and individual counseling materials you need. Join now and get your first month free when you sign up for 3 months!Connect with Carol:TpT StoreCounseling Essentials WebsiteInstagramFacebookElementary School Counselor Exchange Facebook GroupCaught In The Middle School Counselors Facebook GroupHigh School Counselor Connection Facebook GroupMentioned in this episode:Perks Membership Join The Perks School Counseling Membership for K-8 Counselors and get...
“Don't skip rest day!” That's not a phrase you often hear, but there's a valid argument that it should be if you want to boost your fitness and performance. The key to enhancing your recovery is not another supplement or a fancy gadget, but simply adequate rest. Quality rest isn't laziness; it's an often-overlooked investment in your wellbeing. In this podcast, you'll learn about data-backed restful activities and why common habits like TV or social media might not offer the restoration you need. Tune in, and invest in your rest! Timestamps: 0:00 - Please leave a review of the show wherever you listen to podcasts and make sure to subscribe! 0:25 - How much you rest isn't all that matters 0:46 - What you do to relax is important! 2:10 - Using a basic Likert scale 3:38 - My new protein cookie: buylegion.com/cookie Mentioned on the Show: Try my new protein cookie! Go to https://buylegion.com/cookie and use coupon code MUSCLE to save 20% or get double reward points!
[00:00:00] Tommy Thomas: This week, we're continuing the conversation that we began last week with Paul Mauer, the president of Montreat College. If you missed that episode, we've talked about what one writer has referred to as the “Miracle at Montreat”. Today Paul is sharing lessons that he's learned about nonprofit board governance over the years. Let's change over a little bit to the board aspect of being a president. What was the biggest adjustment that you had to make between, say, reporting to the CEO as a cabinet member and then as the President reporting to the board? [00:00:40] Paul Maurer: Yeah, it's a great question. I'm a bit of a governance nerd. I really think about and study governance. I did that in my doctoral work. I do it as a college president in nonprofit governance. Your board policy manual really matters. It matters because your board needs clarity. The president needs clarity. What is the role of the board? What is the role of the president? What's the role of the relationship and what's the role of everyone else on campus in relationship to the board? And so, in the world of board governance, there are working boards and there are policy boards. Startups tend to have working boards, like true startups, like really small organizations, more established organizations. If they haven't transitioned to a policy board, they probably ought to consider doing so. Because you don't really want a board involved in the operations of an organization. I'm deeply grateful that my board gave me the lead role in board development, meaning recruitment of new board members, training of board members, and the board policy manual. And we have a great board today, and they really understand that the board should not be involved in operations. That's the CEO's job but one should be sure that they're being fiduciaries, that they're making sure there's a strategic plan that's being carried out, their success along the way, and that they manage or evaluate. They don't manage, they evaluate the presidents. They hire and fire the president, the CEO. I do think that my argument would be that it's more important for a President to be a CEO than a President. The President is, as I think of a bit of an old model for college leadership, it's rooted in what I think is not a very useful model of shared governance. I think the CEO is a better model, but you also need a CEO who's sensitive to campus dynamics and the idea that consensus really matters. And a consensus building CEO I think is the best model, but I think that the CEO also needs to be the CUO - the Chief Urgency Officer, because things are changing so fast. And if the CEO is not leading change with a great sense of urgency, then I think the institution puts itself at some measure of risk. [00:03:21] Tommy Thomas: You've served on other boards, and you've reported to at least two, give me some attributes of a great Board Chair. [00:03:29] Paul Maurer: I think the central role of a Board Chair is to manage the board. It's not principally to be a person of wealth or to be connected to persons of wealth. I don't think that's the right model for a Board Chair of a college. I think the right model is someone who understands nonprofit governance and manages the board meeting to meeting because the board ultimately is the boss of the President - CEO, only during those board meetings. So the board chair needs to constantly instill clarity in the board to encourage them and steer them away from being involved in operations from directing the presidents, and to maintaining the role of being an overseer that the CEO reports to three times a year or however many times a year that board meets. The best chairs I've worked with really understand governance and really do well in managing the board's expectations of what that governance entails. [00:04:41] Tommy Thomas: How does a good Board Chair draw out the silent board member? [00:04:47] Paul Maurer: In our board meetings, we have blocks of time for plenary sessions for the big picture items. And there's always time in there for dialogue and for feedback. And there are times when we build into our board meetings. When I give my board report, I give a little bit of a board update, a little bit of a report, and then I just open the floor to questions. And so there's just this open dialogue that I have with my board during the president's report at the beginning of the day and then the middle of the day during plenary sessions. If I'm informing or bringing an action item to the board as a whole, we are sure to build in time for dialogue, deliberation, questions, understanding, and in between board meetings, I'm sending information on kind of the latest update on what's happening in my world. So, they're getting articles on a regular, semi-regular basis that if they're able to take time to read them helps keep them abreast of the most pressing issues that I'm facing on a regular basis. [00:06:04] Tommy Thomas: So how often do you and your Board Chair, do y'all have regularly scheduled times or is it as needed? How do y'all relate to each other? [00:06:12] Paul Maurer: I'm aware that friendship is a tricky element in these things. I happen to have a very deep and strong friendship with my board chair, which preceded him coming on the board and he became a board member. And now as chair and I've changed my mind on this, Tommy, because there was a time earlier on when I thought that those were mutually exclusive and now, I don't think they're mutually exclusive. I think it can work very healthfully. And now I actually try to cultivate friendships with my board members in a way that I didn't early on in my first presidency, certainly not early on at Montreat. And so I think that dynamic when healthy is a really powerful part of making it work well. Any model can be abused. Any model can go awry. And I've seen that and I've heard about it an awful lot. I've experienced it. But I've also experienced the flip side of that, where a really meaningful friendship can also be the basis of a really healthy CEO-Board Chair relationship. [00:07:34] Tommy Thomas: Can you think back as to, you mentioned early on at Montreat you hadn't gotten there yet. What changed? [00:07:43] Paul Maurer: In the relationship with my board chair? [00:07:46] Tommy Thomas: Yeah, how did you make that transition from thinking it wasn't healthy to realizing that it could be healthy? [00:07:52] Paul Maurer: I guess experiencing it along the way, initially without intending it to be that, and I went, this actually works. And so, when my current chair, when I began discussions with him about, because he had led a major healthcare nonprofit and grown it from a $25 million budget to $125 million budget. He had led a nonprofit. He had worked in that sector for all of his career in healthcare, not in education. And so, I knew that I wanted him to be my next board chair when that time came. And so it was really then that I began to think in this kind of new model that maybe there's a way for and as I look back, I've actually had these like really healthy relationships with my past two board chairs here at Montreat. And gosh, what a better way to do it, and it really is possible. It eventually dawned on me that I could intentionally pursue that. [00:09:01] Tommy Thomas: Do you have a term limit for your board chair? [00:09:04] Paul Maurer: Five years, but it's year to year, up to five years. [00:09:09] Tommy Thomas: And what about your board members? [00:09:10] Paul Maurer: Nine years, the terms are three years renewable, two times for a nine year max with a one year minimum required off before renomination. One of the changes we made here was that every three-year term we do the board does self-evaluations for those that term and peer evaluations for those that come to term. There's an honest, self-reflective, peer reviewed process that goes through a committee on trusteeship every year for those at a term to ask the question, is this going well? Is this a time to continue on or a time to step off? And so it's not a nine, it's not a nine year. Every three years we talk about it. [00:10:08] Tommy Thomas: Is that fairly common in the nonprofit sector from your experience? [00:10:12] Paul Maurer: The board policy manual that we use was the work of Bob Andringa who was the CEO of the Council for Christian Colleges Universities some years ago. And Bob developed the BPM (Board Policy Manual) that we use. And as I understand it, there are 60 or 70 or 80, I think mostly CCCU schools that have adopted some version of Bob's work. And I just think it's so well-crafted and we of course made it ours with Bob's permission. And it's just a really, it's a really well done, thoughtful way to do governance. ++++++++++++++++++++ [00:10:53] Tommy Thomas: A lot of people that I talk with, there's a move toward lowering the mean age of the board and also increasing diversity. What kind of experience have y'all had at Montreat on those issues? [00:11:03] Paul Maurer: We're intentionally trying to increase diversity. We've not found that to be an easy pathway, but we are committed to it. And on age I would just gently push back on the median age lowering. I'm very much of the Aristotelian camp that young people have less wisdom. And part of what you want for board members is wisdom. Wisdom comes with experience, and experience comes with age and the hard knocks of life. And just the journey of life with gray hair and getting beat up occasionally. And I want younger people on the board, but that's more, that's less common. They're actually very hard to get on the board because they're less really qualified candidates in my view, and they're uber busy with career and family. So the young members I have, the 30 somethings I have on my board, I have two of them. They're like up to their eyeballs, four or five kids each, they're CEOs or leaders in their own rights and rising in the ranks. And these people have large portfolios and enormous demands on their time. Then my 70- and 80-year-olds, and even I have a 91-year-old board member who I recruited at the age of 87. And he said to me, he said, Paul, what if I die? And I said, Bill what if I die? We're all going to die. You've got a lot of gas left in your tank. You've got an enormous amount of wisdom. And you may have others who think that you're too old to be a board member. I don't think that at all. And if there comes a day when your health has slipped, your metro capacities have slipped, we'll have that conversation and we'll have it openly and honestly. Honestly, the seventies, eighties, and 90-year-old trustees I have are really easily among my best trustees. They're phenomenal. [00:13:22] Tommy Thomas: Let me get you to respond to this quote. You need a director on the board who will be a pleasant irritant, someone who will force people to think a little differently. That's what a good board does. [00:13:39] Paul Maurer: I think I would probably not gravitate toward the word irritant, and I would say I, I'd probably substitute something a little softer than that, that you want to be objective and you want to be able to deal with the hard issues. And frankly, the CEO ought to be leading the way on that, not a board member. I think it's fine for a board member to raise difficult or uncomfortable matters, and I certainly have board members who do that, and I think that's fine and it's healthy, but I think that can come by from different means, and it can come without it being quote unquote, maybe I'm just hung up on the word irritant. I think you can have really robust, difficult, honest, truthful conversations without it being irritating. [00:14:40] Tommy Thomas: Okay. Talk about your philosophy or your use of the executive committee? [00:14:48] Paul Maurer: I think it's vital and extremely valuable in a healthy board situation, and I'm qualifying a lot of my comments with a healthy board because I've worked for both healthy and unhealthy boards. I happen to be working for a very healthy board in my time here at Montreat. And so the executive committee functionally is a decision that needs to be made quickly between board meetings and the CEO either doesn't have the authority or just wisely wants the board to help own that decision and goes to the executive committee in between board meetings for a fast decision. Early in my time here, I used that executive committee with more frequency than I do now. But I don't have the number of fires now that I had back in 14, 15, 16, 17. And so I still use the executive committee, but it's less frequent and the larger board has fully embraced the executive committee in that way. [00:16:01] Tommy Thomas: How often do you use the executive session? [00:16:04] Paul Maurer: Every board meeting, we have two executive sessions, one with the president and one without the president. Actually in inverse order - the first without the president. And then I'm brought back in for executive session with the president and where I'm told what was discussed in session without the president fully briefed and then engaging in a conversation where it's just me and the board in whatever they want to talk about freely, they don't feel free to talk about necessarily with a cabinet in the room. +++++++++++++++++++++ [00:16:37] Tommy Thomas: We mentioned strategic planning a few minutes ago. Does your board, are they involved in that, or do you and your staff bring that to the board? [00:16:44] Paul Maurer: The latter in our board policy manual, the board's role is to approve a strategic plan recommended by the president and to receive updates and make sure that the CEO is making progress on the strategic plan. And so I give reports on the strategic plan, but the board is not involved in the creation of the strategic plan. [00:17:07] Tommy Thomas: How does the CEO evaluation take place at Montreat? [00:17:11] Paul Maurer: So I submit a set of goals to the board on an annual basis that are metrics tied to the strategic plan, and they're evaluated at the end of the year. And we, in our executive session, have a conversation about my delivery toward those goals. [00:17:32] Tommy Thomas: Is that on an annual basis? [00:17:35] Paul Maurer: It is in our policy manual. It is an annual activity. [00:17:39] Tommy Thomas: How have you and your board addressed board turnover? In terms of maybe involuntary or voluntary? I guess people decide they don't have time. They don't enjoy it. How are y'all doing with that? [00:17:53] Paul Maurer: We've grown our board over the years, but we've certainly had people who, I had two resignations in this last run up to my board meeting last week. And they were just personal situations that they felt like they just needed to focus on some personal matters that they didn't feel like they could do justice to their service on the board. And we regretfully accepted their resignations. But in those cases, it had nothing to do with the college or the board or it was purely personal. That's mostly what we've experienced over these years. Most of our trustees go to term and we have them term out after nine years. We celebrate them and thank them. We've grown our board from our bylaws. Say that we can have between 12 and 36. It's a very wide range. When I first got here, we were in that 12 to 15 range for a number of years. Maybe ironically, maybe not. Ironically, during covid we had just a tremendous breakthrough in people saying yes to joining the board. I do a lot of board cultivation with board members who are bringing prospective trustee names to the table. We have a very robust list of prospective trustees at all times. Somewhere between 10 and 15 on our prospect list. And some go fast, some go slow, some never materialize. We're about 20 board members today. Our target is to get in somewhere between 25 and 28. [00:19:31] Tommy Thomas: What kind of strategy do you use to keep that list at 15 to 20? [00:19:36] Paul Maurer: Probably closer to 10 to 15. Yeah. And that's really the work of the committee on trusteeship to surface names. We also have, as we recruit new board members in, they bring fresh names to that list. So we're constantly messaging like that. That's a document. That's a living, breathing document. And some people stay on the A-list, some move to the B, some move to, we ask and they said no. We've got six or eight tabs on that spreadsheet, and it's constantly a living, breathing kind of document. [00:20:15] Tommy Thomas: This might be a mundane question, but I hear it asked a lot. Do you have a board meeting evaluation fairly regularly, or how do y'all approach that? [00:20:25] Paul Maurer: Every board meeting, as soon as the board meeting is over, they get a email in their inbox asking them to fill out an evaluation of the board meeting. They've just finished. We give just a small number of days to do that so it's fresh in their minds. And then the Committee on Trusteeship takes that feedback which is both on a Likert scale as well as open comments available to, for them to make. And then that is discussed at the next committee on trusteeship meeting. And we're always trying to get better and refine and bring some changes to how the board meetings are conducted. And those surveys have served a very valuable role in that way. [00:21:09] Tommy Thomas: What did you learn through Covid that you'll take forward? That maybe you didn't do before Covid in terms of board relationships and board governance? [00:21:19] Paul Maurer: One of the observations I made during Covid was man, we're in this together. And my board chair is a public health expert, as I mentioned before and when Covid hit I remember calling him in early April and I said I don't have a clue how we're going to reopen. Can you help us? And he said I'd love to help you. And I said I've developed a friendship with the other four-year residential college presidents here in Western North Carolina. There are four privates and then a couple of major publics. Would you be willing to help them too? And he said, absolutely I would. That group of six presidents plus my board chair met on a zoom call at noon every Wednesday for a year and a half to figure out how to open residential both years of covid. And that was a powerful experience of teamwork and collaboration and friendship and setting aside the inevitable competition that exists between these institutions and saying, there's a bigger picture here, and I think the benefit of that was very great for all of us. The second thing I'd point to is that the level of fear that I observed during covid was something I'd never seen before, how widespread, how deep it was. And so the word courage became a central concept that whatever we did, we needed to really lean into the courage of critical thinking and what's best for the institution, what's best for the students and the staff here. And there was no one size fits all in Covid in vastly different circumstances in different parts of the country. Vastly different realities of the impact of covid with different age groups and so we had to make decisions for 18- to 22-year-olds in our campus and our employees. That's how we had to make decisions. And you can't possibly have state mandates or county mandates or federal recommendations fit every circumstance. And we made decisions that we believed to be in the best interest of our community. And we took some criticism for that. But overall, I would say that those who chose that kind of a pathway were probably more rewarded than not. +++++++++++++++++++= [00:24:20] Tommy Thomas: I'll ask you two final questions and we'll try to land this thing. Go to the board and the CEO's succession plan. What have y'all done there to ensure some sort of untimely succession? [00:24:35] Paul Maurer: So we're actually just starting that conversation like literally last Friday at the board meeting with kind of keyman questions. And we haven't done a lot there on the longer question of succession. I've started thinking about that. I'd like to stay longer. I don't really have an interest in retirement. Not at this point anyway. And today I'd love to go another decade or so. We'll see what happens. But I'm increasingly of the mind that the best succession plan is to bring one or more people onto your team who may have the potential and groom them. Talk openly about succession and see what happens with the possibility that the CEO can actually play a central role in the recommendation of his or her successor. The way the church does this, and the way colleges and universities do this, in my experience the pastor and the president really play very little role at all. Either limited or none. And the more I've been thinking about this and talking to peers about this, the less that makes sense to me. And again, in a healthy situation, the board I think could and should rightly lean on and engage at a very deep level, the CEO of the college to say, what do you think? Who do you think we should hire? What are the core competencies? Can we get that person on board? And so, what I'd like to do in the years ahead is get two or three, maybe even four people on my cabinet who have the potential capacity for becoming a college president and see if we can't raise one of them up into the role as my successor. Whether that works or not, I can't predict that, but that to me seems like a wise model if you can do it healthfully. [00:26:43] Tommy Thomas: What are you going to say if you get a call next week from either a friend or maybe someone you don't know that says Paul, I've been asked to serve on a nonprofit board. What kind of council are you giving somebody who's considering a nonprofit board service? [00:27:00] Paul Maurer: It ought to be done with a significant measure of time, talent, and treasure. It ought to be a major commitment of yours if you're serving on lots of nonprofit boards. Unless you're willing to put this new one at a higher level of commitment than the others, maybe you shouldn't do it. I think that the best board members of nonprofits are vested. They've got skin in the game. They're giving of their time, their talent, and significantly of their treasure. The treasure's the hardest one, I think. We ask all of our trustees to commit to Montreat being a top three philanthropic priority prior to trusteeship. And that's a stumbling block for some people. But I think in the end, it also fosters the creation of a board that has skin in the game and that really is serious about the future of the institution. It's not a casual kind of volunteering. It's a serious kind of volunteering. [00:28:13] Tommy Thomas: It has been great. Paul, this has been so much fun. Thank you for carving out an hour and a half of your time for me. I appreciate it. [00:28:20] Paul Maurer: Tommy, I've enjoyed it very much. You ask a lot of very good questions and I'm certain that your podcasts are of great value to those in leadership and those thinking about leadership. So, thank you. ++++++++++++++++++++++++ [00:28:32] Tommy Thomas (2): Next week, we're going to conclude the conversation that we started with Caryn Ryan and Episode 84. In that conversation, Caryn shared her leadership journey from BP/AMOCO to CFO for World Vision International to her current role as Founder and Managing Member of Missionwell. In next week's episode, Caryn will be sharing lessons on nonprofit board governance that she's learned over the years. [00:29:04] Caryn Ryan: There's a lot of financial literacy questions there. So how can you ask tough questions if you can't read the financial statements or financial reports and understand them? And sometimes there's issues with what's delivered to boards too, in terms of information, but sometimes it's just a basic lack of understanding. I think too, there's also a fundamental issue that sometimes with boards, they don't get enough board development or board training and they really just don't understand their key role when it comes to accountability. And so, they don't understand that it's their job to ask the tough questions. ++++++++++++++++ Links and Resources JobfitMatters Website Next Gen Nonprofit Leadership with Tommy Thomas Montreat College Website The Miracle at Montreat Montreat College Facebook Montreat College Instagram Connect Tommy Thomas - tthomas@jobfitmatters.com Tommy's LinkedIn Profile Paul Maurer's LinkedIn Profile
Determinants of gait dystonia severity in cerebral palsyBhooma R Aravamuthan, Toni S Pearson, Keisuke Ueda, Hanyang Miao, Gazelle Zerafati-Jahromi, Laura Gilbert, Cynthia Comella, Joel S PerlmutterAffiliations expandPMID: 36701240DOI: 10.1111/dmcn.15524AbstractAim: To determine the movement features governing expert assessment of gait dystonia severity in individuals with cerebral palsy (CP).Method: In this prospective cohort study, three movement disorder neurologists graded lower extremity dystonia severity in gait videos of individuals with CP using a 10-point Likert-like scale. Using conventional content analysis, we determined the features experts cited when grading dystonia severity. Then, using open-source pose estimation techniques, we determined gait variable analogs of these expert-cited features correlating with their assessments of dystonia severity.Results: Experts assessed videos from 116 participants (46 with dystonia aged 15 years [SD 3] and 70 without dystonia aged 15 years [SD 2], both groups ranging 10-20 years old and 50% male). Variable limb adduction was most commonly cited by experts when identifying dystonia, comprising 60% of expert statements. Effect on gait (regularity, stability, trajectory, speed) and dystonia amplitude were common features experts used to determine dystonia severity, comprising 19% and 13% of statements respectively. Gait variables assessing adduction variability and amplitude (inter-ankle distance variance and foot adduction amplitude) were significantly correlated with expert assessment of dystonia severity (multiple linear regression, p < 0.001).Interpretation: Adduction variability and amplitude are quantifiable gait features that correlate with expert-determined gait dystonia severity in individuals with CP. Consideration of these features could help optimize and standardize the clinical assessment of gait dystonia severity in individuals with CP.
We are joined by two guests today, Maria, a Ph.D. student in the CORE Robotics Lab at Georgia Tech, and Matthew Gombolay, the Director of the CORE Robotics Lab. They both discuss practices for measuring a respondent's perception in a survey.
Dr. Shannon Westin, Dr. Ezra Bernstein, and Dr. Nadir Arber discuss increasing cancer prevention and early detection with a one-stop-shop comprehensive cancer screening center. TRANSCRIPT The guest on this podcast episode has no disclosures to declare. Dr. Shannon Westin: Hello, everyone, and welcome to another episode of JCO After Hours, our podcast where we get in-depth on manuscripts that have been published in the Journal of Clinical Oncology. I am your host, Shannon Westin, GYN oncologist and social media editor of the JCO. And I am thrilled to be discussing this very interesting paper entitled “Data From a One-Stop-Shop Comprehensive Cancer Screening Center,” focused on asymptomatic screening. And this very important work was published by these two authors who are joining me today. We have Dr. Nadir Arber, professor of Medicine and Gastroenterology, head of the Integrated Cancer Prevention Center, head of the Cancer Prevention section of the European Society of Medical Oncology at Tel Aviv Sourasky Medical Center in Tel Aviv, Israel. And we're also joined by Dr. Ezra Bernstein, Fulbright fellow and researcher at the Integrated Cancer Prevention Center that we're going to be discussing today at Tel Aviv Sourasky Medical Center in Tel Aviv, Israel. And he's also, impressively, a resident in internal medicine at the New York University, so he's a gentleman of many talents and quite busy. Welcome. Dr. Ezra Bernstein: It's great to be here. I had a slow clinic day. Dr. Shannon Westin: Oh, I was going to say I'm impressed you, as a resident, could find the time. So we're really excited to have you, and you certainly have a bright future ahead of you as an oncology practitioner. So let's get started. I think certainly most of our listeners are quite familiar with the benefits of cancer screening. But I think it would be great if you all could level set and review the benefits at the patient level as well as at the healthcare system level. Dr. Ezra Bernstein: Sure. So I think, kind of breaking it down, on the patient level, the scientific community has made incredible progress over the last several decades in not only the understanding of the biology of cancer, but also that's translated into the treatment of cancers, from genomic sequencing to targeted therapy, which you now have specific small molecule inhibitors for specific mutations in each cancer. But despite all these incredible improvements and advances and the ability to treat many cancers, the greatest prognostic factor is still often the stage of diagnosis because the chances of survival and the chances of complete cure increase dramatically if the disease is detected in its earlier stages. So earlier detection and diagnosis can greatly reduce mortality, it can increase treatment effectiveness, and ultimately improve the quality of life for the cancer patients. On a healthcare system level, often when you're doing screening, you're discussing cost-effectiveness. And so the thing about the healthcare system, which we didn't really address in our paper–we initially were going to, but we think we're going to do a follow-up paper on this–the cost of cancer care is very high. In Europe, the total cost of cancer care in 2018 was $199 billion. And then I think the last data I saw was the US, in 2015, the total cost of cancer care was $183 billion. So, on a healthcare system level—and those are just the costs of cancer care; there's tons of other costs that go into when patients have cancer: lost wages… So I think that it's crucial not only for the patient but also for the healthcare system to help catch these cancers earlier. Dr. Shannon Westin: Yeah, I completely agree. I think we have such great guidelines on how we should be screening our patients. I think there's a number of different areas where providers can look to understand what they should be doing with the patient in front of them. What do you think are some of the barriers of implementation of this guideline-based cancer screening? Dr. Ezra Bernstein: That's a crucial question. We have the guidelines, especially in the US; we have our grade A recommendations: colon cancer, cervical cancer. We have our grade B recommendations: mammography. And lung cancer. So a big hurdle, especially in the US now with the recommended screenings at this point, is just getting people to do it. You look at the US, and for Pap smears, it's pretty good; 80% of the population is up to date with PAP smears. Mammography, a little bit less, low 70s. And then colon cancer screening, a little bit less. So how do we get these up, and what's the barriers? And that's kind of the idea behind the Integrated Cancer Prevention Center is it's cost, it's time, and it's also awareness. And this kind of gets into a little bit of the theory for what kind of created the Integrated Cancer Prevention Center is the idea that if you do a one-stop-shop approach where patients come in in a single visit and they get screened for all the recommended cancers, so they don't have to do multiple appointments, they don't have to take off work multiple times, and they can get it all done at once, that automatically leads to 100% compliance for those screenings that they do during that day. So that was kind of the theory behind it is you're able to remove a lot of the barriers to the implementation of the guideline-based cancer screening. The other thing is awareness and just making sure that patients are aware. I think it's actually great timing with this paper. They haven't done it so much lately, but the NFL, the National Football League, they were actually running a campaign for the first few games of the season where—you know, you have millions and millions of American viewers—it was called Intercepting Cancer, and they were highlighting the importance of screening and prevention. They gave a link, which is great, but it really just links you to your providers in your area to then go ahead and screen individually. That initiative was important in getting more awareness, but still, there's the cost and the time issues that are still barriers. Dr. Nadir Arber: Let me just emphasize what Ezra brilliantly says. He said that the best therapy of cancer is prevention, so increase early detections. I'm a gastroenterologist, so I'm fairly aware of preventions. I do colonoscopy, I found the polyp, I take it out, I prevent colon cancer. In other cancers, it's not that obvious, especially colon cancer. But then to detect it at an early stage, it means when the patients have no symptoms. When there are symptoms, mostly—not all, but mostly—it's too late. And then, when somebody feels good and well, it is not acceptable, it's not possible to go to all these different facilities and to have the referral. You want to go to screen for colon cancer, you go to your GP, who sends you to a gastroenterologist, you get the colonoscopy, then you come back, go to another GP, then you have to screen for prostate cancer. So you ask him, “Can you send me to a urologist?” They send you to a urologist. And they send him for a PSA and then free PSA and the rest of it. And then, if you do it on one stop, this is the only way that is feasible, for maybe the five major or six major cancer screening, there is no doubt and no questions. But then, with Ezra, we have learned that there is more than that. It's not only screening, but it's also case finding. Patients come to me for different reasons. They came to the center to be screened for skin cancer, breast cancer, colon cancer. So let's just tell him, “Can you open your mouth?” And then five minutes, not more than that, an oral surgeon checks your mouth. It is not cost effectiveness to call the patients to come to check, but he's coming for another reason. This is what you call case finding. People are not that aware about the difference between screening and case findings, and I would like to emphasize before I let the floor to you, so I'm not speaking about cost-effectiveness; I'm speaking about cost saving. It saves money. It does not cost; it saves. Actually, I was in Singapore when maybe I was invited to Singapore by the Singapore government. They heard about my concept and invited me to speak, and like we have just said, and that it's going to be published in the JCO, in the journal. And they told me—and they were very impressed—“We are going to do it because it's going to save money for us.” And I said, “But how you are going to implement it?” They said, “We are a democracy; we cannot force the people to do it, but we are going to offer them free of charge. Free, because we understand that if they do, the government are going to save money.” But if they offer it free of service, free of charge, and then somebody does not want to do it and he has cancer, it is his problem. It's like somebody has a car, buying a new car. He's not obliged to make insurance for the car, but if he doesn't do it and something happened to the car, he cannot come to the government or the insurance company or his spouse and complain. It's his problem. I think when you understand it, and we understand that it's cost-saving and it's a win-win situation, then we can make a big step ahead, and this is the way to go. Dr. Ezra Bernstein: I think Professor Arber brings up a very interesting point that I'm just going to make real quickly when he talks about kind of the case findings and the screening for something like oral cancer and skin cancer, which aren't currently grade A or grade B recommendations by the USPSTF, but in the context of a one-stop shop, it kind of changes the game a little bit. It's one thing to go to your dermatologist once a year in terms of screening for skin cancer, but if you're already at a one-stop-shop center screening for skin cancer, screening for thyroid cancer, and things which aren't currently—the thyroid is a little bit different, but screening for certain cancers aren't currently recommended for because they're not cost-effective, it changes the game a little bit in the concept of the unique setup that Professor Arber started. Dr. Nadir Arber: And at the same times, we also measure blood pressure, we measure sugar, hemoglobin, A1Cs, and other things that are known, but not only for cancer, but the patient is scanned when he's there. And also, if we found something—and we have the data, between 1% to 2%, we do find cancers—then we can help the patient to solve it on the spot because he's in the hospital, so we can arrange whatever he needs. So the patients like it. They appreciate. They know that they're in safe hands. And if you like, “I need to do ultrasound. I have some very suspicious legion,” I can do it on the spot. Vaginal ultrasound, Pap smear, now we do it for HPV, DNA. So everything is on the spot, and we give solutions if something happens. People need—they found something or anything, so we can do everything. For this is the way of modern medicine. Ezra and myself have been working for many years. But eventually, it's going to catch up because this is the right things to do. The modern medicine, the way we see it in the future, is turning from sick care to health care. This is the way to go. This is one of these when we are advancing the technology and everyone is health conscious. Dr. Shannon Westin: I really appreciate you kind of giving that laundry list of all the things you're doing because I do think you're exactly right, that people are doing a better job around the kind of the most common screening tests. I'm a gynecologist by trade, so Pap smears are part of my daily activity. But I think mammograms, Pap smears, colon—I think that's more common. But this is a great way to kind of get screening for all of those other things. Like, I know a friend who just got diagnosed with an oral cancer, and it's like nobody's screening for those things. And you're right, Ezra, where they say that it's not cost-effective to have a visit just for that, but if you can encompass it all in one visit, it just makes so much sense. So I think that that takes us into the current study. Can you kind of just take us through briefly the design and your outcomes that you looked at? Dr. Ezra Bernstein: Yeah. So it was a retrospective analysis of over 17,000 patients that have visited the ICPC, the Integrated Cancer Prevention Center, between 2016 and 2019. And as Professor Arber was saying, patients come in, they're mailed a questionnaire beforehand, they ideally fill it out before, but they'll then meet with an internist or an oncologist. And they'll go over the questionnaire, they'll go over family history, risk factors, and then it will be really a tailored screening exam for that. They'll get the classic recommended screenings based on if they're due for a mammogram, if they're due for a Pap smear, and then from there– Do you want me to go through in detail or just kind of overall? Dr. Shannon Westin: I think overall is fine, yeah. Dr. Ezra Bernstein: Overall? Okay. So they do their screening test, they'll get some blood work done, and then if there's anything abnormal, then as Professor Arber was saying, the great thing is you get worked up right there. So they'll get TFTs, thyroid function tests. If those are abnormal, then they'll indicate a thyroid ultrasound needs to be done. They do all their screenings, all their testings. Most of it is done there. Professor Arber, do they do the biopsies there too if there needs to be a biopsy? Dr. Nadir Arber: Yes, obviously. But what is Ezra also referring, we are trying to do precision medicine. When somebody is coming to us, like you said, the woman, so we are measuring her Tyrer-Cuzick score and to see, if it is high, then we send her to do an MRI, if it is above 20%. Dr. Shannon Westin: So what were the outcomes that you measured in the study? Dr. Ezra Bernstein: So, after they did all the screening, our main outcomes were the number of malignant lesions detected and then also what stage were they detected. Basically, if the cancer was found within a year of a visit to the ICPC and it was found as a direct result of the screening done that day or if there were recommendations for follow-up that then led to a successful detection, we counted that as a malignant cancer that was successfully detected through the ICPC. Dr. Shannon Westin: What about the results? Were they as expected? How did your detection rates compare to kind of, say, the general population? Dr. Ezra Bernstein: A successful cancer screening program is always going to have a shift in detection of cancer to earlier stages, which is exactly what we saw, which was great. We then compared it to the Israeli general public over a similar time period, and the percentage of cancers found at a metastatic stage at the ICPC, that was lower for all cancers. Just going through: Colon was 20% versus 46.2% in the general populations. There was no metastatic, cervical, or uterine cancer found. Prostate was 5.6 versus 10.5, lungs 6.7 versus 11.4, as well as renal, which isn't recommended we screen for, but that was 7.7 versus 10.3. Dr. Shannon Westin: That's so incredible. I guess the other thing I thought was really nice that you did is looking at patient satisfaction and making sure—we're very much focused on patient experience and satisfaction right now in medicine. So what were your findings there? Were the patients satisfied with the process? Dr. Ezra Bernstein: Yeah, so this is something that we started doing towards the end of the study period. So we really had respondents from 2019. There was about 1300 patients who responded on a Likert scale, 1 to 10. The average response was 8.35. So it's really good response. I try to go back there whenever I can, and it's always bustling. People are coming back. Professor Arber can speak—he's interacting with the patients every day. But talking with just friends who have heard of it, every time I bring it up, “Oh, that's amazing. We love that program.” Dr. Shannon Westin: Were there any limitations or weaknesses to the mechanism? Dr. Ezra Bernstein: The main limitations and weaknesses were that it was a one-arm study; it was not a randomized controlled trial, whereas randomization would control for generalizability as well as other confounders. And as with all cancer screening, there's the issues of the lead time bias, the concept that cancer is detected earlier through screening, but the length of time a person survives with the cancer does not. So survival time is falsely lengthened. There's also time bias, which is the idea that more indolent and less aggressive diseases with longer survival times are more likely to be detected through screening, artificially inflating survival time. So that's always present in cancer screening. In addition, the most reliable measure for determining the efficacy of a screening program is cancer-related mortality rates, and that's from the time of randomization as opposed to the time of diagnosis. And given the study design, that's just something that we weren't able to do. And any successful screening program such as ours is going to have the natural shift in the incidence of cancer to an earlier stage. But some of that can be attributed to overdiagnosis, the diagnosis of indolent diseases that are never going to actually cause harm, which has been heavily studied in the case of breast cancer with mammography. But it's not something that can be really proven; it's just more—overdiagnosis is more of a theory, and it can be hypothesized. So you're going to have overdiagnosis, you're going to have lead time bias, you're going to have a length time bias with any screening program, but in particular with ours, I think it's mainly the randomized control trial aspect, which is the gold standard. Dr. Shannon Westin: Yeah, that makes sense. Okay, well, and then the final—let's give a call to action here. So how do we implement this more broadly? What's kind of necessary to get something like this up and running? Infrastructure, personnel, yeah? Dr. Ezra Bernstein: So to implement this intervention more broadly, you really need the want from the general public and whoever's going to help implement this. And I think we're starting to see that with, as I mentioned, the recent program that the NFL is running about intercepting cancer. And then, to actually implement it, I'm going to leave some of it to Professor Arber, who did an amazing job setting up this program. But I think it makes the most sense setting it up within a hospital setting because you'd need certain imaging modalities within a clinic. And then, depending on which specific cancers you're screening for, you're going to need specialists able to screen for those cancers. Primary care, they can screen for skin cancer, but really, it should be someone with more training, dermatologists, to do that kind of screening. But Professor, I don't know if you want to jump in and talk about what you think it would take to set it up. I know you did an amazing job setting it up. Can't imagine the amount of work and coordinating. Because at the clinic in Tel Aviv, you have many different specialists coordinating every day. It has got to be quite difficult. So I don't know if you have anything to add, professor. Dr. Nadir Arber: I think everything in medicine, in order to be successful, has to be simple. We are physicians. We are simple people. And this is a way that you are able to do this screening. Because first, from economy, it's cost saving. It doesn't cost money for the government, for the health providers, but also for the patients. If somebody feels well, there is no way that he will go all this saga of going to the GP and have this referral to five, six specialists. And when we understand also the issue of cost and case finding on top of the screening, then we understand that this is the only way that this screening program—when people are feeling healthy, no symptoms, the only way that once a year, they can afford it. If I break my leg or have a rectal bleeding or have a chest pain, I go to the special physician because I have symptoms. But if I have no symptoms, then only once a year, I would like to go to a special place which has all the expertise which the GP cannot provide and then to implement it. This is the only way, and this is simple. And we are happy that you took the lead, and with this initial project, that should be multiplied everywhere. This is the only way. Now we understand that the best therapy of cancer is prevention or at least early detections. And also, Ezra maybe mentioned that we also teach for lifestyle modifications. If needed, we are doing genetic testing; that is going to be very important. I don't know if Ezra mentioned that we are checking for this polymorphism in the APC genes that we have shown in the [inaudible]. Carriers of this APC can have double the risk of having cancer. Dr. Shannon Westin: Well, this has been such a fascinating discussion, and I'm just so glad that you both had the time to spend with us today to review this. I think this is an incredible intervention, and I really do hope that we can mimic this across the States and across the world. So, again, listeners, this has been JCO After Hours. We're discussing “Data from a One-Stop-Shop Comprehensive Cancer Screening Center,” focused on asymptomatic screening. We're so glad that you joined us today. Please do check out our other podcasts on the JCO website. Be well. The purpose of this podcast is to educate and to inform. This is not a substitute for professional medical care and is not intended for use in the diagnosis or treatment of individual conditions. Guests on this podcast express their own opinions, experience, and conclusions. Guest statements on the podcast do not express the opinions of ASCO. The mention of any product, service, organization, activity, or therapy should not be construed as an ASCO endorsement.
Manifest $100,000 Quickly w/ Tapping for Money
For older adults living with dementia, cognitive impairment can lead to susceptibility to fraudulent activities. In this episode we'll discuss with Dr. Duke Han from the Keck School of Medicine at USC what's known about the intersection of aging, cognition, and susceptibility to scams.The transcript for this episode can be found here.Duke Han PhD Faculty Profile: https://profiles.sc-ctsi.org/duke.han Additional Information:The susceptibility to scams scale developed by James, Boyle, & Bennett (2014)* is a 5-item self-report measure in which participants rated their agreement using a 7-point Likert scale (strongly agree to strongly disagree) for the following statements:I answer the phone whenever it rings, even if I do not know who is calling.I have difficulty ending a phone call, even if the caller is a telemarketer, someone I do not know, or someone I did not wish to call me.If something sounds too good to be true, it usually is.Persons over the age of 65 are often targeted by con-artists.If a telemarketer calls me, I usually listen to what they have to say.Resources for older adults (and non-older adults) to report fraud: U.S. Senate Special Committee on Aging: Fraud Hotline | Senate Committee On AgingHotline: 1-855-303-9470 (open weekdays from 9 a.m. to 5 p.m. Eastern Time)Internet Crime Compliant Center (IC3): https://Ic3.gov/Federal Trade Commission: Reportfraud.ftc.gov/*James BD, Boyle PA, Bennett DA. Correlates of susceptibility to scams in older adults without dementia. J Elder Abuse Negl. 2014;26(2):107-122. doi:10.1080/08946566.2013.821809CAPRA Website: http://capra.med.umich.edu/ You can subscribe to Minding Memory on Apple Podcasts, Spotify, Google Podcasts or wherever you listen to podcasts. Hosted on Acast. See acast.com/privacy for more information.
Interviewer I'm Ellen Bernstein-Ellis, Program Specialist and Clinical Supervisor for the Aphasia Treatment Program at Cal State East Bay and a member of the Aphasia Access Podcast Working Group. AA's strives to provide members with information, inspiration, and ideas that support their aphasia care through a variety of educational materials and resources. Today, I have the honor of speaking with Dr. Jamie Lee who was selected as a 2022 Tavistock Distinguished Scholar. We'll discuss her research interests and do a deeper dive into her work involving the study of texting behaviors of individuals with aphasia and her efforts to develop an outcome measure that looks at success at the transactional level of message exchange. As we frame our podcast episodes in terms of the Gap Areas identified in the 2017 Aphasia Access State of Aphasia Report by Nina Simmons-Mackie, today's episode best addresses Gap areas: Insufficient attention to life participation across the continuum of care; Insufficient training and protocols or guidelines to aid implementation of participation-oriented intervention across the continuum of care; Insufficient or absent communication access for people with aphasia or other communication barriers For more information about the Gap areas, you can listen to episode #62 with Dr. Liz Hoover or go to the Aphasia Access website. Guest bio Jaime Lee is an Associate Professor in the department of Communication Sciences and Disorders at James Madison University. Jaime's clinical experience goes back nearly 20 years when she worked as an inpatient rehab SLP at the Rehabilitation Institute of Chicago (now Shirley Ryan Ability Lab). She later worked for several years as a Research SLP in Leora Cherney's Center for Aphasia Research and Treatment. Jaime earned her PhD at the University of Oregon, where she studied with McKay Sohlberg. Her research interests have included evaluating computer-delivered treatments to improve language skills in aphasia, including script training and ORLA, examining facilitation of aphasia groups, and most recently, exploring text messaging to improve participation, social connection and quality of life in IWA. Listener Take-aways In today's episode you will: Learn about why texting might be a beneficial communication mode for IwA Explore the reasons it's important to consider the communication partner in the texting dyad Find out more about measures examining texting behaviors, like the Texting Transactional Success (TTS) tool. Consider how Conversational Analysis may be helpful in understanding texting interactions Edited show notes Ellen Bernstein-Ellis Jamie, welcome to the podcast today. I'm so excited that we finally get to talk to you. And I want to offer a shout out because you mentioned two mentors and colleagues who I just value so much, McKay Solberg and Leora Cherney, and I'm so excited that you've also had them as mentors. Jaime Lee 02:44 Thanks, Ellen. It's really great to talk with you today. And speaking of shout outs, I feel like I have to give you a shout out because I was so excited to meet you earlier this summer at IARC. We met at a breakfast. And it was exciting because I got to tell you that I assigned to my students your efficacy of aphasia group paper, so it was really fun to finally meet you in person. Ellen Bernstein-Ellis 03:11 Thank you, that is the paper that Roberta Elman was first author on. I was really proud to be part of that. I was excited to get to come over and congratulate you at the breakfast on your Tavistock award. I think it's very, very deserving. And I'm excited today that we can explore your work and get to know each other better. And I'm just going to start with this question about the Tavistock. Can you share with our listeners what you think the benefits of the Tavistock Distinguished Scholar Award will be to your work? Jaime Lee 03:43 Sure, I think first off being selected as a Tavistock Distinguished Scholar has been really validating of my work in terms of research and scholarship. It's made me feel like I'm on the right track. And at least maybe I'm asking the right kinds of questions. And it's also really meaningful to receive an award that recognizes my teaching and impact on students. And I was thinking about this and a conversation that I had with my PhD mentor McKay Solberg. And it was early into my PhD when we were talking about the impact of teaching and how important it was, where she had said that when we work as a clinician, we're working directly with clients and patients were hopefully able to have a really positive meaningful impact. But when we teach, and we train the next generation of clinicians, you know, we have this even greater impact on all of the people that our students will eventually work with throughout their career. And so that's just huge. Ellen Bernstein-Ellis 04:51 It really is huge. And I have to say I went to grad school with McKay and that sounds like something she would say, absolutely, her value of teaching. I just want to do a quick shout out to Aphasia Access, because I think they also recognize and value the importance of teaching. They have shown that commitment by their LPAA curricular modules that they developed and make accessible to Aphasia Access members, so people can bring content right into their coursework, which is helpful because it takes so much time to prepare these materials. So, if you haven't heard of these curricular modules yet, please go to the website and check them out. So yes, I'm so glad that you feel your work is validated. It's really important to validate our young researchers. I think there's an opportunity to expand who you meet during this year. Is that true? Jaime Lee 05:40 That is already true. This honor has already led to growing connections with other aphasia scholars and getting more involved with Aphasia Access. I'm excited to share that I'll be chairing next year's 2023 Aphasia Access Leadership Summit together with colleagues Esther Kim and Gretchen Szabo. We're really enthusiastic about putting together a meaningful and inspiring program. I am just really grateful for the opportunity to have a leadership role in the conference. Ellen Bernstein-Ellis 06:17 Wow, that's a fantastic team. And I, again, will encourage our listeners, if you've never been to a Aphasia Access Leadership Summit, it is worth going to and everybody is welcomed. We've had several podcast guests who have said that it has been a game changer for them-- their first attendance at the Leadership Summit. So, we'll be hearing more about that. Well, I want to start our interview today by laying some foundation for your work with texting and developing some outcome measures for treatment that captures transactional exchange in individuals with aphasia. And let me just ask what piqued your interest in this area? Jaime Lee 06:57 Yeah, thanks. Well, before I got interested specifically, in texting, I had this amazing opportunity to work as a research SLP with Leora Cherney and her Center for Aphasia Research and Treatment. And we all know Leora well for the contributions she's made to our field. At that time, she had developed ORLA, oral reading for language and aphasia, and a computerized version, and also a computerized version of aphasia scripts for script training. And these were treatments that not only improve language abilities in people with aphasia, but I really had this front row seat to seeing how her interventions really made a difference in the lives of people with aphasia, and help them reengage in the activities that they wanted to pursue-- reading for pleasure and being able to converse about topics that they want to do with their script training. So at the same time, I was gaining these really valuable research skills and understanding more about how to evaluate treatment. I was also able to start learning how to facilitate aphasia groups because Leora has this amazing aphasia community that she developed at what was then RIC. I'm just really grateful for the opportunity I had to have Leora as a mentor, and now as a collaborator. And her work really helped orient me to research questions that address the needs of people with aphasia, and to this importance of building aphasia community. Ellen Bernstein-Ellis 08:37 Wow, that sounds like a really amazing opportunity. And I think it's wonderful that you've got to have Leora as a mentor and to develop those interests. Then look at where you're taking it now. So that's really exciting to talk about with you today. Jaime Lee 08:54 As for the texting interest that really started after I earned my PhD and was back at the Rehab Institute, now Shirley Ryan Ability Lab, Leora was awarded a NIDILRR field initiated grant and I served as a co-investigator on this grant. It was a randomized, controlled trial, evaluating ORLA, combined with sentence level writing. The two arms of the trial were looking at ORLA plus writing using a handwriting modality, versus ORLA combined with electronic writing or we kind of thought about this as texting. So we call that arm T-write. And ORLA was originally designed to improve reading comprehension, but we know from some of Leora's work that there were also these nice cross-modal language improvements, including improvements in written expression. This was a study where we really were comparing two different arms, two different writing modalities, with some secondary interest in seeing if the participants who were randomized to practice electronic writing, would those improvements potentially carry over into actual texting, and perhaps even changes in social connectedness? Ellen Bernstein-Ellis 10:15 Those are great questions to look at. Interest in exploring texting's role in communication has just been growing and growing since you initiated this very early study. Jamie, would you like to explain how you actually gathered data on participants texting behaviors? How did that work? Jaime Lee 10:32 Yes. So we were very fortunate that the participants in this trial, in the T-write study, consented to have us extract and take a look at their real texting data from their mobile phones prior to starting the treatment. So, for those who consented, and everyone, I think we had 60 participants in the trial, and every single participant was open to letting us look at their texts and record them. We recorded a week's worth of text messages between the participant and their contacts at baseline, and then again at a follow up point after the treatment that they were assigned to. And that was so that maybe we could look for some potential changes related to participating in the treatment. So maybe we would see if they were texting more, or if they had more contacts, or maybe they might even be using some of the same sentences that were trained in the ORLA treatment. We haven't quite looked at that, the trial just finished so we haven't looked at those pre/ post data. But when my colleagues at Shirley Ryan and I started collecting these texting data, we realized there were some really interesting things to be learned from these texts. And there have been a couple of studies, we know Pagie Beeson's work, she did a T-CART study on texting, right? And later with her colleague, Mira Fein. So we had some texting studies, but nothing that really reported on how people with aphasia were texting in their everyday lives. Ellen Bernstein-Ellis 12:08 Well, Jamie, do you want to share what you learned about how individuals with aphasia texts are different from individuals without aphasia? Jaime Lee 12:15 We saw that first, people with aphasia do text, there were messages to be recorded. I think only a couple of participants in the trial didn't have any text messages. But we took a look at the first 20 people to enroll in the trial. We actually have a paper out-- my collaborator, Laura Kinsey is the first author. This is a descriptive paper where we describe the sample, 20 people, both fluent aphasia and nonfluent aphasia, a range of ages from mid 30s up to 72. And one striking finding, but maybe not too surprising for listeners, is that the participants with aphasia in our sample texted much less frequently than neurologically healthy adults, where we compared our findings to Pew Research data on texting. And our sample, if we took an average of our 20 participants and look at their texts sent and received over a week, over the seven days, they exchanged an average of about 40 texts over the week. Adults without aphasia, send and receive 41.5 texts a day. Ellen Bernstein-Ellis 13:36 Wow, that's quite a difference. Right? Jaime Lee 13:39 Yes, even knowing that younger people tend to text more frequently than older adults. Even if we look at our youngest participants in that sample who were in their mid 30s, they were sending and receiving text much less frequently than the age matched Pew data. Ellen Bernstein-Ellis 13:56 Okay, now, I want to let our listeners know that we're going to have the citation for the Kinsey et al. article that you just mentioned in our show notes. How can we situate addressing texting as a clinical goal within the life participation approach to aphasia? Jaime Lee 14:14 I love this question. And it was kind of surprising from the descriptive paper, that texting activity, so how many texts participants were sending and receiving, was not correlated with overall severity of aphasia or severity of writing impairment? Ellen Bernstein-Ellis I'm surprised by that. Were you? Jaime Lee Yes, we thought that there would be a relationship. But in other words, having severe aphasia was not associated with texting less. And we recognize, it's dangerous to draw too many conclusions from a such a small sample. But a major takeaway, at least an aha moment for us, was that we can't make assumptions about texting behaviors based on participants' language impairments, also based on their age, their gender. You know, in fact, our oldest participant in the sample, who was 72, was actually most active texter. He sent and received 170 texts over the week period. Ellen Bernstein-Ellis 15:22 Wow, that does blow assumptions out of the water there, Jamie. So that's a really good reminder that this to be individualized with that person at the center? Because you don't know. Jaime Lee 15:32 You don't know. Yeah. And I think it comes down to getting to know our clients and our patients, finding out if texting is important to them. And if it's something they'd like to be doing more of, or doing more effectively, and going from there. Ellen Bernstein-Ellis Wow, that makes a lot of sense. Jaime Lee Yeah, of course, some people didn't text, before their stroke and don't want to text. But given how popular texting has become as a form of communication, I think there are many, many people with aphasia, who would be interested in pursuing texting as a rehab goal. Ellen Bernstein-Ellis 16:08 Right? You really have to ask, right? Jaime Lee 16:11 Yes, actually, there's a story that comes to mind about a participant who was in the T-write study, who had stopped using her phone after her stroke. Her family had turned off service; she wasn't going to be making calls or texting. Ellen Bernstein-Ellis Well, I've seen that happen too many times. Jaime Lee And when she enrolled in the study, and she was a participant at Shirley Ryan, because we ran participants here at JMU and they ran participants in Chicago. And she was so excited. I heard from my colleagues that she went out and got a new phone so that she could use her phone to participate in the study. And then her follow up data. When we look at her real texts gathered after the study at the last assessment point, her text consists of her reaching out to all of her contacts with this new number, and saying hello, and getting in touch and in some cases, even explaining that she'd had a stroke and has aphasia. Ellen Bernstein-Ellis 17:13 Oh, well, that really reminds me of the value and importance of patient reported outcomes, because that may not be captured by a standardized test, per se, but man, is that impactful. Great story. Thank you for sharing that. So well, you've done a really nice job in your 2021 paper with Cherney that's cited in our show notes of addressing texting's role in popular culture and the role it's taking in terms of a communication mode. Would you explain some of the ways that conversation and texting are similar and ways that they're different? Jaime Lee 17:45 That is a great question, Ellen and a question I have spent a lot of time reading about and thinking about. And there is a great review of research that used conversation analysis (CA) to study online interactions. This is a review paper by Joanne Meredith from 2019. And what the review tells us is that there are many of the same organizing features of face to face conversation that are also present in our online communications. So we see things like turn taking, and we see conversation and texting or apps unfold in a sequence. So what CA refers to as sequential organization. We also see, just like in face to face conversation, there are some communication breakdowns or trouble sources in online communication. And sometimes we see the need for repair to resolve that breakdown. Ellen Bernstein-Ellis 18:45 Yeah, Absolutely. I'm just thinking about auto corrects there for a moment. Jaime Lee 18:51 And they can cause problems too. When the predictive text or the AutoCorrect is not what we meant to say that can cause a problem.Ellen Bernstein-Ellis 18:59 Absolutely. Those are good similarities, I get that. Jaime Lee 19:03 I think another big similarity is just about how conversation is co-constructed. It takes place between a person and a conversation partner and in texting, we have that too. We have a texting partner, or in the case of a group text, we have multiple partners. There's definitely similarities. And another big one is that purpose, I think we use conversation ultimately, and just like we're using texting to build connection, and that's really important Ellen Bernstein-Ellis 19:32 Yeah, I can really see all of those parallels. And there are some differences, I'm going to assume. Jaime Lee 19:39 Okay, yes, there are some definite interesting differences in terms of the social aspects of conversation. We do a lot in person, like demonstrating agreement, or giving a compliment, or an apology, or all of these nonverbal things we do like gesture and facial expression and laughter. Those nonverbal things help convey our stance, or affiliation, or connection. But in texting, we can't see each other. Right? So we have some different tools to show our stance, to show affiliation. What we're seeing is people using emojis and Bitmojis, and GIFs, even punctuation, and things like all capitals. We've all seen the all caps and felt like someone is yelling at us over text, that definitely conveys a specific tone, right? Ellen Bernstein-Ellis 20:34 I was just going to say emojis can be a real tool for people with aphasia, right? If the spelling is a barrier, at least they can convey something through an image. That's a real difference. Jaime Lee 20:45 Absolutely, I think some of the problematic things that can happen and the differences with texting have to do with sequencing and timing. Because people can send multiple texts, they can take multiple turns at once. And so you can respond to multiple texts at once, or that can lead to some confusion, I think we're seeing, but texting can also be asynchronous, so it's not necessarily expected that you would have to respond right away Ellen Bernstein-Ellis 21:16 So maybe giving a person a little more time to collect their thoughts before they feel like they have to respond versus in a person-to-person exchange where the pressure is on? Jaime Lee Absolutely, absolutely. Ellen Bernstein-Ellis Well, why might texting be a beneficial communication mode for individuals with aphasia, Jamie, because you have spelling challenges and all those other things. Jaime Lee 21:37 Yeah, I think it comes back to what you just said, Ellen, about having more time to read a message, having more time to be able to generate a response. I know that texting and other forms of electronic communication like email, can give users with memory or language problems a way to track and reread their messages. And in some cases, people might choose to bank responses that they can use later. We know this from actually some of Bonnie Todis and McKay Sohlberg's work looking at making email more accessible for users with cognitive impairment. So I think there are some really great tools available to people with aphasia to feel successful using texting. Ellen Bernstein-Ellis 22:30 That's great. I think banking messages is a really important strategy that we've used before, too. Jaime Lee 22:37 So there's all these other built-in features, that I'm still learning about that are in some mobile phones, that individuals with aphasia can potentially take advantage of. I think some features might be difficult, but there are things like we've just talked about, like the predictive text or the autocorrect. And then again, all these nonlexical tools, like the emojis and the GIFS and being able to link to a website or attach a photograph. I think this is a real advantage to communicating through text. Ellen Bernstein-Ellis 23:10 It lets you tell more of the story, sometimes. One of my members talks about when his spelling becomes a barrier, he just says the word and then that speech-to-text is really helpful. It's just one more support, I guess. Jaime Lee 23:24 Yes. And we're needing to find out a little bit more about the features that people are already using, and maybe features that people don't know about, but that they would like to use like that speech-to-text. That's a great point. Ellen Bernstein-Ellis 23:37 Well, how did you end up wanting to study texting for more than an amount of use or accuracy? In other words, what led you to studying transaction? Maybe we can start with a definition of transaction for our listeners? Jaime Lee 23:51 Sure. Transaction in the context of communication is the exchange of information. So it involves understanding and expression of meaningful messages and content. And this is a definition that actually comes from Brown and Yule's concepts of transaction and interaction and communication. So Brown, and Yule tell us that transaction again, is this exchange of content, whereas interaction pertains to the more social aspects of communication. Ellen Bernstein-Ellis 24:26 Okay, thank you. I think that's really good place to start. Jaime Lee 24:29 Part of the interest in transaction, first came out of that descriptive paper where we were trying to come up with systems to capture what was going on. So we were counting words that the participants texted and coding whether they were initiated or are they texts that are simple responses. We counted things they were doing, like did they use emojis or other multimedia? But we were missing this idea of how meaningful their text were and kind of what was happening in their texting exchanges. So this kind of combined with another measure we had, it was another measure in T-write really inspired by Pagie Beeson and Mira Fein's paper where they were using some texting scripts in their study. We also love scripting. We wanted to just have a simple measure, a simple brief texting script that we could go back and look at. We had as part of our protocol a three turn script. And I remember we sat around and said, what would be a really common thing to text about? And we decided to make a script about making dinner plans. And so we're collecting these simple scripts. And as I'm looking at these data coming in, I'm asking myself, what's happening here? How are we going to analyze what's happening? What was important didn't seem to be spelling or grammar. What seemed most important in this texting script was how meaningful the response was. And ultimately, would the person be able to make dinner plans and go plan a dinner date with a friend. So it seemed like we needed a measure of successful transaction within texting. Ellen Bernstein-Ellis 26:23 Jamie, I'm just going say that that reminded me of one of my very favorite papers, whereas you started out counting a lot of things that we can count, and it did give you information, like how much less people with aphasia are texting compared to people without aphasia, and I think that data is really essential. But there's a paper by Aura Kagan and colleagues about counting what counts, right, not just what we can count. And we'll put that citation and all the citations in the show notes-- you're bringing up some wonderful literature. So I think you decided to make sure that you're counting what counts, right? In addition to what we can count. Jaime Lee 26:59 Yes. And I do love counting. I was trained at the University of Oregon in single case experimental design. So really, behavioral observation and counting. So I am a person who likes to count but that sounds, like counting what counts. I love that. Ellen Bernstein-Ellis 27:13 Yeah, absolutely. In that 2021 paper, you look at the way some researchers have approached conversational analysis measures and you acknowledge Ramsberger and Rende's 2002 work that uses sitcom retells in the partner context. And you look at the scale that Leaman and Edmonds developed to measure conversation. And again, I can refer listeners to Marion Leaman's podcast as a 2021 Tavistock distinguished scholar that discusses her work on capturing conversation treatment outcomes, but you particularly referred to Aura Kagan and colleagues' Measurement of Participation in Conversation, the MPC. We'll put the citation in the show notes with all the others, but could you describe how it influenced your work? Jaime Lee 27:58 Yeah, sure. That's funny that you just brought up a paper by Aura Kagan, because I think I'll just first say how much Aura's work on Supported Conversation for Adults with Aphasia, SCA, how influential it's been throughout my career. First as a clinician and actually interacting with people with aphasia, and then later in facilitating conversation groups and helping to train other staff on the rehab team, the nursing staff. And now, it's actually a part of my coursework that I have students take the Aphasia Institute's free eLearning module, the introduction to SCA, as part of my graduate course, and aphasia, and all of the new students coming into my lab, do that module. So they're exposed really early on to SCA. Ellen Bernstein-Ellis 28:50 I'm just gonna say me too. We also use that as a training tool at the Aphasia Treatment Program, It's really been a cornerstone of how we help students start to learn how to be a skilled communication partner. So I'm glad you brought that up. Jaime Lee 29:03 Absolutely. So yes, Kagan's Measurement of Participation in Conversation (MPC), was really influential in developing our texting transactional success rating scale. And this is a measure that they created to evaluate participation and conversation. And they were looking actually both at transaction and interaction, I needed to start simply and just look at transaction first. They considered various factors. They have a person with aphasia and a partner engage in a five minute conversation. And they looked at factors like how accurately the person with aphasia was responding, whether or not they could indicate yes/no reliably, and could they repair misunderstandings or miscommunications. And then the raters made judgments on how transactional was that conversation? So, we looked at that measure and modeled our anchors for texting transactional success after their anchors. We had a different Likert scale, but we basically took this range from no successful transaction, partial transaction, to fully successful. And that was really modeled after their MPC. Ellen Bernstein-Ellis 30:17 Wow. Thank you for describing all of that. Jaime Lee 30:20 Yeah. Another big takeaway I'll add is that, and this really resonated with what we were hoping to capture, the scores on the MPC weren't necessarily related to traditional levels of severity. So Kagan and colleagues write that someone even with very severe aphasia, could score at the top of the range on the MPC. And I think similarly, what we feel about texting is even someone with severe writing impairments could be very successful, communicating via text message, really, depending on the tools they used, and perhaps, depending on the support they received from their texting partner. Ellen Bernstein-Ellis 31:02 You and your colleagues develop this Texting Transaction Success tool, the TTS, right? What is the goal of this measure? Jaime Lee 31:13 The goal of the TTS is to measure communicative success via texting. We wanted this functional measure of texting, not limited to accuracy, not looking specifically at spelling, or syntax, or morphology, but something that reflected the person with aphasia-- his ability to exchange meaningful information. I think the measure is really grounded in the idea that people with aphasia are competent and able to understand and convey meaningful information even despite any errors or incorrect output. So this is really relevant to texting because lots of us are using texting without correct spelling or without any punctuation or grammar. Yet lots and lots of people are texting and conveying information and feeling that benefit of connecting and exchanging information. Ellen Bernstein-Ellis 32:08 It sounds like a really helpful tool that you're developing. Could you please explain how it's used and how it's scored? Jaime Lee 32:16 Sure. So the TTS is a three-point rating scale that ranges from zero, which would be no successful transaction, no meaningful information exchanged, one, which is partial transaction, to two, which is successful transaction. And we apply the rating scale to responses from an individual with aphasia on the short texting script that I was talking about earlier. So this is a three-turn script that is delivered to a person with aphasia where the first line there, we ask them to use their mobile phone or give them a device, and the prompt is: “What are you doing this weekend?” We tell the person to respond any way they want, without any further cues. And then the script goes on, we deliver another prompt, “What about dinner?” And then another prompt, “Great, when should we go?” Each of those responses, we score on the TTS rating scale. We give either a zero, a one or a two. We have lots of examples in the paper of scores that should elicit a zero, a one or a two.We feel like it should be pretty easy for readers to use. Ellen Bernstein-Ellis 33:33 Wow, that's going to be really important. I always appreciate when I can see examples of how to do things. Jaime Lee 33:40 We did some really initial interrater reliability on it. The tools are pretty easy to score. We're able to recognize when something is fully transactional, even if it has a spelling error or lexical error, we can understand what they're saying. And a zero is pretty easy to score, if there are graphemes letters that don't convey any meaning, there's no transaction. Where things are a little more interesting, are the partial transaction. I think about an example to “What about dinner” and the participant responded, “Subway, Mexico.” So that's a one because the conversation, the texting partner, would really need to come back and clarify like, “Do you want to get a Subway sandwich?” Or “Do you want to go eat Mexican?” It could still be really transactional, and they could resolve that breakdown, but the partner would have a little bit more of a role in clarifying the information. Ellen Bernstein-Ellis 34:36 When you were actually trying to validate the TTS and establish its interrater reliability in your 2021 article with Cherney you mentioned using the Technology Confidence Survey from the 2021 Kinsey et al. article. Having tools that allow us to understand our clients' technology user profile is really informative in terms of understanding what modes of communication might be important to them. We talked earlier about not assuming, right, not assuming what people want to do or have done. Can you describe the survey? And is it available? Jaime Lee 35:13 Sure, yes. This is a survey we developed for the T-write study, the ORLA Plus Electronic Writing study. It's a simple aphasia friendly survey with yes/no questions and pictures that you can ask participants or clients about their technology usage. from “Are you using a computer? Yes or No” or “Are using a tablet?”, “Are you using a smartphone?” We ask what kinds of technology they're using and then what are they using it for? Are they doing email? Are they texting? Are they looking up information? Are they taking photos? It also has some prompts to ask specifically about some of the technology features like “You're texting? Are you using voice to text?” or “Are you using text to speech to help you with reading comprehension of your text?” At the very end, we added some confidence questions. We modeled this after Leora Cherney and Ed Babbitt's Communication Confidence Rating scale. So we added some questions like, “I am confident in my ability to use my smartphone” or “I am confident in my ability to text” and participants can read that on a rating scale. We use this in the context of the research study to have some background information on our participants. I think it could be a really great tool for starting a conversation about technology usage and goals, with people who are interested in using more technology, or are using it in different ways. This (survey) is in the Kinsey et al. article. It's a supplement that you can download. It's just a really good conversation starter, that when I was giving the technology survey to participants, many times they would take out their phone or take out their iPad and say, “No, I do it. I use it just like this”. It was really hands on and we got to learn about how they're using technology. And I definitely learned some new things that are available. Ellen Bernstein-Ellis 37:20 I think many of us use kind of informal technology surveys. I'm really excited to see the very thoughtful process you went through to develop and frame that (technology use). That's wonderful to share. Jamie, can you speak to the role of the TTS in terms of developing and implementing intervention approaches for texting? You just mentioned goals a moment ago? Jaime Lee 37:42 Sure. I think we have some more work to do in terms of validating the TTS and that's a goal moving forward. But it's a great starting place. If you have a client who wants to work on texting, it only takes a few minutes to give the script and then score their responses and gives us a snapshot of how effectively they're able to communicate through text. But in terms of developing intervention, to support texting, that's really where we're headed with this. I mean, the big drive is to not just study how people are texting, but really to help support them and texting more effectively and using texting to connect socially and improve their quality of life. But with any kind of intervention, we need a really good outcome measure to capture potential changes. Another reason I'm motivated to continue to work on the TTS, if people with aphasia are going to benefit from a treatment, we need rigorous tools to capture that change and document that potential change. 38:50 Ellen Bernstein-Ellis Absolutely. Absolutely. Jaime Lee 38:53 At the same time, I'd say the TTS isn't the only method we are focused on, we're really interested in understanding what unfolds during texting interactions. What's happening in these interactions. So, most recently, I've been working with my amazing collaborator, Jamie Azios, who is an expert in Conversation Analysis. I've been working with Jamie to say, “Hey, what's happening here? Can we use CA to explore what's going on?” Ellen Bernstein-Ellis 39:25 Well, Jamie, you probably heard this before, but Conversation Analysis can sometimes feel daunting for clinicians to use within their daily treatment settings. In fact, we've had several podcasts that have addressed this and have asked this question. What are you finding? Jaime Lee 39:40 I can definitely relate because I am still very new to CA and learning all the terminology. But Jamie and Laura and I are actually working on paper right now, a CAC special issue, because we presented some data at the Clinical Aphasiology Conference and then will have this paper. We'll be submitting to a JSHL on how we're applying CA to texting interactions. That goal is really based around understanding how people with aphasia and their partners are communicating via texting and looking at these naturalistic conversations to see what barriers they're coming across, and what strategies they are using to communicate in this modality. Ellen Bernstein-Ellis 40:27 That makes a lot of sense. And it really circles back again to communication partner training. That does not surprise me. Jaime Lee 40:33 We're seeing some really interesting, creative, and strategic behaviors used both by people with aphasia and their partners. We're seeing people link to a website, or instead of writing out the name of a restaurant, you know, “meet me here” with a link, or using an emoji to help convey their stance when they can't meet up with a friend. They might have more of an agrammatic production. But that emoji helps show the emotion and we're seeing a lot of people with more severe aphasia using photographs really strategically. Ellen Bernstein-Ellis 41:09 So those are the strategies are helping and I'm sure that CA also looks at some of the barriers or breakdowns, right? Jaime Lee 41:15 Yes, we're seeing some breakdowns, trouble sources in the CA lingo. In some instances, we see the partner clarify, send a question mark, like, “I don't know what you're saying”. And that allows the person with aphasia, a chance to self-repair, like, “Oops, here, this is what I meant.” And that's really useful. We also have seen some examples of breakdowns that may not get repaired. And we don't know exactly what was happening. In those instances, I suspect there were some cases where maybe the partner picked up the phone and called the person with aphasia, or they had a conversation to work out the breakdown. But we really don't know because we're using these data that were previously collected. So a lot of this does seem to be pointing towards training the partners to provide supports, and also helping people with aphasia be more aware of some of the nonlinguistic tools, and some of the shortcuts that are available, but there's still a lot to learn. Ellen Bernstein-Ellis 42:22 Well, Jamie, as you continue to explore this work, I know you're involved in a special project that you do with your senior undergrads at your university program at James Madison. Do you want to describe the student text buddy program? It sounds really engaging. Jaime Lee 42:38 Sure. This is a program I started here at JMU. JMU has a really big focus on engaging undergrads and research experiences. And we have students who are always asking for opportunities to engage with people with aphasia. Particularly during COVID, there weren't these opportunities. It just wasn't safe. But I know some of the participants from the T-write study and some people with aphasia in our community here in Harrisonburg, were looking for ways to be involved and continue to maybe practice their texting in a non-threatening situation. So this was a project and I was actually inspired by one of the students in my lab, Lindsay LeTellier. She's getting her master's degree now at the University of New Hampshire. But Lindsay had listened to an interview with one of our participants where she said she wanted a pen pal. And Lindsay said, “Oh, this participant says she wants a pen pal, I'd love to volunteer, I'll be her pen pal.” And I said, “Lindsay, that's great. I love the idea of a pen, pal. But if we're going to do it, let's make it a research project. And let's open it up and go bigger with this.” So Lindsey helped spearhead this program where we paired students with people with aphasia to have a texting pen pal relationship for four weeks. And in order to be able to kind of watch their texts unfold, we gave them a Google Voice number, so that we can watch the texts. We've really seen some really interesting things. We've only run about 10 pairs, but all of the feedback has been really positive from the people with aphasia, they felt like it was a good experience. And the students said it was a tremendous learning experience. We're seeing some interesting things. Using CA, Jamie and I presented this at IARC, sharing what the students/person with aphasia pairs are doing that's resulting in some really natural topic developments and really natural relationship development. Ellen Bernstein-Ellis 44:39 Nice! What a great experience, and we'll look forward to hearing more about that. Jamie, I can't believe how this episode has flown by. But I'm going to ask you a last question. What are you excited about in terms of your next steps for studying texting? Jaime Lee 44:57 I think we definitely want to continue the Text Buddy project because it's such a great learning experience for students, so we'll be continuing to do that. Jamie and I have applied for funding to continue to study texting interactions and use mixed methods, which is a pairing of both of our areas of expertise. I think there's just more to learn, and we're excited to eventually be able to identify some texting supports to help people with aphasia use texting to connect and be more effective in their communication. Ellen Bernstein-Ellis 45:35 Well, Jamie, this work is going to be really impactful on the daily lives and the daily ability for people with aphasia to have another mode of support for communicating. So thank you for this exciting work. And congratulations again on your Tavistock award, and I just am grateful that you are our guest for this podcast today. Thank you. Jaime Lee 45:58 Thank you so much, Ellen. This has been great, thanks. Ellen Bernstein-Ellis 46:01 It's been it's been a pleasure and an honor. So for our listeners, for more information on Aphasia Access and to access our growing body of materials, go to www.aphasiaaccess.org. And if you have an idea for a future podcast series topic, just email us at info@aphasia access.org. And thanks again for your ongoing support of aphasia access. References and Resources Babbitt, E. M., Heinemann, A. W., Semik, P., & Cherney, L. R. (2011). Psychometric properties of the communication confidence rating scale for aphasia (CCRSA): Phase 2. Aphasiology, 25(6-7), 727-735. Babbitt, E. M., & Cherney, L. R. (2010). Communication confidence in persons with aphasia. Topics in Stroke Rehabilitation, 17(3), 214-223. Bernstein-Ellis, E. (Host). (2021, July 29). Promoting Conversation and Positive Communication Culture: In conversation with Marion Leaman (No. 73) [Audio podcast episode] In Aphasia Access Aphasia Conversations. Resonate. https://aphasiaaccess.libsyn.com/episode-73-conversation-and-promoting-positive-communication-culture-in-conversation-with-marion-leaman Brown, G., & Yule, G. (1983). Discourse analysis. Cambridge. University Press. https://doi.org/10.1017/CBO9780511805226 Fein, M., Bayley, C., Rising, K., & Beeson, P. M. (2020). A structured approach to train text messaging in an individual with aphasia. Aphasiology, 34(1), 102-118. Kagan, A., Simmons‐Mackie, N., Rowland, A., Huijbregts, M., Shumway, E., McEwen, S., ... & Sharp, S. (2008). Counting what counts: A framework for capturing real‐life outcomes of aphasia intervention. Aphasiology, 22(3), 258-280. Kagan, A., Winckel, J., Black, S., Felson Duchan, J., Simmons-Mackie, N., & Square, P. (2004). A set of observational measures for rating support and participation in conversation between adults with aphasia and their conversation partners. Topics in Stroke Rehabilitation, 11(1), 67-83. Kinsey, L. E., Lee, J. B., Larkin, E. M., & Cherney, L. R. (2022). Texting behaviors of individuals with chronic aphasia: A descriptive study. American Journal of Speech-Language Pathology, 31(1), 99-112. Leaman, M. C., & Edmonds, L. A. (2021). Assessing language in unstructured conversation in people with aphasia: Methods, psychometric integrity, normative data, and comparison to a structured narrative task. Journal of Speech, Language, and Hearing Research, 64(11), 4344-4365. Lee, J. B., & Cherney, L. R. (2022). Transactional Success in the Texting of Individuals With Aphasia. American Journal of Speech-Language Pathology, 1-18. Meredith, J. (2019). Conversation analysis and online interaction. Research on Language and Social Interaction, 52(3), 241-256. Ramsberger, G., & Rende, B. (2002). Measuring transactional success in the conversation of people with aphasia. Aphasiology, 16(3), 337–353. https://doi.org/10.1080/02687040143000636 Todis, B., Sohlberg, M. M., Hood, D., & Fickas, S. (2005). Making electronic mail accessible: Perspectives of people with acquired cognitive impairments, caregivers and professionals. Brain Injury, 19(6), 389-401. Link to Jaime Lee's University Profile: https://csd.jmu.edu/people/lee.html mu.edu/people/lee.html
Sheil and Danny go through the hottest topics in the NFL including Russell Wilson's new contract, Jimmy G staying in San Francisco, Justin Fields, the Jets, and more (00:46). Also, Sheil answers some of your listener mailbag questions (43:22). Hosts: Sheil Kapadia and Danny Kelly Associate Producer: Jessie Lopez Additional Production Supervision: Arjuna Ramgopal and Conor Nevins Learn more about your ad choices. Visit podcastchoices.com/adchoices
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 4 Ways to Give Feedback to Job or Grant Applicants, published by Kirsten on August 5, 2022 on The Effective Altruism Forum. I have previously posted about my belief that all EA organisations should provide candidates with feedback. Some people responded by suggesting that providing feedback to every job or grant applicant would be very costly and take a lot of staff time. I have a more flexible view of feedback! I think a lot of things can be considered feedback, and it's worth considering how each organisation can provide more feedback and more value to the community without creating disproportionate burdens on themselves. I've listed ways of giving feedback from the least work for organisations to the most work. All of these options can be mixed and matched. There are also downsides to some, which I don't go into, but I would hope hiring managers would also consider the upsides of including several of these methods at different points in their hiring or grant application review process. Telling applicants whether or not they progressed to the next stage or got the job. Knowing how far you've progressed with an organisation is feedback and is useful to candidates. It's important to promptly update candidates who were unsuccessful as well as successful candidates; in general, EA organisations are good at this. If it's been longer than expected but you haven't made a decision yet, it can be helpful to update applicants, especially for grants, as people may start to believe you dislike their project idea. Providing concrete information on your hiring process. Small pieces of factual information can help applicants understand how much they should update from your acceptance or rejection. We invited 60 applicants to complete this one-hour work test. 42 applicants completed the test, and we invited the top 20 for an initial interview. In this case, the applicant knows they were in the top half of work test results, which is useful - it's pretty different from being in the top 5% or top 80%. Your grant application was determined to be complete and within the scope of our fund. After careful review, we have decided not to offer a grant at this time. In this situation, the grant applicant knows that it's worth applying to the same grantmaker on similar topics, which is valuable information. Giving standardized responses for why people didn't progress to the next stage. Interviewers, grantmakers, and assessors can use a pass/fail or Likert scale in clear categories to assess applications and tell applicants how they did. We assessed your CV and cover letter for understanding of our organisation's mission, working knowledge of Python and data analysis techniques, and relevant work experience. We felt you had a good understanding of our organisation's mission and relevant work experience. We did not see evidence of knowledge of Python and data analysis techniques, so we will not be progressing with your application. Thank you for applying and please feel free to apply for roles with us in the future. In this situation, the applicant knows the organisation was looking at three categories - hopefully categories that were mentioned in the job advertisement! - and that they met two of those categories. If they wanted to apply again for a similar job, they really need to learn Python first, or mention on their CV that they know it! My employer, the Civil Service, tells applicants in advance which categories they'll be assessed on at interview (for example Delivering at Pace), provides a rubric for how that category will be assessed, and then provides scores at the end (averaged from 2-4 interviewers). You can learn more about Civil Service interviews here. Personalizing feedback for candidates (either all candidates, or those who ask). Of course, the most helpful and most costly fee...
Lauren Ohayon's Restore Your Core programGoal Setting Cards from Entropy Some Outcome measures that I love to use with these clients are:ICSI/ICPI (IC symptom index)BPIC-SS (bladder pain IC score)PUF (pelvic pain and urinary frequency patient symptom scale)Likert scales 0-10 urgency, bladder pain, overall painGUPI Genitourinary Pain Index With men :NIH_CPSI PPIQ pelvic pain impact questionnaire (quality of life)CSI central sensitization inventory DASS depression, anxiety stress scorePCS pain catastrophizing scoreISI insomnia severity indexPSEQ pain self efficacy
Welcome back to a NEW season of PT Elevated where we are broadening our topics to include more researchers but still focusing on topics that you can use in your clinic every day. This season some of our speakers join us as guests that will be live in-person at the EIM Align Conference this August 26-28 in Dallas, Texas. On our seventh episode of season 3, Lori Michener, PT, PhD, ATC, FAPTA, joins to discuss the high-value treatment she has conducted for shoulder pain. Lori has been a professor at the University of Southern California in the Division of Biokinesiology & Physical Therapy for 7 years and is also a director at the University of Southern California Clinical Biomechanics Orthopedic and Sports Outcomes Research. At the beginning of her career, Lori trained as an athletic trainer and a physical therapist then went into college athletics and taught for six years in a typical undergraduate institution. She taught athletic training and pre-med, pre-physical therapy, and pre-occupational therapy students. She says it was a great opportunity for her to learn how to be a teacher. She then went back and got her PhD in biomechanics and orthopaedics at Hahnemann now Drexel University and taught for 15 years at Virginia Commonwealth University. Now she has been in Southern California for the last 7 years at the University of Southern California. In this episode, they focus on why she chose to focus on the shoulder for her speciality area of study. They also discuss the decision of pursuing a Ph.D. and how she came to the conclusion to do so and more! Here are some of the highlights: Lori says no matter what area you are interested in if you have questions about pursuing a Ph.D. reach out to her and she is happy to talk about it. She says it took her about 2 years to come to the decision that she wanted to get her Ph.D. The advice she gives to anyone looking to pursue their Ph.D. program is to explore a lot of different programs because they are all different. Paul asks Lori what outcome measures she thinks we should be using to measure our patients with shoulder pain? Lori lists several tests she has used and for what over the years: The Penn Shoulder Score (PPS) – a condition-specific self-report measure. It is a 100-point scale that consists of 3 subscales, including pain, satisfaction, and function The American Shoulder and Elbow Surgeons Shoulder Score (ASES) which is a mixed outcome reporting measure. It has 10 questions but some of the questions can be limited depending upon the patient's abilities. Shoulder Pain and Disability Index (SPADI), which is a self-administered questionnaire that consists of two dimensions, one for pain and the other for functional activities. The pain consists of five questions regarding the severity of an individual's pain. Functional activities are assessed with eight questions designed to measure the degree of difficulty an individual has with various activities of daily living that require upper-extremity use. The DASH outcome measure – the disabilities of the arm, shoulder, and hand questionnaire is a 30-item questionnaire that looks at the ability of a patient to perform certain upper extremity activities. The questionnaire is a self-report questionnaire that patients can rate difficulty and interference with daily life on a 5-point Likert scale. Patient Satisfaction Score– a direct question, how satisfied are you with the use of your shoulder presently? 100 is full, 0 is not satisfied. Some of these outcome measures use legacy measure, some are specific questionnaires if your legacy measure does not capture that and then some anchor patient acceptable symptom state or patient satisfaction with the use of your body part that is injured. Lori's Clinical Pearl – “I wish I would have known that connecting with the patient is more important than what you're doing with the patient. I don't think there is a magical set of exercises or manual therapy you can do, how you connect with the patient and deliver care is more important. I try to remember that when I walk in the door, I am in patient mode and the shield is up and I am present with the patient. Your behavior and how you are doing it can change how the patient responds.” Helpful research and training: University of Southern California in the Division of Biokinesiology & Physical Therapy. University of Southern California Clinical Biomechanics Orthopedic and Sports Outcomes Research Management of the Shoulder and Elbow Surgery versus Physical Therapy for Shoulder Impingement Ad Info: We are excited to be back in person and back to hands-on learning for the 2022 Align Conference. This year you can join an all-star lineup of speakers in Dallas, Texas, August 26 through the 28. The labs and lectures focus on sharpening the physical, hands-on treatments essential to patient care. Save 5% on registration as a PT Elevated Podcast listener. Visit alignconference.com and use the promo code PTELEVATED at checkout. You can find the promo code and a link to the website in the show notes. We can't wait to see you! Connect with us on socials:@ZimneyKJ on Twitter @PMintkenDPT on Twitter @LoriMichener on Twitter @lorimw7 on Instagram Align Conference 2022, Website
Tune in and listen as Erica Harrell shares about her experience and knowledge with creating effective Professional Development opportunities for educators. You'll be inspired as Erica shares her challenges and wins along her education journey. https://alwaysalesson.com/wp-content/uploads/2022/06/empowering-educators-podcast-15.png ()Quotables When you are the expert…it's often hard to make the information digestible for others because you want to give so much. Facilitators who really hit the nail on the head, get to know their learners. When you can be willing and transparent to show all the data, that's when growth happens. As a PD facilitator there is a time to be the GPS and a time to be the map. About Erica: Erica Harrell is the CEO of Erica Harrell Consulting, an education consulting firm created to help dedicated educators create strategic professional development plans to increase student achievement and improve staff capacity. Erica has over 12 years of experience in K-8 education. She began her career as a special education teacher and has held multiple leadership roles from instructional coach to principal to Director of Leadership Development. In all of her roles, Erica has always had the desire to grow and help others do the same. As a school leader, Erica has led teams to develop strategic and comprehensive project plans for multi-day and multi-week professional development series and coached leaders to ensure high-quality session facilitation. Under her leadership, a team of school-based leaders facilitated 4-week summer professional development with an average of over 90% of participants rating sessions and operations as “Platinum” (highest rating on 5-point Likert scale) multiple years in a row. Erica is originally from Upstate New York. She attended University of Maryland, College Park for undergrad. She holds a MEd in Instructional Leadership from Relay Graduate School of Education. Erica currently lives in Maryland with her husband and one year old son. Connect with Erica: https://www.instagram.com/ericaharrellconsulting/?hl=en (IG: @ericaharrellconsulting) Email: Info@ericaharrellconsulting.com https://www.buzzsprout.com/1783607 (Podcast: The Power of PD Podcast) Come Chat on Clubhouse! Instructional Coaching Clubhttp://www.clubhouse.com/club/instructionalcoaching (- www.clubhouse.com/club/instructionalcoaching) Join the Always A Lesson Newsletter Join http://eepurl.com/lJKNn (here) and grab a freebie! Connect with Gretchen Email: gretchen@alwaysalesson.com Blog: https://alwaysalesson.com/blog/ (Always A Lesson) Facebook: https://www.facebook.com/AlwaysALesson/ (Always A Lesson) Twitter: https://twitter.com/gschultek/ (@gschultek) Instagram: https://www.instagram.com/always.a.lesson/ (Always.A.Lesson) Linkedin: https://www.linkedin.com/in/GretchenSchultekBridgers/ (Gretchen Schultek Bridgers) Book: https://alwaysalesson.com/product/elementary-educ-101-what-they-didnt-teach-you-in-college/ (Elementary EDUC 101: What They Didn't Teach You in College) Leave a Rating and Review: This helps my show remain active in order to continue to help other educators remain empowered in a career that has a long-lasting effect on our future. https://itunes.apple.com/us/podcast/always-lessons-empowering/id1006433135?mt=2 (https://itunes.apple.com/us/podcast/always-lessons-empowering/id1006433135?mt=2) Search for my show on iTunes or Stitcher. Click on ‘Ratings and Reviews.' Under ‘Customer Reviews,' click on “Write a Review.” Sign in with your iTunes or Stitcher log-in info Leave a Rating: Tap the greyed out stars (5 being the best) Leave a Review: Type in a Title and Description of your thoughts on my podcast Click ‘Send'
Even though Bo is on vacation, Sheil and Zach strongly agree that you will enjoy this episode. Sheil presents Zach with 11 Eagles and life scenarios to rank on the 5-point Likert scale. Learn more about your ad choices. Visit megaphone.fm/adchoices
ENabling VISions And Growing Expectations (ENVISAGE): Parent reviewers' perspectives of a co-designed program to support parents raising a child with an early-onset neurodevelopmental disabilityLaura Miller, Grace Nickson, Kinga Pozniak, Debra Khan, Christine Imms, Jenny Ziviani, Andrea Cross, Rachel Martens, Vicki Cavalieros, Peter RosenbaumAffiliations expandPMID: 34942443DOI: 10.1016/j.ridd.2021.104150AbstractAims: This study reports parents' perspectives of, ENVISAGE: ENabling VISions And Growing Expectations. ENVISAGE - co-designed by parents and researchers - is an early intervention program for parents raising children with neurodisability.Methods and procedures: Using an integrated Knowledge Translation approach, this feasibility study explored parents' perspectives of the comprehensibility, acceptability, and usability of ENVISAGE workshops. Participants were Australian and Canadian parents of children with neurodisabilities, ≥12 months post-diagnosis, who independently reviewed ENVISAGE workshops using an online learning platform. Parents completed study-specific 5-point Likert-scaled surveys about individual workshops. Following this, qualitative interviews about their perceptions of ENVISAGE were conducted. Survey data were analysed descriptively, and interviews analysed inductively using interpretive description.Outcomes and results: Fifteen parents completed surveys, of whom 11 participated in interviews. Workshops were reported to be understandable, relevant, and meaningful to families. ENVISAGE was judged to empower parents through enhancing knowledge and skills to communicate, collaborate and connect with others. Pragmatic recommendations were offered to improve accessibility of ENVISAGE.Conclusions and implications: ENVISAGE workshops address key issues and concerns of parents of children with neurodisability in a way that was perceived as empowering. Involving parents as reviewers enabled refinement of the workshops prior to the pilot study.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities' Operations team is expanding — we're hiring for several positions on our Core Ops and Special Projects teams, published by abrahamrowe on March 21, 2022 on The Effective Altruism Forum. We are also hiring researchers — find out more here. We are having a webinar on these roles on March 29th, 2022. Find out more here. Summary TLDR: Rethink Priorities is hiring for lots of operations positions. Apply here. You can apply for as many of these roles (and our research roles) as you'd like to! Rethink Priorities (RP) is growing rapidly. Since the beginning of 2021, we've expanded from around 15 staff to around 40. By the end of 2022, we will likely have over 60 staff, and plan to continue growing in 2023. We're also now adding new programs beyond research, including a new longtermist megaproject incubation initiative. We expect that these projects will rapidly scale, and getting operational support in place for them early will be critical for their success. Our operations have become increasingly complex during this growth, but nonetheless we've had a high degree of operational success: Despite operating in several countries, conducting research in several cause areas, and growing rapidly over the last year, we've experienced extremely low turnover and no major operational issues. RP staff consistently rate RP extremely highly on measures like psychological safety, as well as questions like “RP is a good place to work.” For the latter question, we've never had an internal survey where a staff member hasn't answered 5/5 on a Likert scale (answering anonymously). RP research staff consistently report that the administrative burden of working at RP is minimal. For many staff, it is well below 1% of their total time. These are great results from the perspective of our Ops team, but as we grow, they become harder and harder to maintain. We've consistently invested heavily in ops, putting staff in place well before we need them, paying well for operations talent (we have the same salaries for ops people and researchers with identical title levels), and working to be a flexible and compassionate employer for our research team. Core Operations To keep up this success through our next period of growth, we're growing our Core Operations team, which oversees the day-to-day functioning and success of Rethink Priorities. These roles are now open to applicants. We have roles at multiple seniority levels, and would be excited to see applicants with EA backgrounds joining our team. Special Projects The Special Projects program is a new initiative at Rethink Priorities to support activities other than direct research conducted at RP. Roles on this team would liaise between the RP Core Operations team and these special projects. We expect that most of these projects will be in the longtermism space. RP plans on fiscally sponsoring, incubating, or otherwise hosting several of these projects over the next few years. Rethink Priorities is increasingly working on projects that require complicated operational support. RP's Core Operations team is focused on ensuring the day-to-day functioning of the organization, but these special projects often require more value alignment and exploration of novel areas than is necessary for the Core Operations team. Many of the special projects may spin off from RP after incubation — Special Projects staff may be well-suited to join these (or perhaps even help lead them), which will help ensure the operational success of the projects after the spinoff. RP has an especially strong operations program, and extending the support of our Ops team beyond RP's core research program could be helpful for both: Aiding the success of these special projects Training ops staff in these areas, which those projects may hire as they grow Although we currentl...
Hello everybody! In this episode, I provide you with some Likert scales. Usually, you search for one online. Here, I made this work for you. If you think about running a survey relying on a quantitative questionnaire, this might be a good option for you ;) Enjoy, comment, subscribe, and share - it does matter! Best Eugene (Yevgen)
What if you could find out if you were truly a great leader from reading less than 150 words in just under 30 seconds? While there is definitely an exhaustive list of podcasts and TedTalks to listen to about the subject, Google may have narrowed it down to just a limited number of characters. Their semi-annual Manager Feedback Survey focuses on a Likert scale measuring very simple yet remarkably qualified bullet points towards what the best of the best leaders do consistently. Are you one of them? What does each point mean exactly, and how would you improve on an area you may be lacking? In this episode, Joe and Zac break down this golden list in our first quick slice of the new season! While they have decades of corporate leadership between them, they have also been on the opposite side of this list; whether wayward follower or lacking leader. So jump in and take yourself through a quick self-reflection. Who you get the pleasure to lead is worth it! Afterward, download our free resource under this podcast on our website: ‘The 5 Strategic Ways Businesses, Ministries and Parents Maximize Their Impact!'
Guest Heiko Tietze Panelists Richard Littauer | Eriol Fox | Django Skorupa Show Notes Hello and welcome to Sustain Open Source Design! The podcast where we talk about sustaining open source with design. Learn how we, as designers, interface with open source in a sustainable way, how we integrate into different communities, and how we as coders, work with other designers. We are very excited to have as our guest today, Heiko Tietze, who is a full-time UX Mentor at the Document Foundation. Today, Heiko fills us in on the Document Foundation, what his job involves as a UX Mentor, and the challenges in mentoring designers in open source. We also learn what building a team means to Heiko, how the teams integrate other user experience with people from different backgrounds, and how someone can contribute to open source besides translations. Go ahead and download this episode now to learn more! [00:02:57] Heiko tells us what the Document Foundation is and what he does there. [00:04:18] Since Heiko mentors UX people, he fills us in on much UX work there is to go around and how many UX people he mentors. [00:06:02] We learn about some unique challenges for designers and mentoring designers in open source. [00:09:51] Heiko talks about the backgrounds of the people that he mentors. [00:12:57] Eriol is curious to know what kind of expectations designers or people that contribute design to the projects have about the team and what does the team mean to him and the rest of the folks in the project. [00:17:05] Since LibreOffice has tons of contributors who contribute in other languages, Richard wonders how Heiko integrate different contributors from different languages. [00:19:02] We find out how you can contribute to the open source besides translations, if there's a way to improve UX besides internalization and localization, and how the teams integrate other user experience with people from different backgrounds coming. [00:22:56] We learn how conversations happen in the Document Foundation and the different tools that Heiko is working on. [00:29:46] Find out where you can follow Heiko on the internet and how to join the design team. Quotes [00:06:20] “Designer [as a term] is misunderstood as 'people who do the visual part.'” [00:23:30] “I'm not concerned with the one percent. The other percent is more important.” Spotlight [00:30:55] Django's spotlight is a project he's working on called The Vulnerability History Project. [00:31:52] Eriol's spotlights are Human Rights Centered Design and Open AAC Systems. [00:32:44] Richard's spotlight is David J. Peterson. [00:33:31] Heiko's spotlights are Free Pascal and Lazarus. Links Open Source Design Twitter (https://twitter.com/opensrcdesign) Open Source Design (https://opensourcedesign.net/) Sustain Design & UX working group (https://discourse.sustainoss.org/t/design-ux-working-group/348) SustainOSS Discourse (https://discourse.sustainoss.org/) Sustain Open Source Twitter (https://twitter.com/sustainoss?lang=en) Richard Littauer Twitter (https://twitter.com/richlitt?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) Eriol Fox Twitter (https://twitter.com/EriolDoesDesign?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) Django Skorupa Twitter (https://twitter.com/djangoskorupa) Heiko Tietze Twitter (https://twitter.com/heikotietze) Heiko Tietze LinkedIn (https://de.linkedin.com/in/heiko-tietze-4204aa30/en?trk=people-guest_people_search-card) LibreOffice Design Twitter (https://twitter.com/libodesign) Design and User Experience team (Document Foundation) (https://wiki.documentfoundation.org/Design) Easyhack Archive (LibreOffice Design Team Blog) (https://design.blog.documentfoundation.org/category/easyhack/) Likert scale (https://en.wikipedia.org/wiki/Likert_scale) Design Principles (Document Foundation) (https://wiki.documentfoundation.org/Design/Principles) Human Rights Centered Design (https://hrcd.pubpub.org/) Open AAC Systems (https://www.openaac.org/aac.html) David J. Peterson (https://en.wikipedia.org/wiki/David_J._Peterson) Free Pascal (https://www.freepascal.org/) Lazarus (https://www.lazarus-ide.org/) Credits Produced by Richard Littauer (https://www.burntfen.com/) Edited by Paul M. Bahr at Peachtree Sound (https://www.peachtreesound.com/) Show notes by DeAnn Bahr Peachtree Sound (https://www.peachtreesound.com/) Special Guest: Dr. Heiko Tietze.
Hello everybody I make a lot of research using questionnaires. So do also most of my colleagues and students. Therefore, I decided to create a mini-series on questionnaires and related issues. Today, I talk about the most common (as to me) mistakes with regard to the size (range) of the Likert scale. Some prefer the even, some prefer the odd number of answer options. Some prefer a 5 and some prefer a 7 Likert scale. I share my opinion in this regard in the current episode. Enjoy, comment, subscribe! It does matter! Best Eugene (Yevgen)
► 00:55 — Cosa c'entra il Questionario con la UX? ► 01:22 — CONSIGLIO 1 : soddisfa le aspettative ► 04:30 — CONSIGLIO 2 : attenzione alle domande chiuse! ► 04:44 — CONSIGLIO 3 : dai controllo all'utente ► 05:31 — CONSIGLIO 4 : usa parole che il tuo utente capisce subito ► 05:56 — CONSIGLIO 5 : attenzione alla scala di valutazione (scala Likert) ► 08:34 — CONSIGLIO 6 : non annoiare l'utente ► 09:34 — CONSIGLIO 7 : non fare troppe domande! ► 10:06 — CONSIGLIO 8 : crei percorsi specifici in base alla domanda ► 12:29 — CONSIGLIO 9 : usa l'effetto contesto ► 14:04 — CONSIGLIO 10 : fai le domande importanti nel momento giusto ► 15:43 — CONSIGLIO 11 : non fare domande inutili ►► 16:55 — RIASSUNTO ▶️ Guarda il video su https://youtu.be/Vo6Cg41rfKQ Piacere, sono Lorenzo Pinna dal 2010 aumento le Conversioni potenziando l'esperienza utente (UX). Lo faccio per brand internazionali e posso farlo per te! Analizzo la tua UX per potenziare le tue conversioni — Fissa un appuntamento su https://calendly.com/lorenzopinna/analisi_ux Consigli di UX
Welcome! Today we talk about a very important bias - the common method bias. It can happen that your research method itself, e.g. your questionnaire, creates a skewness in your answers. For instance, if your Likert scales have different font sizes or if the checkboxes on your Likert have different sizes, you may prime your respondents and (un)internationally force them to select a specific answer option. In this episode, I explain how the bias appears, how to avoid it, and how to check whether your study contains a common method bias. Enjoy! Best, Eugene (Yevgen)
The Modern Therapist's Survival Guide with Curt Widhalm and Katie Vernoy
Which Theoretical Orientation Should You Choose? Curt and Katie chat about how therapists typically select their clinical theoretical orientation for treatment. We look at the different elements of theoretical orientation (including case conceptualization, treatment interventions, and common factors), what impacts our choices, the importance of having a variety of clinical models to draw from, the types of practices that focus on only one clinical theory, and suggestions about how to approach choosing your theories for treatment, including some helpful assessments. In this podcast episode we talk about how therapists pick their theoretical orientation We received a couple of requests to talk about clinical theoretical orientation and how Curt and Katie chose their own. We tackle this question in depth: Choosing a clinical theoretical orientation The problem with the term “eclectic” when describing a clinical orientation How Curt and Katie each define their clinical orientations “Multi-modal” therapy The different elements of clinical orientations Case conceptualization Treatment interventions Common Factors and what actually makes therapy work What impacts which theoretical orientation we choose as therapists Clinical supervision Training Personal values and alignment with a theoretical orientation Common sense (what makes sense to you logically) Choosing interventions that you like The importance of having a variety of clinical theories that you can draw from “You need to know the theories well enough to know when not to use them” – Curt Widhalm Comprehensive understanding is required to be able to apply and know when not to apply a clinical orientation Avoid fitting a client's presentation into your one clinical orientation Deliberate, intentional use of different orientations Why some therapy practices operate with a single clinical model Comprehensive Dialectical Behavioral Therapy (DBT) therapists run their practices and their lives with DBT principals Going deeply into a very specific theory (like DBT, EMDR, EFT, etc.) while you learn it Researchers are more likely to be singularly focused on one theory Suggestions on How to Approach Choosing Your Clinical Theoretical Orientation “Theoretical orientation actually can be very fluid over time” – Katie Vernoy Obtain a comprehensive understanding of the theoretical orientation Understand the theory behind the interventions Recognizing when to use a very specific theory or when you can be more “eclectic” in your approach Deciding how fluid you'd like to be with your theoretical orientation Find what gels with you and do more of that The ability to pretty dramatically shift your theoretical orientation later in your career Instruments for Choosing a Theoretical Orientation Theoretical Orientation Scale (Smith, 2010) Counselor Theoretical Position Scale Our Generous Sponsor for this episode of the Modern Therapist's Survival Guide: Buying Time LLC Buying Time is a full team of Virtual Assistants, with a wide variety of skill sets to support your business. From basic admin support, customer service, and email management to marketing and bookkeeping. They've got you covered. Don't know where to start? Check out the systems inventory checklist which helps business owners figure out what they don't want to do anymore and get those delegated asap. You can find that checklist at http://buyingtimellc.com/systems-checklist/ Buying Time's VA's support businesses by managing email communications, CRM or automation systems, website admin and hosting, email marketing, social media, bookkeeping and much more. Their sole purpose is to create the opportunity for you to focus on supporting those you serve while ensuring that your back office runs smoothly. With a full team of VA's it gives the opportunity to hire for one role and get multiple areas of support. There's no reason to be overwhelmed with running your business with this solution available. Book a consultation to see where and how you can get started getting the support you need - https://buyingtimellc.com/book-consultation/ Resources for Modern Therapists mentioned in this Podcast Episode: We've pulled together resources mentioned in this episode and put together some handy-dandy links. Please note that some of the links below may be affiliate links, so if you purchase after clicking below, we may get a little bit of cash in our pockets. We thank you in advance! Institute for Creative Mindfulness Very Bad Therapy Podcast Petko, Kendrick and Young (2016): Selecting a Theory of Counseling: What influences a counseling student to choose? What is the Best Type of Therapy Elimination Game The Practice of Multimodal Therapy by Arnold A. Lazarus Poznanski and McClennan (2007): Measuring Counsellor Theoretical Orientation Relevant Episodes of MTSG Podcast: Unlearning Very Bad Therapy Interview with Dr. Diane Gehart: An Incomplete List of Everything Wrong with Therapist Education Who we are: Curt Widhalm, LMFT Curt Widhalm is in private practice in the Los Angeles area. He is the cofounder of the Therapy Reimagined conference, an Adjunct Professor at Pepperdine University and CSUN, a former Subject Matter Expert for the California Board of Behavioral Sciences, former CFO of the California Association of Marriage and Family Therapists, and a loving husband and father. He is 1/2 great person, 1/2 provocateur, and 1/2 geek, in that order. He dabbles in the dark art of making "dad jokes" and usually has a half-empty cup of coffee somewhere nearby. Learn more at: www.curtwidhalm.com Katie Vernoy, LMFT Katie Vernoy is a Licensed Marriage and Family Therapist, coach, and consultant supporting leaders, visionaries, executives, and helping professionals to create sustainable careers. Katie, with Curt, has developed workshops and a conference, Therapy Reimagined, to support therapists navigating through the modern challenges of this profession. Katie is also a former President of the California Association of Marriage and Family Therapists. In her spare time, Katie is secretly siphoning off Curt's youthful energy, so that she can take over the world. Learn more at: www.katievernoy.com A Quick Note: Our opinions are our own. We are only speaking for ourselves – except when we speak for each other, or over each other. We're working on it. Our guests are also only speaking for themselves and have their own opinions. We aren't trying to take their voice, and no one speaks for us either. Mostly because they don't want to, but hey. Stay in Touch with Curt, Katie, and the whole Therapy Reimagined #TherapyMovement: www.mtsgpodcast.com www.therapyreimagined.com https://www.facebook.com/therapyreimagined/ https://twitter.com/therapymovement https://www.instagram.com/therapyreimagined/ Consultation services with Curt Widhalm or Katie Vernoy: The Fifty-Minute Hour Connect with the Modern Therapist Community: Our Facebook Group – The Modern Therapists Group Modern Therapist's Survival Guide Creative Credits: Voice Over by DW McCann https://www.facebook.com/McCannDW/ Music by Crystal Grooms Mangano http://www.crystalmangano.com/ Transcript for this episode of the Modern Therapist's Survival Guide podcast (Autogenerated): Curt Widhalm 00:00 This episode of the modern therapist Survival Guide is sponsored by Buying Time. Katie Vernoy 00:04 Buying Time is a full team of virtual assistants with a wide variety of skill sets to support your business. From basic admin support customer service and email management to marketing and bookkeeping, they've got you covered. Don't know where to start, check out the system's inventory checklist, which helps business owners figure out what they don't want to do anymore and get those delegated ASAP. You can find that checklist at buyingtimellc.com/systems-checklist. Curt Widhalm 00:31 Listen at the end of the episode for more information. Announcer 00:35 You're listening to the modern therapist survival guide where therapists live, breed and practice as human beings to support you as a whole person and a therapist. Here are your hosts, Curt Widhalm and Katie Vernoy. Curt Widhalm 00:51 Welcome back modern therapists. This is the modern therapist Survival Guide. I'm Curt Widhalm with Katie Vernoy. And this is the podcast for therapists about how we are as therapists. And we have received a couple of requests for in episodes about how people select their theoretical orientations. And I think that this is a great opportunity for us to maybe gear an episode a little bit more towards early career therapists, some of the students who listened to our show, but also for those of you who are maybe a little bit later in your practice to consider how you came up with your theoretical orientation or orientations. And we're gonna dive into a little bit of our stories about this, but also what some of the research ends up saying about how a lot of therapists end up practicing in the way that they do. So, Katie, from the top of the show, what are your orientations? And how did you get to where you are? Katie Vernoy 01:54 I think the the word that probably best describes my orientation is one that I was told not to use because it was bad, which was eclectic, Curt Widhalm 02:06 eclectic Katie Vernoy 02:07 ecelctic! Curt Widhalm 02:08 lazy eclectics. Katie Vernoy 02:11 And I think it's, it's not exactly true. But I really feel like I draw from a lot of orientations. A lot of models, maybe it's better than orientations, where there are a lot of really cool interventions that I like from CBT DBT narrative, even psychodynamic or Gestalt, or different things like that. There's a lot of really cool interventions that I've been able to kind of pick up in my my toolbox or tool tool belt over the years. And so to me, when we talk about orientation, and maybe this is a question to ask, I would say, I'm probably mostly existential, and certainly relational. And, and that's kind of where I sit. I think with orientation, though, there's how you conceptualize a case, how you treat a client's you know, so, orientation feels like a very broad thing, where case conceptualization seems more like okay, that's my that's how I'm orienting myself to a case specific interventions, I think tie to theoretical orientations. But I once had a supervisor say, pretty much all theories are the same. They just use different words, people want to make money. And orientations are different, but I feel like you can you can mix and match pretty well. Curt Widhalm 03:33 And on that point, you're talking about Bruce Wampold's common factors that soar looking at therapeutic treatment where theoretical orientation affects treatment about 1%. Maybe some of the emphasis of where some of these questions are coming from is our therapists, education, emphasis on every class being about orientation, really not looking at the other 99% of what actually makes therapy work? Yes. Now, like you, maybe Unlike you, I look at myself not as a dirty eclectic therapist, but as a very intentional, multimodal therapist. Katie Vernoy 04:19 Oh, my goodness, words, words. Curt Widhalm 04:24 So, like you, I also end up using a lot of CBT. In my practice, I'm also drawn to existentialism, and very much utilize a lot of EMDR work which, for the EMDR people that I trained with over at the Institute of creative mindfulness, we really look at EMDR as being the greatest hits of a lot of other therapeutic styles that got it just naturally pulls from a number of different areas. But when we first got these cases, My first reaction was kind of, I wonder how much of how we practice is based in who our supervisors were and how they practiced at, you know, kind of a developmental stage of where we were at in becoming therapists. And if that's just stuff that because we were forced to practice in a way for a while, if that's why we continue to practice that once we're out on our own, and I'm wondering how much of that rings true for your story here. Katie Vernoy 05:34 It certainly rings true for me, I think about some of the newer clinicians and certainly talking to like Carrie Wiita and Ben Fineman over it. Very bad therapy, it seems like they're more thoughtful than we are, or than I was anyway, when I was coming up. But I found myself trying to soak everything in and I had a psychodynamic supervisor and a CBT supervisor when I first started, and then I went into community mental health, it's very behavioral and, and CBT oriented, with some, you know, trauma informed, you know, different things that kind of layered in there. But I did find that the supervisor made a big difference if they had a strong orientation, because I that's how they framed everything. And that's why I think I, when I say the case, conceptualizations are oftentimes more along the lines of like psychodynamic or CBT. I think it's because that was how I was trained. The other piece that I was really lucky is that I also had a group supervision with several folks who are narrative, and they would talk about their cases from a narrative perspective, and would provide feedback on some of the cases that I was working on from a narrative perspective. And so I feel like there's some narrative that came in early enough that that was something that also I added to the pool. But it wasn't something I learned in school, I think it was newer, you know, I was getting ready to get licensed at that time. So to me, I feel like the people around us, primarily the supervisor, but also potentially even, you know, our colleagues in our group supervision can really impact how we see cases how we've, you know, kind of the types of interventions we try, and therefore our orientation. Curt Widhalm 07:22 I don't know that I can tell you my supervisors orientation from my trainee years, maybe that speaks to the quality of supervision that was being given at the time, potentially, but I, I largely agree with you in the what did end up shaping up out at the time was the other people who were part of my supervision groups and kind of being pushed into recognizing that we were naturally drawn to some techniques, whether we knew it or not. Looking at a 2016 article from the universal Journal of Psychology, this is by Pepco, Kendrick and Jung, and aptly titled selecting a theory of counseling, what influences it counseling students to choose? Katie Vernoy 08:13 Very good, very appropriate, Good, find, Curt! Curt Widhalm 08:16 Good find Curt. They came up with three categories that probably worth exploring here a little bit for ourselves, the first topic on here does not necessarily fall into that I practice this way because my supervisor practices this way. And in fact, none of these three do. The first one is the counseling theory is similar to my personal value system. And Katie Vernoy 08:43 that's where I remember because we did that orientation game. What was that called? With Carrie and Ben and Ben? Curt Widhalm 08:51 Oh, the elimination game? Katie Vernoy 08:53 Yeah, yeah. And I just I hear Ben talking about how amazing narrative is. And it seemed like it was so aligned with his values and stuff like that. I was like, I don't know that I was that thoughtful when I was in that stage of my my development. Curt Widhalm 09:09 It's something where I really expect our audience to resonate with this one, just because we do talk about value systems as such an important factor of the work that we do, and that obviously should be reflected in the work that you do with your clients and make sense as far as how that would carry over as, as an extension of yourself and your personality to make the therapeutic alliance work. I think it's better done when it's intentional, maybe not in the way that you're describing of like looking for justification five years after a journal article is published to be like, Yeah, that's what I did. But to really be able to clarify, it's like you're giving credit to Ben for doing it. As far as saying, These are my values, this is a theory that ends up reflecting what those are. And I think that there are going to be certain theories that end up lending themselves to that more easily than others. Things like narrative therapy, where it really does have more of a social justice aspects to it. Yeah, as compared to something like behaviorism, which is going to be very much about pushing people to certain measurable outcomes, unless that's who you are as a person and why you don't get invited to dinner parties? Katie Vernoy 10:38 Well, I think that there are things that I was trained as a therapist 20 years ago. And I think that there are, there are limitations on some of the research that was available 20 years ago, and so even if I were to come up now, I don't know that I would spend a lot of time on CBT, just based on, you know, kind of the limited transfer across different cultures and that kind of stuff, I think that there are great interventions, and I've kind of learned over the years, especially in working in a lot of different multicultural and cross cultural environments, how to make those adjustments and kind of what to hold to and what not to, but I think that there are, are definitely different pieces of information around orientation and kind of our personal value systems that I think, is a constant or a continual assessment. I don't know that, you know, I don't know that there's, you know, it kind of goes to that, like, what's what's been indoctrinated and what needs to be unlearned, and kind of the whole decolonizing therapy, but I think that there's, there's definitely things that feel inherently true to me, because of when I learned about them and and how they were just kind of organically fold it in. And I would have liked to have that assessment that personal values assessment around which theory fits best for me early enough on so I'm glad we're talking about it, hopefully, the students are going to do those assessments for themselves. But, um, but I don't know that I even thought to do it, because it was, you know, everything was kind of a truism. Like, this is what psychology is, you know, back in the olden days, when I was trained. Curt Widhalm 12:20 And you what you're leading into, is this second on this list, which is people to series, because it's what makes sense logically, yeah, it's, oh, I can see how a leads to B leads to C. And this might lead to some more of those directive type therapies and CBT being an example of this, where but I think in, it's not just let me get to CBT. It's also being able to look at anything from a comprehensive way. And as much as I know, students, and really anybody else hates doing case conceptualizations it's an important factor to be able to see this is how people fit logically into this set of patterns as described by this theory. Historically, I have seen some pushback from educators and supervisors as far as this approach when it comes to trying to make clients fit into a theory, rather than hearing the client stories. And this is where I think most educators, most researchers when it comes to this, and we'll put some citations in the show notes. But people like Lazarus, Norcross and golden freed, all talk about the importance of learning a variety of theories. So that way you can shift to when clients don't fit a particular one that you're still able to practice in a way that makes sense for them. So having some theories that do make sense to you make sense. But don't, don't fall just into the logic trap of everything needs to follow into this set of patterns. Katie Vernoy 14:05 Completely agree. And I want to just acknowledge that what makes sense to you may be what you were trained, which I think ties back into, it makes sense to me because that's what my supervisor taught me. And that's how the, the practice of doing therapy, this is what it is, and this is what makes sense to me. The follow on to that is the importance of either having a supervisor that has this kind of palette of different orientations and teaches to all of them and and has that as part of your supervision or having a number of different supervisors across your internship or trainee years or your associate years so that you can get your own perspective on something versus this is how it logically fits into the model I was trained by my one supervisor. Curt Widhalm 15:02 And this is getting a comprehensive understanding, not just not just like, oh, we covered this in class last week, and I should try this out on clients. And here's parts of it that work. And because it worked, it made sense to me. But it does take a ability to get in to the depths. And I've always kind of naturally described this as you need to know the theories well enough to know when not to use them. And knowing that you should be able to shift to something else is the level of depth that you need to know. And rather than just forcing clients to do something, because the theory says that it should work means that you're maybe not quite there yet. And that's where having a more comprehensive understanding of switching between theories, or utilizing aspects of different theories, together with intention definitely helps out. Katie Vernoy 16:04 Oh, for sure, I think to me, I see folks that are very immersed in a single theory, or a single orientation. And I think there are reasons to do that. I don't want to say anything negative about folks who do that. But to me, that wouldn't fit for me, because I would have to refer clients out who I could serve with a different theory. But specifically, I'm talking, the most frequent one that I see are, are people who are like doing comprehensive DBT. And that's their whole practice. And then there's also folks that end up doing a lot of EMDR, I feel like that's become less because there's so many people that have been trained in EMDR at this point or anything. But the DBT thing, it requires a lot to set up, you have to have a consultation team. You know, if you're doing comprehensive stuff, you have to have a group with CO leaders, there's a specific way you run your individual session. And it works really well for the folks that works for. And I think that the comprehensive DBT therapists who only do DBT would argue they know who it's not for, and they refer them out. For me, I don't think I'd be comfortable with that. But I think the level of knowledge to determine that, I think is is higher than I think some folks who initially come into a single theory, and maybe this is where the question came from is I need to have my orientation. And it's like, should I become an EMDR? therapist, or a DBT? therapist, or a CBT? therapist or a blank right? kind of therapist? And I think very few people end up with just one orientation, I believe. I think when someone's learning an orientation, you know, and I've seen this with like EFT folks, they go really deep into it. It's like they have, you know, at least a portion of their practices only EFT. I think that there is there is a and I'm talking about Emotionally Focused Therapy, not Emotional Freedom Techniques. Right? I understand there's two FTEs. But But I think that there's a necessity when you're digging deep into a very specific theory maybe to focus in on it. But I really like this idea of having that palette of orientations and intervention so that you can shift when it makes it makes sense. But what would you say for folks who are single theory that there is a different developmental stage? Or do you feel like it's folks that have a different style? Like, where does that fit? Do you think? Curt Widhalm 18:41 You know, it's interesting that you talk about the DBT therapists, and when I talk with other therapists and in the community, and some of you are listeners of the show it sometimes I get accused of being a DBT therapist, I know I heard that recently. And I liked DBT, I've done some workshops towards, you know, learning DBT a lot of it, a lot of it makes sense. I'm not trained in DBT. But just the way that I understand where these comments are coming from is for a lot of DBT therapists, it's also ways that you run your life, and it's ways that fall into that first category of almost being value based. And with the bonus of things making sense. And also with the the third category here that we'll be leading into in just a moment, but it's a very comprehensive structured package that also immerses the clinician in needing to be in that lifestyle, too. I don't see this with other theories quite to the same extent. You know, I think they you bring up EMDR I think that there's a very big mindfulness component of it that the good EMDR clinicians that I know tends to exhibit as far as their practice. I don't necessarily see it when it comes to some of the more directive therapies that I don't see solution oriented therapists being like, standing in front of the the milk cartons in the grocery store being like, this one is an eight out of 10 solution, but this one over here is a nine out of 10 solution. Maybe they do, maybe it's just internal, I don't know. And, but the people that I really do see, stuck very much into single theories really aren't practitioners, it's researchers. And it's people whose research is based on needing to stay within a particular theory. And, you know, while I do have respect for the CBT therapists out there, it's those people who are like, well, everything's CBT, you know, that's just, you know, CBT with this or equine therapy is just CBT with more horsepower, or, but our third category is that people choose theories because they like techniques, or they like interventions that come from that theory. And it may not be the most comprehensive way of choosing a theory, it might be something that you find that a particular set of interventions works for certain situations. It's from just that description of it go further than that, like yes, yes, you know, you can't be in the middle of psychodynamic and being like, you know, what, we need some intermittent reinforcement right here. But it can be a place that starts you into getting more of that comprehensive look at a theory if what you find is that a certain technique ends up working, learn more about the theory. So that way, you can understand how it fits comprehensively in the explanation for why a client's pattern of behaviors or outlook on the world may be influenced or susceptible to being changed by that kind of an intervention. Katie Vernoy 22:13 As you were talking, the thing that came to mind, for me, was the validity of this kind of construct. So I'm getting really far afield. So we'll see if this bears fruit. But there are some theoretical orientations that feel very rich, they feel like they have a lot to them, that you can really dig your teeth into them. They're a way of conceptualizing a case with potential suggested interventions or ways of being with the client in the room. And there are others that feel a little bit more stilted or really based on someone trying to put stuff together. So they can prove a point with their research or a slight change to something that's already present and all of that. So I guess I'm kind of pushing back on, needing to have a really in depth understanding of all of the orientations. And I know, you didn't say that, but like, there's some of this where I think about how I actually work. And I, it's almost kind of a post hoc description, saying that I'm existential, or I use narrative, or I've got psychodynamic or or CBT, or DBT, or whatever. Like, to me, it's something where and this is potentially more of a later career situation. And I'm sure you experienced this too. I have absorbed so much knowledge from so many different continuing education, things, different clinical consoles, and conversations. That to me, and this kind of talks about, I think what Diane was putting forward is that there's so many orientations at this point that it's gotten ridiculous. And so she's simplifying it doing something and we'll, we'll put Dr. Gehart's episode in our show notes, the link to it, but, but to me, I feel like there's so much I've absorbed so much that is similar. It's so much that goes together. And maybe this is about making sense and having techniques. And so it's not the strongest way to do it. But I don't know that I'm ever consciously thinking, Well, I'm going to approach this client with CBT to start and then we'll see if it goes into something else. Like I feel like I'm meeting the client. I'm hearing what they have to say I'm conceptualizing it probably from two or three or four different theories because they kind of all melded into one. And then I'm doing interventions based on my conceptualization, but it doesn't necessarily tie and maybe this just is lazy. eclectics eclecticism but it doesn't necessarily apply. Like I'm going to start with this orientation and move to this one then move to this one and that feels to in a box for me and how I actually practice. Curt Widhalm 24:52 I think that with practice, it ends up becoming where, when you're versed in a couple of different theories, you see that certain things are going to be better approached in certain ways. If a client's coming to me, the intake phone call is to deal with trauma, I'm immediately going to go to my trauma modalities. First, as far as how I'm listening for the story developing, somebody is coming to me for something like obsessive compulsive disorder, I'm pretty much going to be going to what's an exposure and Response Prevention Plan. Part of these are where research shows some of the effectiveness part of this is really being able to look at how things make sense. And honestly, for me, part of it is how am I going to be most effective at utilizing something that I can be decently good at some theories that research shows, you know, 95% of people who get CBT by this are fixed by this. But if it doesn't fit with how and how I think about the approach, it's something where I may only be 75%, effective using CBT, with something where I might be 93% effective with something else. Yeah. And so part of that also does look at the influence of who I am. And one of the people that really led the way, as far as this kind of thing is one of those people who had a theory, and that was Milton Erickson, who was largely just kind of seen as it was his relationship with his clients. And yeah, he did a lot of strategic therapy work, but it ended up being him pulling from stuff that worked in the moment because that's what worked for him and the relationship that he had with his clients. So I Katie Vernoy 26:49 guess the point that I wanted to make with that a new just kind of set it in a different way. But I want to make sure we're on the same page is it can be very fluid, it doesn't need to be I start with a conceptualization that is tied to one theory. And I make a treatment plan that's tied to that theory. And then if it needs to shift, I shift to a different theory. It's really to me it feels way more fluid than that. And like I said, I'm existentialist I'm, I'm a Yalom existentialist where it's really just about the relationship and being a real person in the room. So it gives me a lot of freedom to conceptualize things differently. But I think it's hard to describe it to someone that's just starting out when they're like, Okay, what do I do in therapy, and it's like, we'll be in the room, see what's happening with the client, and provide them what they need. I mean, like, that's kind of how I that's, that's my orientation. Curt Widhalm 27:45 So I do want to point out that there are a handful of different instruments that are out there that you can look at, take it with a grain of salt. You might talk about the ways that you might view the importance of aspects that might steer you in the direction of looking at theories that might more naturally come to you. A couple that we've come across in preparation for this episode. One is the theoretical orientation scale, developed by Smith in 2010. It's 76 questions that you fill out Likert scale types, you score it, it points you to sub scales that might fall across a couple of different theories that you might want to look at. Another one is a 40 item scale called the counselor theoretical position scale. This was developed by Posnanski. And McClellan, either of these might be things where if you're looking for a questionnaire that is based on where you're kind of already existing, as a person might steer you into some directions to more easily find, I might want to research this more, you get into practicing that way, you might find that it continues to gel with you, you might find that parts of it gel with you. But if you're looking for a little bit more of a direction, if you're not quite familiar with a number of different theories, yet, these might be some starting places for you to look at as well. Katie Vernoy 29:15 And I think the takeaway that I want folks to have or a takeaway that I want them to have is that theoretical orientation actually can be very fluid over over time, you can start with, I really want to dig into narrative and you do narrative therapy with a lot of your clients. you conceptualize it that way. Maybe you have a few other things that you're doing in the background and not just adhering to one theory. But over time, there may be something else that comes down the pike. You do a training on Emotionally Focused Therapy EFT I have a lot of people that they later in their career, start sending EFT and they're like I'm completely changing how I'm working. This is an awesome way to work with couples or even individually EFT or you Find DVT later and you start digging into that, and you really understand the conceptualization, those things. I think people get really freaked out. And part of it is, I think, the interview questions. I've even designed them, like, what is your theoretical orientation? Like, I think people get freaked out that they have to choose an orientation, and that sets them up for the rest of their career. And I don't think that's true. I think that they there there is certainly foundational work that may stick with you forever. And so you don't want to be mindless about what you choose to focus your attention on at the beginning of your career. But I think it is something where it does shift, you're going to be impacted by research that hasn't even been done or theories that haven't even been concocted yet. And so I think find things that gel with you I'll use your word there and and dig into them, but but don't fear that you're going to be locked into a particular orientation for the rest of your career you You most likely won't be, Curt Widhalm 30:54 we'd love to hear how you came up with your theories or further questions that you might have the best place that you can do that is over in our Facebook group, the modern therapist group. You can follow us on our social media and we'll include links to those as well as the articles and measurements and citations in our show notes. You can find those at MTS g podcast.com. And until next time, I'm Curt Widhalm with Katie Vernoy Katie Vernoy 31:22 Thanks again to our sponsor Buying Time Curt Widhalm 31:25 Buying Time's VAs support businesses by managing email communications, CRM or automation systems, website admin and hosting email marketing, social media, bookkeeping and much more. Their sole purpose is to create the opportunity for you to focus on supporting those you serve while ensuring that your back office runs smoothly. The full team of VAs gives the opportunity to hire for one role and get multiple areas of support. There's no reason to be overwhelmed with running your business with this solution available. Katie Vernoy 31:54 book a consultation to see where and how you can get started getting the support you need. That's buyingtimellc.com/book-consultation once again, buyingtimellc.com /book-consultation. Announcer 32:09 Thank you for listening to the modern therapist Survival Guide. Learn more about who we are and what we do at mtsgpodcast.com. You can also join us on Facebook and Twitter. And please don't forget to subscribe so you don't miss any of our episodes.
In this episode, Kirsten Wilson, the Bulldog Educator, shares about a sense of belonging both for yourself and your role within an organization to create a culture of belonging. Resources from podcast mentioned or utilized: Blog post “What is the Sense of Belonging” (https://www.verywellmind.com/what-is-the-need-to-belong-2795393) published in March 2021 from VeryWell Mind by Kendra Cherry Human Resources Gartner Blogpost titled “Build a Sense of Belonging in the Workplace” by Jackie Wiles (https://www.gartner.com/smarterwithgartner/build-a-sense-of-belonging-in-the-workplace) 6 questions, using a Likert scale to rank from strongly disagree to strongly agree: I generally feel that people accept me in my organization: I feel like a misplaced piece that doesn't fit into the larger puzzle of the organization: I would like to make a difference to people around me at work‚ but I don't feel that what I have to offer is valued: I feel like an outsider in most situations in my organization: I am uncomfortable that my background and experiences are so different from those who are usually around me in the organization: For more information on this survey you can find it here: https://www.pentabell.com/article/6-questions-to-mesure-your-sense-of-belonging-in-the-workplace Don't forget to subscribe to this podcast and follow us on Twitter, Facebook, and/or Instagram. You can also email her at thebulldogedu@thebulldogedu.org. --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Shallow evaluations of longtermist organizations, published by NunoSempere on the effective altruism forum. Introduction This document reviews a number of organizations in the longtermist ecosystem, and poses and answers a number of questions which would have to be answered to arrive at a numerical estimate of their impact. My aim was to see how useful a "quantified evaluation" format in the longtermist domain would be. In the end, I did not arrive at GiveWell-style numerical estimates of the impact of each organization, which could be used to compare and rank them. To do this, one would have to resolve and quantify the remaining uncertainties for each organization, and then convert each organization's impact to a common unit [1, 2]. In the absence of fully quantified evaluations, messier kinds of reasoning have to be used and are being used to prioritize among those organizations, and among other opportunities in the longtermist space. But the hope is that reasoning and reflection built on top of quantified predictions might prove more reliable than reasoning and reflection alone. In practice, the evaluations below are at a fairly early stage, and I would caution against taking them too seriously and using them in real-world decisions as they are. By my own estimation, of two similar past posts, 2018-2019 Long Term Future Fund Grantees: How did they do? had 2 significant mistakes, as well as half a dozen minor mistakes, out of 24 grants, whereas Relative Impact of the First 10 EA Forum Prize Winners had significant errors in at least 3 of the 10 posts it evaluated. To make the scope of this post more manageable, I mostly did not evaluate organizations included in Lark's yearly AI Alignment Literature Review and Charity Comparison posts, nor meta-organizations [3]. Evaluated organizations Alliance to Feed the Earth in Disasters Epistemic status for this section: Fairly sure about the points related to ALLFED's model of its own impact. Unsure about the points related to the quality of ALLFED's work, given that I'm relying on impressions from others. Questions With respect to the principled case for an organization to be working on the area: What is the probability of a (non-AI) catastrophe which makes ALLFED's work relevant (i.e., which kills 10% or more of humanity, but not all of humanity) over the next 50 to 100 years? How much does the value of the future diminish in such a catastrophe? How does this compare to work in other areas? With respect to the execution details: Is ALLFED making progress in its "feeding everyone no matter what" agenda? Is that progress on the lobbying front, or on the research front? Is ALLFED producing high-quality research? On a Likert scale of 1-5, how strong are their papers and public writing? Is ALLFED cost-effective? Given that ALLFED has a large team, is it a positive influence on its team members? How would we expect employees and volunteers to rate their experience with the organization? Tentative answers Execution details about ALLFED in particular Starting from a quick review as a non-expert, I was inclined to defer to ALLFED's own expertise in this area, i.e., to trust their own evaluation that their own work was of high value, at least compared to other possible directions which could be pursued within their cause area. Per their ALLFED 2020 Highlights, they are researching ways to quickly scale alternative food production, at the lowest cost, in the case of large catastrophes, i.e., foods which could be produced for several years if there was a nuclear war which blotted out the sun. However, when talking with colleagues and collaborators, some had the impression that ALLFED was not particularly competent, nor its work high quality. I would thus be curious to see an assessment by independent experts about how valuable their w...
Max: Hello and welcome back to the Recruitment Hackers Podcast. I'm your host, Max Armbruster, and today on the show, I'm excited to welcome Bas van de Haterd, and not the way he was referred to by the great podcaster Chad Sowash as, well, I don't know, you tell us Bas, how he butchered your name, but Bas, hopefully I get it right. Bas is a professional snoop, is his title on LinkedIn and how he introduces himself. He's a consultant for the talent acquisition professionals who are looking to revisit and improve their process, and today, we agreed we were gonna have a conversation on the world of assessments. And notably, assessments, everybody's been looking into assessments in 2021 and deciding, is this the right time to revisit? So, we wanna dig into Bas's brain to find out when is the right time to change your assessment strategy and what are some case studies that we can learn from. So, welcome to the show Bas.Bas: Awesome to be here, awesome to be here, Max.Max: And sorry to hear about your American friend Chad butchering your name. You were telling me, Bas, for those who don't know your work, you're very present on social media, so, maybe, where can they meet you on the internet? Where's the good place to interact with Bas?Bas: On personal interaction, it's usually LinkedIn. If you just wanna listen to my views, The Talent Savvy Podcast is a great one to subscribe to as well. And of course, I am avid member of [unintelligible] Recruiting Brainfood Group by Hung Lee which we've also digitally met before, Max. I tried to keep it down a little because I was too active there according to some people, but it's a great source of inspiration for me and I try to add a lot of information on assessments and strategies. Max: In valuable resource, I've made it a mandatory reading for anybody in my company as well. The Recruiting Brainfood by Hung, great source, and also an active community on Facebook. So, great place to interact with Bas, and what was the name of the podcast again?Bas: Talent SavvyMax: The Talent Savvy Podcast, so you can find Bas more for more insights there. So, let's jump into the topic and let's talk about assessments. That's a hot topic in 2021, as I was saying, because it seems like a lot of companies have decided to deal with numbers, finally. The balance has changed a little bit, we have more candidates and less recruiters during a part of last year at least. And so, naturally, assessments came to the fore and people, a lot of vendors have also appeared in the last couple of years that are credible vendors that can do all kinds of assessment. I've had a few of them on the show. So, what's been your, you are like, from a consulting standpoint, are companies revisiting the way they do assessments and are they coming in and asking you for help, of that sort, or do you have to really shape those discussions good that you're just happier with? What works, don't break what works and we have an assessment based on it and we need to change. Are you pulling or are they pushing?Bas: No, I'm usually being asked, you know, can you help us? The downside is usually in the budget there is no room for an external consultant, so, I'm most often get asked for free advice and as soon as I'm like, well, how am I gonna make any money off this, and they're like yeah, we never thought of that. But you do see a lot of companies now revisiting their assessment strategies. I actually do see a lot of difference in there. So, in my home country of The Netherlands, a lot of governments are looking at it, both national as well as in the local level because they've now read so many stories in part published by me and a lot of other people, how assessments done well can actually help your diversity and inclusion and be more fair in your selection process, and for governments that's of course very important. So, there's a lot of governments who have actually done amazing cases which is really interesting to see. You know, the most traditional organizations you'll probably think of being the most innovative and piloting cool, proven, and new technologies in a really smart way. And actually, now also, and I love that about them, they feel the need to also go externally with their data with their knowledge, and just share what they did and share what the results are. So that's how case studies are coming out. A lot of them government related. I see, interestingly enough, Scallops don't see a mis-hire as being part of the process, they see a mis-hire as something they need to improve. A lot of Scallops are ditching the resume as their first point of entry very quickly because they have one or two mis-hires, and they're like, yeah this cost us a lot of money and we have a culture in this company that if something fails, that's not a problem, but we should learn from it. So, they don't consider mis-hires as something that is part of the process, that's unavoidable, like a lot of recruiters do. They see it as, okay how do we prevent it from happening again? And you really see an awesome development there and so small companies are implementing all kinds of assessments. Sometimes good, sometimes not, because as you said there's a lot of new vendors on there. A lot of them are awesome, some of them are complete and total crap, to be honest. Max: I'm totally fascinated by what you just said on governments jumping into the foreign, like, innovating, initiated by a consciousness and an awareness on fairness and inclusivity. So, some strong innovation has been driven by this sort of alleviated political discussion which has therefore push the buyer to say, okay well we're gonna remove some of the human error. Bas: Yes, and a lot of them, most of them, let's be very honest, try to do it the traditional way. Oh, we'll do a gender-bias training, and that will at least check the box. But in some cases, for example one of my major clients is the Dutch Ministry of Foreign Affairs, they now have a Head of Recruitment who isn't originally from HR, he just got in there, he worked in an embassy for 25 years. And he was just looking at the selection process and said to me at an event, listen I think it's really strange how we do it, and I'm like, yeah, I totally agree with you. Okay, cool, we're gonna redo this. And he was looking at it with fresh eyes, and of course, there was some push back from the recruiters at first, but we've always done it like this. He's like, does that mean that it's good, the fact that we've always done it? So, he asked all the questions which you should ask. History doesn't mean it's good, it doesn't mean it's bad either, but we need to revisit our original thoughts and they were basically sending the last candidate to an assessment, because they said, listen we need to have an assessment in there et cetera et cetera.Max: Like a QA check at the end of the production line.Bas: Exactly. And he literally said like, listen about, almost nobody gets rejected by the assessment, so we're basically spending a lot of money. I'm getting an external company to sign off on what we've already decided. Max: That sounds like governments, sounds just right.Bas: Yeah, and this guy, although he's been in government forever, he said, now you're telling me, because he saw one of my lectures when we, it was pre-covid when we were still able to go to events and stuff like that, and he said, and now you're telling me boss that by moving it to a different part of the process, basically putting it all the way up front, for the same amount of money, or maybe even less, I can have two or three or four times quality? I'm like, yeah pretty much. He's like, let's do this. Let's do this. He says, I have no idea what it's about. People in my team, if they had an idea, they should have spoken up a long time ago, so obviously they don't have an idea. Please help us.Max: You raised something very important here. You said, for the same price or maybe less. I think that's one of the reasons why people have been quitting the assessments at the end of the line is they say, well I pay on a per assessment basis, so I don't wanna spend that kind of money. I don't wanna spend 10, 20 dollars for a candidate. Is that changed? From your experience, is that a good way to save money?Bas: It depends, which is a very [unintelligible], but we've got a lot of suppliers who now say, listen we're gonna charge you depending on the number of hires you make in a year, and we don't care about the number of applicants, or we're gonna charge you a fixed price anyway, or we're gonna charge you based on a number of candidates which you are never ever gonna be reaching, so who the hell cares? You've got those, and you still got more the traditional suppliers who moved online and they're like, still --Max: Like Berlitz and things like that.Bas: Yeah, and interestingly enough every country has their own set of suppliers because there's a lot of, there's actually interestingly also a lot of bias in a lot of assessments, which the suppliers will deny, but I know which assessments have which risk for bias. And those are also nationally.Max: You know their [unintelligible] once.Bas: I'll give you a simple example. If you do a Likert scale that's like 1-5, a Dutch person will always answer a 2 or 4. We are never on the extremes, we are never extremely bad, we are never extremely good. If you make us choose between two things, we will never say we are not able to do one. If you ask an American, it's always a 1 and a 5. They're either great or terrible at something. But, if you start using this data to match with applicants, you've got a cultural problem in there. And the interesting thing is people with a bicultural breakarm from countries like Turkey and Morocco where we, and The Netherlands have a lot of them, have the tendency also to go to the 1 and the 5. So, despite the fact that that test itself isn't biased, the way people read the test or used the test could be biased or is biased, and because by law it's not allowed to ask somebody for his ethnicity in Europe, you can't correct for that. And funny thing is, every major supplier corrects based on national levels, yet never tells that in public against their clients, to the clients. And they can't do it internally. We know that an Italian will fill out, with the same characteristics, will fill out a test differently from a Swede. That's just, that's been registered a million times. Yet, what about the Italian living in Sweden, that means you get a wrong test. So, those are --Max:: Sounds like an impossible conundrum, I don't think we'll have time to fix it on today's discussion. But give us, can you share, I know you've prepared some examples of people who did a before and after and who had an assessment system that they thought worked, and then they revisited it. So, let's jump into those if you don't mind, Bas, and you had an example from, what was the one that you want to start with? Help me out.Bas: I think the one which I really like because it's the most simplest of assessments is from the Dutch Post and they did it for, basically, for package delivery people. You know just the people driving around in a van all day. And they simply changed from asking a lot of data in a resume to sort of a structured questionnaire, interview-like assessment, application process. First of all, they looked at the questions they were asking, and it turned out that some of the locations were asking, how long do you have your driver's license, and others were asking, how many kilometers a year on average do you drive. Turned out the last one was a lot of more predictive, so they were simply looking, and a lot of applicants actually feedback, because they interviewed recent applicants as well, and they said, listen we get the same question two or three times in the process, which we're annoyed about. Sometimes, in the first interview, the phone screen, they would be asked, how long do you have your driver's license. And then in the interview they'll be asked, how much have you driven this year. For them it's the same question. And they we're like, well if the first one isn't relevant, why is it still in there? So, they made a basic set of questions, both on applications as well as in the phone screen, and they were able to, they piloted it which I loved about their case and that's why I'm sharing it. They started saying, okay, we've got 15 locations in The Netherlands, 5 of them are going to use the new system, 10 of them will continue as is for now. So, they had the perfect quality control of is this really better or is it --Max: Like AB Test?Bas: Yeah, perfect AB Test. And the pilot locations saw the early attrition. So, people leaving within six months of signing up the contract, which is really expensive, dropped from 17% to 12% in a quarter, while the other one saw it increased from 14% to 23%. Because the market was tightening, early attrition was increasing everywhere except for the ones where they did the new selection assessment strategy. Max: So, in this case, they really just changed one question and --Bas: No, no, no, they changed a whole lot. I'm just giving one example, they changed -- Max: That was the one question that made a big impact.Bas: They changed the entire process from basically letting the recruiter decide what questions to ask to having a structured interview for everything and looking at the relation between the questions and being able to tweak it. Max: They centralized the screening process and standardized it rather than leting the recruiter set their own questions, that makes sense. [overlap] It's the same question, like how long you've been driving and how many kilometers you do every year. But one is obviously better because it's gonna, you know, it's closer you to what the job actually is. Nobody cares about --Bas: Exactly and especially if you notice that one reason for early attrition was apparently that people didn't like being in a car all day, which is something you are if you are a package delivery guy. So, another really cool case study comes from completely different market, a stock market trader, and the reason I love this case is that they actually, the good thing about financial institutions is they have a lot of money, so they were able to simply run two assessments side-by-side for two years. And seeing you know, what's the predictive value of [unintelligible] they already had a process. The thing about the stock market trader, you gotta understand Max, you can't really have a bad hire because it's potentially can cause you millions of dollars. They're trading on their own accounts so you really can't make any mistakes. And what they did was they had a more traditional assessment with questionnaires, with cognitive tests, et cetera et cetera. That was pretty good, but it also had a price per assessment. So they're only recruiting from the top universities like Oxford, Cambridge, INSEAD, and if you hadn't been there, you shouldn't be applying. They had to do a CV check which they knew had no predictive value whatsoever. They literally said like, we're hiring students, except the school you went to, what could possibly be on there? Absolutely nothing. But we need to do something because too many people wanna be a stock market trader because it's still a job which inspires a lot of people because you can make a lot of money in a short time. And they, basically, they went parallel in their test for two years and because of all the feedback they had from, okay, this person was hired, we didn't let him go really quickly, this person was hired we let him go. And I'm especially saying “him” because they actually never hired a female stock broker until this year. If you're talking about diversity, they just hired their very first female stockbroker because now what they're doing is they're making brain profiles, as they call them, which is basically a next generation cognitive test by a company called Brains First. They're able to, they've gotten an insane amount of really interesting game-based cognitive tests. I always call it like four different shooting games, I actually love playing them. Yes, they're long, they're 45 minutes, but when I finished, I was like, what finished already? While if I'm doing a 20-minute questionnaire, in 10 minutes I'm like, oh god I'm only halfway there. That might be my gaming background. I know I listen to your podcast with the guy from Activision Blizzard, you have a gaming background too, I know you'll love this game, Max. Max: Okay, I'll check it out. I'm on their website right now, Brains First. Forty-five minutes for an assessment seems like an awfully long time but if you have the kind of career that attracts a lot of candidates that they just want to work for you, then why not? You know, you have that luxury, it doesn't work for every employer.Bas: It doesn't work for every employer but in their case, it worked really well, and they were now able to, first of all, screen everybody so they're seeing diversity, especially in their case, the diversity of all the universities they're recruiting from increased. They now actually, and I love this about it, they say, listen on our career site, there's a button, check if you have the brain of a trader, so you can actually check if you're going to go to the second stage of the process before you actually apply. I mean, isn't that cool? You can take away the anxiety of an applicant like, okay you're good enough or not. And like I said they, for the very first time, were able to hire a female trader this year.Max: I think going back to the, you know, great example and people should check out Brains First if they're hiring for people who are quick, you know, they need a quick mind, right, there are quick reaction time and resilience, so that could be a good solution for them. We started chatting about what's a good time to rethink your assessments. I was thinking some of the symptoms of maybe this is the right time, is when you see examples like HR treating the assessments as a necessary step to get through and like when they're, sometimes you can even see recruiters who are coaching and preparing the candidate before the assessment because they really wanna get through it. They want them to pass, right? So, they say, oh yeah this is how you're gonna pass and then that way we can get over this thing. You know that's a pretty clear sign. Are there other kind of signals people gotta look out for that now is the time to revisit or what's the cadence at which one should revisit his assessment strategy?Bas: Well, I actually think that by definition you should revisit your process every couple of years at least. But right now, what I've been hearing a lot is we can't find anybody. You know there's just not enough good people out there. I've seen a lot of case studies also with these assessments where you're not lowering the bar but you're opening it up to an entire audience which you never would have thought of. I'll give you an example, air traffic control, which by the way, also uses Brains First, and I'm not at any way affiliated with them, but they just have awesome case studies and they publish them, so I love them for that. In the air traffic control, it used to be that you needed an academic degree, then they said college degree is good enough, and now they're actually saying if just finished high school you can apply because with our test, we're able to actually assess if you're good enough. And for example, one of the things which is really important for being an air traffic controller is stress resilience, that's something which isn't tested in college or in a university. And they opened up this entire pool for people with much lesser or no education while, and this is the beauty of it, while increasing the quality of hire by 120%. Max: It's a beautiful time to be in HR and to be in [unintelligible] in recruitment. To have access to these kinds of insights. To say, I'm now hiring an air traffic controller because that person stays cool under pressure, and I can measure that scientifically. These things didn't exist ten years ago. So, for probably the majority of the jobs, if you haven't revisited your assessment strategy in a while, you should do so regularly because it's moving so fast.Bas: I'm not saying that the resume or experience have no value because for some jobs, I love the fact that if I'm flying, my pilot has a pilot license, and I love the fact that if I'm in the ER that the nurse is a registered nurse. I'm not saying it works for every job, but I've seen awesome cases also on hiring recruiters who never got a chance and who are awesome at the job with assessments. I recently saw one where, at one of those cities, one of the local governments, and they said, okay, for this job basically 95% of everybody doing it is gonna retire within the next five, ten years. It's really an old man's job. So, they were like, well we can't hire anybody with experience because then we're gonna be hiring somebody again in a few years. We're only postponing the inevitable, but we have all these experienced in our company, in our organization now. People who are retiring who don't mind sharing their knowledge, who would actually love to share the knowledge, but there's no official education for this job. They call it the digital archive person, basically. It sounds like the most boring job in the world, but a lot of people love it. You're basically the digital librarian of a city, knowing where I can find all the information on who owns what plot of land, what was there historically, could it be contaminated ground. All those kinds of stuff.Max: Sure, some people are like that. That's a job for someone. Bas: Exactly, but, and what they're now doing is also assessing. They're just telling people like, okay, I don't even want your resume because we know you will have no experience which is relevant for this whatsoever because it's such a unique job. These are the qualities we expect from you, here's the test, show us you've got the quality, and the best five from the test will get invited for an interview. They recently did this one and they hired a 24-year-old woman, which was the first woman in that organization doing this job ever and it was the first person under 50 in a long time. Max: Oh, wow. Bas: And everybody is now saying, which is interesting, because of course just hiring at diversity doesn't mean hiring quality, but the feedback from within the organization is, wow this is such a fresh of breath air [sic]. And she learns so quickly because she was screened on having the ability to be able to do the job. Now she's not able to do this job yet but that's why there are like five old folks training her to do the job. Max: It's a lot of optimism I think I get from your stories, and we can avoid a lot of heartaches and hiring mistakes as well. Going back into your personal career, if you think back, somebody you hired or recommended for hire that was a mistake, I don't know if any kind of person comes to mind if I asked you that question. Would you, could you recount us the mistake and what you learned from it and maybe whether an assessment could have prevented it. Bas: Actually, an assessment is now preventing it. Yeah, I actually made the same mistake twice. Basically, hiring somebody I knew, a friend, who was first of all, apparently, not really fit for the job and it took me a while to figure out what qualities were necessary for this job. It's basically a researcher position, but a very simple researcher position. Twice I hired a friend on there. One was really, he just didn't have the cognitive capabilities and the other one was really hard to motivate, and if it's a friend, it's even harder to kick somebody's ass, basically. And they're still friends, but as employees I would never rehire them. Max: You're still friends but 10% less. Bas: No, no, no, no. Max: 100% [unintelligible]Bas: No, no, no. We're still friends but I would never rehire them, and they know that. And they --Max: And the assessment that could have prevented it?Bas: Well, I've actually developed a few tests now that are preventing it. So, for this research, I used to have, I would hire four or five people every summer to do a certain research for me, students. And now, I've got a few tests which is basically measuring your information processing speed, your scanning speed because you're researching websites, you're looking at --Max: And it's a test you built yourself, a home-built?Bas: Well, I took the academics which I knew measures the cognitive traits I needed to have and yeah, I had it built in by a [unintelligible] developer in Russia, because that was so much cheaper than actually buying one. But that's because I actually knew, the moment I realized the qualities I needed in my employees, because I'm an assessment expert, I immediately knew this test, this test and this test would work, and I was able to really --Max: And you're able to put it together very quickly.Bas: Yeah and I mean it's just three really simple academic tests. To give you an example, if you wanna know if somebody can scan a website really fast, you just give them a 20x20 grid of letters, you say there's one x in it, find the x. Max: Two seconds, boom, yeah.Bas: Well, yeah, and you've got two minutes to find these many x's as fast as you possibly can in different situations. Max: That correlates well.Bas: And that correlates really well, and of course, I checked if it correlated, and it did. And since that moment, I introduced and I've got three tests and I've hired better and acceptable people, but I have not had a single one completely misfire and before then I had at least one mis-hire every year.Max: There you go. Great. Thanks for sharing. I'm sure you've given us, our listeners, reasons to rethink their assessments strategy, maybe build their own home tests, because it's not that complicated to build your own tests or go out into the market to find what's available, or reach out to a consultant like yourself, Bas, to guide them to that decision and remember to pay you not all free advice. So, again, the best place for them to get ahold of you is on LinkedIn, Bas van de Haterd, and maybe you wanna share an email?Bas: It's my first name @ my last name dot N L. So bas@vandehaterd.nl you can reach out there, you can reach out on LinkedIn. If you wanna know more about assessments, do a vendor selection, output implementation, or if you wanna build one yourself, I usually don't recommend it because there's just so many awesome tools out there which are usually scientifically much more validated and you really need to know what you're doing in order to make it scientifically sound and there's a lot of law, especially in Europe coming up where you will be held accountable if you are using an unvalidated or not perfectly correct assessment, an AI system.Max: Proceed with caution. Don't try this at home. Okay.Bas: Well, yeah. Max: Okay, great. Thanks a lot, Bas. Thanks for coming in on the show.Bas: All right. Max: Hope you enjoyed my conversation with Bas. There is a lot more of Bas's conversations on Facebook and on LinkedIn, if you joined the white groups and and I think if you don't revise your assessment strategy and you don't take another look at what's available in the market every couple of years, you're definitely going to be missing out.So feel free to turn to Bas for advice or to turn into this show, we also feature a lot of assessments on the show and please subscribe to receive more.
Erica Harrell is the CEO of Erica Harrell Consulting, an education consulting firm created to help dedicated, overworked K-12 principals and district leaders create organized, strategic professional development plans to increase student achievement and improve staff capacity. Erica has over 12 years of experience in K-8 education. She began her career as a special education teacher and has held multiple leadership roles from instructional coach to principal to Director of Leadership Development. In all of her roles, Erica has always had the desire to grow and help others do the same. As a school leader, Erica has led teams to develop strategic and comprehensive project plans for multi-day and multi-week professional development series and coached leaders to ensure high-quality session facilitation. Under her leadership, a team of school-based leaders facilitated 4-week summer professional development with an average of over 90% of participants rating sessions and operations as “Platinum” (highest rating on 5-point Likert scale) multiple years in a row, she doubled third grade ELA scores in one year and coached 2nd year teacher to be the highest performing math teacher among all Achievement Network Schools. Erica is originally from Upstate New York. She attended University of Maryland, College Park for undergrad where she obtained a BA in Sociology and Communications. She holds a Masters of Education in Instructional Leadership from Relay Graduate School of Education. Erica currently lives in Maryland with her husband and one year old son.
In the latest of Learn with SurveySparrow, find out what is a likert scale and get the best tips to create a likert scale.
In this episode of The Campfire Jeremy talks with CGU epidemiologist Nicole Gatto about her ongoing research into the development and distribution of COVID-19 vaccines. Vaccines interact with our immune system in highly complex ways, as do vaccinated individuals with society at large. Discussion around viruses and vaccines has suddenly become part of a larger conversation about the shared risks and rewards of public life as the country slowly transitions to a state of post-COVID normalcy. This transcript was exported on May 19, 2021 - 03_Vaccines_and_the_Vaccinated_With_Nicole_Gatto (Completed 05/19/21) Page 1 of 12 Jeremy Byrum: Hi everybody, and thank you for joining us from wherever you may be. It's a little chilly out there, so welcome to The Campfire. I'm your host Jeremy Byrum, and today I'm excited to be joined by our very special guest CGU's very own Nicole Gatto. Dr. Gatto is an associate professor in the School of Community & Global Health at CGU, is the director of the PhD program in Health Promotion Sciences, and the interim chair of the IRB. She earned a Master of Public Health from the Fielding School of Public Health at UCLA, and a PhD in epidemiology from the Department of Preventative Medicine at USC'S Keck School of Medicine. With experience in communicable disease control and prevention with entities such as the LA County Department of Public Health, she focuses her research predominantly on environmental, genetic and lifestyle risk, and protective factors for chronic diseases. In the last year, she has lent her expertise to work on COVID-19 including a recent collaboration on vaccine hesitancy research with Riverside University Health System. Dr. Gatto is looking forward to traveling to Iceland later this year as a Fulbright scholar now that the previous pandemic restrictions to the program have been lifted. Nicole, thank you so much for joining the show. Dr. Nicole Gatt...: Jeremy, thank you so much for having me. It's great to have an opportunity to speak with you. Jeremy Byrum: Of course, and it's always great to talk to a public health expert and someone with your expertise, especially now that we're in the later stages, I'll say, of the pandemic, although it's crazy that we've experienced it now over the last year. I felt like the year went by, well depending on who you ask, either fast or very slow. Dr. Nicole Gatt...: I agree with that. Jeremy Byrum: Thank you for taking the time. Dr. Nicole Gatt...: You're welcome. Jeremy Byrum: So let's get started a little bit with a brief overview of your background, maybe a little bit on epidemiology for those who aren't aware of what epidemiology entails. So can you go over some of what you do? Dr. Nicole Gatt...: Sure, absolutely. So Jeremy, I think that probably before a year ago, most people had not even heard of epidemiology. And I can say this because when I would introduce myself as an epidemiologist, most people would ask me if I worked with bugs, I think they were thinking entomology, or second I would get if I worked with skin, so I think they were thinking epidermis. But epidemiologists are health scientists, we study diseases in human populations. I often say that we bridge public health and medicine, and essentially epidemiologists our goal is to understand risk factors, so things that make you more likely to get a disease, as well as protective factors, so things that make you less likely to get a disease. We then use our research, and others use our This transcript was exported on May 19, 2021 - view latest version here. 03_Vaccines_and_the_Vaccinated_With_Nicole_Gatto (Completed 05/19/21) Transcript by Rev.com Page 2 of 12 research really as the basis to make recommendations to prevent disease from occurring in the first place, so there's the reference to preventive medicine. So epidemiology, I think, as now most people appreciate, is a data-driven science, so we do depend on data from human populations to be able to do our work. So in doing our work, there's really two parts to it, so usually we begin by observation. So we observe, we characterize, we summarize, so this is what I usually tell my students is the who, what, when and where. We then use our data to ask research questions, so this is where we're asking the why. We want to know what are the explanations for the patterns that we observe in the data, so we go about designing epidemiologic studies to be able to attempt to answer these questions. And as far as my background, epidemiologists usually characterize ourselves either as chronic disease epidemiologists or infectious disease epidemiologists. And my expertise is predominantly in chronic disease epi, but I do have experience in infectious disease when I conducted surveillance of influenza and other respiratory viruses as an epidemiologist for the Los Angeles County Department of Public Health. And I would say because of the nature of this pandemic, I would liken it to all hands on deck moment for epidemiologists. So even those of us who may not have as much of a background in infectious disease epi, I think have been prompted to do what we can to contribute to solving a piece of this puzzle. I've consulted with organizations on developing protocols for safely returning to work, I've provided media commentaries and I've also been active in a number of research projects. So I published an article with Henry Schellhorn from Mathematics on Optimal Control with Uncertainty, so this was using mathematical modeling to predict conditions that are relevant to this pandemic. I also published another article with Wallace Chipidza from Information Systems on Early Media Coverage of the Pandemic, and pointed out opportunities for public health communication. I have a current project with one of our community partners, Pomona Valley Hospital and Medical Center, they were examining predictors of hospital mortality among patients who have been hospitalized with COVID. The project that I'd like to speak about today, addresses a very important issue and an issue that I think essentially relates to us being able to get out of this pandemic, and that is vaccine acceptance. Jeremy Byrum: Yeah. And that's really interesting. And I'm along with that public group of people who honestly didn't know what epidemiology was prior to COVID, so it was one of those new... And I think it was one of those search terms too, probably that when Google compiled the most searched terms in 2020, I'm sure epidemiology was one of them. Dr. Nicole Gatt...: And in the past I would say, well, epidemiology is from the root word epidemics. And so we are scientists who study disease in population, and I think because we hadn't had very many pandemics in our recent history that were of the scale and magnitude of COVID, I would then have to explain, well, there are epidemiologists who study other diseases like chronic diseases like myself. This transcript was exported on May 19, 2021 - view latest version here. 03_Vaccines_and_the_Vaccinated_With_Nicole_Gatto (Completed 05/19/21) Transcript by Rev.com Page 3 of 12 Jeremy Byrum: Yeah. And I think it was interesting what you said too in terms of infectious diseases, you said that you did have some experience in studying influenza. So when COVID started, I guess, when it came into the picture and it was compared a lot with the Spanish influenza pandemic of, I think 1918, how and why, I guess did... When you started learning about COVID, did it come into your radar? Dr. Nicole Gatt...: Yeah. Okay. Yeah. So I remember quite vividly in January of last year, when I first heard of the reports that were coming out of China of a series of cases of pneumonia of unknown etiology, and when we hear the words unknown etiology, usually that's when our antennas come up. And I remember at the time my attention was definitely peaked, but I also remember feeling quite worried. And that was in part because epidemiologists and public health professionals have really anticipated for many years, the potential for a global infectious disease pandemic like COVID-19. To your point of influenza, I think many folks thought it might be an influenza pandemic, but it was not, it was a coronavirus pandemic. Some of the reasons why we've been anticipating this sort of pandemic, have to do with the increased globalization of trade and travel. So really there's many more people mixing around the world, both people and goods than ever before in our history. I would also include the changing climate and our impact on the environment, particularly our continuing encroachment on animal habitat. So really we have the potential nowadays to come in contact with unknown pathogens like we have not had before. So these are some of the ingredients that are there to bring about these conditions for there to be a pandemic on this scale. And I can really remember those first few months of the pandemic, the stay at home order had been issued and I was stuck at home, and I really felt a sense of helplessness, I really wanted to be out there helping however I could. And so even though I have more of a background of chronic disease, I did take this as a personal call to action to do what I could from my training and experience to try to contribute to solving some piece of the problem. Jeremy Byrum: Yeah. You mentioned a little bit on your training when COVID happened, right? You basically had to shift your mindset from chronic to infectious diseases. So what are some of the logistical elements of that? Like, of course in the mainstream we've heard a lot about social distancing, mask wearing, and then now vaccines, which again, we'll get to, but how did some of that inform your consulting that you've been doing with these different entities? Dr. Nicole Gatt...: That's a great question. So one of the main differences between infectious disease epi and chronic disease epi is usually when we're talking about known infectious diseases, epidemiologists will shift their focus to understanding other aspects of how disease spreads. So we might try to understand more about transmission, we might try to understand more about factors on the individual level that might protect people, but in this case, when we're talking about an unknown pathogen, one of the biggest challenges is the unknown. So you've probably heard a number of scientists saying that we're learning to fly the plane This transcript was exported on May 19, 2021 - view latest version here. 03_Vaccines_and_the_Vaccinated_With_Nicole_Gatto (Completed 05/19/21) Transcript by Rev.com Page 4 of 12 as we're flying it, right? So there's a lot of information that we're learning as we're going along that's influencing our recommendations, what we know and understand about the virus. And it can be very challenging to be trying to issue what I would call real time advice and guidance while you're going along and learning about it. So, I mean, if you think about it, we've only really known about this disease for a year. And I like to remind folks that some epidemiologists, some scientists, in fact many, may spend their entire career studying a disease. And in the larger scope of things, we really have not had that much of experience with this particular virus and the disease that it causes. Jeremy Byrum: Where does the history of vaccines come into play when it intersects with the history of infectious disease and, I mean, by extension pandemics? Dr. Nicole Gatt...: Yes. So from a public health perspective, the reality is that vaccines are one of our interventions that are credited with contributing to significant declines in infectious diseases during the 20th century and the accompanying increases in life expectancy. So in the year 1900, the average person lived to be 47 years old, in the year 2000, the average person lived to about 78 years in the United States. So that's a 30 year increase in life expectancy that in part is due to our control of infectious diseases, and one way that we've done that is through vaccinations. So perhaps it might be a good idea to talk first about how modern vaccines work and then cover a brief history of vaccine hesitancy. So to start with, it's really important to know that our immune system is very smart. It can sensitively discriminate between subtle differences for a huge number of pathogens, so not only viruses, but bacteria, fungi, et cetera. And our immune system learns the identity of pathogens when we're naturally exposed to them and we get sick. It fights back against the pathogen by mounting an immune defense to get us back on our feet, but the reality is that this takes time. So as part of this response, our immune system remembers the pathogen, so the next time we're exposed, it will respond more quickly and more intensely to prevent us from getting sick. So what do vaccines do? So vaccines are really teaching our immune system to recognize a pathogen without our having to be exposed. So this effectively bypasses the time that's involved with the natural learning process. So vaccination results in our immune system being able to mount an immune response the first time we're exposed, preventing us from getting sick in the first place. And there's many different types of modern vaccines; there's inactivated, live attenuated, recombinant, viral vector, mRNA, but they're all essentially working off that same premise that I just described. There's a second way that modern vaccines "work", so in addition to protecting the individual, modern vaccines work also by protecting the community, and this we refer to as herd immunity. So essentially the concept is that when enough people in a population are vaccinated to a disease and therefore immune, if that disease were to enter the population, persons within the population who are not vaccinated and not immune are indirectly protected by those who are. So this I think is really important to why we all need to be This transcript was exported on May 19, 2021 - view latest version here. 03_Vaccines_and_the_Vaccinated_With_Nicole_Gatto (Completed 05/19/21) Transcript by Rev.com Page 5 of 12 vaccinated. Vaccines are protecting persons who are not choosing to be vaccinated, so persons who are not choosing to be vaccinated are really being able to enjoy the benefits from those of us who are. And I think it's also important that we not lose sight of the fact that we're all members of a society, we share a common environment, and we're really interdependent on each other. So when we talk about the reasons why people may be hesitant to be vaccinated, I think it's important to keep in mind some of the risks, which hopefully we'll also be able to talk about, but as a punchline, as members of communities, we really need to remember that we're sharing both the benefits and the risks. And if we all do our part, we're going to be able to maximize the rewards that we'll all share. Jeremy Byrum: Right. And I think too, you mentioned in your talk a little bit outlining that community as well, and also how the development of vaccines or I should say the history of not only your anti-vaxxers, but also your vaccine hesitancy has been around as long as the actual vaccines have. So can you go into a little bit of that history as well and where some of that public hesitancy comes from? Dr. Nicole Gatt...: Yeah. So in working on this project, I wanted to understand more about the history of vaccine hesitancy. And I found that it's not a recent phenomenon. So in fact, as I mentioned that health and medical scholars describe vaccination as one of our achievements from a public health perspective, the opposition to vaccination has almost existed as long as vaccination itself. And so some of this has to do with the early inoculation practices dating back to the 1800s, so it was not particularly painless, safe or sanitary to get vaccinated then. Governments in the United States and in Britain imposed steep fines to people who didn't get vaccinated or have their children vaccinated, so it was actually quite a punitive system, it was costly to people who were working then. So fortunately since then, public health has evolved away from these quite heavy-handed approaches to prioritize and favor educational interventions. But in modern times, anti-vaccination movements can be traced back to the 1970s when there was an international controversy over the safety of a particular immunization called the DTP, or diptheria, tetanus, and pertussis vaccination. There was a report from London, from a children's hospital at the time, which claimed that 36 children suffered neurological conditions following the DTP immunization. The opposition to the vaccination was fueled by television documentaries and newspaper reports, a parent advocacy group, and even some members of the medical profession. And unfortunately the controversy affected vaccination rates even though advisory committees reformed and confirmed the safety of that particular immunization. More recently, if we look to the late 1990s, probably a better known controversy is that over the MMR vaccine, so this is the measles, mumps, and rubella vaccine. There was a British doctor in 1998 who published quite a well-known study that suggested a possible relationship between the MMR vaccine and autism. This transcript was exported on May 19, 2021 - view latest version here. 03_Vaccines_and_the_Vaccinated_With_Nicole_Gatto (Completed 05/19/21) Transcript by Rev.com Page 6 of 12 So this was published in a well-known journal, the media picked up on it, igniting public fear. There was confusion over the safety of the vaccine. As it turns out that doctor was later investigated by the British Medical Council and he eventually lost his license, this was because they identified some quite serious violations, including a conflict of interest. There were issues with consent in the study, and he even had falsified some data. The journal eventually retracted the paper over 10 years later, but the damage unfortunately had been done. There was an anti-vax movement, which had grown out of this, it was credited as playing a role in the re-emergence of measles in the United States and other countries. And this was even though there had been a large number of studies which later confirmed that that vaccine was safe, and none of them by the way, found a link between vaccine and autism. So if we look at the history, there are really three major themes that relate to acceptance of vaccines, and I think we can see their roots historically. So the first is governmental authority versus individual liberties. So here, right? The concept is that people have a right to control their own bodies and those of their children and not the government. The second major theme is that from religious objections, so here is the belief that my body is pure, and vaccines are unclean or unchristian, in part because of some of the animal roots of vaccinations. And then third, relates to concerns about safety and the potential harm caused by vaccines and especially to children. So these are themes that are even true today, in people who feel hesitant about being vaccinated. Jeremy Byrum: Right. And I think you said something interesting too, when it comes to the damage already being done. I think probably a modern analog to that is the recent reports of the single dose Johnson and Johnson vaccine, which had a pause, and then now I believe doesn't have a pause. So I can imagine why that still plays into today's society, where one report like that comes out and it has a ripple effect into the community. Dr. Nicole Gatt...: Yeah. I think people are probably coming in with some level of skittishness. Unfortunately there's been a lot of misinformation which has been propelled through social media, which I think is different than in the past. So information is more readily available and so is misinformation. And the Johnson and Johnson vaccine being paused, actually I think is a testament to our system working. So we do have, and we can talk a little bit more about this later, we do have a system that reviews vaccines before they're approved for use. That same system also monitors the use of vaccines once they are being used in the population. And it's set up to monitor potential adverse events in persons, and then take a step back and confirm whether or not those are related to the vaccine itself. So that I think is a bit more of a testament of our system working the way that it should. Jeremy Byrum: Right. And also about that system, so maybe we can go from there and then transition that into some of the work you're doing in vaccine hesitancy research. This transcript was exported on May 19, 2021 - view latest version here. 03_Vaccines_and_the_Vaccinated_With_Nicole_Gatto (Completed 05/19/21) Transcript by Rev.com Page 7 of 12 Now with this vaccine particular, I know that there's several COVID vaccines, and I think for emergency authorization in the US, there's at least several that are available now, probably the two most popular are Pfizer and Moderna. Can you speak to how that process worked and how we were able to get a vaccine so quickly, and perhaps how that might lead to some of the hesitancy among the public? Dr. Nicole Gatt...: Sure, absolutely. So what's going on behind the scenes, which I think may not be quite as much in the public consciousness is that, these vaccines or their platforms have been in development for years. So for example, the mRNA vaccines, the Pfizer and Moderna vaccines, even though they are a new technology in terms of a vaccine that we're using now, those have actually been in development for a decade. So even though they are new, the research that has contributed to building the foundation has actually been around for quite a long time. So before a vaccine can be used in the United States, it has to go through a series of rigorous studies. So these begin in the lab, they continue in animals, and then they progress into humans. And folks hear about clinical trials, so there's actually three rounds of clinical trials in humans that must be conducted before a vaccine or actually any drug, can be approved by the FDA for use in the United States. Emergency use authorization is a designation which can move along more quickly the release of the medication or the vaccine to the public, but it has not shortcut all of that work that I just described that goes into the development of the therapeutic or the vaccine. So there are a team of scientists and medical doctors at the FDA who are reviewing all of the data from those studies, they're looking for safety, they're looking for effectiveness, and that essentially is what goes into the approval process. And then as I mentioned, the FDA is also involved in ensuring the safety of vaccines after they've been approved. So they check vaccine manufacturing sites, they inspect them to make sure that they're following good manufacturing procedures, they also monitor the use of the vaccine in the general population. So clinical trials are large, but they're enrolling on the order of tens of thousands of people. Once the vaccine is used by the public, we're talking about millions of people who now are taking them, so the FDA is monitoring the population for the occurrence of more rare adverse events that we may not have detected in clinical trials. And their work is focused on looking to see whether those potential adverse events may actually be linked to the vaccine or not. And think about it this way, the reality is that when we're talking about millions of people, there is the potential for us to see the occurrence of rare events, just because we have so many people that we're following that the potential is greater. So it could be by chance alone, that we're detecting these adverse events. So think about it this way, it may be that even if those people had not been vaccinated, they still may have experienced that adverse event like a blood clot. So the task of scientists involved in this type of work is to review medical records and try to determine whether there's a link between that medical event This transcript was exported on May 19, 2021 - view latest version here. 03_Vaccines_and_the_Vaccinated_With_Nicole_Gatto (Completed 05/19/21) Transcript by Rev.com Page 8 of 12 and the vaccination, it could just be a coincidence. And so far, the rare side effects that have been detected are the severe allergic reactions, so the anaphylaxis, and this has been recorded to occur at an extremely low rate. So I think it's five cases per 1,000,000 doses for Pfizer and three cases per 1,000,000 doses of Moderna, so that's extremely rare. And it seems that this has mainly occurred in women and people with a history of allergies. And one of the ways that we take steps to keep an eye on folks is, what happened after you got vaccinated? Well, you are to wait for 15 minutes. So wait in the vaccine site for 15 minutes, and this is because we want to check to make sure that folks don't have an allergic reaction. If they did, there would be these steps taken to help them, medical care would be called and we would take steps to treat them for that. The second, which is the reports that have come out for the Johnson and Johnson vaccine, this is the rare occurrence of brain blood clots. And the most recent statistics are showing 15 confirmed cases among nearly 8 million doses, so this is also extremely rare. And we're talking about less than one in a million or thereabouts of these different adverse events for both the Pfizer, Moderna and the Johnson and Johnson vaccine. So if you put this into perspective, the lifetime risk of dying from a motor vehicle accident is one in 107 in the United States but we're still driving, right? There's a background risk that's just associated with living and the different things that we do as part of our life, so there's a risk associated with driving, but yet we all still drive, right? If you're interested in the risk of winning the lottery, in the California super lottery, a jackpot of 41 million, your odds are one in 41 million of winning that. So actually putting this in perspective, the risks associated with these vaccines are extremely low. And scientist's part of the task is to adjudicate whether the benefit is outweighed by the risks, and scientists looking at the Johnson and Johnson vaccine, their conclusion is that the benefits do outweigh the risk of vaccination with that particular vaccine platform. Jeremy Byrum: A calculated risk, right? Okay. So how does that play into some of the research you're doing now in vaccine hesitancy with particularly Riverside University Health System, where does that research come into play? Dr. Nicole Gatt...: Yeah. Good question. So coming back to our project with Riverside University Health System, so this is an integrated health network in Riverside County. It's one of our community partners here at CGU. Riverside University Health System serves about 2.3 million residents of Riverside County. So we had a couple of goals for this particular project, and why would we be even interested in studying this issue in the health system? Well, so first of all, Riverside University Health System frontline workers, so there are doctors, nurses, physician assistants, et cetera, who work there. This was the first group to be vaccinated in California, as well as nationally. These are folks who interact with patients so certainly there is the potential there for them to be exposed and protection from vaccination is important. And also there's the secondary issue, may they This transcript was exported on May 19, 2021 - view latest version here. 03_Vaccines_and_the_Vaccinated_With_Nicole_Gatto (Completed 05/19/21) Transcript by Rev.com Page 9 of 12 have an influence on patients? So in other words, could their personal opinion about vaccines matter when they interact with patients? Riverside University Health System is also a large employer in Riverside County, so not only are we studying frontline workers, but the employees likely represent a cross section of the Riverside County population. So there's the factors that make this an important group to study. There's two important factors also to keep in mind as we talk about this work, the first is that the vaccine is offered at the workplace at RUHS. So in other words, folks who work there do not have to sign up on that My Turn site like the public does, so this greatly facilitates the administration of the vaccine. The second thing is that persons who are working at RUHS are likely to still be employed, so unlike other sectors or employers who laid off their employees, that was for the most part, not the case with Riverside. So we had a couple of objectives of this project, the first was to assess levels of vaccine hesitancy in employees of Riverside University Health System. We wanted to really understand what could be some of the driving factors or the determinants that influenced their decision to either accept vaccination, to refuse it, or to be hesitant about it. And the reason why we were interested in that is because we are a program of health promotion, so the idea is that we may be able to develop targeted interventions that could address those factors. Also, from my review of the literature, this is the most comprehensive survey among healthcare workers and health system employees to date. So there had been another study conducted by UCLA, but this was previous to vaccines even being released. So going back to September and October of last year, UCLA surveyed its employees and found that there was some apprehension over adverse events associated with vaccines. There was a recent Kaiser Family Foundation, Washington Post survey in March of healthcare workers, which also showed that a percentage of them were hesitant. Americans also have some level of hesitancy, and it seems that there's about a 10% to 15% stubborn group over time, I mean, that that number is stubbornly not changing over time who say that they won't get vaccinated. So it is important again, going back to herd immunity, that we maximize the number of people who receive a vaccination because this is going to offer the larger society protection from COVID. Jeremy Byrum: Now what's some of the inferences we can make from the data you've collected so far? Given this is a new project, I'm sure it'll be ongoing. And with how comprehensive it is, I'm sure you're getting a lot of responses for sure. Dr. Nicole Gatt...: Yeah. So we developed a survey, this was a collaborative effort between CGU and Riverside University Health System. They have a research center there that we worked with. We didn't reinvent the wheel, so we did model our survey after a previous published survey, so the WHO has worked quite a bit in this area. We included different questions, we wanted to understand demographic This transcript was exported on May 19, 2021 - view latest version here. 03_Vaccines_and_the_Vaccinated_With_Nicole_Gatto (Completed 05/19/21) Transcript by Rev.com Page 10 of 12 factors, we wanted to understand person's knowledge and experience with COVID, their hesitancy with vaccines, we also wanted to collect data on a number of different influences that might be shaping their opinion. So these were contextual influences, individual and group influences, and then vaccine specific issues. So we started administering the survey, March 15th, we closed it this past Monday on April 26th. We sent it out to about 2,500 Riverside University Health System employees, we heard back from 714 of them, so that's about a 29% response rate. A large proportion of the people who answered the survey were nurses or had administrative positions. We asked them about their weekly level of exposure, and some of them had no exposure, some of them had minimal, moderate or high exposure. So that was relatively equally distributed among the persons who responded. There was a significant percentage of them who reported having underlying health conditions like hypertension, like asthma, like diabetes, and overall probably no surprise because of the health system employees, they were very knowledgeable of COVID symptoms and disease. So at the time of the survey, 83% of the persons who responded had received either one or two doses of COVID vaccine, so 17% had not been vaccinated yet. So we continued to ask that 17% additional questions, we wanted to know when the opportunity arises for them to be vaccinated, will they be? And 12% said they would, 50% said they would not, and about 40%, so 38% said they were unsure. So we continued to probe them with a question of whether they would be vaccinated at a later date, and 19% of them said, yes, we had a proportion who said they were unsure, and we had some that said, no. So overall we have about 7% of the people who we surveyed who said they were hesitant, and about 6% who said they refused. So these percentages are lower than what we find in the US population and other surveys of healthcare workers, but it still does reflect an important 12% of the employees who we may be able to target with interventions. Jeremy Byrum: And I think that, sorry, I didn't mean to interrupt, I was just going to say, I think that's really interesting when you were mentioning about the public perspective versus the expertise perspective of not only diseases, but also vaccines. I think that's an interesting statistic given that these are healthcare workers. Dr. Nicole Gatt...: Correct. Now we do have to keep in mind that this reflects people who responded, so we are missing out on the opinions of persons who didn't respond. And it could be that because such a large proportion of people who are vaccinated responded, this data may not necessarily reflect everybody's perspective within Riverside University Health, but we are going to continue to look at that more closely. We wanted to understand the determinants of vaccine acceptance, hesitancy, and refusal, so we asked all of the people who responded a set of questions to try to understand the reasons that might influence their decision to get vaccinated. So we gave them 17 reasons, we asked them to rate these reasons using a Likert scale, so whether these reasons This transcript was exported on May 19, 2021 - view latest version here. 03_Vaccines_and_the_Vaccinated_With_Nicole_Gatto (Completed 05/19/21) Transcript by Rev.com Page 11 of 12 definitely would, definitely would not, probably would, or probably would not, influence their decision. And we also had a category for not sure. So we asked them questions like, would getting paid time off influence your decision, would an influential religious leader, would assurances that the vaccine was safe, what about if the vaccine was a requirement to attend social or sporting events? So we asked them these set of questions and asked them to rate the importance that they would give to these different reasons. So among the persons who were vaccinated, and this is I think really interesting data so far, among the persons who had been vaccinated, there were some themes that came out in the responses. So these were people who we felt were relatively altruistic. So scanning the reasons that they gave, they were not motivated by money, paid time off, other incentives, but they did report being influenced by knowing somebody who got sick from COVID, or receiving encouragement from a family member to get vaccinated. We also saw themes that related to professional motivations and an indication that they rely on knowledge of medicine and science in their responses. Now, this was different than the group who were hesitant. So among those who said they were unsure, looking at their responses, the responses concentrated really squarely in the middle of the Likert scale. So regardless of the reason that we asked, these folks reported that they were not sure. So the theme that emerges here is a level of uncertainty and indecision across the board. In the third group, those who refused to be vaccinated, it was interesting. So again, looking at where the response is concentrated, regardless of what we asked these folks, they said it definitely would not influence them. So a theme that emerged here, was that nothing may sway these folks. So we really need to take this into consideration when we think about, okay, if we want to develop educational interventions, how are we going to reach both our folks who refuse to be vaccinated and our folks who report being hesitant? So this helps us think about different strategies that we could use with either group. Jeremy Byrum: Now, as a bit of I guess some parting words from your perspective as an epidemiologist, earlier you mentioned that your field bridges the medical sciences and the public health side, how can a study like this inform the public health response for, as you said, getting out, I think is the words you used, of this pandemic or any kind of pandemic or public health issue we may face in the future? Dr. Nicole Gatt...: Yeah. So I think that by looking at the responses, we may be able to develop educational interventions that assure people of the safety and efficacy of vaccines. So if they're hesitant because they're worried about, for example, you brought up, why was it that these vaccines were released so quickly? Well, perhaps if we explain the context and provide greater information that, well, even though it appears that they were released quickly, there was actually years of research that went into the development of these vaccines. And we can look This transcript was exported on May 19, 2021 - view latest version here. 03_Vaccines_and_the_Vaccinated_With_Nicole_Gatto (Completed 05/19/21) Transcript by Rev.com Page 12 of 12 at some of the other responses as well, to see where we may be able to reach other groups that report being either hesitant or refusing a vaccine. Jeremy Byrum: All right. Well, thank you so much, Nicole, your breadth of knowledge, it just seems very... You definitely have the expertise in this field, and I'm glad we were able to talk to you to get some of that perspective for those who might not know again, what epidemiology is. So we really appreciate it. Where can we find your work particularly on COVID, but just in general and what you're doing right now? Dr. Nicole Gatt...: Yeah. Well, thank you again for having me. It was great to have an opportunity to talk about the work that we're doing at CGU, with our community partners, with our students. So if you're interested in finding more about my work, you can check out my website, which is www.nicolemgatto.com. There you'll be able to find copies of my published articles and some of the other areas of research that I'm interested in. Jeremy Byrum: Perfect. And is there anybody else you wanted to shout out in terms of collaborators that helped make a lot of this work possible? Dr. Nicole Gatt...: Certainly, I can mention by name, let me get my page here so I don't forget. Certainly, so I can acknowledge we have a great team at CGU, Debbie Freund is my colleague on this project, a number of students that are working with us, both in public health and economics. Also, our collaborators at Riverside University Health System, Doctor Anthony Firek and Judi Nightingale, as well as a number of their employees and volunteers who, without them, this work would really not be possible. Jeremy Byrum: Fantastic. Well, thank you to them and thank you to you, Nicole, for your expertise and for coming on today. Dr. Nicole Gatt...: Thank you for having me, Jeremy. Jeremy Byrum: From Studio B3 at Claremont Graduate University, you've been listening to The Campfire. We'll see you next time.
The OG duo, ice cream, feedback, sports, first car, Super 70's Sports, evaluations, Drake Relays, IFCA awards
Erica Harrell is the CEO of Erica Harrell Consulting, an education consulting firm created to help dedicated, overworked K-12 principals and district leaders create organized, strategic professional development plans to increase student achievement and improve staff capacity.Erica has over 12 years of experience in K-8 education. She began her career as a special education teacher and has held multiple leadership roles from instructional coach to principal to Director of Leadership Development. In all of her roles, Erica has always had the desire to grow and help others do the same. As a school leader, Erica has led teams to develop strategic and comprehensive project plans for multi-day and multi-week professional development series and coached leaders to ensure high-quality session facilitation. Under her leadership, a team of school-based leaders facilitated 4-week summer professional development with an average of over 90% of participants rating sessions and operations as “Platinum” (highest rating on 5-point Likert scale) multiple years in a row, she doubled third grade ELA scores in one year and coached 2nd-year teacher to be the highest performing math teacher among all Achievement Network Schools. Erica is originally from Upstate New York. She attended the University of Maryland, College Park for undergrad where she obtained a BA in Sociology and Communications. She holds a Masters of Education in Instructional Leadership from Relay Graduate School of Education. Erica currently lives in Maryland with her husband and one-year-old son.
Mark Ritson answered some of the burning questions, specifically around understanding your market and establishing your brand strategy, the U and the E elements of my tuned framework. I asked him about business strategy positioning and brand strategy, what he thinks of Byron Sharps views on differentiation and distinctiveness, whether you can really own a particular attribute and how his brand strategy dealt with in the type of company Mark consults with and I also wanted to know Mark's views on brand purpose. I think you'll enjoy this episode. In this episode Mark explained:It makes a lot of sense to talk to lawyers now and spend a bit of money on them because you know putting it in brand managers terms there's nothing worse than investing two years and 5 million quid and a significant amount of company resources into building brand equity on it to discover that the name that you've created is no longer tenable because you didn't check so you know I think that's absolutely your advice is correct for rebrands and also product naming I would not have lawyers anywhere near the rest of the branding process for reasons you may or may not want to go into but in this one area I totally agreeIt's very rare there are exceptions again where marketing or customers or anything above a basic net promoter score would ever feature in the decision making of the company they're interested in the product they're interested in sales profit and then ultimately in share price and I know it sounds funny but I mean this and I'm not speaking from a place of ignorance the thing about most boards is this they're dumb enough to not follow the money all the way to its origin so they follow it to sales they follow it from the product they follow it to profit they just don't go back all the way to the customer itself and how the customer thinks and feels and I can't explain that but I can tell you that that's an absolute reality of not every you know large boardroom but mostI wouldn't separate brand or marketing strategy, the difference between the two is purely based on organizational structure. And one of the great lessons of brand and all marketing planning is you do it 12 months at a time, you know, the beat of any organization's financial year, quarterly, and you plan in 12 months increments. Volvo had a relative significant strength in terms of perceptions of safety, but there were other automotive brands, if you control the size, etc, that also had goods that you know, it wasn't as if if we had a five-point Likert scale, Volvo was a five or a 4.6 from the market and every other brand was you know, 2.3 in the data never looks like that. In one of the 100 cases, your purpose is really your position in the marketplace? In the other 99 cases, it's total hogwash. And it's the wet dream of brand managers who are ashamed to sell things, and don't really like going to dinner parties in North London and admitting that they sell coffee or beer, or God forbid, petroleum. So they invent something that makes them sound betterI think one of the great lessons is to is absolutely to bed in your name and bed in your strap line and not to mess around with them I think there's some you know I think there's a terrible tendency of marketers to change and alter and update these things when in reality the customer hardly notices so I would absolutely say we are guilty in the marketing of changing things too often on a whim when in reality we should maintain themMini MBA in Brand Management by Mark Ritson - https://mba.marketingweek.com/brand-management/ Brand Tuned Newsletter sign up - https://www.brandtuned.com/
During this episode, Dr. Janet Patterson, Chief of the Audiology & Speech-Language Pathology Service at the VA Northern California Health Care System talks with Dr. Rebecca Hunting Pompon, assistant professor in the Department of Communication Sciences and Disorders at the University of Delaware in Newark, Delaware, about depression, the effect it can have on people with aphasia and their care partners, and how speech-language pathologists can recognize and address depression during aphasia rehabilitation. Guest Bio Rebecca Hunting Pompon, Ph.D., is an Assistant Professor in Communication Sciences and Disorders at the University of Delaware, and director of the UD Aphasia & Rehabilitation Outcomes Lab. Prior to completing a Ph.D. in Speech and Hearing Sciences at the University of Washington, she earned an M.A. in Counseling at Seattle University and worked clinically in adult mental health. Dr. Hunting Pompon’s research focuses on examining psychological and cognitive factors in people with aphasia, and how these and other factors may impact aphasia treatment response. She also trains and advises clinicians on interpersonal communication and counseling skills adaptable for a variety of clinical contexts. In today’s episode you will learn: about the similarities and differences among sadness, grief, and depression, and sobering statistics of their prevalence in persons with aphasia and their care partners, how the behavioral activation model can assist clinicians during planning an aphasia rehabilitation program for an individual with aphasia and his or her care partners, 5 tips to use in starting conversations about depression with persons with aphasia and their care partners, and fostering their engagement in the therapeutic enterprise, the value of community support groups for persons with aphasia. Janet: Rebecca, I would like to focus our conversation today on your work investigating depression, and other psychosocial factors that patients with aphasia and their care partners may experience. Let me begin our conversation by asking how we define and think about depression, because I think everyone has an idea about what depression is, and how it may manifest itself in an individual’s interaction with family and friends, and certainly in the past year, as we've moved through this worldwide pandemic, focus on depression has increased. You have studied depression in persons with aphasia, and how depression affects their care, so first, let me ask, how do you define depression? And then how often does it appear in persons with aphasia? Rebecca: Depression is a concept that so many of us are familiar with. In one way or another, so many people have experienced depression themselves, or alongside a family member, so I think it's such a common concept. Likewise, many people know that the definition of depression that we use most often is about a mood disorder. Usually, the two fundamental ways we think about depression, clinically, is that it is either low mood, or it can be a loss of interest, or pleasure. So of course, we all experience this from time to time, but depression is really a much more marked, persistent low mood or loss of pleasure, or interest, and it can span across days and daily life and make a tremendous impact. Those two features go with some other features like a change in appetite, fatigue and energy loss. Some people experience a slowing of thought or slowing of physical movement, or experience trouble with concentrating, or trouble with focus. It also could include feeling worthless or excessive amounts of guilt, and it also can be accompanied by recurring thoughts of death, which can be with a plan or more abstractly without a specific plan. Those are the constellation of symptoms that can go with that formal depression diagnosis. Of course, aphasia, as we all know, comes with some significant changes in functioning after stroke or other types of brain injury. Loss and grief are commonly experienced by many people with aphasia and their families as well. Unfortunately, those losses that are experienced with aphasia can lead to depression in a significant number of people. Let me give you a little bit of context on that. In the general adult population, maybe like 9% of the population or so may experience a mild to major depressive disorder at some point; the number goes up for people that have experienced stroke to about 30% or so. In studies of stroke survivors with aphasia, the number is significantly higher. We recently completed a study with about 120 people with aphasia, and about half of them reported symptoms that were associated with a depressive disorder, mild to major. And I think it's really important to note that this is based on 120 people that were motivated to participate, to volunteer for research. We really believe that actually, depression may be experienced by a quite a greater number of people with aphasia, because we're not capturing those people that are at home, they're not engaged in speech therapy, and we really wonder if rates of depression in aphasia might be quite a bit higher. Janet: That is a stunning set of statistics when you think about all the people who don't report, can't report, or don't come into the clinic, and their feelings; their ideas are pretty much lost in the world. I appreciate the comment that the people participating in your study are motivated, and they experienced depression. It's out there, and we need to pay attention to it. As a clinician, how might one recognize the presence of depression in a client? Rebecca: Depression can be really hard to observe at times. A lot of people with depression can mask their depression and seem to be doing fine. I've had this experience working with a number of people who seem to be really thriving after their stroke, but then getting into the details and discussing their life and their reactions, we come to find that they're struggling far more than we perceive that they are. Other times we may get some sense of an experience of depression, maybe we observe a lack of initiative or motivation during treatment or get some sense that our client is just not enjoying his or her activities the way that they used to, or the way that we hear from their loved ones, how they used to participate in their life. What do we do if we're wondering, “Hmm, depression? Is this a factor for this particular person?” It can be helpful to ask about the specific symptoms of depression, sometimes more than asking, “Are you depressed?” I that's true for a couple of reasons. First, some of our clients may associate the label of depression as having a lot of stigma. Stigma around mental health has been with us for a very long time, unfortunately, and it's really a barrier to making sure that we can provide care and address issues like depression in many people, not just people with aphasia. Of course, the other thing about the label of depression is that some people just feel very disconnected from that label. They might hear depression and say, “Well, that's not me, I don't really feel sad.” But again, as we talked a little bit ago about those features and symptoms of depression, it's not necessarily just a sadness, it's about mood and so many other things that go with depression. It can be helpful to talk about those specific symptoms instead of just the label itself. I wanted to throw this in there too, sometimes I've been asked this by a number of clinicians, “How do I tell the difference between depression and grief?” The short answer is that grief doesn't come with feelings of worthlessness or guilt or shame. It's not the turned-inward type of experience, whereas depression can be turned inward. Ultimately speech-language pathologists do not need to feel like they need to be mind-reader's; they do not need to feel like, “I am not a mental health expert, so therefore I cannot ask.” We can ask about depression and depressive symptoms. We can ask ourselves, “Does this person's mood appear to influence their everyday life or their recovery?” That might be the thing that will push us forward to ask a little bit more about what their experiences are like. Helpfully, there are a couple of screening tools that are really useful for clinicians, regardless of type of clinician. One is the Patient Health Questionnaire. It's a depression scale, vaguely named. It's also called the PHQ. The PHQ is a nine, or there's also an eight, item version. They're very simple scales. They've been developed for clinical populations, so the phrasing is quite short and straightforward. They use a Likert scale and they're very well validated screening tools that are also free. I believe we're going to have the pdf of the PHQ-9, which is nine items scale, in the Show Notes. Janet: Right Rebecca: Great. Another scale that's been developed specifically for aphasia, though, it's really addressing caregivers or other proxy reporters, is the Stroke Aphasic Depression Questionnaire, or the SADQ, and it's available also for free. There are a couple of different versions. Again, that's been created for people with aphasia in mind, specifically their caregivers. So that's really helpful tools. In Short, these are great tools to use, and just give us a little more information as we're having a conversation about depression. They then give us some ideas about what next steps to take, including referrals that we might be thinking about. Janet: Rebecca, those are excellent ideas. And indeed, those two resources you mentioned will be in our show notes. You speak about depression in patients with aphasia, but I believe that depression also affects the care partners of a person with aphasia. What do you see is the role of a clinician in recognizing depression in a care partner? Rebecca: This is really, unfortunately, true. Depression is experienced by caregivers, including stroke caregivers and aphasia caregivers, and depression symptoms align, and maybe not surprisingly, with the degree of caregiving effort that's required by the family members. In other words, caregiver depression, can be higher when caregivers are working with a loved one who has more severe functional impairment. Here are even more sobering statistics. There was a study conducted, it's a few years back, about caregiving adults, ages 66 and up, so it's a lot of our clients, family members, and spouses, etc. Those caregivers who reported mental or emotional strain had a 63% increase in mortality risk compared to caregivers who did not report strain. That's a really shocking and sobering to think about. The takeaway here is caregiving burden, as it's often called, that s just a very, very real problem with us. Given that caregivers are such an important part of our client's recovery, their health and well-being are just incredibly important. So how can we support them? They're not our primary concern, because our client is, so what do we do? What do we do for caregivers to support them? Of course, we can ask how they're doing, certainly. Then we can also provide some support resources, support groups, counseling services, and the fact that we are doing much more online now has opened up opportunities for both caregivers and clients to participate in lots of different ways, to connect virtually, and so that's great. Another really great tool that can be used is called the Caregiver Questionnaire. It's a questionnaire that has 17 items and was developed by the American Medical Association. It just goes through a listing of common caregiver experiences that can really be illuminating for caregivers. I've given this questionnaire to caregivers in different contexts, including in caregiver support groups. What I hear from caregivers, once they go through those 17 questions, is often they're surprised. They're often not thinking a lot about how they're doing themselves, because they're very focused on supporting their loved one. It can be really illuminating for them to answer the questions and realize, “Wow, I am really fatigued I'm really tired. And maybe I need some extra support”. What I sometimes recommend to clinicians is having this questionnaire on hand and providing it to caregivers while you're working with the client, and then maybe checking in at the end of the session to say, “You know, how was that for you?” And it's an opportunity, again, to provide some support resources that they can explore on their own. I think it's a really handy way to just shine a light for caregivers, saying, ”Hey you're doing a lot, we recognize that and we know you need support, too.” Janet: I think that's very important. It reminds me of the message you see on the airlines, you know, put your own oxygen mask on first, so that you're better able to help the other people. If you're a caregiver, you must take care of yourself, and we must help the caregivers take care of themselves so that they can better care for our patients with aphasia. Rebecca: Oh, my gosh, so true. Janet: Depression typically does not appear by itself. You've alluded to that and mentioned that earlier. In your experience and investigation. How does depression interact with coping skills, resilience or motivation? Are there other interactions that we may see in persons with aphasia? Rebecca: Oh, my gosh, depression, part of the reason that I studied depression, among other things, is that it's a really interesting experience. It's part of a grouping of some biophysiological processes that are so intimately linked together. I hope you don't mind if I geek out a little bit here. Janet: Geek away Rebecca: Geek away - All right. We know that when we perceive something stressful, like, let's say we're near a potentially dangerous animal or something like that, it's classic example. It triggers systems in our body that helps us respond, right, we've heard of the fight or flight response, where our adrenaline system jacks up so that we can move quickly, right or get away from the danger, or if we have to, fight it off. Then once the danger is gone, our body goes back to its normal functioning state, the adrenal system stops pumping out adrenaline and our heart rate slows to a normal rate, all that good stuff, right? So of course, our body does pretty much the same thing when we're not in danger, per se, but we are experiencing or we perceive stress; that could be public speaking for some, or a big job interview. Then thinking about people with aphasia, maybe it's really stressful to make that phone call to somebody, even someone they know well. They don't feel confident about their communication ability, and that can be incredibly stressful. Even though it's not danger, it still can kick our body's stress systems into gear, activating that adrenal response, etc. Here's the thing, though, if our body is entering that stress state pretty regularly, it gets regularly flooded with these stress biochemicals that can impact multiple systems. We can handle those biochemicals, we were built to handle those biochemicals. But we weren't really built to handle them all the time, or often over a long period of time. If those biochemicals are circulating in our blood, they can really have a damaging effect on our body, and they have a damaging effect on parts of the brain, that are really important for us as speech language pathologists thinking about treatment, right? So those biochemicals, and cortisol is among them, can diminish functioning of regions of the brain that we need for things like attention and memory, things that are really important for learning, right? What do we do in treatment - we learn. At the same time, these biochemicals can increase parts of the brain, like the amygdala, that are really central for emotion. In other words, if we're experiencing persisting stress over a period of time, we may have impairments in memory and focus to a degree, and we may also experience depression, anxiety, and other mental health challenges. I got really, really interested in stress and depression a few years ago, and as you mentioned at the beginning, we created a scale for chronic stress for people with aphasia. Using that scale we found, just as we would in the general population, that there are very close associations between reports of perceived chronic stress and reports of depressive symptoms. The bottom line is that chronic stress is significantly connected to depression, and it's significantly experienced by our clients with aphasia. You asked about coping skills and resilience and that's another area that I've been really, really interested in. We know that there's an association between depression and resilience, or how people cope with stress. As resilience goes up, depression tends to go down. But we also have seen that this relationship is more complex than I anticipated. We are currently validating a scale of resilience for aphasia. We really want to understand better how resilience and depression and other mental health challenges fit together, and then how we address them. Janet: I think that's very important work because we're, when we engage on the therapeutic endeavor, when we begin treatment, it is a partnership. And both the clinician and the patient with aphasia, but also the caregiver, we have to be in there engaged in that process and moving forward to achieve whatever communication goals we have in mind for the patient. If a patient is not engaged because of low coping skills or low resilience, because of depression, that can certainly affect our treatment, Rebecca: Agreed. It's things that we don't really understand. I mean, we understand to a degree, for sure, but I think with some time and some additional research, we'll be able to understand much more clearly how depression and resilience impact treatment, and also how we can capitalize on resilience and build it. I'm looking forward to uncovering some of these associations and understanding them better. Janet: Oh, I look forward to reading your work on that. I want to ask you now the next logical and perhaps obvious question, which is how may depression experienced by a person with aphasia adversely affect the treatment, as well as the quality of life in that person, and with the person's caregivers? Rebecca: We've talked about people who have experienced depression in one way or another, and depression is really mean. It is really a mean, mean process, that can sap our interests in things that we like to do and screw up our sleep and our appetite. It impacts others around us, of course, but yes, absolutely, depression can dampen motivation. That's one of its features, it can dampen motivation to get out of the house, or for our clients with aphasia, it can diminish how much initiative they want to take with activities, especially social interactions that really help with language function and recovery. It may diminish their initiative to seek support or to reach out and start speech therapy. Then, even when a person has decided to actively engage in therapy, depression may also limit how much he or she can take away from that therapy experience to a degree, given that it's harder to attend to things, it's harder to concentrate, it's harder to remember, when you are also struggling with depression. Then it's also that all of those things that contribute to how well we can engage in treatment and adhere to treatment recommendations. We need a level of motivation and initiative and energy to tackle assignments that our therapists might have given us to work on in between our sessions. There are just multiple ways that depression could influence treatment, either through those diminished cognitive processes, or the impact on engagement, and adherence. There are just a lot of questions that we have, still about these impacts on treatment, and how they influence the outcomes of treatment. Janet: One of the things we've observed in some work we've done recently is that people talk a lot about motivation, or resilience or coping, but people haven't yet figured out what that means or how to identify it. I'm very glad that you're doing some of this work to help us understand how we can best approach the treatment effort and really assure maximum engagement of the patients to achieve the goals that we want to achieve. Rebecca: It is really interesting. There is some really interesting work going on in some other allied health disciplines that is, I think, helping us to pave the way in thinking about how to ask these questions about engagement. It's for our clients as well. I am excited to move forward on that. Janet: You're right about that! Speech-language pathologists are by nature, compassionate individuals, and would be responsive to a person with aphasia or a care partner who seems to show depression. What guidance can you offer for clinicians as they plan and implement a rehab program for a person with aphasia, who shows signs of depression? Rebecca: Oh, first of all, Janet, I agree. Speech-language pathologists are such a big-hearted bunch and that is just a real plus for our clients. There are a number of things that we can do to consider depression and treatment planning. In addition to being aware of the impact of depression, and those engagement and motivation issues, the cognitive issues, and the screening that we already talked about, we of course, can make appropriate referrals. This can be easier for some clinicians and more difficult for others. Some clinicians who work in an environment like an acute care or rehab environment, may have access to a psychologist or social worker, rehab counselor, someone like that who can help step in and provide support or other resources. For other clinicians who work in outpatient settings, the best referral might be to the client's primary care physician. Unfortunately, as we know, there are just not enough mental health professionals with aphasia expertise; we need so many more of those. That's a whole other discussion, isn't it? The primary care physician and support groups can be some of the first people that we refer to, if we are working in an outpatient setting. In addition to those things we can also provide some information and training to family members, and our colleagues and our clinical teams about supportive communication techniques. Interestingly, people with aphasia have talked about how interacting with people that know a little bit about aphasia and know how to support communication really can not only facilitate the conversation, but also help improve their mood, and give them a little boost. They also talk about how important it is to both acknowledge their experiences and perspectives and struggles, and to have at the same time, a positive outlook, to use humor, to celebrate goals. All of those things have been things that people with aphasia have talked about as elements that really help in working with clinicians and others for that matter. Another thing that has come up, and you and I have talked about this a little bit, is also about the tremendous impact of mental health challenges for people with aphasia. We talked a bit ago about the very high incidence of depression in aphasia. And so, people with aphasia have said in previous work that they really wanted more information about low mood and changes that can come with stroke, around mood and mental health, and wanted an open forum to talk about that, and continue those conversations with caregivers as well. That open discussion about depression, about other kinds of mental health struggles, can really help normalize it, help destigmatize it so that we can address it more readily. Janet: That makes sense. And you know, one of the key points I heard you just say is that, as a clinician, it's important for us to be aware of the community resources that are around us, whether they're specific individuals like neuropsychologists or mental health workers, or support groups or community groups. Bearing that in mind that we're not alone, as clinicians working with patients with aphasia, we have a whole group of people who can contribute to this rehabilitation effort. Rebecca: Absolutely. And I was going to add, in addition to the myriad of people that can be around and supporting people with aphasia who are struggling with mood issues and other mental health challenges, support groups are really amazing. I would say if I gave a couple of tips for clinicians, but I had three things that I was thinking of, that we can really encourage for our clients, and one is to really seek out those support groups and other opportunities for connection with each other. I mean, I think we all know that groups can be so amazingly effective at not only providing some opportunities for social connection, but also that emotional support, and kind of perspective-checking opportunities for our clients can realize, “Oh, I'm not alone, others are also struggling in a similar way.” I'm the biggest cheerleader for support groups, as I think we all are, This is one of those broken record things. Exercise is another incredibly, useful tool. We all know, of course, that exercise is good for our health and our cardiovascular functioning, all that good stuff. But it also so helpful in improving mood and cognitive functioning. Getting outside and moving around is just so important. There is just scads of research across many health disciplines that talks about this and reminds us about the importance of exercise. Here's the other thing that I think is really cool to suggest to clients. And that is, in simple terms, do more of what you like to do. There's been some work around behavioral treatment approaches for stroke survivors, including those with aphasia, using a framework called behavioral activation. Thomas and colleagues in the UK have done a little bit of work around this. The basic notion is that by doing more of what you like to do, provided it's healthy and not detrimental, of course, can really help improve mood. When we do things we enjoy, it releases endorphins, and it gives us some sense of satisfaction and well-being. That's exercise for some people, not for everybody. Other people may find doing creative things, or learning something new, or engaging in something that feels like it's contributing in some way. Those can all be things that can over time, help improve mood and outlook. This can be a little challenging for folks with aphasia; the things that they think about or reach for, or things they enjoy, are maybe no longer available to them because of their language and communication impairment, or other impairments that have come with stroke. So again, the support groups are so helpful. They can be places where people have an opportunity to learn about new activities or connect with opportunities that may fill that hole of things that they like to do, new things that they hadn't discovered before. I always have more plugs for support groups. Janet: The things that you mentioned, they're simple, they're easy, but they're so powerful. Sometimes we forget that the simple things can often have the biggest change or make the biggest change, or the biggest difference for us. It's a good thing that you have been reminding us of those things today. Rebecca: Simple things, and sometimes combinations like a couple of simple things together can make a huge impact. Janet: As important as the treatment techniques are to address specific linguistic and communication goals, an individual's mental health state and their feelings of engagement with the clinician and the process are just as important, as we've mentioned several times today, What advice or suggestions or lessons learned, can you describe for our listeners that will help them become better clinicians, and address the whole person in aphasia therapy, including our role as clinicians in counseling, and I don't mean the professional counseling that is reserved for degreed mental health professionals. I mean the communication counseling and quality of life communication counseling. Rebecca: Yeah, even though speech-language pathologists are not mental health experts, there really are a number of very simple counseling skills that can help connect with our client s and more fully understand how they're doing, where are their struggles are, how are they doing in terms of mental health. When we understand them more fully, what's important to them, what they're struggling with, then it's easier to build treatment plans that fit them as individuals. So, if I'm putting on my counseling hat, I have a couple of things that I would prioritize, I think I have five, five things that I would prioritize as a speech-language pathologist using some counseling skills. Janet: I will count them. Rebecca: The first one is really to consider their stage post event or post stroke. If the stroke or the event is new, we may be working more with the family; they may be in shock, they may be overwhelmed and struggling to take in the information that we and our clinical team are providing to them. Those conversations differ tremendously from the conversations we might have with clients and families that are in the chronic stage, because they have a better sense of aphasia and of what it means for them, what their everyday needs are, etc. I think considering first of all, the stage post stroke or post event is really important. The second thing I would say is to find empathy and unconditional positive regard. It is good to know that depression is complicated, and it can come with emotions, a lot of different emotions and experiences from anger and frustration and shame, and so sometimes our conversations around depression can be uncomfortable. I would say, approach these conversations in an open and honest way about the client's challenges and maintain that unconditional positive regard even when we're feeling that discomfort ourselves. If they are angry and frustrated, we also may feel angry and frustrated or defensive or something else that doesn't feel very good as clinicians, or for anybody for that matter. Just remembering that unconditional positive regard, that we really all want the same thing. We want improvement. We want improvements in life and to face things like depression and find some answers that will really help push clients forward. The third thing that I would say is giving clients and family members our full attention and listen really actively and carefully. Sometimes this can be just an extra 30 seconds, an extra 60 seconds of listening using some reflective techniques that can really provide some critical information about our client, their needs and priorities that we can use in treatment planning. At the same time, this act of listening very deeply, and reflectively can help build our connection with their client and that's going to help promote engagement, adherence, and trust, which is just so essential for the therapeutic alliance. The fourth thing I would say is communicate multi-modally. I would say this not just for clients, but for family members as well. I myself have been the caregiver in situations where a clinician, never an SLP I will say, has come in and talked to a loved one and it was wasted words and time because nobody could take in that information. It was feeling overwhelmed and that that information might have come in as just some noise; maybe we remember one or two words from it and couldn't take the rest of it away, just given everything else that we were processing in that moment. I always say, never just say something, say it and write it or diagram it. This is just again, so important with clients and families who are stressed, who are depressed or anxious in some way. It is just so hard to remember when we're feeling overwhelmed. We can really support our clients and families by communicating in a multi-modal way. Even almost as important as summarizing what we've said and providing information again, I had a caregiver once say never tell us more than three things at once, because the fourth thing is going to be lost. I took that to heart; I understand that that makes perfect sense. And of course, providing a lot of opportunities for questions is helpful. That number four had a lot of pieces to it. Here's number five, and this is really obvious, developing mutual goals with our client and revisiting them. Sometimes when our client is struggling with depression, we might find their treatment plan seemed like a great idea, seemed like a great fit for our client, and just falls flat. If our client is really struggling to concentrate or engage in an activity because of depression, it just makes sense to stop and revisit those goals and make sure they really line up with the client's interests and priorities, but also how they're doing and how they're able to engage given everything else that's going on - mental health-wise and otherwise. Janet: Those are five excellent tips, Rebecca, excellent. And again, they're not difficult things to do, but they're so important, especially if you do all five of them together. I think our listeners are going to be quite pleased to learn about these five ideas that you have. Depression experienced by persons with aphasia is not new, we've talked about this earlier, certainly as long as there has been aphasia, there have been people with aphasia and depression. But although it's not new, it has not been well recognized or really well studied, as you mentioned earlier on. During the past year, as a result of changes due to the pandemic, such as the stay-at-home orders, limitations on in-person activities, and the increase in virtual care, I believe depression and associated mental health and self-care concerns have increased and have come to the forefront of our thinking. Have you found this to be the case? Rebecca: It's interesting. We are in the midst of a study right now, that's looking at how our research participants are doing during the pandemic as compared to pre-COVID, pre-pandemic. We're not done, we're midway through, but so far, we're seeing some really interesting challenges that people are reporting with everyday functioning during the pandemic, which it doesn't surprise us, of course, we're all struggling with functioning, I think, during the pandemic. We're not necessarily seeing greater levels of stress for the group we've done so far. Some people are reporting more stress, and some people are reporting less, which is fascinating. I'm going to give you some examples. Some people have said that they're not really that bothered by not being able to leave the house. Then other people are talking about how they're not able to do the things that they've always done, and that's been really difficult and stressful for them. So clearly, there's a lot of variety of experiences that we've heard so far. I'm really looking forward to finishing up that study and just looking at all the data together. Maybe the next time we talk we'll have some better news or a clearer picture about what people's experiences are like. Janet: I'll look forward to hearing about that. Rebeca: Separately, a couple of months ago, we chatted with our friends with aphasia and just asked, “Hey, what's been helping you during these lock downs, during this time of isolation?” And here's what they said: they said things like games and puzzles and dominoes were helping; listening to music every day. One person found brain teaser books were helpful and fun right now; several people were cheering for support groups that they were attending online; playing with pets; connecting with family over FaceTime. One person talked about chair yoga. Those are the things that our friends with aphasia are doing that they say are really helping. I think we're all thinking about self-care right now. It's just so important, of course exercise and getting outside and learning something new. I think we've all heard of countless people that have learned to bake bread this year, me among them. Taking care of things like a new plant, and then just finding ways to connect with each other, though a little bit different than we were doing it before. Janet: That is so true. I think we've all been finding those new ways and new things and new ways of connecting with people. Rebecca, you've given us much to think about today. Depression may not always be easy to recognize in an individual, and certainly its management is multifaceted. As we draw our conversation to a close, what are some words of wisdom that you have to offer to our listeners who interact with persons with aphasia every day? And who may be wondering, “How do I start a conversation about depression with my clients, or my clients’ caregivers?” Rebecca: I would say first, be yourself, be genuine. When we are able to genuinely connect with our clients and their families, it really does strengthen the trust, and build our relationship for some good clinical work together. Then ask about depressive symptoms, as we've talked about before, and communicating openly about depression; not something that we should, you know, hide away, but actually discuss and regularly check in on, as well as providing some resources and support for what to do when someone's feeling depressed or struggling with mental health. Then listening fully and acknowledging the experiences of our client, the good stuff, the difficult stuff, all of it. They're really the experts on life with aphasia and they are such a critical part of our clinical decision making. Then keeping our eye on the literature as there is more clinical research on depression, and other psychological challenges in aphasia right now than I think ever before, which is incredibly exciting. So just keep an eye on that. And then I think this is a really important one - take care of yourself. Clinicians working with people with communication disorders are also experiencing depression. It can be a lot over time, and no one can be a great clinician if their own health, their own well-being is compromised, so do what you can to take care of yourself. Again, simple things, several simple things we can do to just make sure we're our most healthy and going to be the best supporters for our clients and their families. Janet: Those are some very, very good suggestions. If I'm right, you have a paper coming out in Perspectives soon, about counseling skills, is that correct? Rebecca: Yeah, there should be a paper coming out soon about counseling skills, and also about stages using those skills, depending on the stages post event or post stroke, hopefully, that'll be coming out really soon. Janet: This is Perspectives for the Special Interest Groups within the American Speech-Language-Hearing Association. I have to say, I remember, oh gosh, many, many years ago, I wrote a paper for Perspectives on depression and aphasia, and at that time, there was not very much written about it; people were thinking a little bit more about quality of life. As I reread that paper before talking to you today, I found myself thinking how much more information is available now, how much more in the forefront is the topic of depression, and mental health and psychosocial skills, and how pleased I am that there are so many people who are really recognizing the importance of having these conversations with our clients and caregivers. Rebecca: I'm so glad that there's more available now, but I have to say thank you, Janet, for blazing that trail those years ago, you have been an inspiration clearly and I'm glad that we are picking up the pace on these important topics. Janet: And you indeed are. This is Janet Patterson and I'm speaking from the VA in Northern California, and along with Aphasia Access, I would like to thank my guest, Rebecca Hunting Pompon, for sharing her knowledge, wisdom, experience and guidance about this most important topic, the effect depression can have on persons with aphasia, and their care partners. You can find references and links and the Show Notes from today's podcast interview with Rebecca, at Aphasia Access under the Resources tab on the homepage. On behalf of Aphasia Access, we thank you for listening to this episode of The Aphasia Access Conversations Podcast. For more information on Aphasia Access, and to access our growing library of materials, please go to www.aphasiaaccess.org. If you have an idea for a future podcast topic, please email us at info@aphasiaaccess.org. Thank you again for your ongoing support of Aphasia Access. Links and social media Lab website: UDAROLab.com Facebook: “UD Aphasia & Rehabilitation Outcomes Lab” AMA Caregiver Self Assessment Questionnaire (free pdfs; 5 languages): https://www.healthinaging.org/tools-and-tips/caregiver-self-assessment-questionnaire Citations Modified Perceived Stress Scale: Hunting Pompon, R., Amtmann, D., Bombardier, C., and Kendall, D. (2018). Modification and validation of a measure of chronic stress for people with aphasia. Journal of Speech, Language, and Hearing Research, 61, 2934-2949. doi.org/10.1044/2018_JSLHR-L-18-0173 Patient Health Questionnaire depression scale (PHQ) PHQ9 Copyright © Pfizer Inc. All rights reserved. Reproduced with permission. PRIME-MD ® is a trademark of Pfizer Inc. (open access) Stroke Aphasic Depression Questionnaire (SAD-Q) https://www.nottingham.ac.uk/medicine/about/rehabilitationageing/publishedassessments.aspx
Listen to this interview of Bill Cope and Mary Kalantzis, creators of the website newlearningonline.com and also professors at the College of Education, University of Illinois. We talk about monastic instruction in the sixth century, we talk about textbook learning in the sixteenth century, and we talk about cybersecurity education in the twenty-first century, but overall we talk about imbalances in self agency. Interviewer: "Could you describe one pedagogical affordance of the technology on your learning platform CGScholar?" Bill Cope: "So, what we're doing is we're using big data and learning analytics as an alternative feedback system. So, what we say, then, is, okay, well: 'The test is dead! Long live assessment!' We have so much data from CGScholar. Why would you create a little sample of an arrow or two at the end of a course, when we can from day one be data mining every single thing you do? And by the way, by the end of the course, we have these literally millions of data points and for every student. Now, the other thing, as well, is, our argument is––and we call this recursive feedback––is that every little data point is a piece of actionable feedback. Someone makes a comment on what you do, you get a score from somebody on your work against a Likert scale...so what we're doing is, we have this idea of complete data transparency, but also, we're not going to make any judgments for you or about you, or the system's not going to do it, without that feedback being actionable, so that you can then improve your work. It feeds into your work. So, the difference is, instead of assessment being retrospective and judgmental, what we're doing is making micro-judgments which are prospective and constructive and going towards your learning." Visit the Learning Design and Leadership Program here and visit CGScholar here. Learn more about your ad choices. Visit megaphone.fm/adchoices
Listen to this interview of Bill Cope and Mary Kalantzis, creators of the website newlearningonline.com and also professors at the College of Education, University of Illinois. We talk about monastic instruction in the sixth century, we talk about textbook learning in the sixteenth century, and we talk about cybersecurity education in the twenty-first century, but overall we talk about imbalances in self agency. Interviewer: "Could you describe one pedagogical affordance of the technology on your learning platform CGScholar?" Bill Cope: "So, what we're doing is we're using big data and learning analytics as an alternative feedback system. So, what we say, then, is, okay, well: 'The test is dead! Long live assessment!' We have so much data from CGScholar. Why would you create a little sample of an arrow or two at the end of a course, when we can from day one be data mining every single thing you do? And by the way, by the end of the course, we have these literally millions of data points and for every student. Now, the other thing, as well, is, our argument is––and we call this recursive feedback––is that every little data point is a piece of actionable feedback. Someone makes a comment on what you do, you get a score from somebody on your work against a Likert scale...so what we're doing is, we have this idea of complete data transparency, but also, we're not going to make any judgments for you or about you, or the system's not going to do it, without that feedback being actionable, so that you can then improve your work. It feeds into your work. So, the difference is, instead of assessment being retrospective and judgmental, what we're doing is making micro-judgments which are prospective and constructive and going towards your learning." Visit the Learning Design and Leadership Program here and visit CGScholar here. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm
Listen to this interview of Bill Cope and Mary Kalantzis, creators of the website newlearningonline.com and also professors at the College of Education, University of Illinois. We talk about monastic instruction in the sixth century, we talk about textbook learning in the sixteenth century, and we talk about cybersecurity education in the twenty-first century, but overall we talk about imbalances in self agency. Interviewer: "Could you describe one pedagogical affordance of the technology on your learning platform CGScholar?" Bill Cope: "So, what we're doing is we're using big data and learning analytics as an alternative feedback system. So, what we say, then, is, okay, well: 'The test is dead! Long live assessment!' We have so much data from CGScholar. Why would you create a little sample of an arrow or two at the end of a course, when we can from day one be data mining every single thing you do? And by the way, by the end of the course, we have these literally millions of data points and for every student. Now, the other thing, as well, is, our argument is––and we call this recursive feedback––is that every little data point is a piece of actionable feedback. Someone makes a comment on what you do, you get a score from somebody on your work against a Likert scale...so what we're doing is, we have this idea of complete data transparency, but also, we're not going to make any judgments for you or about you, or the system's not going to do it, without that feedback being actionable, so that you can then improve your work. It feeds into your work. So, the difference is, instead of assessment being retrospective and judgmental, what we're doing is making micro-judgments which are prospective and constructive and going towards your learning." Visit the Learning Design and Leadership Program here and visit CGScholar here. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/digital-humanities
Listen to this interview of Bill Cope and Mary Kalantzis, creators of the website newlearningonline.com and also professors at the College of Education, University of Illinois. We talk about monastic instruction in the sixth century, we talk about textbook learning in the sixteenth century, and we talk about cybersecurity education in the twenty-first century, but overall we talk about imbalances in self agency. Interviewer: "Could you describe one pedagogical affordance of the technology on your learning platform CGScholar?" Bill Cope: "So, what we're doing is we're using big data and learning analytics as an alternative feedback system. So, what we say, then, is, okay, well: 'The test is dead! Long live assessment!' We have so much data from CGScholar. Why would you create a little sample of an arrow or two at the end of a course, when we can from day one be data mining every single thing you do? And by the way, by the end of the course, we have these literally millions of data points and for every student. Now, the other thing, as well, is, our argument is––and we call this recursive feedback––is that every little data point is a piece of actionable feedback. Someone makes a comment on what you do, you get a score from somebody on your work against a Likert scale...so what we're doing is, we have this idea of complete data transparency, but also, we're not going to make any judgments for you or about you, or the system's not going to do it, without that feedback being actionable, so that you can then improve your work. It feeds into your work. So, the difference is, instead of assessment being retrospective and judgmental, what we're doing is making micro-judgments which are prospective and constructive and going towards your learning." Visit the Learning Design and Leadership Program here and visit CGScholar here. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm
Listen to this interview of Bill Cope and Mary Kalantzis, creators of the website newlearningonline.com and also professors at the College of Education, University of Illinois. We talk about monastic instruction in the sixth century, we talk about textbook learning in the sixteenth century, and we talk about cybersecurity education in the twenty-first century, but overall we talk about imbalances in self agency. Interviewer: "Could you describe one pedagogical affordance of the technology on your learning platform CGScholar?" Bill Cope: "So, what we're doing is we're using big data and learning analytics as an alternative feedback system. So, what we say, then, is, okay, well: 'The test is dead! Long live assessment!' We have so much data from CGScholar. Why would you create a little sample of an arrow or two at the end of a course, when we can from day one be data mining every single thing you do? And by the way, by the end of the course, we have these literally millions of data points and for every student. Now, the other thing, as well, is, our argument is––and we call this recursive feedback––is that every little data point is a piece of actionable feedback. Someone makes a comment on what you do, you get a score from somebody on your work against a Likert scale...so what we're doing is, we have this idea of complete data transparency, but also, we're not going to make any judgments for you or about you, or the system's not going to do it, without that feedback being actionable, so that you can then improve your work. It feeds into your work. So, the difference is, instead of assessment being retrospective and judgmental, what we're doing is making micro-judgments which are prospective and constructive and going towards your learning." Visit the Learning Design and Leadership Program here and visit CGScholar here. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm
Listen to this interview of Bill Cope and Mary Kalantzis, creators of the website newlearningonline.com and also professors at the College of Education, University of Illinois. We talk about monastic instruction in the sixth century, we talk about textbook learning in the sixteenth century, and we talk about cybersecurity education in the twenty-first century, but overall we talk about imbalances in self agency. Interviewer: "Could you describe one pedagogical affordance of the technology on your learning platform CGScholar?" Bill Cope: "So, what we're doing is we're using big data and learning analytics as an alternative feedback system. So, what we say, then, is, okay, well: 'The test is dead! Long live assessment!' We have so much data from CGScholar. Why would you create a little sample of an arrow or two at the end of a course, when we can from day one be data mining every single thing you do? And by the way, by the end of the course, we have these literally millions of data points and for every student. Now, the other thing, as well, is, our argument is––and we call this recursive feedback––is that every little data point is a piece of actionable feedback. Someone makes a comment on what you do, you get a score from somebody on your work against a Likert scale...so what we're doing is, we have this idea of complete data transparency, but also, we're not going to make any judgments for you or about you, or the system's not going to do it, without that feedback being actionable, so that you can then improve your work. It feeds into your work. So, the difference is, instead of assessment being retrospective and judgmental, what we're doing is making micro-judgments which are prospective and constructive and going towards your learning." Visit the Learning Design and Leadership Program here and visit CGScholar here. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm
In this episode of Academia Lite, Sean and Zak get into two thought-provoking papers: - The celebrity appeal questionnaire: Sex, entertainment, or leadership. by Stever, G. S. - The biasing effects of scale-checking styles on response to a Likert scale. by Friedman, H. H., Herskovitz, P. J., & Pollack, S. Examining the irregular, the surprising and the downright funny of each paper, there is something for the academic in all of us. Website: academialite.com Twitter: @academialite Facebook: Academia Lite Instagram: academialite Email: Hello@academialite.com Music by Softly Softly - https://open.spotify.com/artist/7x5ZnnlIGAtbRrlj2La2Yl?si=iuNAXt7c * Stever, G. S. (2008). The celebrity appeal questionnaire: Sex, entertainment, or leadership. Psychological reports, 103(1), 113-120. https://journals.sagepub.com/doi/abs/10.2466/pr0.103.1.113-120 * Friedman, H. H., Herskovitz, P. J., & Pollack, S. (1994). The biasing effects of scale-checking styles on response to a Likert scale. In Proceedings of the American statistical association annual conference: survey research methods (Vol. 792, pp. 792-795). https://rangevoting.org/FrDirBias.pdf
High-stakes meeting facilitator Kristin Arnold shares a virtual polling feature to help team members check in and communicate clearly about their team goals.
It's the inaugural pod! Will and Armaan provide a brief background on how they met and what the podcast is going about (2:00), followed by a random tangent on Likert scales (6:00). After each takes 60 seconds to introduce themselves (7:30), they bring on their guest, Brittany Ramos (10:30). Show Notes: Guest's Book Recommendation -- The Energy Bus by Jon Gordon
Have you always felt that you could make of your life pretty much what you want to make of it? Once I make up your mind to do something, do you stay with it until the job is completely done? And when things don’t go the way you want them to, do you just work harder? And one last question – are your poor, or working class, or live in a highly segregated area? If you strongly agree with the first questions, and answer yes to the last one, your coping is likely putting you at greater risk for a raft of health problems. That’s a key finding of Duke University epidemiologist Sherman James, who describes what he terms ‘John Henryism’ in this Social Science Bites podcast. The health effects, which James has studied since the 1980s, have come into sharper focus as the Coronavirus pandemic exacts a disproportionate toll on communities of color in the United States. Based on the John Henryism hypothesis, James tells interviewer David Edmonds, members of those communities are likely to develop the co-morbidities which help make COVID more deadly. And since many of them have to physically go to work, John Henryism helps “elucidate what some of these upstream drivers are.” James defines John Henryism as “strong personality disposition to engage in high-effort coping with social and economic adversity. For racial and ethnic minorities … who live in wealthy, predominantly white countries – say, the United States – that adversity might include recurring interpersonal or systemic racial discrimination.” It can be identified by using James’ John Henryism Active Coping Scale, (JHAC12, pronounced ‘jack’), which asks 12 questions with responses from ‘strongly agree’ to ‘strongly disagree’ on a 5-point Likert scale. High-effort coping, over years, results in excessive “wear and tear” on the body, damaging such things as the cardiovascular system, the immune system, and the metabolic system. Focusing on the cardiovascular system, James notes that this “enormous outpouring of energy and release of stress hormones” damages the blood vessels and the heart. James notes that the damage doesn’t occur solely because someone is a Type A personality – it’s the interaction with poverty or segregation that turns someone from a striver to a Sisyphus (with the attendant negative effects on their cardiovascular health). In fact, James says, research finds that having resources and a John Henry-esque personality does not lead to an earlier onset of cardiovascular disease. The eponymous John Henry is a figure from American folklore. The ‘real’ John Henry probably was a manual worker, perhaps an emancipated slave in the American South, James explains. His legendary doppelganger was a railroad worker, “renowned throughout the South for his amazing physical strength,” especially when drilling holes into solid rock so that dynamite could be used. A boss challenged John Henry to compete against a mechanical steam drill. It was, says James, “an epic battle of man – John Henry – against the machine. John Henry actually beat the machine, but he died from complete mental and physical exhaustion following is victory.” A folk song memorializes the battle. As one version (there are many, but all telling the same story) recounts: John Henry he hammered in the mountains His hammer was striking fire But he worked so hard, it broke his heart John Henry laid down his hammer and died, Lord, Lord John Henry laid down his hammer and died That narrative – dying from the stresses of being driven to perfection but in a dire environment – the Jim Crow South – gave its name to James’ hypothesis. James himself grew up in small town in the rural American South, beginning his higher education in the early 1960s at the historically Black Talladega College near Birmingham, Alabama. Birmingham was the heart of the civil rights struggle in the Civil Rights era, and James was an activist, too. He decided then that “whatever I did would have to have some bearing on social justice, on working to make America a more just society in racial and social class terms.” He trained as a social psychologist with a special emphasis on personality, earning his Ph.D. Washington University in St. Louis in 1973, and focused his career on identifying social conditions that drive health inequalities. His own studies conducted amid the farmers, truckers and laborers of eastern North Carolina provided early, and strong, confirmation for John Henryism. While John Henryism seems focused on African-American men, other research – in Finland, on African-American women, and more – bears out John Henryism’s premise in the global population. In the podcast, James discusses a real John Henry – John Henry Martin – he met while doing research, and offers some societal prescriptions that would allow African Americans and others to “pursue their aspirations in ways that do not accelerate their risk for cardiovascular disease, morbidity and mortality” James is the Susan B. King Distinguished Professor Emeritus of Public Policy and a professor emeritus in the Sanford School of Public Policy at Duke, where he is also a core member of the Center for Biobehavioral Health Disparities Research. He was elected to the National Academy of Medicine of the National Academy of Sciences in 2000. James was president of the Society for Epidemiologic Research in 2007-08. He received the Abraham Lilienfeld Award from the Epidemiology section of the American Public Health Association for career excellence in teaching epidemiology in 2001, and in 2016 received the Wade Hampton Frost Award for outstanding contributions to epidemiology from the same section. He is a fellow of the American Epidemiological Society, the American College of Epidemiology, the American Heart Association, and the Academy of Behavioral Medicine Research. In 2016, he was inducted into the American Academy of Political and Social Sciences as the Mahatma Gandhi Fellow, and in 2018 was a fellow of the Center for Advanced Study of Behavioral Science.
1' How sales reps increased revenue by up to 89% after gaining real-time insights from customers 3' How relative feedback scales such as a Likert scale can help sales reps set the right learning priorities 7' What 5 key traits Martin looks for when hiring sales reps 11' Why sales people should adopt the same mentality of constant iterations and learning as in agile, scrum etc. 14' How sales people using the right technology and tools have a competitive advantage 19' How remote sales forces sales people to transition to a value-delivering advisor 22' How people fixing 2-3 key improvement areas instantly increase their conversion rates 28' How sales people in DACH are fundamentally working against a relatively change-resistant culture 31' Why Martin thinks the sales profession will gain more recognition and attractiveness in the years to come 35' What value he got from having worked with sales coaches and when leveraging tools make sense
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.07.17.208017v1?rss=1 Authors: Suresh, T., Roy, A., Shaikh, A. I. A., Rajkumar, J. L., Mathew, V., Prabhakar, A. T. Abstract: Background: Visual mental imagery or seeing with the mind's eye is an everyday phenomenon. Visual mental imagery and visual perception share common neural networks. Hence deficits that affect the visual perception may also affect visual mental imagery. Aim: We aimed to study the effect of refractive blur on the vividness of mental imagery. Methods: Subjects were recruited from volunteers and divided into two groups; individuals with refractive errors Ametropes(AM), and individuals without refractive errors Emmetropes(EM). After filling in the Verbalizer-Visualizer-Questionnaire (VVQ), the subjects were asked to perform a mental imagery task with and without refractive blur. The participants were asked to generate a mental image of a specific object initially with eyes closed, eyes open and then with refractive blur in random order, and then judge the vividness of the mental image on a Likert scale ranging from 1 (low vividness) to 5 (good vividness). The EM participants had to wear a + 2D spectacles to produce refractive blur. Results: A total of 162 participants were recruited to the study. Of these 73 were EM and 89 were AM. Of the AM, 30 had additional astigmatism. The mean VVQ score was 64.9(11.2). The mean refractive error was 1.8(1.3)D. Following the mental imagery task, at baseline with eyes closed, 138 (85.5%)subjects had vivid mental imagery close to visual perception(Likert scale:5). With the opening of the eyes, the vividness dropped by at least 1 point in the Likert scale in 139(85.8%). With the introduction of refractive blur, 153(94.4%) subjects had a drop in the vividness of the image by at least 1 point and 22(13.6%) subjects by at least 2 points. Conclusion: Introduction of refractory blur results in the reduction of the vividness of mental imagery. Copy rights belong to original authors. Visit the link for more info
The Likert scale often found on survey forms or questionnaires, measures how people feel about something which can be useful in many different situations. It was invented by American social scientist Rensis Likert invented in 1932.Businesses can use it now to learn about what's important in the minds of the team, partners and customers before they start their PR. Some services include: polldaddy, survey monkey, mentimeterRead these survey results and see if they are what you expect, and more importantly how they would impact your PR activity?Deloitte survey in Forbes shows consumers in 13 countries see themselves in the middle of a two-front crisis involving their health and finances.Across the 13 countries surveyed, 42% of respondents worry about job loss, led by respondents in Spain at 62%, India at 54%, and South Korea at 51%.Only 35 % said that they feel safe going to the store, 25% that they feel safe staying in a hotel, and 22% that they feel safe taking a flight.How will these stats impact your business communications, and what more detail do you need to tell people the message they need to hear; because they are not all the same.Please visit our blog post on PR for business please visit our site:https://www.eastwestpr.com/blogs/I also talk about SPEAK|pr - our 5 Step Methodology for entrepreneurs to manage their own PR. Do please come and download a free copy along with our Technology Applications Director with over 100 free marketing apps listed. http://www.eastwestpr.com/speakprSubscribe to our newsletter hereFind us on Twitter @eastwestprEASTWEST Public Relations Group was founded in Singapore in 1995 and has a company in China and the UK. Jim James is an award-winning British entrepreneur who has spent the past 25 years building businesses using PR, whilst running a multi office Agency serving over 500 clients.Please Support the show (http://www.eastwestpr.com)Support the show (https://www.eastwestpr.com/podcast-speakpr)
The Inventory of Social Supportive Behaviors (ISSB) is a 40-item self-report measure that was designed to assess how often individuals received various forms of assistance during the preceding month. Subjects are asked to rate the frequency of each item on 5-point Likert scales (1=not at all, 2=once or twice, 3=about once a week, 4=several times a week, and 5=about every day).
Zach has the honor of speaking with Great Place to Work CEO Michael C. Bush about GPTW itself and the process of creating a great place to work. Michael generously shares what he believes executives should be thinking about when it comes to building better trust within organizations and talks about where he sees Great Place to Work continuing to grow and expand to capture more marginalized voices and experiences.Connect with Michael on LinkedIn and Twitter.Check out Great Place to Work's website. You can review their most recent lists by clicking here.Follow GPTW on social media. They're on LinkedIn, Twitter, Instagram and Facebook.Interested in Michael's book? Find out more about it on Amazon.Find out how the CDC suggests you wash your hands by clicking here.Help food banks respond to COVID-19. Learn more at FeedingAmerica.org.Visit our website.TRANSCRIPTZach: What's up, y'all? It's Zach with Living Corporate, and man, you know what we do. We center and amplify underrepresented voices in the workplace by having authentic, available, and frankly incredible conversations with some incredible guests, and, you know, today is no different, right? Like, we've had who, we've had Robin DiAngelo on, we've had Ruchika Tulshyan, we've had--we've had professors, we've had executives, we've had activists--we've had DeRay Mckesson--we've had all types of folks on the podcast, on the platform, and today is just incredible because we have Michael C. Bush. Michael C. Bush is the CEO of Great Place to Work, the global research and analytics firm that produces the annual Fortune 100 Best Companies to Work For lists. So you know when y'all, you know, see companies and they have, like, the little badge and it'll say, "Oh, we're, like, #5 great place to work," this person we're speaking to is the CEO of Great Place to Work, y'all. This is a big deal. I'm not trying to overhype it. I don't think I can overhype it. I'm just trying to give proper context to who we have on the show. You know, the 100 Best Workplaces for Women list, the Best Workplaces for Diversity list, and dozens of other distinguished workplace rankings around the world. Since 2015, Michael Bush has expanded Great Place to Work’s global mission to build a better world by helping organizations create Great Places to Work not just for the some, but For All. Under his leadership, the firm has developed a higher standard of excellence that accounts for fair and equitable treatment of employees across demographic groups, as well as executive leader effectiveness, innovation, and financial sustainability. His book, A Great Place to Work For All, outlines the compelling business and social benefits that come from these efforts. Michael, first of all, how are you doing?Michael: I'm doing great. Thank you, and honored to be with you today.Zach: It's a pleasure. Now, I'm asking--you know, we're in the midst of a global pandemic, and I would be remiss if I didn't ask how are you doing with your family. Is everyone safe and well? Friends and family, loved ones?Michael: Thanks for asking. Yeah, the world has really changed in the past 45 days, but I'm doing well. I'm sheltering in place here in Oakland, California, with family nearby, so everything's good, and I hope the same for you.Zach: You know, everything is good. It's interesting. It's an interesting time. My wife and I just welcomed our first child into the world just a handful of weeks ago, and it's just an interesting time to be new parents, right, with so much chaos, you know, seemingly all around us, or uncertainty around us, but life is beautiful nonetheless.Michael: Well, congratulations to you and your wife, and yeah, you couldn't have brought, you know, a baby into the world at a crazier time, you know, but things are always a little bit crazy, and what a story you're gonna be able to share with your baby, you know, and we're just gonna do what we're always gonna do, which is make the world a lot better from here.Zach: I love it, absolutely. So let's get into it, right? We talked about it a little bit in the bio that I read. You've been the CEO of Great Place to Work for over 5 years, going on 5 years. Can we talk about your first 100 days as the CEO and, like, what did that look like, you just kind of stepping into that role. And then, you know, in these past five years--I guess Part B to the question is what have you been most proud of since taking the helm?Michael: Yeah. Well, when I stepped into the role in 2015, I got into the role in a strange way. I was actually hired by the founder of Great Place to Work to sell the company, and I had done a lot of turnaround work in the past, and so I came in and worked to do that, and to make a long story short I ended up getting an investment partner and buying the business. So that's how I got into it, and then one of the things that I knew is that I felt like having the analytics of what really was going on for working people all around the world and knowing that there are a lot of working people who never really get a fair shot at being developed, never get a fair shot at being promoted, never get a fair shot at being recognized and rewarded, that I could use--I hoped--the data and the analytics to use recognition to get organizations to change, and so that's really when we made the change, almost instantly, to Great Place to Work for All. I thought that we'd have a platform, and at that time, you know, you never know how things are gonna work out. The business was technically bankrupt, so the first 100 days were what you have to do when you're turning around a company that's bankrupt, which is you have to stop all the money flowing out of the company. So a lot of tough decisions, a lot of tough days where you're just pruning the rose bush so that you can grow, and those times are very difficult, but that's really what the first 100 days were about. Not too much about the future. A lot of pain in trying to cut costs, but we got through it.Zach: When you talk about, like, Great Place to Work for All, like, clearly that's a point of pride for you and, like, kind of continuing to shift and expand the platform or the position that you stepped into. Can we talk a little bit about what it was about that particular--like, why you took that angle, and, like, why was that your point of determined growth for Great Place to Work?Michael: Yeah. Zach, I think the thing that helped me was having a lot of business experience and having been a CEO before as well as working with CEOs. One of the things I knew is that most CEOs, while they talk articulately and clearly and passionately about diversity and inclusion, it's not something they think about that much, you know? They think about it during Black History Month, you know, or other things like that, but beyond that they really don't think about it that much, so it's kind of a head fake because you can hear these things that are very optimistic and passionate, but in fact they just don't think about them that much, and so--they're CEOs, which means they're thinking about other things like shareholder value, stakeholder value, but this one isn't one of 'em. They delegate it, and so they typically delegate it to a chief of diversity and inclusion or maybe a chief of people or a CHRO, but it's delegated, you know? It's not something that they lose a lot of time thinking about, and so I knew that and knew a lot of people, you know, doing diversity and inclusion work, and the common experience was "If you get to a CEO and you say, "Hey, I'd like to talk to you about diversity and inclusion," and they go, "Oh, talk to my chief of diversity and inclusion and I'll see you later." And so they're gone. So I was trying to find a way of keeping them in the conversation by not bringing up diversity and inclusion, and we did that. So when you talk about Great Place to Work for All, they don't leave the room because they're like, "Hey, I'm into that because, you know, that includes me," and also Great Place to Work for All has superior financial business performance. We've got all the data on that, so now they hang in the room, and now they're there and they're present, and now you have an opportunity to share data and information with them to get them into the conversation and hopefully leading the conversation. So it's really--for me it was a Trojan horse. It was how to get into the castle walls and not have somebody come out the castle walls, you know, that was delegated to talk about diversity and inclusion. I felt that the CEO needed to be in that conversation just like they're in the conversation when they're buying the company. They have a head of M&A, but they're in that conversation, so I thought that we could make that happen, and so far so good.Zach: Well, no, it's a great point, and something that you just said rung true with me. I think another example is, like, HSC, right? Like, you talk about health and safety environments, like, the CEO is going to be involved in that conversation by some degree because they recognize the business value and just, like, the imperative of safety for their workplace. Like, they may not be in every single part of the conversation, but they're going to be engaged. If there are other parts of the organization that executive leaders, that CEOs want to be plugged into, I think it's interesting. As much growth as diversity and inclusion has seen, I think that certain language and buzzwords kind of, like, trigger disengagement from the senior-most people. So I find that really interesting and powerful that you were able to figure out kind of, like, I don't want to say the cheat code, but, like, the way to kind of mitigate that a bit.Michael: Yeah, yeah. Cheat code. I hadn't thought about it like that, but that's kind of what it is, and whatever works, you know? Kind of by any means necessary, and so we found that this works, and it not only works in the U.S. When I first did for the for All and started moving it around the world, the first thing we got was resistance because, first of all you're coming from the U.S., and the racial issues are--in the U.S. they are on display for everyone to see and the rest of the world looks at it, but the rest of the world doesn't look at themselves, and so the very resistance was "Well, you're coming from the U.S. We don't have racial issues," which is crazy, because it doesn't matter which country you go to, there's racial issues. But they're not seen the same way. They don't--people don't really self-reflect in the same way. And then, you know, so I was bumping into that, and then what began to happen was people in Sweden started talking about, "Well, really, you know, women aren't treated family," and so for them for All meant that. And so wherever you were in the world - Japan, you know, women, and so there was always some group of people in every country that was treated differently in terms of opportunity and promotion and getting into the C-Suite for example than others. So then it just took off. Then it just took off and really, outside the U.S., it's been embraced more strongly than inside the U.S., 'cause in the U.S., you know, people do say, "Are you using a cheat code?" You know? They're kind of more suspicious, but around the world the thing has really just taken off, and, you know, the book is now in I think 11 different languages and so on just because of that, and CEOs now want to be linked a message that gets them a lot of brand value, and so Great Place to Work for All gets them a lot of brand value. If they talk too much about diversity and inclusion, you know, they actually get blowback from the dominant group in the workforce, and so this is a way that they can get out in front and be totally, totally inclusive without saying inclusive.Zach: It's interesting too that, like, you know, the amount of work that goes into that, right? How can we be inclusive while at the same time not oversignaling to the point where we actually lose the folks in the room who we need to be engaged to create, you know, systemic change and a sense of belonging for everybody? That really kind of leads me to my next question. You know, you're the first--yes, you're the first black male CEO of, like, a major organization or company that we've had on Living Corporate, right? So we've had, like, different senior leaders and executives and directors, but you're the first CEO that we've had. Can we talk a little bit about the role that your previous experience--'cause you talked about it before, about you were a CEO before this, you had industry experience before coming to Great Place to Work--and how your identity plays a role in some of the things that you do and the relationships that you have to make and maintain in your current position?Michael: Yeah. A lot of times people will ask, you know, "How do you get to a CEO?" And the answer is I started, you know, my own company in 1994, and so it really began by breaking out of corporate America. So it wasn't being within it, it was breaking outside of it. There are other journeys. I'm familiar with them. I have, you know, close friends who have done the corporate journey and been able to get to the CEO role. That's one path. It's a very different path than the one that I know the most about, which is the entrepreneurial path. And being an entrepreneur isn't for everybody, just like being a corporate CEO isn't for everybody. It takes two different personalities and two different skill sets really. But for me, on the entrepreneurial path, it was getting a feeling that I was never gonna really be comfortable in the corporate environment. I was never gonna be comfortable. I was always gonna be doing some shapeshifting in that environment, and so once I broke out, okay, then it was great, because I was able to break out and do the things that I needed to do to be successful, and the thing about, you know, so then how do you grow and how do you get to do more, what you gotta do is make rich people more money. So it's--the key is that, you know? It's you better be delivering that value. And so if you create value for people, you have friends for life, and so then you can start to be able to use that momentum. So all of the things that I've done, just like Great Place to Work, what I talk about is profitability. What I talk about is cash flow. So I talk to CEOs about the things that matter to them most. It's all about that. Now, this is the way you do it, but I always go through that door, and I've always gone through that door so people know it's about profitability, it's about EBITA, it's about cash flow, it's about growing market share, and this is the way you do it. You know, this is a way to do it, but it's a business helping another business do a lot more business. I have the data to prove if you make it a Great Place to Work for All you're gonna crush your competitors, you know? The companies that are on our list that are Great Places to Work for All outperform the S&P 500, the Russell 2000 and 3000 by a factor of 3:1, including today, you know, as the market drops. Our companies don't drop as much and they rebound quicker during recovery. So having the data and the analytics, always leading with those numbers, never going to the morally right thing to do but always being about the business enables the CEO to stay there so I can actually--the CEO doesn't leave the room because there aren't a lot of D&I people talking about EBITA, earnings and cash flow. They're talking about other things. So, you know, I'm not saying that there's anything wrong with that. I'm just saying--Zach: It's just the reality of the environment, right?Michael: It's just the reality of the environment, and if you're talking to a CEO about the things they care about, which are those financial metrics, you can begin to talk to them about a lot of things, because they know they're talking to somebody that everything I say is gonna be about enhancing those metrics.Zach: You know, that leads me--Michael, it's almost like you do this a lot, right? It's almost like you talk to folks and you do meetings, interviews, quite a bit, 'cause you're just--you're helping me out. Without getting too much into the secret sauce, like, we understand that Great Place to Work, like, y'all's list is not something that's, like, qualitative, but it's a variety of quantitative analytics, points of measurement. Can you talk a little bit about how the data analytics behind the Great Place to Work rankings has evolved over time and what influenced, if anything, the way that Great Place to Work determines if a company is indeed a Great Place to Work.Michael: Yeah. So we ask the same 60 questions of every company we do business with in 98 countries around the world, so that's one thing that makes us different. Other companies kind of tailor the question set. We're like, "No. We know people. We've got 30 years of data on people." People, you know, the norms might be different, the willingness of a worker to say what they think and what they want might be--they might be more willing and open to it in one country versus another due to social norms, but at the bottom people want the same thing, and so we measure those things. People want to be respected by the people that they work for, so we ask 11 questions that let us know whether you feel respected or not. People want to work for somebody who they feel is transparent with them, so we ask about 9 questions about that. And people want to be treated fairly more important than anything else, so we ask 14 questions about that. And then people want to enjoy the people that they work with and people want to be proud of their work, which means they feel cared for and they care for the people around them, that's what really drives high-performing work, people caring about one another. It's not stock options. Those things don't have the stamina of people. They have to feel like they're doing something they couldn't do on their own and be connected by some sense of purpose. So we measure those things. We ask these questions. We're an analytics company. It's all about the numbers, and we do this with 10 million employees and 10,000 companies every year, so across every single industry. There's not an industry that we don't survey in. So therefore we've got a huge data set to let people know when your people are feeling that in this part of the world things aren't fair, we tell you what that's gonna do to EBITA and profitability and earnings and revenue in that part of the world. We can go straight to the correlation between the employee experience and revenue and these financial metrics, and in some metrics we can go to some causation. We can actually tell you if people aren't feeling emotionally or psychologically or physically safe, those, what I just said, safety defined with those other three attributes drives earnings, you know? It drives earnings is how safe people feel, so we measure those things and therefore can let you know, "Hey, when we see this set of data, we know these people are updating their LinkedIn profiles. They may still be working for you, but they are looking for the next thing to do. So we call it presenteeism. They're present, but they are looking for a way out. So now the data can be used with artificial intelligence to predict what's gonna happen with people. You can see that a person pulls on their--the economy is going good and a person pulls on their 401K and then doesn't return in time not to pay a penalty on that. This person's undergoing some financial pressure, and the financial pressure they're going through affects through employee experience, so we can alert a company that "Hey, you've got a problem here because we can see in this data." So it's all about the data, it's all about the 60 questions, and we measure the employee experience, how they feel about the people they work with, whether they feel like management involves them in decisions that affect them, whether they trust management, whether they have confidence in management. So we ask a set of questions where we can let a leader know exactly what's going on and then compare that so we can--if you're a tech company and you get the data and you don't really know what to think, well, we have a benchmark against other tech companies, and then you go "Whoa, okay, these companies are actually outperforming me in these areas. I want to do something about it." So benchmarking is very important. You can see how Latin America is doing versus South America versus North America or men versus women or people of color versus majority or members of the LGBTQ community versus the majority. You can do all the demographic cuts. The biggest change we made in our methodology since I got involved were these demographic comparisons to see if it was a great place to work for all versus a great place to work for some. That's the revolutionary breakthrough that we've made, and so our lists today are different from the lists in the past because we reward companies that treat everyone the same, where employees are having the same experience and the same in an equitable way, which we're able to measure.Zach: You know, you talked a bit about--you mentioned, like, predictive analytics there, and I'm curious, how far away--and if we're already here, then let me know, but how far away are we from predicting, like, lawsuits or, like, legal action by employees who feel, like, psychologically, emotionally, physically unsafe, who feel, like, discriminated against and things of that nature and then, like, present that to organizations and say, "Hey, look, you have a serious problem, and here's the likelihood of X happening, and then here's the amount of damages that would cost to your brand over X amount of time." Do you think we're anywhere close to that? Do you think that's anything that would be relevant or pertinent for organizations to have?Michael: Well, for some companies they're able to do it right now, and you're talking about where we're heading, absolutely where we're heading. So if you've got an HR system of record on Oracle or a [?] or an SAP or Ultimate software, if you've got an HR system of record--which is a platform that has the payroll information on the employee, the use of benefits on the employee, something around the performance management of the employee, and you have an employee engagement tool that's doing the measuring, and those two are nested and the data can flow between them, you have what you need. And so there are other companies who have what they need and others are heading there now. This is the movement to be able to ask an employee a set of questions and predict what's going on with them and what you need to create a better experience for that employee, which is usually around development and opportunities and promotions and feedback. That's mainly what most people need. Sometimes tailored benefits around things that are going on in their life, like everybody's kind of living through right now. So this is happening at companies now. I'm very much aware of it. We're involved in it by nesting our tool on top of these other platforms, but I would say big companies, Fortune 500 companies, will be totally in this game in 5 years, you know, 100%, and then products will be developed for medium-sized companies and will be in the marketplace--you know, start to enter in about 3 years.Zach: I just find that so intriguing, right? Like, I think about the fact that there's already tools out there that are being mobilized within the next, like, half--within this decade, right? We're gonna start seeing--Micheal: Easily, yeah. At the end of the decade this will--we won't be talking about this.Zach: It won't even be a point of discussion. It's gonna be "Hey, look, no. Your data says this. There's an X percent chance of this happening, and we need to make some adjustments now."Michael: It's absolutely gonna happen, and so machines are already now--at Amazon machines are recommending people for promotion. Machines are recommending people for termination. Machines are doing that. So they're kind of on the cutting edge. Not saying that they're doing that in a great way, I'm just saying--Zach: The technology is out there and it's happening.Michael: It's out there. They're using machine learning tools to make those decisions. Others are going to move on that, and the key is how do you do those things in a way that employees can trust it? Which is a big difference between machine learning and artificial intelligence when there is no trust and a big difference between machine learning and artificial intelligence when there is trust, and if you think about the 60 questions we answer, what are we really, really measuring? It's trust. That's really what we're measuring. Now, we can define it in all its dimensions, but it's trust. Respect is a part of trust. Credibility, transparency is a part of trust. Fairness is a part of trust. So trust is really what we're measuring. We could just double-click all over it to get you additional information, but it's all about trust.Zach: You know, I think--and for me, I'm always curious about when it comes to these lists--and I say this as somebody, of course I love what y'all are doing. I love Great Place to Work. It's the definitive listing space, right? I think it's also interesting because as a black man who has a network of a ton of black and brown people, right, like, we'll look at some of these lists and like, "Dang, okay." I recognize that the overall maybe brand of a company may be really strong, and it's ranked or whatever, but then I wonder like, "Okay, how do I reconcile that with, like, stories that I'm hearing from marginalized people who have had, like, real challenges at these companies?" And I'm curious to know, like, where do you see Great Place to Work continuing to grow and expand to capture, like, marginalized voices and experiences?Michael: Yeah. So Zachary, that's where I was in 2015, exactly where you were, meaning looking at a company--at that time thinking about buying it, looking at the list of the places that were ranked as Great Places to Work, and I knew people of color having horrible experiences in those companies. That's why I bought it, because I'm like, "I think we can do something about this. We can reorder it." And if you look at, you know, 2014, 2013, the companies at the top of that list, they're not at the top now, okay? They're not at the top now, so that's really what happened, but I was exactly where you were and definitely driven to do that. So what it has enabled is, you know, I'm not satisfied by any means. I'm satisfied by the progress, but not by where we are. You know, the thing I talk about it, the bullseye all the time for me is 2030, that that's when we need to get this right, which means--you know, our analytics are driven by algorithms, and so you've got to continually modify the algorithms, and when you modify the algorithm, you've got to live with that algorithm and its output for a year, then you modify it and you've got to live with it for a year. So it's frustrating because it takes a long time, but, you know, we're at the place now where we can say to a company that "Hey, we've measured the experience of different demographic groups, people of color, and we can double-click on it and so on, and their experience is very different from these other groups, therefore you're falling down or off the list." We can do it on that basis now, which that wasn't happening in 2015. There was no way of doing it. We do it now. So we call it maximizing human potential. That's another cheat code, but what it is is we compare one demographic group to another. We reward companies where the gap is small and we penalize companies where the gap is huge. So you can no longer be "80% of our people are having a great time." We go into the people who have given a one or a two response on the Likert scale, you know, that are saying, "My manager involves me in the decisions that affect me? Never or almost never." Okay, well, we grab that group and compare it and put--we give weight to that, a group that it was never done. Another thing is--you know, in terms of there's other lists out there that are recognizing companies, none of them are surveying employees. So really those are marketing-driven exercises.Zach: Right. Those are smiley faces, right?Michael: They are. You know, they're just doing something very different, and so for us, we can let you know--like our diversity lists. You know, there's a few diversity lists, you know, kind of out in the world that are well-known. There's only one that measures and scores the experience of under-represented people. That's Great Place to Work. Our list is driven by their experience, so it doesn't matter, you know, frankly, what white males think about their work experience. We don't measure it for those lists. You know, we don't measure it for those lists. We look at underrepresented people. That's what drives that list. We look at their experience, because that's what it is. For the 100 Best we look at everybody, but we don't for that. So it took us a while, because if I had done that immediately I'd be out of business. So, you know, you've got to build some brand strength and get people to, you know, understand what you're doing and that you're a rational person who wants to grow their business. So it took some time, but we're almost there. I don't feel like we're there right now. We're almost there where we are just pulling in representation into our final ranking criterion. So I feel like we're just about there, and it's enabling us to have some great CEOs who loved being on our list, but now we're able to say, "Hey, guess what?" Even though, you know, we have some companies that, you know, 60, 70% of their workforce are people of color, and they're having a great experience, which is great, but then we look at the top team and we're like, "That doesn't look like them." But the good news is you can have that disconnect and a group of people having a great experience. So that's wonderful, but just think how much better they could be if they could look up and say, "Hey, if I keep working real hard, it's possible for me to get there." "I feel respected now, but I'd really feel respected if that's true." So we're able to talk to CEOs and say, "I know you're happy now." Nobody in hospitality is happy today, but [?] they were happy, 90 days ago they were happy, and you could say, "I know this is great, and I know you're providing a great experience for these people, all these people. That's incredible. We think, you know, the world of you, but you need to do something about this because you'll really unlock them," and the kind of CEOs we deal with, which are the ones who get how this drives their profitability and earnings--and most of these have some moral connection as well in the way that they want to be seen and the way that they want their families to see them. That's kind of another lens that affects a CEO's mindset. So then they go, "Okay, look, I got it," and they don't have to do it, but they choose to do it. So that's when I know, "Okay, this is working now," that this is enabling them to be who they want to be. And a lot of CEOs, I've done a lot of work on the following where you have a CEO moving through their career and just having a great career, a lot of power, a lot of influence, they're happy and satisfied, and then they have a daughter. It changes 'em forever, because then they're like, "I want my daughter to get paid equal pay," but at the company they're running, it's not happening. All of a sudden they start to look at equal pay differently because they had a daughter. I've seen this time and time again, a CEO with a daughter, a CEO with a kid with autism, a CEO with a kid with mental health issues. It modifies the behavior of that CEO and how they--which is great, but that shape-shifting move blows the door open for being a great place to work for all. Now it becomes their thing. They start saying it because they have this new desire to do something and to change the way that others view them and the way that they view themselves.Zach: So first of all this has been an incredible conversation, and, you know, we're coming up on time, Michael, but what I want to do is I want to go back to a word that you used earlier, trust, and really that a lot of these questions go back to--the rankings and the analytics go back to--quantifying trust, and I'm curious to know if you could give us, like, three points of thought that executives should be thinking about when it comes to building better trust within organizations? What would those three points be?Michael: I think that fairness is the most important. So the way you treat a group of people, whether they be analytics versus non-analytics, accountants versus engineers, you need to treat people the same. People read when you're not doing that. They are paying attention to whether you're doing that or not. So being consistent in the way you talk to people, respond to people, what you tweet and what you don't, it matters. So fairness is what's most important, and then making sure your actions--if you say that, you know, diversity drives innovation, people are gonna look and see if you really think that's true. So if you're saying diversity drives innovation and your executive team is not diverse, then now you lose credibility and you're not being transparent and people think it's not fair. The whole pyramid collapses based on you saying one thing and you're actually doing another, and then you want to take a look at your board of directors. You want to take a look at your executive team. You want to take a look at your pipeline and make sure that in 2023 things are going to be different. You want to make sure companies now are restructuring or laying people off. Well, look at the pool that you're laying off. Look at the pool you're restructuring. If you're not careful you're gonna erase ten years of gains is what you're doing right now. So these are the things that build trust. These are the things, fairness more important than anything else. The reason there's resistance to D&I efforts is somehow white men--some--feel like money's being taken out of their pockets.Zach: Right, this scarcity mindset, right?Michael: Yeah, the zero sum game, so you have to--if you have an ERG for African-American professionals, Asian-American professionals, you need to have one that a white male says, "I identify with this one. I identify with this one." They gotta have one too. You can't ignore anyone. It has to be for all.Zach: Michael, this has been great. I just gotta thank you again. Before we go, I'll give you a chance - any shout-outs or parting words, man?Michael: I think that entrepreneurism is a journey that's not for everybody. If you're thinking about it explore it, you know? Talk to some entrepreneurs and see what it's like, but do an honest check with you as to whether or not it's good for you. And then if you're in the corporate environment, lead with the data. You know, the data is what you're gonna need. And know that even if you have all the data, if there are people who aren't interested in diversity and inclusion, you know, the data's not gonna get it done. So, you know, get the data, use the data, make your case with the data, and if you find things are still slow, that's because the leader you're talking to just doesn't want to make a difference. You know, they don't want to change, and so then I'd update my LinkedIn profile and try to find some place where people are using data in the way that they use it for every other decision, whether it be M&A or anything else. You don't want anything different in the D&I area. You just want the consistent behavior, but don't bang your head too long or you're gonna find yourself with a headache.Zach: Michael, thank you so much, man. Look, we're gonna talk to you soon. We consider you a friend of the show. Honored, pleasure to have you. All right, y'all, so that does it for us. This has been Zach with Living Corporate. You know what we do. We're having these authentic conversations even during the rona. I pray that everyone is staying safe out there. You know where to check us out. You can just Google us. We're all over the place, okay? Living Corporate. You type that in and we're gonna pop up on something. You make sure you check us out on our website, living-corporate--please say the dash--dot com, or livingcorporate.co, livingcorporate.org, livingcorporate.net, livingcorporate.tv, livingcorporate.us, okay? Livingcorporate dot... shoot, all the livingcorporates except for livingcorporate.com. We've already talked about this. So if you type in livingcorporate.com it's gonna take you to some Australian website. [?] Australia, but we don't have that domain, okay? So livingcorporate.co, .us, .tv, or living-corporate.com. 'Til next time, y'all. This has been Zach. You've been listening to Michael C. Bush, CEO of Great Place to Work. Catch y'all next time. Peace.
Remember those surveys where you respond by selecting one of five options between Strongly agree and Strongly disagree? They all are based on a popular tactic called "Likert scale".The Likert scale is a scientifically proven method to help you understand more about your customers’ feelings, attitudes, or behavior. And in this episode, we're giving you the ultimate guide to using it for your online business. You'll learn:What exactly the Likert scale isWhat variations of the Likert scale existWhen should you use the Likert scaleHow to create the Likert scale for your websiteReady to add the Likert scale to your website? Read a more detailed step-by-step tutorial here.You may also want to check out: diverging stacked bar chart - a common visualization technique for analyzing Likert scales.
EDtalk - Your Unique Assessment - Understanding Self It's FREE! As a trainer and manager with The Coca-Cola Company for over 27 years and a certified behavioral trainer (DISC Behavioral System), I gathered together a team of behavioral experts and we created a brand-new cutting edge assessment. Over the years there have been all types of rating scales developed to measure attitude, the most popular being the Likert scale (rate something from 1 to 10) which was created in 1932. The problem is, it has major accuracy issues because of how people view what these numbers mean. This outdated piece of technology lacks precision and does not accurately reflect your true sentiment. That's exactly what we thought, and we partnered with behavioral experts to create a brand-new, high definition behavioral assessment solution. The predictive analytics engine removes subjectivity by allowing you to accurately quantify your priorities and provides valuable insights about yourself. The more you know about yourself, the better quality decisions you will make for your future. Get your FREE assessment at https://lnkd.in/ekgm64Y
Quinton Barrett and Jay Green of People Element join us to talk about measuring employee engagement, collecting data and the importance of sharing it, and how Nussbaum measures up. Jay (left) and Quinton of People Element Who Is People Element People Element specialize in partnering with businesses and organizations to help them build on their most crucial asset… their people. They offer many services from personality assessment to new hire onboarding, employee engagement surveys, and more. Nussbaum partners with People Element to conduct an annual employee engagement survey. People Element provides an excellent way for us to allow employees to share their thoughts and opinions anonymously so that we can get an accurate measure of how engaged employees are and see what we can focus on to improve as a company. How Do the Surveys Work If you’ve ever taken a survey at all, chances are you have answered questions utilizing something known as the Likert Scale. You know, the questions that ask you to rate whether you agree, or disagree with a statement on a 5 point scale. People Element designs a survey using the Likert scale combined with positive, to-the-point statements/questions about every part of the business. They then administer the survey by providing employees with both online and phone-based means to participate. The goal is not only to measure where the company stands but to determine what drives engagement at the company and in what can be done to help bring engagement up. Then share the results with the whole company to help drive transparency at every level. “We can say Nussbaum is 80% engaged and maybe that’s helpful, but what’s really helpful is that we ask a lot of other questions that can help us determine what drives engagement.” Quinton Barrett How Does Nussbaum Measure Up “There is no way to say it without it sounding really good, because it is.”Quinton Barrett To hear Quinton and Jay talk about Nussbaum’s numbers, one might think there is nowhere to go but down. In fact, Quinton admits to saying as much after last year’s survey. Yet we continue to evaluate and make incremental improvements each year. Quinton highlights how we stack up on some of the standard questions they use industry-wide and notes how we tend to score above average. Lines represent the difference between Nussbaum scores and the Industry Benchmark per question. Blue is above average and red is below. We think these results are pretty impressive and tell the story of a workforce who are passionate about what they do and want to be an active part of building a great company. But this is only a small part of the picture. There are always things we can work on and be better at. So we dig into the data to identify those weaker spots and take action over the year. “It would be really easy to say we have 89% favorable data, we’re doing good… but I think where you get the most value is saying… what categories are below 89%, lets see if we can get those up into that 90% favorable percentile.”Quinton Barrett Links People ElementLikert Scale
This JCO Podcast provides observations and commentary on the JCO article Gonadal Functioning and Perceptions of Infertility Risk among Adult Survivors of Childhood Cancer: A Report from the St. Jude Lifetime Cohort Study by Lehmann et al. My name is Leslie Schover, and I am retired from the faculty of MD Anderson Cancer Center and currently Founder of Will2Love.com a digital health company in Houston, Texas. My oncologic specialty is cancer-related problems with reproductive health, i.e. sexual function and fertility. Damaged fertility is unfortunately quite common in survivors of childhood cancer. A variety of chemotherapy drugs, as well as surgery affecting parts of the reproductive system or radiation therapy focused on the pelvis or brain, can damage spermatogenesis, reduce ovarian reserve, or interfere with uterine function. In general, males are more at risk than females for cancer-related infertility. Some survivors do not undergo puberty without hormonal support. For others, fertility may recover over time. However, many young women who have menstrual cycles in their teens or twenties are at risk for premature ovarian failure, leaving a narrowed window of time to become pregnant. Men do not know whether they have normal sperm counts, motility, or form unless they have had a recent semen analysis. People diagnosed with cancer before puberty may never have been counseled about fertility. Even survivors treated as teens or young adults typically do not know their fertility status unless they have consulted an expert in reproductive endocrinology or andrology. Surveys of young survivors suggest that the majority want to have children, particularly those who are childless. A number of studies have documented markers of infertility or reduced rates of offspring in survivors of cancer, but little has been known about their perceptions of their fertility status. In the paper that accompanies this podcast, Lehmann and colleagues present novel data about the risk perceptions for infertility compared to indicators of actual gonadal function in over a thousand long-term survivors of childhood cancer participating in the St. Jude Lifetime Cohort. None of the participants already had children or a previous pregnancy. The mean age of the sample was 29, with a mean follow-up of 22 years since cancer diagnosis. 85% were white and 32% had at least a 4-year college degree. 52% were married or in a relationship. Only 10% of men and 16% of women had been tested for infertility outside of the study. Gonadal function was measured by a semen analysis in 56% of men and by a panel of hormones in the others. In women under age 40, status as fertile vs. sub-fertile was assigned by chart review based on menstruation, diagnosed premature ovarian failure, or hormonal assays. Perception of risk for infertility was based on one question with a Likert scale of 5 response options, comparing one’s own fertility to that of peers who had not had cancer. Answers were dichotomized into two categories: perceived at risk for fertility or perceived normal fertility. 62% did perceive themselves as at risk for infertility. Those who perceived their fertility as damaged had characteristics that would indicate potentially more knowledge about cancer and fertility, including being older, white, in a relationship, having a college education, a history of gonadotoxic treatment, having tried unsuccessfully to conceive, or having sexual dysfunction. In actuality, 24% of women and 56% of men had evidence of impaired gonadal function. However, actual medical status had no significant relationship to perceptions of risk. The most common discordance was that the survivor believed him or herself to have damaged fertility when medical tests appeared normal. This included 20% of men and 44% of women. Inaccurate perceptions were more common in respondents who were white, had more education, had more gonadotoxic cancer therapy, were very concerned about their fertility, and had sexual dysfunction. In contrast only 16% of men and 5% of women overestimated their fertility potential. In terms of clinical implications, it is common for young survivors to overestimate their risk of infertility. Such beliefs can diminish quality of life. A young person who feels like “damaged goods” may be distressed about the future and perhaps reluctant to date or to enter into a committed relationship. For women, risky drinking was another factor associated with overestimating fertility risk. Risky drinking, and the notion that pregnancy is impossible, may contribute to findings in other studies of excess rates of unintended pregnancies and failure to use consistent contraception in young adult female survivors. Those unaware of their damaged fertility maybe in for distress and disappointment if they try for a pregnancy. Clearly a greater effort should be made to inform young survivors about risks to fertility and to refer them for testing at intervals of fertility status. It appears that women are much more likely than men to perceive themselves as potentially infertile, despite the fact that men are more likely to be infertile. However, the measures of gonadal function used in women were not sensitive enough to predict the likelihood of diminished ovarian reserve in the future. Many young survivors of cancer can conceive in their teens or twenties, yet have a steeper than normal drop-off in ovarian reserve with aging, so that their menopause occurs far before the average age of 51. In fact, women’s fears in this study that they will have trouble getting pregnant in the future may be more accurate than they appear. Age at first pregnancy has steadily increased in our society, with women postponing childbearing until they have completed educational goals or established a working life. More cancer survivors are likely to run out of time before they are ready to have a child. One solution may be commercial egg banking before age 25-30, if fertility preservation was not accomplished before starting cancer treatment. Egg banking is expensive, however, and does not guarantee a future pregnancy. This survey adds to our knowledge of the informational needs of survivors of cancer in childhood or teen years. Both medical and counseling support should be more readily available. Survivors with lower health literacy would be particularly good targets for such services. This concludes this JCO Podcast. Thank you for listening.
Dear readers and listeners, In this episode of the Works in Progress podcast, we discuss our experiment 4: the writing journal. We describe what we noticed about the journal, what we liked, what we didn’t like, and whether the journal influenced our writing progress or habits. Topics include: Complaining about and also praising using numbers to represent how you feel about something (e.g., how we used ‘1’ to represent not wanting to write, and ‘5’ to represent really wanting to write. These are also called Likert scales)How we organize our writing notebooks and documentsAdapting the writing journal to suit the needs of different kinds of projects The episode is 33 minutes long, and you can listen on Apple Podcasts, Google Play, TuneIn, Stitcher, or wherever else you get your podcasts. If you prefer, here's the transcript for episode 4. As always, we welcome your feedback! Love, Noah and Indigo
How to mitigate the risks associated with rating scales in market research questionnaires? Know all your scale options. Likert scales are great for many cases, but so are semantic differential scales. Not familiar with semantic differential scales? Tune in for some quick tips that will help your professional market research instruments shine.
Today we use a case study to understand the typical responses we get from parents, friends, coworkers, and supervisors when we are emotionally elevated. Cheri uses a 5-point Likert scale to explain how she feels when the typical response is actually harmful.
Today we use a case study to understand the typical responses we get from parents, friends, coworkers, and supervisors when we are emotionally elevated. Cheri uses a 5-point Likert scale to explain how she feels when the typical response is actually harmful.
Show Notes for Podcast Five of Sex & Why Host: Jeannette Wolfe Topic: Stress Response For Acute Care Medicine and Introduction to Sex and Gender Based Medicine CME Cruise Opportunity click here Part 2 on biological sex differences in the stress response with special guest Justin Morgenstern We started out with a discussion on different ways to frame potential sex and gender based research using a method described by Dr. M McCarthy A full discussion of this framework can also be found on my website McCarthy MM et al, The Journal of Neuroscience: the official journal of the Society for Neuroscience. 2012;32(7):2241-2247. There appears to be a significant amount of individual variation in how some individuals respond to and recover from similar stresses. Some of these differences may be influenced by our biological sex. Understanding how we react and respond to stress and how this may perhaps differ from other individuals around us may help us better communicate and lead under stressful situations. Study #1 This was a follow up study to an infamous study the same team did three years before in which they looked at sex differences in reward collection on a computer balloon game (Balloon Analogue Risk Task or BART). In this game, players got 30 balloons and the farther they pumped them up the more points they got however, each balloon was also set to randomly pop somewhere between 1- 128 pumps and if the player popped their balloon before they cashed it in they lost points for that balloon. Study participants were randomized to control vs stress condition (placing hand in neutral versus ice water for 3 min) and then played the game. They found that in neutral conditions there was no significant difference in risk taking (number of pumps 39 for women versus 42 for men, but under stress women decreased their pumping to 32 while men increased to 48). In this 2012 study, Lighthall's group adjusted its protocol so that BART could now be played in an MRI scanner. Unfortunately, the new BART design subtly changed the game because now instead of going through 30 balloons, participants played the game for a set amount of time with unlimited balloons. This inadvertently added a second strategy to get lots of points as the new design allowed participants to get points by either pumping additional air into an individual balloon or rapidly moving through a greater number of balloons while pumping only a few pumps per balloon. Stress intervention was again either a cold or neutral temperature water bath and after submersion the researchers collected cortisol samples and scanned participants while they played the game. Results- no difference in control conditions (room temp water) between men and women in number of balloon pumps or points earned But under stress men acted more quickly and got increased rewards while women appeared to slow down their reaction time and decrease their rewards. Men had higher baseline and stimulated cortisol but there was no difference b/w men and women in the amount of cortisol change between baseline and stressed condition. Under basic non stress conditions- during the control testing it appeared that overall men and women utilized the same brain regions to complete the balloon task (i.e. suggesting that males and females approach the task by using similar neural strategies), however once stressed men and women seemed to use different areas of their brain. Men used their dorsal striatum and anterior insula more. Anterior insula has been associated with switching tasks from a riskier to a safer option (and in both sexes higher activity in this region correlated with higher collection rate) and the dorsal striatum is believed to be associated with obtaining predictable rewards and with integrating sensory, motor, cognitive and emotional signals. Did not find that men had increased risk taking in this study but it may have been masked in that there was now a lower risk strategy available to them that still was associated with an increased reward (pumping balloon a small amount and quickly cashing in to get to next balloon). Concept discussed is that under stress men may possible go into type one systemic thinking (automatic) while women may favor type 2 (deliberate cognitive inquiry). Lighthall, N. R., Mather, M., & Gorlick, M. A. (2009). Acute stress increases sex differences in risk seeking in the balloon analogue risk task. PloS One, 4(7), e6002. https://doi.org/10.1371/journal.pone.0006002 Lighthall, N. R., Sakaki, M., Vasunilashorn, S., Nga, L., Somayajula, S., Chen, E. Y. Mather, M. (2012). Gender differences in reward-related decision processing under stress. Social Cognitive and Affective Neuroscience, 7(4), 476–84. https://doi.org/10.1093/scan/nsr026 Study #2: Goal to determine if: Under equal subjective sensations of stress (i.e. men and women objectively rate their subjective level of stress the same on a 1-10 point scale) do men and women use the same brain circuitry to process stress or do they use different circuitries. What they did: Collect cognitive, psychiatric, and drug use assessments on 55 men and 41 women aged 19-50 Exclusions TBI, psychoactive meds, history of substance abuse, preg, DSM-IV mental health disorder and currently menstruating or oral contraceptive use (to try and mitigate additional hormonal influences) Over course of 2-3 sessions put them into a MRI scanner and asked them to visualize neutral or stress inducing images (this technique has previously been validated and involved the subjects own audiotaped accounts of stressful –rated as greater than 8 on 1-10 Likert scale- or neutral experience) which was later played back to them in MRI scanner Asked them to rank their level of stress Looked to see which areas of the brain lit up under different conditions Results Men and women appeared to have different strategies for guided visual tasks in general regardless of whether listening to neutral or stressful recordings: Men: More likely to light up areas associated with motor processing and action. Caudate, midbrain, thalamus, and cingulate gyrus and cerebellum Women: More likely to light up areas associated with visual processing, verbal expression and emotional experience Right temporal gyrus, insula and occipital lobe Women were also more likely to increase their HR regardless of condition (likely from having increased autonomic arousal- though other studies suggest that women have increased HR at baseline compared to men in general) Under stress men and women had firing in opposite directions: Men dampened while women increased firing in: Dorsal Medial pre-frontal cortex, parietal lobes (including inferior parietal lobe and precuneus region) left temporal lobe, occipital area and cerebellum. Believed functions of these different regions Dorsal medial frontal cortex – executive functioning of cognitive control, self-awareness of emotional discomfort, strategic reasoning, and regulation Precuneus- part of the parietal lobe associated with self-referential and self-consciousness Inferior parietal lobe- cognitive appraisal and consideration of response strategies (also area often associated with mirror imaging) Left temporal gyrus- processes verbal information Occipital area- processes visual information Cerebellum- besides coordinating motor movement also is involved in emotional and cognitive processing “Taken together, the observed differences in these regions suggest that men and women may differ in the extent to which they engage in verbal processing, visualization, self-referential thinking, and cognitive processing during the experience of stress and anxiety.” They also suggest that under stress men may feel anxious due to “hypoactivity” while women may feel stress due to “hyperactivity” in above noted regions. Conclusion: Men and women use different neural strategies under stress even with similarly reported stress levels This research is still clearly in its infancy but suggests that under stress some men, may turn down activity in areas of their brains involved in executive functioning and that this might increase their vulnerability to impulsivity. Conversely, under stress some women may actually turn up activity in these regions that could lead to excessive rumination and possibly depression. The authors then extrapolate their data to suggest that men and women might possibly benefit from different stress reduction techniques in that some men might benefit more from cognitive behavioral therapy which enhances frontal lobe firing and some women from mindful meditation which dampens it. Seo, D., Ahluwalia, A., Potenza, M. N., & Sinha, R. (2017). Gender Differences in Neural Correlates of Stress-Induced Anxiety. Journal of Neuroscience Research, 125, 115–125. Study #3 This study literally looks at what conditions men and women might seek out increased physical interaction with their dog after an agility competition. The background here is that in 2000 Dr. SE Taylor questioned whether the flight of fight response which has classically been described as a “universal” stress response, was actually applicable to both males and females. She questioned how realistic it was for a female who might be physically smaller and less muscular than her male peer to successfully fight or run away from a potential attacker. She suggested an alternative response of “tend and befriend” which suggests that under stress that women may naturally migrate towards their children as well as others within their intimate circle with the belief that a larger group may offer protection and a pooling of resources. Additional support for this theory is the idea that oxytocin, which has receptors throughout the brain and is usually found in higher amounts in women, may be released during this affiliative behavior and help to dampen the physiological cortisol stress response. This study was done to see if men and women seek out physical contact with another being (in this case their dog) in similar fashion when they are stressed. They chose to study human contact with a dog versus an interaction with another human to try and mitigate the influence of any “gender expectation” violations. Which in English means that if Rob would normally seek out Carol when he is stressed, he might decide not to do so in public (and in this case being videotaped) because he doesn't want to appear “less masculine”. As public affection with one's dog is considered less gender biased, the authors chose this interaction as a marker for affiliative behavior. What they did: Videotaped and took cortisol saliva levels from 93 men and 91 women after they had run their dog through a competitive agility course. Recording and samples were taken as participants waited for their official score (although subjectively most participants pretty much already knew whether or not their dog had scored high enough to move on.) The researchers measured cortisol levels and how much participants petted their dog while waiting for this score. Results: 36 of results excluded because dogs did not finish course and were disqualified Overall there was no sex difference in total affiliative behavior Of first 180 seconds of video tape women petted dog on average 27 seconds and men 25 seconds When men and women perceived they lost, their cortisol level increased more than those who perceived they had advanced. Differences occurred however as to when men and women were more likely to pet their dogs Women petted them more when they sensed defeat- an additional 12 seconds compared to women who had won Men petted them more when they sensed victory- an additional 7 seconds when compared to men who had lost Conclusions: women sought out affiliative behavior when they lost, men sought it out when they won. Justin and I use this paper as a discussion point as to understanding how two people may get exposed to the same stressor and respond quite differently and importantly how they sort of bounce back from a stressful situation may also differ. This paper suggests that emotional debriefing after stressful experiences may be more helpful to some individuals than others. For more on the stress response please see Justin's new post on First10EM Sherman G, Rice L, Shuo Jin E, et al: (2017) Sex differences in cortisol's regulation of affiliative behavior. Hormones and Behavior 92, 20- 28
Isabel talks with Dr. Conrad Fivaz, the clinical director of Priority Solutions, and Gigi Marshall Knight, Emergency Communication Nurse System (ECNS) program administrator. They discuss the goals of secondary triage, the importance of studying low acuity calls, and how ECNS affects emergency dispatchers. For Your Information: Priority Solutions Inc™: http://www.prioritysolutionsinc.com/ Ambulate: to walk or move about The abstract for the ALPHA study mentioned by Isabel at 17:58: https://www.cambridge.org/core/journals/prehospital-and-disaster-medicine/article/using-onscene-ems-responders-assessment-and-electronic-patient-care-records-to-evaluate-the-suitability-of-emdtriaged-lowacuity-calls-for-secondary-nurse-triage-in-911-centers/138A2F44D4B7E9E4F0EFA316D744EE8F REMSA (Regional Emergency Medical Services Authority): https://www.remsahealth.com/ MedStar: http://www.medstar911.org/ Likert scale: used for measuring customer satisfaction (https://www.simplypsychology.org/likert-scale.html) Want to get involved in a study? Have a question? Email us at dispatchindepth @ emergencydispatch (dot) org
Isabel talks with Dr. Conrad Fivaz, the clinical director of Priority Solutions, and Gigi Marshall Knight, Emergency Communication Nurse System (ECNS) program administrator. They discuss the goals of secondary triage, the importance of studying low acuity calls, and how ECNS affects emergency dispatchers. For Your Information: Priority Solutions Inc™: http://www.prioritysolutionsinc.com/ Ambulate: to walk or move about The abstract for the ALPHA study mentioned by Isabel at 17:58: https://www.cambridge.org/core/journals/prehospital-and-disaster-medicine/article/using-onscene-ems-responders-assessment-and-electronic-patient-care-records-to-evaluate-the-suitability-of-emdtriaged-lowacuity-calls-for-secondary-nurse-triage-in-911-centers/138A2F44D4B7E9E4F0EFA316D744EE8F REMSA (Regional Emergency Medical Services Authority): https://www.remsahealth.com/ MedStar: http://www.medstar911.org/ Likert scale: used for measuring customer satisfaction (https://www.simplypsychology.org/likert-scale.html) Want to get involved in a study? Have a question? Email us at dispatchindepth @ emergencydispatch (dot) org
This week’s hard question comes from – and there are two names on here; I’m not sure which one is correct – it’s Mara or Nara. But this individual asks a very good question; it’s actually a series of questions. I’ll just try to quantify. It says:“I received an email asking me as a parent to fill out a satisfaction survey. I was curious what the student survey looked like... so I clicked on the link for the ‘School Quality Survey for Elementary Students.’ The questions all looked pretty generic... until #12. Students who misbehave receive consequences?” And, of course, this survey is a Likert scale so it goes from Strongly Agree, Agree, Neutral, Disagree, Strongly Disagree. The question was “Students who misbehave receive consequences?” “This seems like a question that elementary students shouldn't know the answer to because if the students with behavior issues are being dealt with on an individual basis and not a blanketed ‘shaming’ approach, their consequences shouldn't be evident or any of the other students' concern. (Unless, of course, there is an altercation on the playground or it directly affects another student.) Which leads to my question: Has FCPS come up with a more effective way to redirect the children in the class that are acting outside the set class rules other than time out or isolation or public shaming?” And then another question: “Are the children that are not excelling with the academics and misbehaving and the children that are excelling but bored and need more of a challenging approach being disciplined in the same way? If so, is being forced to chant the alphabet when you are ready to read not punishment enough?”
This week’s hard question comes from – and there are two names on here; I’m not sure which one is correct – it’s Mara or Nara. But this individual asks a very good question; it’s actually a series of questions. I’ll just try to quantify. It says:“I received an email asking me as a parent to fill out a satisfaction survey. I was curious what the student survey looked like... so I clicked on the link for the ‘School Quality Survey for Elementary Students.’ The questions all looked pretty generic... until #12. Students who misbehave receive consequences?” And, of course, this survey is a Likert scale so it goes from Strongly Agree, Agree, Neutral, Disagree, Strongly Disagree. The question was “Students who misbehave receive consequences?” “This seems like a question that elementary students shouldn't know the answer to because if the students with behavior issues are being dealt with on an individual basis and not a blanketed ‘shaming’ approach, their consequences shouldn't be evident or any of the other students' concern. (Unless, of course, there is an altercation on the playground or it directly affects another student.) Which leads to my question: Has FCPS come up with a more effective way to redirect the children in the class that are acting outside the set class rules other than time out or isolation or public shaming?” And then another question: “Are the children that are not excelling with the academics and misbehaving and the children that are excelling but bored and need more of a challenging approach being disciplined in the same way? If so, is being forced to chant the alphabet when you are ready to read not punishment enough?”
Intro Music: Ryan Little “Get Up” Hey everybody this is Myers Hurt with another edition of “Countdown to Match Day,” the official podcast of the Match Gurus, and the only podcast aimed at helping applicants shine on interview day. Remember to send any questions you want answered on the show via twitter @theMatchGurus or snapchat thematchgurus and we will get your questions answered. In true countdown style, this season we’ll release one podcast each week for the 40 weeks leading up to Match Day 2017. This is season 1 episode 1 - 40 weeks to go until Match Day 2017. Let’s jump into today’s topics of discussion. The Alphabet Soup of Residency Applications: ERAS - Electronic Residency Application Service - this is your application portal, and a service provided by the AAMC. You will be issued a token to register. Here is a good example of a YouTube tutorial for how to do that. For timeline, deadlines, and other reminders, follow ERAS on twitter @ERASinfo NRMP - National Resident Matching Program - this is a third party organization that conducts the actual matching process. The R3 system - register, rank, results. Here is their calendar. Here is their checklist. Follow them on twitter @TheNRMP FREIDA - Fellowship and Residency Electronic Interactive Database - list of all fellowship and residency programs published by the AMA, number of available spots, contact information, and other ECFMG - Educational Commission of Foreign Medical Graduates - for international and foreign medical students applying to the US NRMP Match - you will go through them to get your token and upload documents. OASIS - Online Applicant Status and Information System AAFP Strolling through the Match 2016 PDF Who’s Who in the residency application process? Chairperson: Head of an entire department - oversees medical student education, residency education, research, patient care, surgical simulation, finance, hiring and firing, and all other department-wide logistics. Program Director: Head of the residency education slice of a department. This is who will be overseeing residency interviews, applications, and submitting the final rank list. Program Coordinator: Administrative assistant to the residency department. Deals with new applicant communication, interview scheduling, current resident licensing, and even logistics for residency alumni. Question of the Day: Can you elaborate on the specific systems that residency programs use to evaluate residents? Dr. Michael Olson’s: The academic aspect of evaluation will never go away, but more departments are interested in a holistic approach. Make sure your Step scores are competitive, be open, honest, and clear about gaps in education. Other evaluation method is social - from first email to relaxed moments with residents, to how you interact with the other applicants. Dr. Myers Hurt’s answer: Agree with the social and “EQ” components of evaluation. There is no standardized evaluation form, but there is often a form that your interviewers will complete with Likert scale (1-5) of certain qualities to help “objectify” subjective data, along with a “comment” section. Please subscribe to catch each new episode as they are uploaded each week. If you find the content valuable please take a bit of time to leave a review on iTunes to help get the word out to other med students looking for answers. Also feel free to give us some feedback on what you think we could improve on. Check out our book on Amazon, and leave a review if you find it helpful. Thank you to everyone for listening, remember to send you questions to us through our website at www.thematchgurus.com, twitter @theMatchGurus, or snapchat.
Background: Problem-based Learning (PBL) has been suggested as a key educational method of knowledge acquisition to improve medical education. We sought to evaluate the differences in medical school education between graduates from PBL-based and conventional curricula and to what extent these curricula fit job requirements. Methods: Graduates from all German medical schools who graduated between 1996 and 2002 were eligible for this study. Graduates self-assessed nine competencies as required at their day-to-day work and as taught in medical school on a 6-point Likert scale. Results were compared between graduates from a PBL-based curriculum (University Witten/Herdecke) and conventional curricula. Results: Three schools were excluded because of low response rates. Baseline demographics between graduates of the PBL-based curriculum (n = 101, 49% female) and the conventional curricula (n = 4720, 49% female) were similar. No major differences were observed regarding job requirements with priorities for "Independent learning/working" and "Practical medical skills". All competencies were rated to be better taught in PBL-based curriculum compared to the conventional curricula (all p < 0.001), except for "Medical knowledge" and "Research competence". Comparing competencies required at work and taught in medical school, PBL was associated with benefits in "Interdisciplinary thinking" (Delta + 0.88), "Independent learning/working" (Delta + 0.57), "Psycho-social competence" (Delta + 0.56), "Teamwork" (Delta + 0.39) and "Problem-solving skills" (Delta + 0.36), whereas "Research competence" (Delta - 1.23) and "Business competence" (Delta - 1.44) in the PBL-based curriculum needed improvement. Conclusion: Among medical graduates in Germany, PBL demonstrated benefits with regard to competencies which were highly required in the job of physicians. Research and business competence deserve closer attention in future curricular development.
HistoryThe earliest known observation of possible links between maternal alcohol use and fetal damage may have been made in 1899 by Dr. William Sullivan, a Liverpool prison physician who noted higher rates of stillbirth for 120 alcoholic female prisoners than their sober female relatives and suggested the causal agent to be alcohol use (Sullivan, 1899). This view contradicted the predominant theories of the day, which were that genetics caused mental retardation, poverty, and criminal behavior. A case study popular in the early 1900s by Henry H. Goddard involved the Kallikak family and shows the bias of the time period (Goddard, 1912), though later researchers conclude that the Kallikaks almost certainly had FAS (Karp, R.J., et al, 1995). Fetal Alcohol Syndrome, or FAS, was named in 1973 by two dysmorphologists, Drs. Kenneth Lyons Jones and David W. Smith of the University of Washington Medical School in Seattle. They identified a pattern of "craniofacial, limb, and cardiovascular defects associated with prenatal onset growth deficiency and developmental delay" in eight unrelated children of three ethnic groups, all born to mothers who were alcoholics (Jones, K.L., et al, 1973). While many syndromes are eponymous, or named after the physician first reporting the association of symptoms, Dr. Smith named FAS after alcohol, the causal agent of the symptoms. His reasoning for doing so was to promote prevention of FAS, believing that if people knew maternal alcohol consumption caused the syndrome, then abstinence during pregnancy would follow from patient education and public awareness. Nobody was aware of the full range of possible birth defects from FASD or its prevalence rate at that time, but admitting alcohol use during pregnancy can feel stigmatizing to birth mothers and complicate diagnostic efforts of a syndrome with its preventable cause in the name. Over time, the term FASD is coming to predominate. Diagnostic SystemsSince the original syndrome of Fetal Alcohol Syndrome (FAS) was reported in 1973, four FASD diagnostic systems that diagnose FAS and other FASD conditions have been developed in North America: The Institute of Medicine's guidelines for FAS, the first system to standardize diagnoses of individuals with prenatal alcohol exposure (Institute of Medicine (IOM), Stratton, K.R., Howe, C.J., & Battaglia, F.C. (1996). Fetal Alcohol Syndrome: Diagnosis, Epidemiology, Prevention, and Treatment. Washington, DC: National Academy Press.),The University of Washington's "The 4-Digit Diagnostic Code," which ranks the four key features of FASD on a Likert scale of one to four and yields 256 descriptive codes that can be categorized into 22 distinct clinical categories, ranging from FAS to no findings,The Centers for Disease Control's "Fetal Alcohol Syndrome: Guidelines for Referral and Diagnosis," which established general consensus on the diagnosis FAS in the U.S. but deferred addressing other FASD conditions, andCanadian guidelines for FASD diagnosis, which established criteria for diagnosing FASD in Canada and harmonized most differences between the IOM and University of Washington's systems. Each diagnostic system requires that a complete FASD evaluation include assessment of the four key features of FASD--prenatal alcohol exposure, FAS facial features, growth deficiency, and central nervous system damage. A positive finding on all four features is required for a diagnosis of FAS, the first diagnosable condition of FASD that was discovered. However, prenatal alcohol exposure and central nervous system damage are the critical elements of the spectrum of FASD, and a positive finding in these two features is sufficient for an FASD diagnosis that is not "full-blown FAS." Diagnoses and diagnostic criteria will be described in detail in the next podcast. Feedback or comments may be sent to: Michael__at__FASDElephant__dot__com. My Podcast Alley feed! {pca-6ab64b0bda8df39635beb79ecf0e0585}