Compensation practices are constantly evolving, and staying ahead requires a keen understanding of the latest trends and data. In this episode, we're sitting down with Amy Stewart, Principal of Content Strategy; Sara Hillenmeyer, Senior Director of Data Science; and Lulu Seikaly, Senior Corporate Attorney, to pore over the latest insights from the Compensation Best Practices Report 2025. We'll explore the hottest trends in pay strategy, the bold flavors of pay transparency, and the strongest shifts impacting compensation programs today.

Key Highlights:
—Analysis of the 2025 Compensation Best Practices Report, labeled "The Year of Contention."
—Examination of trends in pay transparency and the legislative impacts on organizations.
—Discussion of the increased adoption of AI and compensation technology in HR processes.
—Insights into employee demand for transparency and the role of managers in pay discussions.

Quotes:
—"Organizations are communicating more about compensation practices, and this transparency is being followed by higher investments in compensation data." – Lulu Seikaly
—"Companies are increasingly using AI to supercharge their existing HR and compensation teams with real-time data and improved workflows." – Sara Hillenmeyer

Episode Resources:
—Get the CBPR 2025 Report: https://www.payscale.com/research-and-insights/cbpr/
—Watch our expert panel: https://www.payscale.com/events/2025-compensation-best-practices-panel/
Global health systems have long been shaped by Western frameworks that separate health from land, environment, and community. But for Indigenous communities worldwide, health is holistic—deeply rooted in ancestral knowledge, cultural traditions, and reciprocal relationships with nature. Yet Indigenous ways of knowing have been overlooked and undervalued within research, policymaking, and health interventions. How can we shift this paradigm and centre Indigenous-led approaches in global health?

In this episode, we speak with Dr. Walter Flores, Dr. Rebecca Rae, and Dr. Lorenda Belone about Indigenous communities in health research, examining systemic barriers, the importance of Indigenous knowledge in health equity, navigating differences between Indigenous and Western research approaches, and how policy shifts impact Indigenous communities. We also discuss the connection between research, activism, and advocacy.

Our guests:

Dr. Walter Flores – Research Professor, Accountability Research Center, American University, Washington DC, USA
Dr. Walter Flores is a social scientist and human rights advocate with over 25 years of professional experience. He holds a PhD and a Masters of Community Health from the Liverpool School of Tropical Medicine, UK. Dr. Flores' professional work has been carried out in more than 30 countries across Latin America, Africa, Asia and Europe. His areas of expertise are health systems and policy, the right to health and indigenous populations, democratic governance, social accountability, legal empowerment and community participation. Currently, Dr. Flores is a research professor at the Accountability Research Center, American University, Washington DC, and a research associate at the Center for the Study of Equity and Governance in Health Systems.

Dr. Lorenda Belone – Professor, University of New Mexico College of Population Health / Center for Participatory Research
Dr. Belone (Diné/Navajo) is from Naakaii Bito', located on the Navajo Nation, and has been engaged in community-based participatory research (CBPR) with an Indigenous paradigm focused on health disparities with southwest tribal nations. Her research includes partnerships with Tribal Research Teams (Apache, Navajo & Pueblo) on an Indigenous family prevention program called the Family Listening Program (FLP). As an Indigenous CBPR researcher, Dr. Belone integrates her own cultural and tribal knowledge to overcome historical negative research experiences and tribal community members' perceptions of research exploitation.

Rebecca Rae, MCRP, MWR – Research Lecturer III, University of New Mexico College of Population Health
Rebecca Rae (Jicarilla Apache), MCRP, MWR, is a Research Lecturer III at the University of New Mexico's College of Population Health. She is an Indigenous scholar with eighteen years of experience implementing community-based participatory research (CBPR) projects and Indigenous participatory evaluation in partnership with Tribal communities. She works closely with multiple tribal community partners to mentor, strengthen, and enhance community members' skills in program development, implementation, data collection, data analysis, grant writing, research, and evaluation.

Useful links:

Want to hear more podcasts like this? Follow Connecting Citizens to Science on your usual podcast platform or YouTube to hear more about current research and debates within global health. The podcast cuts across disciplines, including health systems strengthening, gender and intersectionality, tropical diseases (NTDs, TB, malaria), maternal and child healthcare (antenatal and postnatal care), mental health and wellbeing, vector-borne diseases, climate change and co-production approaches. If you would like your project or programme to feature in an episode or...
Today I am delighted to share this imperfectly perfect, perfectly imperfect conversation with Professor Melissa Walls, an Indigenous researcher who works with American Indian and First Nations communities to promote health equity through culturally centered projects.

Melissa Walls, PhD (Bois Forte and Couchiching First Nation Anishinaabe) is Director of the Great Lakes Hub for the Johns Hopkins Center for American Indian Health. Dr. Walls is an Indigenous social scientist committed to collaborative research with Indigenous communities to promote health equity. Her involvement in community-based participatory research (CBPR) projects to date includes mental health epidemiology; culturally relevant, family-based substance use prevention and mental health promotion programming and evaluation; and examining the impact of stress and mental health on diabetes. Dr. Walls's collaborative work has received funding from the National Institutes of Health and the Public Health Agency of Canada.

In this episode, we explore her amazing work and her journey, and how, when anyone encounters Indigenous thought, the invitation is maybe not to interrogate it but maybe just to support it. An invitation to be curious and be open.

Melissa's request is also: "Please stop making us justify who we are, because a lot has been taken already, and this is taking a lot of energy away from us."

And so much more... I invite you to take a listen. I am sure you will love every minute of this. I know I did!
L-R: Biju Suresh-Babu, Head of Banking & Financial Services, Fiorano; Akhil Rao, Managing Director, Nth Exception

Most financial institutions are working on their ISO 20022 migration plans, with some in advanced stages and already adopting the new rules for specific use cases. Others are still assessing their needs, but time is pressing. Structured data is not just a technical requirement but also a regulatory one. Robin Amlôt of IBS Intelligence discusses the progress of CBPR+ migration with Biju Suresh-Babu of Fiorano and Akhil Rao of Nth Exception.
Today on the show we welcome Genine Coleman! We discuss documenting the legacy of California cannabis.

Proposition 64, the California cannabis legalization ballot initiative passed in 2016, created cannabis-specific taxes. A portion of these cannabis tax revenues is used to fund cannabis research initiatives through California's public universities. On April 25th, 2023, the California Department of Cannabis Control awarded $2.7 million to a group of academic researchers, scientists, and community-based organizations to develop a multidisciplinary, community-based participatory research (CBPR) study that will identify, document, and help to preserve the history, value, and diversity of California's legacy cannabis genetics and the communities that steward them.

Genine Coleman is the founder and Executive Director of Origins Council, a nonprofit education, research and policy advocacy organization that serves some 800 members of California's legacy cannabis growing regions. Origins Council is dedicated to sustainable rural economic development within legacy cannabis-producing regions and to establishing nationally and internationally recognized, legally defensible, standards-based geographic indication systems for cannabis. A former grower, Genine has over 20 years of cannabis cultivation experience. In 2012, she stopped cultivating cannabis to take up cannabis patient and policy advocacy. She is the co-founder of the Mendocino Appellations Project, now a regionally sponsored project of Origins Council, and serves on the Board of Directors of the 420 Archive, which is devoted to collecting, preserving and sharing the history of cannabis culture and prohibition in the United States. Genine is also a founding board member of the Mendocino Cannabis Alliance, formed in 2019. From 2017–2020, she served on the Board of Directors of the California Growers Association.
Because of the nature of SAM, this episode is more video-heavy than usual. See our YouTube!

Because vision is first among equals in multimodality, and yet SOTA vision language models are closed, we've always had an interest in learning what's next in vision. Our first viral episode was Segment Anything 1, and we have since covered LLaVA, IDEFICS, Adept, and Reka. But just like with Llama 3, FAIR holds a special place in our hearts as the New Kings of Open Source AI.

The list of sequels better than the originals is usually very short, but SAM 2 delighted us by not only being a better image segmentation model than SAM 1; it also conclusively and inexpensively solved video segmentation in just as elegant a way as SAM 1 did for images, releasing everything to the community under Apache 2.0/CC BY 4.0.

"In video segmentation, we observe better accuracy, using 3x fewer interactions than prior approaches. In image segmentation, our model is more accurate and 6x faster than the Segment Anything Model (SAM)."

Surprisingly Efficient

The paper reports that SAM 2 was trained on 256 A100 GPUs for 108 hours (59% more than SAM 1). Taking the upper-end $2/hour A100 cost off gpulist.ai, SAM 2 would have cost ~$55k to train at external market rates - surprisingly cheap for adding video understanding!

The newly released SA-V dataset is also the largest video segmentation dataset to date, with careful attention given to scene/object/geographical diversity, including that of annotators. In some ways, we are surprised that SOTA video segmentation can be done on only ~50,000 videos (and 640k masklet annotations).

Model-in-the-loop Data Engine for Annotations and Demo-first Development

Similar to SAM 1, a 3-phase data engine helped greatly in bootstrapping this dataset.
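As a sanity check, the training-cost estimate above is simple arithmetic. A minimal sketch, assuming the paper's reported figures and treating the $2/GPU-hour A100 rate as an external market-price assumption rather than a measured cost:

```python
# Back-of-the-envelope SAM 2 training cost from the reported figures.
# The $2/hour A100 rate is an assumed upper-end market price (via gpulist.ai),
# not Meta's actual internal cost.
gpus = 256        # A100 GPUs reported in the paper
hours = 108       # wall-clock training time
usd_per_gpu_hour = 2.0  # assumed market rate

gpu_hours = gpus * hours              # total GPU-hours consumed
cost_usd = gpu_hours * usd_per_gpu_hour

print(gpu_hours, cost_usd)  # 27648 55296.0  -> roughly $55k
```

At 27,648 GPU-hours, the estimate is sensitive mostly to the assumed hourly rate; halving the rate halves the headline number.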
As Nikhila says in the episode, the demo you see wasn't just for show; they actually used this same tool to do annotations for the model that is now demoed in the tool:

"With the original SAM, we put a lot of effort into building a high-quality demo. And the other piece here is that the demo is actually the annotation tool. So we actually use the demo as a way to improve our annotation tool. And so then it becomes very natural to invest in building a good demo because it speeds up your annotation and improves the data quality, and that will improve the model quality. With this approach, we found it to be really successful."

An incredible 90% speedup in annotation happened due to this virtuous cycle, which helped SA-V reach this incredible scale.

Building the demo also helped the team live the context that their own downstream users, like Roboflow, would experience, and forced them to make choices accordingly.

As Nikhila says:

"It's a really encouraging trend for not thinking about only the new model capability, but what sort of applications folks want to build with models as a result of that downstream. I think it also really forces you to think about many things that you might postpone. For example, efficiency. For a good demo experience, making it real time is super important. No one wants to wait. And so it really forces you to think about these things much sooner, and actually makes us think about what kind of image encoder we want to use, or other hardware efficiency improvements. So those kinds of things, I think, become a first-class citizen when you put the demo first."

Indeed, the team swapped out standard ViT-H Vision Transformers for Hiera (Hierarchical) Vision Transformers as a result of efficiency considerations.

Memory Attention

Speaking of architecture, the model design is probably the sleeper hit of a project filled with hits.
The team adapted SAM 1 to video by adding streaming memory for real-time video processing: specifically, a memory attention module, a memory encoder, and a memory bank, which surprisingly ablated better than more intuitive but complex architectures like Gated Recurrent Units.

One has to wonder if streaming memory can be added to pure language models with a similar approach... (pls comment if there's an obvious one we haven't come across yet!)

Video Podcast

Tune in to Latent Space TV for the video demos mentioned in this video podcast!

Timestamps

* [00:00:00] The Rise of SAM by Udio (David Ding Edit)
* [00:03:07] Introducing Nikhila
* [00:06:38] The Impact of SAM 1 in 2023
* [00:12:15] Do People Finetune SAM?
* [00:16:05] Video Demo of SAM
* [00:20:01] Why the Demo is so Important
* [00:23:23] SAM 1 vs SAM 2 Architecture
* [00:26:46] Video Demo of SAM on Roboflow
* [00:32:44] Extending SAM 2 with other models
* [00:35:00] Limitations of SAM: Screenshots
* [00:38:56] SAM 2 Paper
* [00:39:15] SA-V Dataset and SAM Data Engine
* [00:43:15] Memory Attention to solve Video
* [00:47:24] "Context Length" in Memory Attention
* [00:48:17] Object Tracking
* [00:50:52] The Future of FAIR
* [00:52:23] CVPR, Trends in Vision
* [01:02:04] Calls to Action

Transcript

[00:00:00] [music intro]

[00:02:11] AI Charlie: Happy Yoga! This is your AI co-host Charlie. Thank you for all the love for our special 1 million downloads Wins of AI Winter episode last week, especially Sam, Archie, Trellis, Morgan, Shrey, Han, and more. For this episode, we have to go all the way back to the first viral episode of the podcast, Segment Anything Model and the Hard Problems of Computer Vision, which we discussed with Joseph Nelson of Roboflow.

[00:02:39] AI Charlie: Since Meta released SAM 2 last week, we are delighted to welcome Joseph back as our fourth guest co-host to chat with Nikhila Ravi, Research Engineering Manager at Facebook AI Research and lead author of SAM 2.
Just like our SAM 1 podcast, this is a multimodal pod because of the vision element, so we definitely encourage you to hop over to our YouTube at least for the demos, if not our faces.

[00:03:04] AI Charlie: Watch out and take care.

[00:03:10] Introducing Nikhila

[00:03:10] swyx: Welcome to the latest podcast. I'm delighted to do Segment Anything 2. One of our very first viral podcasts was Segment Anything 1 with Joseph. Welcome back. Thanks so much. And this time we are joined by the lead author of Segment Anything 2, Nikhila Ravi. Welcome.

[00:03:25] Nikhila Ravi: Thank you. Thanks for having me.

[00:03:26] swyx: There's a whole story that we can refer people back to, the episode of the podcast way back when, for the story of Segment Anything. But I think we're interested in just introducing you as a researcher, on the human side: what was your path into AI research? Why, you know, why did you choose computer vision coming out of your specialization at Cambridge?

[00:03:46] Nikhila Ravi: So I did my undergraduate degree in engineering at Cambridge University. The engineering program is very general. So the first couple of years, you sort of study everything from mechanical engineering to fluid mechanics, structural mechanics, material science, and also computer science.

[00:04:04] Nikhila Ravi: Towards the end of my degree, I started taking more classes in machine learning and computational neuroscience, and I really enjoyed it. And actually, after graduating from undergrad, I had a place at Oxford to study medicine. And so I was initially planning on becoming a doctor, had everything planned, and then decided to take a gap year after finishing undergrad.

[00:04:28] Nikhila Ravi: And actually that was around the time that sort of deep learning was emerging. And in my machine learning class in undergrad, I remember one day our professor came in, and that was when Google acquired DeepMind. And so that became like a huge thing. We talked about it for the whole class.
It kind of really stuck.

[00:04:48] Nikhila Ravi: And I was kicked off thinking about, okay, maybe I want to try something different other than medicine. Maybe this is a different path I want to take. And then in the gap year, I did a bunch of coding, worked on a number of projects, did some sort of freelance contracting work. And then I got a scholarship to come and study in America.

[00:05:06] Nikhila Ravi: So I went to Harvard for a year, took a bunch of computer science classes at Harvard and MIT, worked on a number of AI projects, especially in computer vision. I really, really enjoyed working in computer vision. I applied to Facebook and got this job at Facebook, at the time, now Meta, and I've been here for seven years. So a very circuitous, probably very unconventional path: I didn't do a PhD, I'm not a typical research scientist, definitely came from more of an engineering background. But since being at Meta, I have had amazing opportunities to work across so many different interesting problems in computer vision, from 3D computer vision (how can you go from images of objects to 3D structures?) and then going back to 2D computer vision and actually understanding the objects and the pixels in the images themselves.

[00:05:50] Nikhila Ravi: So it's been a very interesting journey over the past seven years.

[00:06:05] swyx: It's weird because, like, I guess with Segment Anything 2, it's like 4D, because you solve time, you know? You started with 3D and now you're solving the 4D.

[00:06:14] Nikhila Ravi: Yeah, it's just going from 3D to images to video. It's really covering the full spectrum. And actually, one of the nice things has been, so I think I mentioned I wanted to become a doctor, but actually SAM is having so much impact in medicine, probably more than I could have ever had as a doctor myself.
So I think, you know, hopefully SAM 2 can also have a similar sort of impact in medicine and other fields.

[00:06:39] The Impact of SAM 1 in 2023

[00:06:39] swyx: Yeah. I want to give Joseph a chance to comment. Does that also mirror your... we know your story about going into vision, but like in the past year, since we did our podcast on SAM, what's been the impact that you've seen?

[00:06:51] Joseph Nelson: Segment Anything set a new standard in computer vision. You know, recapping from the first release to present: SAM introduces the ability for models to, near zero-shot (meaning without any training), identify kind of perfect polygons and outlines of items and objects inside images. And that capability previously required lots of manual labeling, lots of manual preparation, clicking very meticulously to create outlines of individuals and people.

[00:07:25] Joseph Nelson: And there were some models that attempted to do zero-shot segmentation of items inside images, though none were as high quality as Segment Anything. And with the introduction of Segment Anything, you can pass an image with SAM 1 (and videos as well with SAM 2) and get pixel-perfect outlines of most everything inside the images.

[00:07:52] Joseph Nelson: Now there are some edge cases across domains, and similar to the human eye, sometimes you need to say, like, which item maybe you most care about for the downstream task and problem you're working on. Though SAM has accelerated the rate at which developers are able to use computer vision in production applications.

[00:08:13] Joseph Nelson: So, at Roboflow, we were very quick to enable the community of computer vision developers and engineers to use SAM and apply it to their problems. The principal ways of using SAM: you could kind of use SAM as is, to like pass an image and receive back masks.
Another use case for SAM is in preparation of data for other types of problems.

[00:08:37] Joseph Nelson: So, for example, in the medical domain, let's say that you're working on a problem where you have a bunch of images from a wet lab experiment. And from each of those images, you need to count the presence of a particular protein that reacts to some experiment. To count all the individual protein reactions, you can go in, and lab assistants to this day will still, like, kind of individually count and say what are the presence of all those proteins.

[00:09:07] Joseph Nelson: With Segment Anything, it's able to identify all of those individual items correctly. But often you may need to also add, like, a class name to what the protein is. Or you may need to say, hey, like, I care about the protein portion of this, I don't care about the rest of the portion of this in the image.

[00:09:26] Joseph Nelson: And, or, what it encourages and asks the user to do is to provide some visual prompting to say, hey, which part? Like, SAM says, hey, I can find segments of anything, but which segments do you care about? And so you can do visual prompting, which is kind of a new primitive that SAM introduced. And so at Roboflow, we have one portion of our tool stack that enables users to very quickly label data.

[00:09:48] Joseph Nelson: With Segment Anything, SAM can already provide, hey, here's where I see the outlines of objects. Or a user can click to prompt to say, hey, here's where the outlines of objects matter. And I recently pulled statistics from the usage of SAM in Roboflow over the course of the last year. And users have labeled about 49 million images using Segment Anything on the hosted side of the Roboflow platform.

[00:10:12] Joseph Nelson: And that's like 5 million in the last 30 days alone. And of those images, we did kind of a rough back-of-the-napkin calculation of how much time that has saved.
Because, again, the alternative is you're clicking individual points to create a polygon, and with SAM you just click once and it guesses where the polygon is.

[00:10:32] Joseph Nelson: And I'm sure in a bit we can maybe screen share and show some examples of what this experience is like. And in that time estimation, it's like, on average, saves, you know, maybe a dozen or so seconds. And we estimate that this has probably saved on the order of magnitude of 35 years of time for users.

[00:10:53] Nikhila Ravi: That's incredible.

[00:10:54] Joseph Nelson: So, I mean, basically, like, in the first year of a model being available, not only can you say, hey, I'm just going to go use this model. Those numbers, that like 49 million images, is an estimate directly related to just the hosted side. So imagine all of the users that are self-hosting or using SAM for robotics applications or out in the field or offline, where it's not even, like, the time or the image counts are tabulated.

[00:11:20] Joseph Nelson: And we're probably talking about, you know, just a fraction of the amount of value that's actually being produced for a number of downstream tasks. So to say that the impact has been, you know, people use terms like game-changing and these sorts of things. It has changed the industry. It's set a new standard.

[00:11:36] Joseph Nelson: And with the release of SAM 2, I think we're about to see an acceleration of those capabilities for a lot of reasons.

[00:11:42] Nikhila Ravi: That's really great to hear. I think one of the really striking things about SAM 1 was how many fields actually rely on manual segmentation. I think we're not really exposed to that. Maybe you are at Roboflow, because you get to see all the users of these tools.

[00:11:57] Nikhila Ravi: But for me, it was, you know, people working on understanding coral reef bleaching, or farmers counting their cows, and so many different applications that as a researcher you never get exposed to, but you can have impact towards.
So I think that was really awesome to hear.

[00:12:15] Do People Finetune SAM?

[00:12:15] swyx: So as sort of audience surrogate, who knows less than the two of you, I'm going to ask a really dumb question maybe, but is everyone using stock Segment Anything?

[00:12:23] swyx: Are they fine-tuning for the medical domain? Like, how on earth could it work for the medical field without fine-tuning, right? Like, is that a thing?

[00:12:32] Nikhila Ravi: So, I mean, I can give a quick perspective from the research side. So one of the design decisions we made in SAM was to not have class labels. And so all the data is annotated in a class-agnostic way.

[00:12:48] Nikhila Ravi: So anything that has a boundary, we consider to be an object. So for example, in any image, there are lots of small objects. We might not know what the names of them are, but if you can draw a boundary around it, it counts. So you can imagine that we have 11 million images in the SA-1B dataset, we annotated all the objects, and there are many, many small objects.

[00:13:12] Nikhila Ravi: And so if you think about cells, they're also kind of small objects; there are probably things in the training data that looked like them, but we didn't have to label them. And so that means that even when you use SAM for applications that it wasn't really trained for, because we didn't restrict it to a certain set of categories, you can actually use it out of the box without custom adaptation.

[00:13:35] Nikhila Ravi: But having said that, there are probably certain domains where you need some expertise in order to be able to segment something properly.
And for those use cases, having some extra fine-tuning data would probably help, and we've sort of seen that there are some papers that have come out that do this. And, you know, we'd love to hear, Joseph, how people are collecting data with SAM and fine-tuning for their use cases.

[00:13:59] Joseph Nelson: Once SAM came out, there were adaptations that said, could we use SAM to be, you know, like, EfficientSAM? Like, basically take SAM and maybe accelerate it. And then there were domain-adapted SAMs, like CellSAM, for example, out of the UC system. Now, what's interesting is, with adapting SAM to a domain, there's kind of two ways by which that's done.

[00:14:21] Joseph Nelson: One is, as you mentioned, like, potentially SAM doesn't have a good concept of the objects of interest, and so you need to do domain adaptation and increase the accuracy for zero-shot prediction. The second way, though, is it's not fine-tuning. It's actually just prompting. It's just guiding the model's existing knowledge to say which segments you care about. And both of those are actually kind of equally important on the application side. You need to, like, a priori ensure that the objects of interest can be correctly segmented, and maybe collect data to do that.
But even if you had, like, a perfect SAM, like an omniscient SAM that could see every segment in every domain with all pixels perfectly outlined, in production you would still need some way to almost, like, signal to the model what you care about. Like, to paint this picture: if you are, like, a retailer and you are providing photos of models wearing your clothing on your retail site, you may care about, you know, only the shirt, and SAM by default might segment the full person. And so there's, you know, visual prompting that you can do to ensure that you only outline maybe the shirt, for the purposes of swapping in and out different shirts for displaying a given model on a retail page. And so I think what's interesting is that's where, like, I wouldn't call it domain adaptation, but that's where, like, when you apply it to industry, that's one thing that's particularly important with tooling and enabling SAM to reach its full potential.

[00:15:51] swyx: That's really encouraging to hear. I should also think, like, you know, the last time we talked about this, we wanted to... the very natural addition on the class labeling side is the Grounding DINO work, right? So I think people built a Grounding SAM and all the other extensions.

[00:16:05] Video Demo of SAM

[00:16:05] swyx: I think it's probably a good time to cut to a quick demo of SAM 2, for people who are tuning in for SAM 2, and who better to demo SAM 2 than Nikki.

[00:16:15] Nikhila Ravi: Sure. So I'll try to narrate what I'm doing, so audio listeners can also understand. So we have a web demo where anyone can try SAM 2 on a video. Here we have a video of someone kicking a football, and I'm going to click on the football to select the object in the first frame. But you can actually select the object in any frame of the video, and this will work.

[00:16:40] Nikhila Ravi: The next step is to hit track. So the model's now tracking this in real time. We don't save any of this; it's all running in real time.
And now you can see the ball has been tracked throughout the entire video. There's even, like, a little bit of a challenging case here where the shoe covers the football.

[00:16:59] Nikhila Ravi: And actually, you know, the model makes a little bit of a mistake here. But that's okay, because we can actually add a refinement click. You can add negative clicks until we get the mask that we want on this frame. And then you can hit track again, and the model will track the object, taking into account the additional information I've provided at that frame.

[00:17:25] Nikhila Ravi: We've also added a couple of other fun things you can do on top of the track, like add effects. We can add, you know, foreground effects, background effects. And these are just ways of showing how we can use the output from SAM 2 as part of other tools, like video editing tools or other systems. So this is just a preview of what you can do with SAM 2, but the really cool use cases are places where we might not have even imagined SAM 2 being useful.

[00:17:54] Nikhila Ravi: So we have a number of examples of things you might want to use it for. There are, like, underwater videos that it works actually really well for, even though our models have never really seen an octopus before, and an octopus has a lot of moving parts. SAM 2 can actually quite effectively keep track of all the different tentacles, and we can probably see it more clearly if I desaturate the background.

[00:18:18] Nikhila Ravi: We can see that actually the tracking of all the different tentacles is quite accurate. Another challenge with video is that objects can actually become occluded. They can disappear from view and reappear. And a really fun example here is the shuffling cup game, which many of you might have seen. And so here I can click on the ball in the first frame.

[00:18:41] Nikhila Ravi: I can also, you know, click on a different cup.
And so here, the additional challenge is that there are three cups that look exactly the same. And then there's the ball, which will get occluded by the cup. So the ball's no longer visible, the cups are all moving around, they all look the same. But the model actually keeps track of the cup that we selected.

[00:19:02] Nikhila Ravi: And, as you can see at the end, here I'll jump to the end so you can see, it actually finds the cup again. I wanted to point out a couple of fun demo UX features that we added that actually really helped with this. So if you can see at the bottom, there are these swim lanes, and the thickness of the swim lane tells you if the object's visible or not.

[00:19:22] Nikhila Ravi: So at the beginning, the object's visible,

[00:19:25] swyx: the object

[00:19:26] Nikhila Ravi: disappears, and then the object comes back. So you can actually visually tell when the object's being occluded and when it's not. And so it's a nice way of, like, knowing if you need to go in and fix the model prediction or not. And so these are some of the UX innovations that we came up with, as well as the model innovations.

[00:19:46] Joseph Nelson: One thing that I think is really notable here, there are two things. One is that, like, I'd love to have a little bit of a discussion about how the model is keeping track of the embedded scene, to keep track of the ball and the cup in different places.
Put a pause on that for a second.[00:19:59] Why the Demo is so Important[00:19:59] Joseph Nelson: One thing that Meta has put an emphasis on here, to a much greater degree than other model releases, is the demo experience: recognizing that in addition to having a model that can do zero shot segmentation, you've created a web experience that allows folks to kind of experience both the video effects and the types of UX innovations that encourage usage and adoption.[00:20:23] Joseph Nelson: It's actually kind of reminiscent of how the underlying technology of ChatGPT was available prior to the web experience of ChatGPT. Can you talk a bit about why that was a consideration to your team and how you thought about the creation of the demo experience in tandem with training and releasing a new model?[00:20:41] Nikhila Ravi: Yeah, absolutely. I think that's a really great example of how, you know, ChatGPT was really more of a UX innovation. Obviously there were a number of research innovations that helped to get to this point, but as you said, like, the underlying technology was around for a while. And, you know, putting this UX around it as a chat interface helped tremendously with the[00:21:03] Nikhila Ravi: adoption and people understanding how it could be useful for real world use cases. And in computer vision especially, it's so visual. The best way to show how these models work is by trying it on your own image or your own video. With the original SAM, we put a lot of effort into building like a high quality demo.[00:21:23] Nikhila Ravi: And the other piece here is that the demo is actually the annotation tool. So we actually use the demo as a way to improve our annotation tool. And so then it becomes very natural to invest in building a good demo, because it speeds up your annotation and improves the data quality, and that will improve the model quality.[00:21:43] Nikhila Ravi: With this approach, we found it to be really successful.
And obviously externally, people really liked being able to try it. I think, you know, people in fields outside of machine learning would never have tried SAM if we didn't have that demo. And I think that definitely led to a lot of the adoption in, like, diverse fields.[00:22:05] Nikhila Ravi: And so because we saw that with SAM, for SAM 2, like, the demo was a priority first class citizen from day one. And so we really invested in making that. And I think with SAM2 as well, we wanted to have like a step change in the demo experience. Interactive video segmentation, I think that experience is something that maybe has not had much thought given to it.[00:22:27] Nikhila Ravi: And we really wanted to be like, okay, if we are to design a step-changing video segmentation experience, what would that look like? And that really did influence our model and annotation design as well.[00:22:40] Joseph Nelson: It's a really encouraging trend: not thinking about only the new model capability, but what sort of applications folks want to build with models as a result of that, downstream.[00:22:49] Nikhila Ravi: I think it also really forces you to think about many things that you might postpone, for example, efficiency.[00:22:55] Joseph Nelson: Yes.[00:22:55] Nikhila Ravi: For a good demo experience, making it real time is super important. No one wants to wait. And so it really forces you to think about these things much sooner, and it actually makes us think about what kind of image encoder we want to use, or other hardware efficiency improvements.[00:23:13] Nikhila Ravi: So those kinds of things, I think, become a first class citizen when you put the demo first.[00:23:19] SAM 1 vs SAM 2 Architecture[00:23:19] Joseph Nelson: That's one thing I was going to ask about, and this is related to the architecture change. So SAM1 and the SAM1 demo experience.
You have the encoder that's creating the embeddings of all the potential spaces.[00:23:31] Joseph Nelson: That needs to be run on a GPU. That's a relatively intensive operation. But then the query of those embeddings can be run independently and on a cheaper process. So in the SAM1 demo, the way that it was structured, and also this is the way that we have our SAM tool structured in Roboflow as well, is images go to a GPU to get all the SAM-based embeddings.[00:23:53] Joseph Nelson: But then for querying those embeddings, we do that client side, in the browser, so that the user can very quickly, you know, move their mouse over and get the proposed candidate masks that SAM found for that region of the image. In SAM 2 you dropped that in the web demo. And I think that's because you made some notable improvements to the rate at which encoding happens.[00:24:16] Joseph Nelson: Can you talk a bit about what led to those speed increases and, again, how that interplays with providing a fast user experience for interacting with the model?[00:24:29] Nikhila Ravi: Yeah. So the SAM2 web demo is primarily focused on video. We decided to just keep it simple and focus on video, and on GitHub, we have a Colab notebook that shows how to run SAM2 on images.[00:24:41] Nikhila Ravi: So if you're interested in replacing SAM with SAM2 for images, check out GitHub. But for the SAM2 demo, it's not as straightforward to adopt the same architecture as SAM for video, because we can't send the per-frame image embeddings for an entire video back to the front end. In SAM, each frame embedding was like four megabytes, but if you have a long video and that's like per frame, it would become impossible to send that back to the front end.[00:25:11] Nikhila Ravi: So, in terms of the architecture details, I was actually just looking at this earlier, but the SAM1 model was around 630 million parameters.
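As a quick back-of-the-envelope check on the bandwidth problem Nikhila describes, the per-frame embedding cost adds up fast. This sketch uses the roughly four-megabyte-per-frame figure from the conversation; the frame rate and clip length are illustrative assumptions:

```python
# Rough arithmetic: shipping SAM-style per-frame embeddings to the browser.
# 4 MB/frame is the figure mentioned above; 24 fps and 60 s are assumptions.
MB_PER_FRAME = 4
FPS = 24
DURATION_S = 60

total_mb = MB_PER_FRAME * FPS * DURATION_S   # embeddings for the whole clip
total_gb = total_mb / 1024

print(f"{total_mb} MB (~{total_gb:.2f} GB) of embeddings for a one-minute clip")
```

Several gigabytes per minute of video is why the SAM 1 pattern of sending embeddings client side doesn't carry over, and why the SAM 2 demo runs everything server side.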
It's a fraction of the size of these large language models, so very small. Actually, SAM2, the largest model, is around 224 million parameters. So it's actually around one third the size of the original SAM model.[00:25:38] Nikhila Ravi: So we changed the image encoder from a ViT-H in SAM to a Hiera model, which was also developed by Meta. So that definitely was something that helped. And in terms of the efficiency compared to SAM: if we were to run SAM per frame on a video versus run SAM 2, it's around six times faster to run SAM 2 than to run SAM per frame.[00:26:03] Nikhila Ravi: A number of things improved the efficiency of SAM2 such that we were actually able to run this entirely on the server and not have any component in the front end. But I am very curious to see who puts this on device. Like, I'm pretty sure soon we'll see like an on-device SAM2, or, you know, maybe even running in the browser or something.[00:26:25] Nikhila Ravi: I think that could definitely unlock some of these edge use cases, but we were able to make a compelling web demo without having to do that.[00:26:34] swyx: Hugging Face is probably already working on a Transformers.js version of it, but totally makes sense. I want to talk more about things from the paper, but I think we're still in this sort of demo section.[00:26:42] Video Demo of SAM on Roboflow[00:26:42] swyx: And so I want to hand it to Joseph for his demo to see what the Roboflow site looks like.[00:26:47] Joseph Nelson: So I can give some context into one key area that, Nikhila, you mentioned earlier, which is: SAM has made the decision, both SAM 1 and SAM 2, to be class agnostic in terms of its predictions. And with that, you then have the ability to have a generalizable model for zero shot capability.[00:27:05] Joseph Nelson: However, in a lot of domain applications, you do want the class wise name.
And so a lot of the challenge can be adding that class wise name, at least for the annotation, to an experience that we've created. That's one of the key considerations. So I will similarly share my screen and show an example.[00:27:27] Joseph Nelson: Here, I have a bunch of images, and there's a number of ways that I could annotate things: like, I could prompt a large multimodal model with, like, grounding capabilities, you know, you could outsource it, or I can do manual labeling. And with the manual labeling, this is where we make use of models like Segment Anything[00:27:45] Joseph Nelson: to propose candidate masks and make it faster. So we have, you know, this annotation pane and what we call the smart poly tool, which is powered by Segment Anything. This is currently Segment Anything 1. We're accelerating and seeing improvements similar to what the paper shows, with Segment Anything 2 performing better on[00:28:06] Joseph Nelson: images as well as video. But with Segment Anything, I'm able to basically prompt regions of my image of interest. So for example, like, say I want to add the drum set. You'll see here that, like, the original candidate proposal is just the bass drum, but let's say I wanted the whole drum set.[00:28:26] Joseph Nelson: So the UX primitive of being able to add and subtract candidate regions of interest is really intuitive here. And now, great, I have this outline, but in fact what I want is to name that as a class. Because maybe for the model that I'm building, I want to build like a task specific model, you know, like an object detection model or an instance segmentation model.[00:28:50] Joseph Nelson: Or, you know, maybe I'm even using like a multimodal model and I want that multimodal model to refer to regions of interest in the images as a specific thing. And so I think what's, you know, really powerful is, of course, like, I get this really rich zero shot prediction.
And here we have our friend Rick.[00:29:10] Joseph Nelson: So I get this really rich candidate set of predictions. But then by adding the class wise label, I can, you know, very quickly make sure that any downstream tasks are aware not just of the segment, but also of what is inside that segment. Which actually takes me to a separate point, of something that I predict is probably going to happen, and Nikhila, I'm actually kind of interested why maybe your team made a conscious decision to not do this initially with SAM2.[00:29:40] Joseph Nelson: There's been an emergent set of models that are also adding open text prompting capabilities to grounding models. So for example, you've seen models like Grounding DINO or OWL-ViT, where, you know, you can do even image to image or text to image based prompting to find regions of interest. And maybe I can actually give an example of that, even in the context of this same data.[00:30:05] Joseph Nelson: So if I wanted to try out, you know, Grounding DINO on this same set of images, I could try out prompting Grounding DINO for a set of different classes. And what's notable is, let's prompt for person, and prompt for, I don't know, microphone.[00:30:26] Joseph Nelson: Here I can text prompt the image, and then the understanding, in this case Grounding DINO's understanding, of where people are in this image allows me to create, in this case, bounding boxes. But, you know, soon you can do segmentations, or in tandem with SAM do segmentations. And, you know, we've already seen applications of using SAM2 in tandem with models like Grounding DINO or Florence 2,[00:30:54] Joseph Nelson: so that people can basically text prompt and then get the benefits of the zero shot segmentation at the same time as getting the open form querying.
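The handoff Joseph describes (an open-text prompt produces grounding boxes, and each box prompts a SAM-style model for a mask) can be sketched as a small pipeline. The two model calls below are hypothetical stand-ins, not real Grounding DINO or SAM APIs; only the plumbing is the point:

```python
# Sketch of the text-prompt -> box -> mask handoff. `detect_boxes` and
# `segment_box` are hypothetical stubs standing in for a grounding model
# (e.g. Grounding DINO) and a SAM-style segmenter, respectively.

def detect_boxes(image, text_prompt):
    """Stand-in for a grounding model: text prompt -> list of (x0, y0, x1, y1) boxes."""
    # A real implementation would run Grounding DINO / OWL-ViT here.
    return [(10, 10, 50, 80), (60, 20, 90, 70)]

def segment_box(image, box):
    """Stand-in for a SAM-style model: box prompt -> mask (faked here)."""
    return {"box": box, "mask": f"mask-for-{box}"}

def text_prompted_segmentation(image, ontology):
    """ontology maps open-text prompts to class names, Autodistill-style."""
    results = []
    for prompt, class_name in ontology.items():
        for box in detect_boxes(image, prompt):
            seg = segment_box(image, box)
            seg["class"] = class_name  # the class wise label SAM alone doesn't give you
            results.append(seg)
    return results

labels = text_prompted_segmentation(image=None, ontology={"shipping container": "container"})
print([(r["class"], r["box"]) for r in labels])
```

The point of the composition is that the grounding model supplies the class wise name and the segmenter supplies the pixel-accurate region, which neither gives you alone.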
And in doing so, you know, we maintain a framework called Autodistill, so, like, folks can very quickly, you know, bring some images and then use Autodistill to define an ontology and then prompt and say what you want from that ontology.[00:31:19] Nikhila Ravi: So you already do this for video as well?[00:31:21] Joseph Nelson: You can apply it to videos or groups of images, yes. So this is using a project called Autodistill. And the concept of Autodistill is: use a base model, like a big base model, which could be like SAM or Grounding DINO, and then you pass a directory of images, which also could be video broken into individual frames, and you pass an ontology as well.[00:31:43] Joseph Nelson: So an example I was just showing was, like, the hello world we have, which is like a shipping container. And then the combination of the grounding capabilities of, in the example I was showing, Florence 2 plus SAM looks for the concept of container, and then SAM does the rich segmentation of turning that concept of container into the candidate proposal of the region, so that a user could just say, hey, I want all the shipping containers, run this across a bunch of images or video frames, and then get back the class wise labels plus the regions of interest.[00:32:17] Joseph Nelson: And this feels like a natural extension. And in fact, like, adding open form grounding capabilities on top of SAM became something the field was broadly doing between SAM1 and SAM2. So I'm curious, like, from your perspective, one of the things I thought maybe SAM2 would do is actually add this capability natively. So I'm curious to hear, like, about the conscious decision to say, hey, we want to continue to be class agnostic.[00:32:39] Extending SAM 2 with other models[00:32:39] Joseph Nelson: We don't want to add, yet maybe, open form text prompting as a part of finding the segments and parts of images. And I'd love to hear about, like, the decision to think about it that way.
And whether you are encouraged by, or would expect and encourage, kind of what's happening here, where people are naturally combining these capabilities, despite not having it[00:33:00] Joseph Nelson: in the base model itself.[00:33:02] Nikhila Ravi: Yeah, it's a great question. So I think it's really cool that the community is taking SAM and taking SAM 2 and building on top of it and coming up with cool applications. We love to see that. That's exactly why we open source our work. And then in terms of why we didn't put it into SAM 2: as you've probably seen with SAM and SAM 2, it's a fairly narrow problem.[00:33:25] Nikhila Ravi: But we really tried to make it a step change in the capability. And so with each version, we are trying to limit the focus to one thing that we know we can do really well. And in this case, like, with the first SAM, it was class agnostic segmentation, but can we do it so well that it's effectively solved?[00:33:47] Nikhila Ravi: And similarly, can we do that same thing, but with video segmentation? So, one step at a time, we are working on each of these problems one at a time, so that we can actually deliver something that's really world class and step changing.[00:34:03] Joseph Nelson: So does that mean SAM 3 will have the text prompting problem as, like, the next challenge?[00:34:09] Nikhila Ravi: Who knows, who knows? Maybe the community will build that too. So[00:34:15] Joseph Nelson: it makes sense to, like, very narrowly do something very well. And that's, I think, proven to be well accomplished.[00:34:21] Nikhila Ravi: It's like taking both the data, the model, and the demo, and how can we push all three towards solving one thing really well?[00:34:30] Nikhila Ravi: So we found that.
That's like a good recipe, and that's how we've limited the focus of each of these models.[00:34:38] swyx: This development reminds me of how, you know, when you break out the interpretability of ConvNets, you can see, like, oh, this is the edge detection one. I feel like SAM is the edge detection version equivalent.[00:34:51] swyx: And then you build up to whatever the next feature is on top of that.[00:34:54] Limitations of SAM: Screenshots[00:34:54] Joseph Nelson: Can I bring up one limitation of SAM? So, like, even with SAM one and SAM two: the model was released at 4 PM Pacific on Monday, and we're recording this at 11 AM Pacific on Thursday. So it's very fresh for a lot of the capabilities.[00:35:09] Joseph Nelson: It is so clear that it is a stepwise change in the capability that, Nikhila, you mentioned your team wants to do, which is extend SAM's zero shot class agnostic capability to video. Like, A plus, kind of mission accomplished. One thing that's interesting is finding, like, domain problems where there might be still domain applicability and domain adaptation that is available.[00:35:32] Joseph Nelson: One benchmark that we introduced at CVPR is this thing called RF100, which is like seven different domain type problems that the industry commonly is working on in vision, like underwater, document processing, aerial examples, medicine examples. And one place where, interestingly, Segment Anything is maybe less performant than other models is handling screenshots.[00:35:57] Joseph Nelson: For example, like, a lot of folks that are building agents to interact with the web are particularly interested in that challenge of: given a screenshot of a computer, what are all the buttons, and how could I autonomously navigate and prompt and tell it to click?
And I can show an example of, like, how SAM kind of performs on this challenge, just to outline some of the context of this problem.[00:36:23] Joseph Nelson: But I'm curious, like, how you think about limitations like this and what you would expect to want to be the case. So here I just have a notebook where I run SAM on the source image on the left, and then the SAM output is on the right. And this is just a screenshot of a website, where we just grabbed, like, the top 100 websites by traffic and grabbed screenshots from them.[00:36:42] Joseph Nelson: One example of a place where I could see the community improving on SAM, and I'm curious how you think about this challenge and maybe why SAM is less well adapted for this type of problem, is processing screenshots. So I'll share my screen to give an example. For viewers that are participating here, you see, like, an example, a screenshot of a website on the left, and then on the right is SAM 2 running on that image.[00:37:06] Joseph Nelson: And in the context of agents, folks usually want to have, like, hey, tell me all of the buttons that an agent could press. Tell me, like, maybe the headlines of the articles, tell me the individual images. And SAM 2 behaves perhaps predictably, where it outlines, like, people in the images and, like, some of the screen text.[00:37:22] Joseph Nelson: I'm curious, like, how you think about a challenge like this for a model that sees everything in the world: what about handling digital contexts, and why maybe it could perform better here, and how you would expect to see improvement for domains that might have been out of distribution from the training data?[00:37:40] Nikhila Ravi: Yeah, this is a good question. So at FAIR, we don't really build with a specific use case in mind. We try to build, like, these foundational models that can be applied to lots of different use cases out of the box.
So I think in this kind of example, potentially people might want to annotate some data and[00:37:59] Nikhila Ravi: fine tune on top of what we release. I think we probably won't build things that are very custom for different use cases. I think that's not a direction we'll go in, but as you said, like, the model is an annotation tool to improve the model. And so I think that's definitely the approach we want to take: we provide the tools for you to improve the model, as well as the model itself.[00:38:27] Joseph Nelson: That makes sense. Focus on, like, as many multi or zero shot problems, and then allow the community to pick up the torch for domain adaptation.[00:38:34] Nikhila Ravi: Yeah, absolutely. Like, we can't solve all the problems ourselves. Like, we can't solve all the different domains. But we can provide a sort of base hammer tool, and then people can apply it to all their different problems.[00:38:48] SAM 2 Paper[00:38:48] swyx: If you don't mind, I guess we want to transition to a little bit on, like, asking more questions about the paper.[00:38:53] Udio AI: Sure.[00:38:54] swyx: There's a lot in here. I love the transparency from Meta recently, with, like, Llama 3 last week. And was it last week? Maybe a little bit less than a week ago. But it's just really, really well written, with a lot of disclosures, including the data set as well.[00:39:08] SA-V Dataset and SAM Data Engine[00:39:08] swyx: I think the top question that people had was on the data set. You know, you released diverse videos, and there was a lot of discussion about the data engine as well, which I really love, and I think is innovative. I think the top question is, like, how do you decide the size of the data set?[00:39:22] swyx: You know, what were you constrained by? People are asking about scaling laws. You had some ablations, but as a research manager for this whole thing, like, how do you decide what you need?[00:39:32] Nikhila Ravi: Yeah.
I mean, it's a great question. I think, as with all papers, you write them at the end of the project, so we can put these nice plots at the end. But going into it, I think, you know, the data engine design really follows[00:39:47] Nikhila Ravi: the model design: how we thought about the task, how we thought of the model capabilities. You can really see it's reflected in the different phases of the data engine. We started with just SAM: we applied SAM per frame. That's like the most basic way of extending SAM to video. Then the most obvious thing to do is to take the output masks from SAM and then provide them as input into a video object segmentation model that takes the mask as the first frame input.[00:40:19] Nikhila Ravi: And that's exactly what we did. We had SAM plus a version of SAM2 that only had mask as input. And then in the last phase, we got rid of SAM entirely and just had this one unified model that can do both image and video segmentation, and it can do everything in just one model. And we found that, you know, going from each phase, it both improved the efficiency and it improved the data quality.[00:40:46] Nikhila Ravi: And in particular, when you get rid of this two part model, one of the advantages is that when you make refinement clicks. So, you prompt the model in one frame to select an object, then you propagate those predictions to all the other frames of the video to track the object. But if the model makes a mistake and you want to correct it, when you have this unified model, you only need to provide refinement clicks.[00:41:14] Nikhila Ravi: So you can provide maybe a negative click to remove a region or a positive click to add a region. But if you had this decoupled model, you would have to delete that frame prediction and re-annotate from scratch.
And so you can imagine, for more complex objects, this is actually adding, like, a lot of extra time to redefine that object every time you want to make a correction.[00:41:39] Nikhila Ravi: So both the data and the data engine phases really follow, like, how we thought about the model design and the evolution of the capabilities, because it really helped us to improve the data quality and the annotation efficiency as well.[00:41:54] swyx: Yeah, you had a really nice table with, like, time taken to annotate, and it was just going down and down.[00:41:58] swyx: I think it was, like, down by 90 percent by the time you hit stage[00:42:02] Joseph Nelson: three, which is kind of cool. We joke that when SAM 1 came out, at Roboflow we're like, was this purpose built for our software? Like, you have the embedding take, like, a big model, and the querying of the embeddings a smaller model that happens in the browser, which felt remarkably aligned.[00:42:18] Joseph Nelson: Now, hearing you talk about how you think about building models with a demo in mind, it makes sense. Like, you're thinking about the ways that folks downstream are going to be consuming and creating value. So what felt like maybe a coincidence was perhaps a deliberate choice by Meta to take into account how industry is going to take seminal advances and apply them.[00:42:36] Nikhila Ravi: Yeah. And it's not just humans. Like, it could also be a model that outputs boxes that then get fed into this model. So really thinking about this as a component that could be used by a human, or as a component, as part of a larger AI system. And that has, you know, a number of design requirements. It needs to be promptable.[00:42:56] Nikhila Ravi: It needs to have the zero shot generalization capability. We, you know, need it to be real time.
Those requirements really are very core to how we think about these models.[00:43:08] Memory Attention to solve Video[00:43:08] swyx: I cannot end this podcast without talking about the architecture, because this is effectively the sort of research level, architecture level innovation that enabled what I've been calling object permanence for SAM.[00:43:22] swyx: And that's memory attention. What was the inspiration going into it? And, you know, what did you find?[00:43:27] Nikhila Ravi: Yeah, so at a high level, the way we think about extending SAM to video is that an image is just a special case of a video that has one frame. With that idea in mind, we can extend the SAM architecture to be able to support segmentation across videos.[00:43:45] Nikhila Ravi: So this is a quick video that shows how this works. In the SAM architecture, we have the image encoder, we have a prompt encoder, we have a mask decoder. You can click on an image, and that basically is a prompt; we use that prompt along with the image embedding to make a mask prediction for that image. Going to SAM2, we can also apply SAM2 to images, because we can, as I said, treat an image as a video with a single frame.[00:44:15] Nikhila Ravi: And so in the SAM2 architecture, we introduce this new memory mechanism that consists of three main components. There's memory attention, there's a memory encoder, and then there's a memory bank. And when we apply SAM2 to images, these are effectively not used, and the architecture just collapses down to the original SAM architecture.[00:44:35] Nikhila Ravi: But when we do apply this to video, the memory components become really useful, because they provide the context of the target object from other frames. And so this could be from past frames. There's two types of memory.
So there's, like, the conditional frames, or the prompted frames, which are basically the frames at which a user or a model provides input, like clicks.[00:45:01] Nikhila Ravi: And then there's, like, the surrounding frames, and we use, say, six frames around the current frame as memory of the object. So there's both those types of memory that we use to make the prediction. Going into a little bit more detail about that, there's, like, two kinds of memory that we use.[00:45:18] Nikhila Ravi: So one is, like, spatial memory. So it's this high resolution memory that captures the spatial details. And then we also have this, like, longer term object pointer memory that captures some of the sort of higher level concepts. And I think, swyx, you had a comment about how this relates to the sort of context window in LLMs.[00:45:37] Nikhila Ravi: And both of these types of memories have some relation to context windows; they both provide different types of information, on the spatial side or in terms of the concept of the objects that we want to track. And so we found that having, like, a six frame length for the spatial memory, coupled with this longer period of the object pointer memory, provides strong video segmentation accuracy at high speed.[00:46:01] Nikhila Ravi: So, as I mentioned, the real time aspect is really important. We have to find this speed accuracy trade off. And one way in which we sort of circumvent this is by allowing additional prompts on subsequent frames. So even if the model makes a mistake, maybe it loses the object after an occlusion, you can provide another prompt, which actually goes into the memory.[00:46:24] Nikhila Ravi: And so the prompted frames are always in the memory. And so if you provide a prompt on a frame, the model will always remember what you provided.
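The memory design Nikhila outlines (prompted frames that are always remembered, plus a rolling window of roughly six recent frames) can be sketched with a toy memory bank. The names and data structures here are illustrative, not SAM 2's actual implementation:

```python
# Toy sketch of a SAM 2-style memory bank: prompted (conditioning) frames are
# never evicted, while unprompted frames live in a fixed-size rolling window.
from collections import deque

class MemoryBank:
    def __init__(self, num_recent=6):
        self.prompted = {}                      # frame_idx -> features; kept forever
        self.recent = deque(maxlen=num_recent)  # (frame_idx, features); FIFO eviction

    def add(self, frame_idx, features, is_prompted):
        if is_prompted:
            self.prompted[frame_idx] = features  # user clicks are always remembered
        else:
            self.recent.append((frame_idx, features))

    def context(self):
        """Frame indices the memory attention would condition on next."""
        return sorted(set(self.prompted) | {i for i, _ in self.recent})

bank = MemoryBank(num_recent=6)
bank.add(0, "feat0", is_prompted=True)      # initial click on frame 0
for i in range(1, 11):
    bank.add(i, f"feat{i}", is_prompted=False)
bank.add(11, "feat11", is_prompted=True)    # refinement click after an occlusion

print(bank.context())  # prompted frames 0 and 11 survive; only the last 6 unprompted do
```

This captures the behavior described above: old unprompted frames fall out of the window, but a refinement click stays in memory for the rest of the video.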
And so that's a way in which we can sort of avoid some of the model failure cases. That actually is a big limitation of current video object segmentation models: they[00:46:45] Nikhila Ravi: don't allow any way to recover if the model makes a mistake. And so, Joseph, going back to your point about the demo, that's something that we found just by playing with these models. There's no way to make a correction, and in many real world use cases, like, it's not going to be a one time prediction, but you actually want to be able to intervene. Like, if an LLM makes a mistake, you can actually be like, no, actually do it this way, and provide feedback. And so we really want to bring some of that thinking into how we build these computer vision models as well.[00:47:16] "Context Length" in Memory Attention[00:47:16] swyx: Amazing. My main reaction to finding out about the context length of eight input frames and six past frames as the default is: why not 60? Why not 600? In text language models, we're very used to severely extending context windows. And what does that do to the memory of your model?[00:47:35] Nikhila Ravi: So I think maybe one thing that's different is that, for objects in video, it is challenging.[00:47:41] Nikhila Ravi: Objects can, you know, change in appearance. There's different lighting conditions. They can deform. But I think a difference to language models is probably that the amount of context that you need is significantly less than maintaining a long multi-turn conversation. And so, you know, coupling this short term spatial memory with these, like, longer term object pointers, we found was enough.[00:48:03] Nikhila Ravi: So I think that's probably one difference between vision models and LLMs.[00:48:09] Object Tracking[00:48:09] Joseph Nelson: I think so.
If one wanted to be really precise with how the literature refers to object re-identification: object re-identification is not only what SAM does for identifying that an object is similar across frames; it's also assigning a unique ID.[00:48:25] Joseph Nelson: How do you think about models keeping track of occurrences of objects, in addition to seeing that the same looking thing is present in multiple places?[00:48:37] Nikhila Ravi: Yeah, it's a good question. I think, you know, SAM2 definitely isn't perfect, and there's many limitations that, you know, we'd love to see people in the community help us address. But one definitely challenging case is where there are multiple similar looking objects. Especially if that's, like, a crowded scene with multiple similar looking objects, keeping track of the target object is a challenge.[00:49:03] Nikhila Ravi: That's still something that I don't know if we've solved perfectly, but again, the ability to provide refinement clicks, that's one way to sort of circumvent that problem. In most cases, when there's lots of similar looking objects, if you add enough refinement clicks, you can get the perfect track throughout the video.[00:49:22] Nikhila Ravi: So definitely that's one way to solve that problem. You know, we could have better motion estimation. We could do other things in the model to be able to disambiguate similar looking objects more effectively.[00:49:35] swyx: I'm just interested in leaving breadcrumbs for other researchers, anyone interested in this kind of architecture.[00:49:41] swyx: Like, are there papers that you would refer people to that are influential in your thinking, or, you know, have other interesting alternative approaches?[00:49:49] Nikhila Ravi: I think there's other ways in which you can do tracking in video. You might not even need the full mask.
There are some other works that just track, like, points on objects.[00:49:59] Nikhila Ravi: It really depends on what your application is. If you don't care about the entire mask, you could just track a bounding box. You could just track a point on an object. And so having the high fidelity mask might not actually be necessary for certain use cases, and from that perspective you might not need the full capabilities[00:50:19] Nikhila Ravi: of SAM or SAM2. There are many different approaches to tracking. I would encourage people to think about what they actually need for their use case and then try to find something that fits; maybe SAM2 is too much, you know, maybe you don't even need the full mask.[00:50:37] swyx: Makes total sense, but you have solved the problem that you set out to solve, which is no mean feat, and something that we're still appreciating even today.[00:50:44] The Future of FAIR[00:50:44] swyx: If there are no further questions, I would just transition to forward looking stuff. Joseph already hinted at our interest in SAM and the future of SAM, and obviously you're the best person to ask about that. I'm also interested in how external people should think about FAIR. You know, there's all this stuff going on: Llama, Chameleon, Voicebox, ImageBind. How are things organized?[00:51:09] swyx: And, you know, where are things trending?[00:51:11] Nikhila Ravi: Yeah, so in FAIR we have a number of different research areas. I work in an area called perception. We build vision systems that look at all the fundamental problems in computer vision and ask: can we build a step change in all of these different capabilities?[00:51:29] Nikhila Ravi: SAM was one example. SAM2 is another example.
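Nikhila's point above, that many applications don't need the full mask, is easy to make concrete: given any binary mask, a cheaper tracking target (a bounding box or a single point) is trivial to derive. A generic NumPy sketch, not part of SAM or SAM2:

```python
import numpy as np

def mask_to_box_and_point(mask):
    """Derive cheaper tracking targets from a binary mask: the tight
    bounding box and the mask centroid. Generic utility sketch."""
    ys, xs = np.nonzero(mask)  # coordinates of all foreground pixels
    box = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))  # x0, y0, x1, y1
    point = (float(xs.mean()), float(ys.mean()))                        # centroid (x, y)
    return box, point
```

A downstream tracker that only consumes boxes or points can then run on these outputs, skipping high-fidelity mask prediction entirely.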
There are tons of other problems in computer vision where we've made a lot of progress, but can we really say that they're solved? And so that's really the area in which I work. And then there are a number of other research areas in language and in embodied AI,[00:51:49] Nikhila Ravi: and more efficient models and various other topics. So FAIR in general is still very much pushing the boundaries on solving these foundational problems across different domains.[00:52:07] swyx: Well, fair enough. Maybe just outside of FAIR, the future of computer vision, right?[00:52:10] CVPR, Trends in Vision[00:52:10] swyx: Like, you are very involved in the community. What's the talk of the town at CVPR? Both of you went; who's doing the most interesting work? It's a question for both of you.[00:52:19] Joseph Nelson: I think the trend we're seeing towards more zero shot capability for common examples will accelerate. I think multimodality, meaning using images in tandem with text for richer understanding, or images and video in tandem with audio and other mixed media, will be a continued acceleration trend.[00:52:43] Joseph Nelson: The way I see the field continuing to progress: the problem statement of computer vision is making sense of visual input. And I think about the world as the things that need to be observed following your traditional bell curve, where the things that most frequently exist out in the world are at the center of that bell curve.[00:53:05] Joseph Nelson: And then there are things that occur less frequently, out in those long tails. For example, as far back as 2014 you have the COCO dataset, which sets out to say: hey, can we find 80 common objects in context, like silverware and fridges and these sorts of things.
And we also conceptualized the challenge of computer vision in terms of breaking it down into individual task types, because those were the tools we had for the day.[00:53:29] Joseph Nelson: So that's why you have the origination of classification, object detection, instance segmentation. And then, as things continue to progress, you have models that need to observe areas in the long tails. And so if you think of the COCO dataset as the center of that bell curve, I think of the long tails as really edge case problems.[00:53:49] Joseph Nelson: Some of our customers, like Rivian, for example: only Rivian knows what the inside of a Rivian should look like as it's assembled and put together before it makes its way to a customer, and they're making custom parts, right? So how could a model have been trained on the things that go inside the componentry of producing a vehicle? And in essence, what's kind of happening with computer vision is you're seeing models that generalize in the middle of the bell curve push outward faster.[00:54:17] Joseph Nelson: That's where you see the advent of open text models, or the richness of understanding of multimodal models, allowing richer understanding without perhaps any training, or maybe just using pre-training and applying it to a given problem. And then there's kind of the messy middle in between those two, right?[00:54:38] Joseph Nelson: So, like, Nikhila talked about examples where SAM does well out of distribution, where it finds an octopus even though there weren't octopi in the training data.
I showed an example with screenshots, where SAM isn't yet super great, so maybe that's in the messy middle or in the longer tails for now.[00:54:54] Joseph Nelson: But what's going to happen is there need to be systems for validating. The point of view that I think about is tooling to validate that models are doing what we want them to do, adapting to the datasets that we want them to adapt to. And so there are a lot of things, on a forward looking basis, that allow propelling that expansion of generalizability.[00:55:14] Joseph Nelson: That's for open text problems. That's where scaling up of training and of dataset curation continues to play a massive role. Something that's notable, I think, about SAM2 is it's, what, 57,000 videos? 51,000 videos?[00:55:30] Nikhila Ravi: About 51,000, yeah.[00:55:32] Joseph Nelson: And 100,000 internal videos. That's, like, not massive, right? And the model size also isn't, you know, the largest, with the largest model being a couple hundred million parameters.[00:55:43] Joseph Nelson: The smallest model is 38 million parameters and can run at 45 FPS on an A100, right? Like, we're going to see more capable, more generalizable models, able to run on a wider array of problems with zero or multi shot capability at a faster rate. And I think the architecture innovations of things like SAM2, like memory, with transformers increasingly making their way into vision, and probably blended architectures increasingly too.[00:56:15] Joseph Nelson: So my viewpoint on a go forward basis is we will have that bell curve of what humans can see, both at the center of that curve and in the long tails.
And architectural changes allow richer understanding, multi and zero shot, and putting those into systems, into industry, and into contexts that allow using them in practical and pragmatic ways.[00:56:38] Joseph Nelson: Nikhila, I'd love to hear your thoughts and perspective on how you think the research trends map, or don't map, to that, and maybe some of the key innovations that you saw at CVPR this year that got you excited about the direction, and maybe some promising early directions that you're thinking about researching or pushing the boundaries of further.[00:56:56] Nikhila Ravi: Yeah, I just wanted to actually reply to a couple of things that you said. So actually, in video object segmentation, the number of classes that are annotated in these datasets, and the size of these datasets, are really small. With SAM, you know, we had a billion masks and 11 million images, but no class labels.[00:57:17] Nikhila Ravi: But even before that, there were a lot of image datasets that are annotated with a lot of class labels, whereas in video datasets the number of class labels is very small. So there's YouTube-VOS, which has 94 object categories; there's MOSE, which has around 30 or so object categories.[00:57:38] Nikhila Ravi: And they're usually, like, people, cars, dogs and cats, and all these common objects; they don't really cover a very large number of object categories. And so while SAM learned this general notion of what an object is in an image, these video tracking models actually don't have that knowledge at all.[00:58:01] Nikhila Ravi: And so that's why having this dataset is really important for the segment anything capability in video, because if you just provide the mask as the input to an off the shelf video object segmentation model.
It might not actually be able to track that arbitrary object mask as effectively as a SAM2 model that's actually trained to track[00:58:24] Nikhila Ravi: any object across the entire video. So combining two models together to try to get that capability will actually only get you so far, and being able to create the dataset to enable that anything capability was really important. We can actually see that when we do comparisons with baselines, where we provide SAM2 with the same input mask and the baseline model with the same input mask.[00:58:53] Nikhila Ravi: For example, the t-shirt of a person: SAM2 can track the t-shirt effectively across the entire video, whereas these baselines might actually start tracking the entire person, because that's what they're used to doing, and isolating it to just one part of the person is not something they were ever trained to do. And so those are sort of some of the limitations.
In this episode of Comp & Coffee, host Ruth Thomas, Chief Evangelist at Payscale, dives into the findings of Payscale's 2024 Compensation Best Practices Report (CBPR). Guests Amy Stewart, Associate Director of Content Marketing and author of the CBPR, alongside Lulu Seikaly, Payscale's Senior Corporate Attorney, share their insights on the evolving landscape of compensation management. Key topics discussed include the significant increase in organizations practicing pay transparency, the impact of artificial intelligence on compensation decisions, and the implications of minimum wage changes on compensation strategy. The episode also touches on the importance of pay communications, trends in pay equity, and the role of AI in reducing unfulfilling work. The discussions emphasize the importance of strategic compensation management to attract and retain talent, navigate legal landscapes, and foster employee trust and engagement.
On today's episode, meet Dr. Upal Basu Roy, PhD, MPH. Dr. Basu Roy is the Executive Director at LUNGevity Research where he spearheads LUNGevity's Translational Research Programs and Patient-Focused Research Center (Patient FoRCe). He has implemented and manages LUNGevity's patient-focused research including Project Transform, a multi-year, multi-stakeholder patient preference study that LUNGevity is conducting with Johns Hopkins University School of Public Health. Dr. Basu Roy also has extensive experience in community-based participatory research (CBPR) methodology in health sciences research, specifically in projects involving immigrants and minority populations. Dr. Basu Roy has a PhD in Molecular and Cellular Biology from the University of Arizona, and an MPH in Global Health Policy and Management from New York University.
In this episode of SBM's Buzz in Behavioral Medicine, Dr. Monica Baskin, a prominent figure in community based participatory research (CBPR), shares her journey from being an undergrad at Emory University to holding leadership roles at University of Alabama at Birmingham, University of Pittsburgh, UPMC Hillman Cancer Center, and SBM. Delve into her insights on the evolution of her career, the impact of personal grief and loss, and her pivotal work in diversity, equity, and inclusion (DEI). Dr. Baskin shares advice on respecting diversity, being intentional about inclusion, and dealing with the physical and mental toll of being “the first” or “the only” in a room.Key Takeaways:Stressing the importance of diverse, multidisciplinary teams, and shared decision making in community based research. The importance of DEI in shaping the future of behavioral medicine and public health.Strategies for effective leadership and fostering diversity and inclusion in academic and research settings.Career advice for early-, mid-, and senior-stage professionals in the field.Tune into this episode to explore Dr. Baskin's impactful work and gain inspiration for your own career in behavioral medicine.SBM: https://www.sbm.org/about Monica Baskin Diversity Institute for Emerging Leaders: https://www.sbm.org/training/monica-baskin-diversity-institute Community Engagement Studios: https://www.sbm.org/training/community-engagement-studiosSBM Diversity Initiatives: https://www.sbm.org/members/policies/diversity SBM Leadership Institute: https://www.sbm.org/training/leadership-institute Connect and Grow: Subscribe to SBM's BUZZ in Behavioral Medicine to unlock the secrets to a fulfilling career in behavioral medicine. Engage with each episode to learn from the best in the field and stay ahead in your professional journey. Hit 'Like', subscribe, and turn on notifications to never miss an episode dedicated to your career development in behavioral medicine. Interested in becoming a member of SBM? 
Check us out at https://www.sbm.org/membership Special Thanks to our production team and Jay Conner of Jaybird Media for his skillful work in compiling and editing the audio and video for the Buzz in Behavioral Medicine: Season 2 podcast series.
Keyonna King is an associate professor in the UNMC College of Public Health. She holds a doctorate in public health from Loma Linda University. King specializes in community-based participatory research, or CBPR, an approach that works with community members to make public health programs more equitable and inclusive. King is also a co-investigator in UNMC's BEAT Cancer study. BEAT Cancer, which stands for Black Equity, Access and Testing for Cancer, seeks to increase colorectal cancer screening and decrease mortality rates related to the disease in Omaha's Black community. Today she is in conversation with Michael Griffin. --- Support this podcast: https://podcasters.spotify.com/pod/show/riversidechats/support
In the latest episode of Med Tech Talks, Robert Klupacs is joined by Professor Max Ortiz Catalán, Head of the Neural Prosthetics Research Program at the Bionics Institute. In addition to his role at the Bionics Institute, he is also the Founder of the Center for Bionics and Pain Research (CBPR) in Sweden, a multidisciplinary engineering and medical collaboration between Chalmers University of Technology, Sahlgrenska University Hospital, and Sahlgrenska Academy at the University of Gothenburg. Professor Ortiz Catalán's mission is to develop and clinically implement technologies to eliminate disability and pain due to sensorimotor impairment. The Bionics Institute and CBPR are now working closely together toward this mission. In this episode you will hear about: Max's innovation journey from Monterrey, Mexico; to Gothenburg, Sweden; to Melbourne, Australia. The development of the bionic limb and how augmented reality can be used to relieve phantom limb pain. What motivates Max to find innovative ways to help those with amputation. More information: Learn more about Max Ortiz Catalán. Read more about Max's research into the bionic limb. Updated version 05/08/22
CardioNerds Co-Founder Dr. Dan Ambinder, Dr. Nino Isakadze (EP Fellow at Johns Hopkins Hospital), and Dr. Karan Desai (Cardiology Faculty at Johns Hopkins Hospital and Johns Hopkins Bayview) join Digital Health Expert Dr. LaPrincess Brewer (Associate Professor of Medicine, Mayo Clinic Rochester) for another installment of the Digital Health Series. In this specific episode, we discuss how digital health can both reduce and amplify health disparities. This series is supported by an ACC Chapter Grant in collaboration with Corrie Health. Notes were drafted by Dr. Karan Desai. Audio editing was performed by student Dr. Shivani Reddy. In this series, supported by an ACC Chapter Grant and in collaboration with Corrie Health, we hope to provide all CardioNerds out there a primer on the role of digital health in cardiovascular medicine. Use of versatile hardware and software devices is skyrocketing in everyday life. This provides unique platforms to support healthcare management outside the walls of the hospital for patients with or at risk for cardiovascular disease. In addition, the evolution of artificial intelligence, machine learning, and telemedicine is augmenting clinical decision making at a new level, fueling a revolution in cardiovascular disease care delivery. Digital health has the potential to bridge the gap in healthcare access, lower costs of healthcare, and promote equitable delivery of evidence-based care to patients. This CardioNerds Digital Health series is made possible by contributions of stellar fellow leads and expert faculty from several programs, led by series co-chairs Dr. Nino Isakadze and Dr. Karan Desai. Enjoy this Circulation 2022 Paths to Discovery article to learn about the CardioNerds story, mission, and values. CardioNerds Digital Health Series Page | CardioNerds Episode Page | CardioNerds Academy | CardioNerds Healy Honor Roll | CardioNerds Journal Club | Subscribe to The Heartbeat Newsletter! | Check out CardioNerds SWAG! | Become a CardioNerds Patron!
Pearls and Quotes Digital redlining occurs when a particular group has limited access to key services based on race and ethnicity, perpetuating inequities. Throughout this podcast episode, Dr. Brewer emphasizes how community engagement early in the creation of digital health technologies can mitigate structural inequities. Dr. Brewer spoke about methods to develop innovative digital health tools that are culturally sensitive and inclusive, specifically community-based participatory research (CBPR). In CBPR, community members are partners with researchers in each step of the intervention. While certain individuals and communities may have physical access to digital health tools, the tools may still remain inaccessible for several reasons. Notes In this episode, we focus on achieving digital health equity and how the very technologies meant to reduce health disparities can widen them. We started by discussing a paper from Dr. Brewer and colleagues that crystallized how digital health disparities can occur, using the example of Pokémon Go. As described in this paper, this mobile application was one of the most used applications worldwide. It incentivized users to collect virtual goods at various physical locations termed PokéStops. For public health professionals, this mobile app represented an engaging way to promote physical activity amongst users. However, some racial and ethnic minority groups in low-income, urban areas quickly took notice of the lack of PokéStops within their neighborhoods. As researchers noted, this could be considered an example of digital redlining: limiting a particular group's access to key services based on race and ethnicity. As Dr. Brewer notes in the paper, the Pokémon Go developers relied on maps that were crowdsourced from a majority white male demographic. While it may not have been deliberate, the development process created a structural digital inequity placing certain communities at a home-cour...
How can we ensure that all communities are represented in Alzheimer's and related dementias research and have access to the latest treatments and interventions? Dr. Carl Hill, the chief diversity, equity and inclusion (DEI) officer for the Alzheimer's Association, joins the podcast to delve into the significance of representation, diversity, equity, equality and inclusion within Alzheimer's disease research. He discusses the challenges of underrepresentation in clinical trials, the importance of community-based participatory research (CBPR) and the social determinants of health that influence Alzheimer's risk. Guest: Carl V. Hill, PhD, MPH, chief diversity, equity and inclusion officer, Alzheimer's Association Show Notes Learn more about the Alzheimer's Association's effort in DEI from their inaugural DEI report. Learn more about race-related topics in Alzheimer's disease from the Alzheimer's Association International Conference (AAIC) 2022 here, including a study on the impact of racism on the brain and findings on racial disparities in health equity and resources in Black and Brown communities. Listen to Dr. Hill's past episodes of Dementia Matters, “Scientific Importance Of Diversity In Alzheimer's Disease Research,” and, “Battling Health Disparities In Aging Research And Care,” on our website. Learn more about Dr. Hill in his bio on the Alzheimer's Association's website. Connect with us Find transcripts and more at our website. Email Dementia Matters: dementiamatters@medicine.wisc.edu Follow us on Facebook and Twitter. Subscribe to the Wisconsin Alzheimer's Disease Research Center's e-newsletter.
Dr. Megan Gross is a speech-language pathologist, an assistant professor at the University of Massachusetts Amherst, and the director of the Bilingual Language Development Lab. She explains why the “difference versus disorder” framework is a helpful start but may oversimplify a child's language experience, instead encouraging a move towards a “disorder within diversity” lens. Megan describes how a language disorder looks in Spanish and how shifts in language dominance over time can impact the presentation of a language disorder. She encourages us to not view bilinguals as a separate group, but as individuals on a huge spectrum of language experience. Megan also talks to us about her community-based partnerships. You'll want to take notes when listening to this one! Resources: -Megan's lab website: https://blogs.umass.edu/bld/ and Instagram: @umass_bldlab -Disorder within Diversity framework: https://pubs.asha.org/doi/10.1044/2018_LSHSS-CLSLD-17-0156 -Bedore, Peña et al. production of English grammatical forms across levels of English experience among Spanish-English dual-language learners https://pubs.asha.org/doi/10.1044/2017_LSHSS-17-0027 -Castilla-Earls et al. re: effects of shifting dominance on grammaticality https://pubmed.ncbi.nlm.nih.gov/31112666/ -LITMUS tools for evaluations in different languages: https://www.bi-sli.org/litmus-tools -Portland State resource: https://sites.google.com/pdx.edu/multicsd/home -Charles Sturt in Australia also has a great resource: http://www.csu.edu.au/research/multilingual-speech/languages -MSHA Diversity Advisory Group draft recommendations for multilingual evaluations document: https://docs.google.com/document/d/1tZzKngoh95c5qk7JhOxsAdhVo0IAwo3inTAtCGocp2s/edit?usp=sharing -CBPR example, shared by Dr. 
Christina Nicolaidis at the last ASHA Convention during the Researcher-Academic Town Meeting: AASPIRE (academic autism spectrum partnership in research and education) https://aaspire.org/inclusion-toolkit/participatory-research/ -Community partners that Megan works with: Nayroby Rosa Soriano, Director of Community Engagement at OneHolyoke CDC https://www.oneholyoke.org/community-engagement/ and Enlace de Familias https://www.facebook.com/profile.php?id=100064582222801, where Megan's lab hosts the Supporting Families Raising Bilingual Children group ✨ Check out our merch at coffeetea3slps.com! ✨
"CBPR is more than these tenets of what constitutes community engaged research. It's about really thinking about how you are going to demonstrate your commitment to a community... and to keep that respect intact regardless of what the institute might demand of you, because they're often at odds. And I think that keeping that front and center really shows your commitment to the process and your authentic respect of the process." In this episode, Mory Chhom is in conversation with Dr. Jerusha Nelson-Peterman, Dr. Lindiwe Sibeko, Nora Tang, and Dr. Lorraine Cordeiro. They discuss the Cambodian experience in Lowell, Massachusetts, as well as how they navigate the predictable and unpredictable challenges of being insiders and outsiders. They close by reminding us what it means to do authentic community-based participatory research. This episode references the article titled "Building on Community Research Partnerships and Training Students in a Multi-Phase Community-Based Participatory Research Study With Young Women of Cambodian Heritage in Massachusetts" by Jerusha Nelson-Peterman, PhD, RDN, Lindiwe Sibeko, PhD, Ronnie Mouth, BS, and Lorraine S. Cordeiro, PhD, MPH. Check out the Sarah Mazelis Paper of the Year Award Winners and HPP's special collection of recently published papers, poetry, and podcast episodes addressing health promotion that centers Asian, Asian American, and Pacific Islander communities and authors.
In this episode, I chat with Dr. Tatiana Elisa Bustos on community-based participatory research (CBPR). We talked about what it is, how it compares to research and other similar forms of inquiry, and how to get started doing CBPR. Disclaimer: Views expressed here are personal and not reflective of the speaker's respective employers or agencies. Contact information Dr. Tatiana Elisa Bustos tbust002@gmail.com (mailto:tbust002@gmail.com) @TElisa72 (https://mobile.twitter.com/telisa72) https://www.linkedin.com/in/tebustos/ (https://www.linkedin.com/in/tebustos/) About Dr. Bustos Dr. Tatiana Elisa Bustos knows that community partner engagement is key to understanding social issues. She'll share her experience applying community-based participatory research approaches. Dr. Bustos innovates outside the box ways to do research that invite community participation, improving programs through implementation with a social justice lens. As a 1st generation college student and the daughter of Nicaraguan immigrants, equity is deeply important to her. She is an author and award-winning researcher. She leads professional development workshops on implementation science and community based participatory research. She received her PhD in Community Psychology from Michigan State University, an MS in Psychology from Nova Southeastern University, and a BA in Psychology from Florida International University. Connect with her on LinkedIn. (https://www.linkedin.com/in/tebustos/) Dr. Bustos also appeared on The Sci-Files on Impact 89FM and Beyond the Manuscript, the podcast of Progress in Community Health Partnerships. 
Resources Professional Organizations * Society for Community Research and Action (http://scra27.org/) * American Evaluation Association Connect (http://comm.eval.org/search?executeSearch=true&SearchTerm=community+based+participatory+action+research&l=1) (CBPR search) * Community Psychology TIG (http://comm.eval.org/communitypsychology/home) Training Institutes * https://www.detroiturc.org/programs-expertise/cbpr-capacity-building (https://www.detroiturc.org/programs-expertise/cbpr-capacity-building) * https://www.detroiturc.org/about-cbpr/online-cbpr-course (https://www.detroiturc.org/about-cbpr/online-cbpr-course) * https://www.mitrainingcenter.org/courses/cbprs0218noce (https://www.mitrainingcenter.org/courses/cbprs0218noce) Toolkits * https://ctb.ku.edu/en/table-of-contents/evaluate/evaluation/intervention-research/main (https://ctb.ku.edu/en/table-of-contents/evaluate/evaluation/intervention-research/main) Journals * Global Journal of Community Psychology (https://www.gjcpp.org/) * American Journal of Community Psychology (https://onlinelibrary.wiley.com/journal/15732770) * Collaborations: A Journal of Community Based Research and Practice (https://collaborations.miami.edu/)
A European court's decision to overturn a 1 billion euro penalty that had been imposed on Qualcomm is being seen as a watershed moment for antitrust enforcement in the EU, with judges faulting competition investigators over both the procedures and the substance of their claims against the US chipmaker. Also on today's podcast: why reports of the Asia-Pacific region's CBPR privacy mechanism demise may have been premature.
The Northern Mental Health Nursing Qualitative Research Forum meets three times a year to connect Mental Health Nurse researchers interested in, and conducting, qualitative research, methodologies and innovations. If you wish to be added to the mailing list, please contact KMWright1@uclan.ac.uk. The following session was recorded at their first event on Friday 13 May. With thanks to Prof Karen Wright and Dr James Turner for organising the event and the invitation to support with recording the sessions. This was the last of four episodes that we've shared from this first #NorQual event. This Community-Based Participatory Research (CBPR) project focussed on a hidden form of modern slavery, that of Domestic Servitude (DS). One of the principal ways in which this approach differs from more traditional approaches is that instead of creating knowledge for the advancement of a field of study or for knowledge's sake, CBPR is an iterative process, incorporating research, reflection, and action in a cyclical process. The data was then interpreted through the lens of phenomenology to reflect the lived experience of the women. DS occurs in a subversive and hidden way, behind closed doors, as victims might have entered the UK as workers, brides, or visitors. Some are married to their perpetrators and have valid visas; many are hidden and tortured. This research reveals the psycho-social impact of DS on these female survivors and the recommendations for mental health and criminal justice services, revealed through 3 focus group interviews with 23 women aged 25 to 55. Credits: NorQual leads: Prof Karen Wright & Dr Jim Turner Presenters: Dr Peggy Mulongo & Prof Karen Wright Theme music: Tony Gillam Production & Editing: David Munday
Matt, Bill, & Russ review the 2022 CBPR and share their insights. What was as expected, and what was surprising?
In this episode we speak with Wafa Alam and Imran Hossain Mithu from BRAC James P Grant School of Public Health, BRAC University, about conducting remote community GIS mapping of informal settlements in Bangladesh. We hear about how: Young people living in informal settlements joined as co-researchers to map their community for the first time How WhatsApp was used to strengthen capacity for mapping and build new skills The process of participatory mapping was adapted during COVID-19 restrictions Wafa Alam, Assistant Coordinator, BRAC James P Grant School of Public Health, BRAC University Wafa Alam is currently working as an Assistant Coordinator at BRAC James P Grant School of Public Health, BRAC University. She is currently involved in the ARISE project, which focuses on the health and wellbeing of marginalized communities living in urban informal settlements. Under ARISE, she works closely with community researchers and is actively engaged in various community-based participatory research (CBPR) methods. She has also worked on research focused on social inclusion through skills development of vulnerable population groups such as persons with disabilities and transgender people. Her research interests are urban health and governance, and health systems research. She completed her Master of Public Health at BRAC James P Grant School of Public Health, BRAC University, and has a Bachelor of Science (Biotechnology) from Monash University.
https://bracjpgsph.org/staff-members.php (https://bracjpgsph.org/staff-members.php) https://www.ariseconsortium.org/about-us/team/ (https://www.ariseconsortium.org/about-us/team/) https://bracjpgsph.org/assets/pdf/Advocacy/communication%20tools/brochures/Journey_to_A_Better_Life_Stories_of_BRAC_Skills_Development_Programme_Graduates.pdf (https://bracjpgsph.org/assets/pdf/Advocacy/communication%20tools/brochures/Journey_to_A_Better_Life_Stories_of_BRAC_Skills_Development_Programme_Graduates.pdf) https://covid-bracjpgsph.org/front/covid/assets/files/research/brief/Urban_%20Poor%20Lived%20Experiences%20in%20SLums%20ARISE_April%2019_final%20brief%202020-min.pdf (https://covid-bracjpgsph.org/front/covid/assets/files/research/brief/Urban_%20Poor%20Lived%20Experiences%20in%20SLums%20ARISE_April%2019_final%20brief%202020-min.pdf) https://www.youtube.com/watch?v=kl4ghkwrs5Q&t=2s (https://www.youtube.com/watch?v=kl4ghkwrs5Q&t=2s) https://www.youtube.com/watch?v=cr8Czk3BvkY&t=40s (https://www.youtube.com/watch?v=cr8Czk3BvkY&t=40s) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3608577 (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3608577) https://www.thelancet.com/journals/langlo/article/PIIS2214-109X(20)30162-5/fulltext (https://www.thelancet.com/journals/langlo/article/PIIS2214-109X(20)30162-5/fulltext) https://www.thelancet.com/journals/langlo/article/PIIS2214-109X(20)30158-3/fulltext (https://www.thelancet.com/journals/langlo/article/PIIS2214-109X(20)30158-3/fulltext) https://gh.bmj.com/content/5/5/e002253.abstract (https://gh.bmj.com/content/5/5/e002253.abstract) https://www.ariseconsortium.org/learn-more-archive/community-health-volunteers-unsung-heroes-for-urban-informal-settlement-dwellers-during-covid-19-pandemic/ (https://www.ariseconsortium.org/learn-more-archive/community-health-volunteers-unsung-heroes-for-urban-informal-settlement-dwellers-during-covid-19-pandemic/) Imran Hossain Mithu; BRAC James P Grant School of Public Health, BRAC University 
Imran Hossain Mithu is currently working as a Research Associate at the BRAC James P Grant School of Public Health. He attained his master's degree in public health from the same institution in January 2020. He is currently involved in...
In this episode of The Accountability Studio, "The Past and Future of Privacy Accountability: Is CBPR a Model?", moderator Cobun Zweifel-Keegan, Deputy Director of Privacy Initiatives at BBB National Programs, is joined by two industry professionals for an informative conversation on the cross-border privacy rules (CBPR) system, a voluntary framework with global impact: BBB National Programs Director of Global Privacy Initiatives Josh Harris, and Sam …
CardioNerds (Amit Goyal and Daniel Ambinder) are joined by Dr. LaPrincess Brewer and Dr. Norrisa Haynes for a Narratives in Cardiology episode, with a special introduction by Dr. Sharonne Hayes. They discuss health inequities, especially in communities of color; the impact of projects using community-based participatory research (including FAITH! and SHARP, founded by Dr. Brewer and Dr. Haynes respectively); and their experiences as underrepresented minority women physician-scientists. This special discussion is brought to you in collaboration with the Association of Black Cardiologists (ABC). The ABC's mission is to "Promote the Prevention and Treatment of Cardiovascular Disease, including Stroke, in Blacks and other Diverse Populations and to Achieve Health Equity for all through the Elimination of Disparities." You may join and support the ABC at abcardio.org. Claim free CME just for enjoying this episode! CardioNerds Narratives in Cardiology Page | CardioNerds Episode Page | CardioNerds Academy | CardioNerds Healy Honor Roll | Subscribe to The Heartbeat Newsletter! | Check out CardioNerds SWAG! | Become a CardioNerds Patron! Show notes for Health Equity, Community-Based Participatory Research, & Underrepresented Minority Women Physician-Scientists. 1. What healthcare disparities exist in communities of color? The life expectancy of Black Americans is, on average, 3.4 years shorter than that of white Americans. CVD is estimated to explain over 32% of the mortality difference between African American and white men and 43% of the difference between African American and white women. Together these conditions contributed to more than 2 million years of life lost in the African American population between 1999 and 2010. (1) The impact of COVID-19 on minority communities has caused disproportionate morbidity and mortality and devastating health and financial hardship. According to the CDC, Black Americans are 1.9 times as likely as white Americans to die from COVID-19.
(2) Additionally, at the beginning of the pandemic, a staggering 41% of Black-owned businesses closed due to COVID-19, compared to 17% of white-owned businesses. (3) 2. Community engagement and community-based participatory research (CBPR): what is it? CBPR often has a public health bent, focusing on and attempting to address social, structural, and environmental inequities through active involvement of community members in all aspects of the research process, from conception to implementation. Community partners provide their unique expertise to enhance understanding of the community and facilitate implementation. (4) 3. What is FAITH!? The Fostering African American Improvement in Total Health (FAITH!) program was started by the phenomenal Dr. LaPrincess Brewer. FAITH! is a cardiovascular health and wellness program that uses a CBPR approach to promote heart health in the African American faith-based community. Participants in the FAITH! program have shown significant improvement in heart health knowledge, as well as in key heart disease risk factors such as blood pressure. The FAITH! app was created in collaboration with community members to ensure easy access and usability. It provides vital information and a community network that offers support and motivation for participants. 4. What is SHARP? SHARP stands for Safe Haircuts as We Reopen Philadelphia. SHARP was started to help local barbershops and salons implement proper COVID-19 safety practices to keep their businesses, clients, and staff safe. In partnership with community members, a safety blueprint was created to meet CDC and Philadelphia Health Department guidelines. Through donations from UPenn and Accenture, SHARP was able to distribute a significant number of PPE items to 30 businesses in West and Southwest Philadelphia.
Additionally, due to the financial toll the pandemic has taken on small businesses, SHARP organized grant-writing sessions through the Netter Center at Penn to...
In this episode, Ian describes what Community-Based Participatory Research (CBPR) is and what Participatory Action Research methods often look like when done comprehensively, and details just a few of the myriad ways nurse researchers could benefit from adopting CBPR as a methodology in their patient-oriented outcomes research.
In this episode, I discuss how you can become involved in the work I am doing in 2021! Seeking Community-Based Research Opportunities During Undergraduate/Graduate School or Gap Year (January 14th at 7pm EST) Register here. Frances Dean, BSHP, is the Founder of Create, Critique, and Revise Wizard, where she has served 100+ clients from the Public Health and STEMM (Science, Technology, Engineering, Math and Medicine) fields since the company’s establishment in October 2018. Leonore Okwara, MPH, Founder of Public Health Research Consulting, helps current and future community-focused researchers engage communities of color in research and manage community-based participatory research studies to meet the needs of the community and the funder. Objectives: 1. Understand the process of CBPR and how to avoid the "savior complex" 2. Learn how to prepare for a career in community-based research 3. Learn concrete ways to seek positions in this field. Research in Communities of Color Virtual Summit: View the recordings from the inaugural summit. Complete the form to participate in the 2021 summit as a volunteer, presenter, or sponsor. Ask-A-Researcher Series: Interested in sharing best practices from engaging the community in your research? Join me in an informal discussion about your work and engage in a Q&A with attendees. Sign up here. Join the Public Health Research Network LinkedIn group for a collaborative space to share and learn best practices related to everything research. Program Management Masterclass (January 28th at 7pm EST) Register here. No time to dedicate to planning out the administrative steps that make conducting community-centered research activities easier? Interested in learning how to develop a program management plan to help you organize and track activities? This is for you if you are unsure how to effectively manage the many moving parts of a research study.
Public Health Culture Podcast Guest Interested in being a guest on the Public Health Culture Podcast? Complete the questionnaire and schedule the episode here. Network with Other Public Health Professionals! Questions about a career in research? Reach out to: Leonore Okwara, MPH (Community-based research and program management) www.publichealthresearchconsulting.com leonore@publichealthresearchconsulting.com Asya Spears, MS (Statistics) www.rosedatastudio.com asya@rosedatastudio.com Andrea Durham, MPH, CCRP (Clinical research) andrea.durham@durhamresearch.co Angela Brown, MPH (Aspiring Public Health Nurse) Instagram: @phnurseang Want to connect with the Research in Communities of Color Summit Presenters? Teneasha Washington, PhD Whitney Hewlett Noel, MPH Jacque-Corey Cormier, PhD Marline Edmond, MCIS, CHES Alisa Howard, CHW-I Joyee Washington, MS, MPH, CHES Andrea Durham, MPH, CCRP Lee Bowen, MPA, CIP, CHRC Okey Enyia, MPH Sherilyn Garner, PhD Tasena McDonald, MPH
Q’eqchi’ researcher and interpreter Pedro Makin discusses the origins of the Maya Healers’ Association, the importance of challenging perceptions, and what he wants the world to know about his Q’eqchi’ Maya culture. Host: Michelle Gowan
Dr. James B. Waldram, medical anthropologist with University of Saskatchewan discusses his work with the Maya Healers' Association of Belize and working with them through film to advocate for and challenge misperceptions about traditional Q’eqchi’ knowledge. Host: Michelle Gowan
2020 is the gift that keeps on giving. In this episode, Paul Breitbarth and K Royal revisit some of the issues discussed in previous episodes, but with a guest who has a unique global perspective. Our guest today is an American in Brussels: Chris Foreman, Deputy Chief Privacy Officer of the US-based pharmaceutical company Merck. He has practiced law in London, Washington DC, Istanbul, New York, and Moscow, and is currently based in Brussels. Merck, or MSD as it is often known, has been an active player in the international corporate privacy community and a big advocate for interoperability and company-wide compliance programs. It is one of the few companies that has both EU Binding Corporate Rules and APEC Cross-Border Privacy Rules, aiming to ensure the best possible safeguards for international data transfers; there is even a published crosswalk between the two. Join us as we talk with Chris about his views on international data transfers. As expected, we discuss the Schrems II decision and its impact on data transfers, but we also touch on many other topics integral to a global privacy program, including the California Consumer Privacy Act (CCPA) and the ballot initiative, the California Privacy Rights Act (CPRA, Proposition 24), South Korea, Brazil (LGPD), clinical trials, and other twists in privacy law that we are starting to see. Resources: JD Robb https://jdrobb.com/ TrustArc Privacy Shield Resources https://trustarc.com/trustarcs-privacy-shield-schrems-resource/ Social Media: Twitter: @privacypodcast, @EuroPaulB, @heartofprivacy, @trustarc; Instagram: @seriousprivacy
Joyee Washington, MS, MPH, CHES is a Public Health and Education Research Consultant who works with individuals, groups, organizations, institutions, and communities to plan, implement, assess, evaluate, and manage health education and community-based programs, as well as research. She is the founder of Joyee Washington Consulting, LLC and is completing her PhD in Educational Research at the University of Southern Mississippi with a focus on evaluation, statistics, and assessment specifically related to adolescent sexual health. She believes that power lies in community and that the best way to access that power is by listening to and uplifting the voices of the people through community engagement and action. In This Episode We Cover: How the journey to a PhD is not a straight line but involves many twists and turns. Her focus on adolescent sexual health and teen pregnancy prevention. Her struggles and successes conducting Community-Based Participatory Research with a Teen Pregnancy Prevention Program. The importance of an engaged Community Advisory Board. Her biggest challenges working in Community-Based Research. Her top tips for planning, implementing, and evaluating Community-Based Research. The item(s) that you want to make sure you include in your research budget. The importance of planning for sustainability at the very beginning of the research planning process. The difference between community involvement and community engagement. Why recognizing a community’s strengths is crucial in the research process. The two key factors in helping communities solve problems. Stand-Out Quotes: With regards to Community-Based Participatory Research: "It’s one thing to read about it in a book and another thing to actually do it.” “As researchers, we cannot go into their community and assume we know what they need. That will not work. That will create more problems as opposed to solutions.” “There is a lot that goes on in the background of CBPR.
It is key to have a community advisory board.” Advice for Community-Based Researchers: “Don’t go in with any expectations. You have to be open. You have to be flexible.” “At the end of the day it isn’t about you, it is about the community.” “You have to plan for sustainability on the front end.” “When talking about community-based research and projects in general, equity is providing resources to those who need it the most.” “We have a responsibility to work with a community to uncover their strengths and use those strengths to achieve health equity.” “(Research) is more than numbers, data and experiments. We have to take time and ask what’s going on. Communities have to learn to empower themselves.” Action Steps: Whenever working to help solve problems, make sure to meet people where they are and build trust and relationships by listening. Be flexible during the entire research process. Build a sustainability plan during the initial planning process. Remain open, because it is about the community, not your research expectations. Find the strengths! Reach Out: Visit Joyee’s website at: https://joyeewashington.com
Author of the book There’s Something in the Water, Dr. Ingrid Waldron is a powerhouse of a community activist and researcher. Using community-based participatory research (CBPR) to examine environmental racism in Nova Scotia, Dr. Waldron took on the ENRICH Project in 2012. At first Dr. Waldron was hesitant, as she didn’t know much about environmental racism. Later, however, she realized that her research on the “social, economic, and political inequalities that shape health outcomes in Indigenous, Black, and other racialized communities” could make a significant contribution to understanding the health inequalities in Nova Scotia. It was only after Dr. Waldron’s book was published in 2018 that actress Ellen Page took notice of the work that needed to be done and expressed a desire to help the Indigenous and Black communities. After much discussion, the decision was made to produce a documentary of the same name. The documentary premiered at the 2019 Toronto International Film Festival and was released on Netflix on March 27, 2020. We were able to dive deeply into Dr. Waldron’s amazing journey and its results in this week’s conversation. Environmental Racism: When asked if the landfill contamination in Shelburne’s water was responsible for the community’s high cancer rates, Dr. Waldron responded: “It’s probable, highly probable. We did testing; we found contaminants in the well. Now, the issue is that you may have no contaminants now, but you would have had contaminants in the past. For the people who are currently dealing with cancer, that might be a result of contaminants in the past. You also have to be careful about being conclusive. As a researcher, I have to stand back and say it is not conclusive, but it is highly probable.” Nova Scotia is home to the historically oldest and largest community of color in Canada, and it continues to have the highest concentration of African Canadians in the country.
“A study from Winnie Benton and Sandra Loppie (2001) found that African Nova Scotians had a higher cancer mortality rate than the general population, which they attribute to systemic forms of racism within various social institutions including the health care system” (Waldron, p. 98). Oftentimes, environmental racism acts in a cyclical nature. Low-income and mainly Black, Indigenous, and people of color (BIPOC) communities tend to be destabilized by the myriad injustices and social ills they are already experiencing, such as low income or underemployment, which leaves them exposed to violently unfair policy decisions. Governments often place landfills, paper mills, and other hazardous projects in these areas without consulting the community, further harming residents’ health and environment. Community-Based Participatory Research (CBPR): Dr. Waldron emphasizes the importance of community-based research. The approach originated in health research, a familiar area for Dr. Waldron, and allows the community to have a direct say in how the research is conducted. It starts with engaging the community before the research even begins, informing them of the researcher’s intentions and asking how the researcher can give back to them. Building and maintaining trust and integrity is crucial. From there, the community prioritizes the research, proposes the questions they want studied, and decides how they want it shared. This creates a partnership in which the community is able to speak for itself. The difference between extractive research and research that empowers the community is vast. Other Topics: Spatial Theory of Racism, Slavery in Nova Scotia, Intersectional Analysis, Gender and Environmental Injustices, Social Determinants of Health. Follow Dr. Ingrid Waldron: ENRICH Project, Twitter, Facebook, Instagram. Related Resources: There’s Something in the Water (documentary), There’s Something in the Water (book), The Racialization of Space and the Spatialization of Race (abstract)
Mike speaks with Dr. Tahani Dari on what community-based participatory research is, embodying humility in the research process, and how CBPR helps bridge the researcher-to-practitioner gap. For more on Tahani, links from the conversation, and the APA citation for this episode, visit the show notes on our website. The Thoughtful Counselor is created in partnership with Palo Alto University’s division of Continuing & Professional Studies. Learn more at paloaltou.edu/concept.
Join Leonore Okwara, MPH, as she discusses culturally safe programs. www.thepublichealthconsultant.com CBPR Methods Book: Israel, B., Eng, E., Schulz, A., & Parker, E. (Eds.). (2013). Methods for community-based participatory research for health. John Wiley & Sons. CBPR Case Studies: Promoting Healthy Public Policy through Community-Based Participatory Research: Ten Case Studies https://depts.washington.edu/ccph/pdf_files/CBPR_final.pdf CBPR Toolkits: https://cbprtoolkit.org https://www.aapcho.org/resources_db/cbpr-toolkit/ https://prevention.ucsf.edu/resources/community-based-participatory-research-toolbox Cultural Safety: Richardson, S., & Williams, T. (2008). Why is cultural safety essential in health care? Medicine and Law, 26, 699–707. Gerlach, A. J. (2012). A critical reflection on the concept of cultural safety. Canadian Journal of Occupational Therapy, 79(3), 151–158.
This episode of Tell Us About It is the first of a two-part series focused on the Community Based Participatory Research (CBPR) Toolkit. In part one, we speak with Lisa Goodman and Ronit Barkai about their experiences using CBPR as a researcher and practitioner respectively. Lisa and Ronit discuss why CBPR is valuable in a domestic violence context and the importance of including practitioners in this type of work. Lisa Goodman is the lead author of the CBPR Toolkit and the co-founder of the Domestic Violence Program Evaluation and Research Collaborative. She is also a faculty member in the Counseling and Developmental Psychology program at Boston College. Ronit Barkai is the co-founder of the Domestic Violence Program Evaluation and Research Collaborative, and a contributor to the CBPR Toolkit. She is also the Assistant Director of Transition House, a domestic violence agency based in Cambridge, Massachusetts. For more information on this episode, including related links, please visit our website: https://victimresearch.org/podcast/tell-us-about-it-episode-11-using-community-based-participatory-research-in-a-domestic-violence-context-part-1/
Brian Rivers, Ph.D., M.P.H., shares with us his journey to Rochester, MN, and his experience working with CBPR (community-based participatory research). Director, Cancer Health Equity Institute. Location: National Center for Primary Care. Contact: http://www.msm.edu/about_us/FacultyDirectory/CommunityHealthPreventiveMedicine/BrianRivers/index.php
Our editor-in-chief Manuel gives us a farewell message for 2015.
Ángel, Edgar, and Manuel discuss their impressions of the PlayStation Experience presentation and The Game Awards event, along with other related topics. Remember to follow CBPR: https://www.facebook.com/Cyberboxpr https://twitter.com/cyberboxpr Help us create more awesome content like this: https://www.patreon.com/CyberBoxPR
Episode 82: Today's episode of the Social Work Podcast is about how to balance the demands of doing good research with the passion that practitioners and advocates have for addressing the social problems that face their communities. My guests are Corey Shdaimah and Sanford Schram. I speak with Corey and Sandy about the differences between Participatory Action Research (PAR) and Community-Based Participatory Research (CBPR) and why they use PAR rather than CBPR in their work with communities. They give examples of how challenging it is to actually do PAR. They talked about the need to bridge the gap between research and practice and how that was one of their motivations for writing their text, Change Research. Throughout our conversation Sandy and Corey bring up lots of ideas that are perfect discussion points for research classes, both at the masters and doctoral level. For those of you interested in learning more about doing the kind of community-based change research that we talk about in today's episode, I posted a list of resources on socialworkpodcast.com that Corey very generously provided. You can connect with other social workers at the Social Work Podcast Facebook page, http://www.facebook.com/swpodcast, or follow the Twitter feed http://www.twitter.com/socworkpodcast. You can listen to the Social Work Podcast from socialworkpodcast.com, by downloading the episodes through iTunes or any number of other apps, or you can stream the 10 most recent episodes right from your mobile device using the Stitcher Radio mobile app http://www.stitcher.com/podcast/social-work-podcast/the-social-work-podcast.
This presentation describes a research program at the Center for Alaska Native Health Research in the Institute of Arctic Biology based on exploring Yup’ik youth resilience strategies in an effort to inform and develop effective interventions that address disparities in youth suicide and substance abuse. This program is based on over a decade of community-based participatory research (CBPR) collaborations between researchers at UAF and Yup’ik Tribal communities and organizations. This presentation will describe the development of this collaborative and participatory research relationship and identify process outcomes of implementing a Tribal and youth participatory approach for one community in Southwest Alaska. This program receives support from the NIH/National Institute of Minority Health and Health Disparities, the National Science Foundation and the State of Alaska.
Sara Young, Director of Tribal and Tribal College Partnerships and CBPR & Health Disparities Core Co-Director, spoke on "Building Health Disparities Research Capacity in Montana's Tribal Colleges" on November 2, 2011.