How can we leverage crypto rails to develop reliable onchain impact evaluation mechanisms?
Catherine Jones, Senior Analyst, Government Affairs, ASTHO, outlines a new ASTHO blog article about the federal response to rising temperatures; Bobbi Krabill, Deputy Director, Office of Performance and Innovation, Ohio Department of Health, discusses ensuring those with disabilities are included in emergency planning; and an ASTHO webinar on Tuesday, September 17th, details campaign messaging to promote pharmacist-prescribed contraception.
Resources:
An Impact Evaluation of the Disability and Preparedness Specialist Program
Ohio Includes Stakeholders of All Abilities in Public Health Planning
ASTHO Webinar: Implementation of Pharmacist-Prescribed Contraception: Public Outreach & Awareness
In this episode, we meet the Chizvirizvi community in Zimbabwe – a latecomer to the CAMPFIRE programme compared to the Mahenye, featured in episode 5. Chizvirizvi is somewhat different as it is not operated by the Rural District Council. Instead, authority for the utilisation and management of wildlife has been conferred on the community, or collective resettlement scheme plot holders. And with that authority only designated in 2003, their CAMPFIRE programme is relatively in its infancy, with the infrastructure only just beginning to grow. A survey in 2013 found that 77% of the population said they had experienced human-wildlife conflict between 2000 and 2010. With this background, we expected a very different conversation to the one we had at the Jamanda Conservancy, but as we've found throughout this series, there are always surprising – and often uplifting – stories to hear. We start with a shocking story of bravery in the face of a crocodile attack from Morina and her son Gideon. Thankfully, the story ends well. Mr Chirhilele is a farmer and rancher and describes how scouts and monitors go some way to protecting residents' cattle but could do more. He asks that the wildlife population be maintained at an optimal number to ensure coexistence for him and his family, and for future generations. Dr Shylock Muyengwa is Managing Consultant at the Centre for Impact Evaluation and President of the Zimbabwe Evaluation Association and, since 2007, has studied community-based natural resource management (CBNRM) systems. Kevin Mfishani is a member of the Community Leaders Network and a project officer with the Zimbabwe CAMPFIRE Association. They discuss the past, present and future of life alongside wild animals and the importance of empowering communities to make decisions and generate revenue by utilising their natural resources. We speak to them all, beneath the baobab. Visit the website https://jammainternational.com to explore more international projects. The video of this episode can be seen here: https://youtu.be/-0fYvwrhQoo
https://www.researchgate.net/figure/Chizvirizvi-resettlement-area-Chigonda-2017_fig1_328048741
https://twitter.com/forevaluation?lang=en
https://www.communityleadersnetwork.org
https://campfirezimbabwe.org
Hosted on Acast. See acast.com/privacy for more information.
Impact: EHR tool for aldosteronism diagnosis
Welcome to the Mixtape with Scott! Due to a technical difficulty with my producer's computer, this week's interview was not ready in time, so we are going to do another repeat from season one. This is with Petra Todd, a labor economist, econometrician and author of a new book on causal inference entitled Impact Evaluation in International Development, co-authored with Paul Glewwe. She was also elected to the Academy of Arts and Sciences in 2023. And she is Jim Heckman's former student and coauthor, which fits with my slowly building deck of interviews on “Heckman's students” (along with John Cawley and Chris Taber). But I also just loved this interview, and so it's nice just to repost it. Plus, I think it's probably nice to give people some breathing room given the pace at which these come out. Next week, though, I should be back on track with new episodes. Thanks again for tuning in!
Scott's Substack is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. Get full access to Scott's Substack at causalinf.substack.com/subscribe
International development is a major political priority in many countries, with billion-dollar budgets. But, as recently as 2006, the influential Center for Global Development published a damning report entitled 'When will we ever learn?', essentially arguing that the entire policy area had been built on a foundation of guesswork and good intentions. In the two decades since then, a huge amount of work has been done to bring rigorous evidence to this complex and often values-laden political area. For the Science for Policy podcast, Marie Gaarder and Thomas Kelly from the International Initiative for Impact Evaluation cover all the bases: the evidence we have and the evidence we need, how it should be used, and what's still getting in the way.
Resources mentioned in this episode:
Report 'When will we ever learn?': https://www.cgdev.org/publication/when-will-we-ever-learn-improving-lives-through-impact-evaluation
Godfrey Senkaba chats with Sajilu Kamwendo, the Director of Outcome and Impact Evaluation at the Mastercard Foundation. Sajilu has over 20 years of experience in program evaluation, including work with international organizations implementing or funding relief and development programs. Sajilu elaborates on how his curiosity and fascination with patterns propelled his journey in evaluation, ultimately leading him from a novice to the position of directing impact evaluation at the Mastercard Foundation. He discusses the significance of his current role and how it has shaped his evaluative priorities. For instance, he underscores the Mastercard Foundation's evaluation approach, which is rooted in the belief that "Africa Works." He also emphasizes the ongoing campaign at the African Evaluation Association, advocating for "Made in Africa evaluation." Sajilu encourages evaluators of all levels—whether young, mid-career, or experienced—to join this movement, emphasizing the importance of supporting Africans in narrating their own stories of change using indigenous language, style, and approaches.
Below is the discussion outline:
[00:34] Hello, everyone!
[02:00] Introducing Sajilu Kamwendo.
[03:03] A 'youthful' Sajilu and career dreams before his first professional job.
[08:18] How working different evaluation jobs shaped Sajilu's evaluation perspective, interests, and career goals.
[12:50] The uniqueness of the Mastercard Foundation's approach to evaluation and how it is shaping Sajilu's evaluative approach.
[15:18] How to raise the indigenous voice in evaluation.
[18:34] Top 2-3 skills every evaluation professional should have. Key soft skills include curiosity, point of view, and pragmatism.
[22:06] Biggest challenges Sajilu has faced as an evaluator and how he overcame them.
[25:08] Top monitoring and evaluation tools and/or methods Sajilu uses to improve his work.
[29:02] Sajilu's career growth strategies and recommendations to evaluators.
[33:58] Evaluation in the post-COVID-19 era.
[37:05] Sajilu's best moment as an evaluator.
[39:07] Sajilu's plans.
[40:39] How to contact Sajilu Kamwendo.
[44:58] Thank you, Sajilu. Thank you everyone for listening!
I would like to hear your feedback on this episode, and your suggestions for future episodes. Use the comment box below or send me an email. Thank you for your feedback.
Connect with Godfrey Senkaba:
Website: https://www.mandeboost.com
Email: info@mandeboost.com
Twitter: https://twitter.com/senkaba_g
LinkedIn: https://www.linkedin.com/in/godfrey-senkaba-32452428/
Connect with Sajilu Kamwendo:
E-mail: sajilu_kamwendo@yahoo.com
Twitter: https://twitter.com/sajilu
LinkedIn: https://www.linkedin.com/in/sajilu-kamwendo-b5705276/
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC Operational Funding Match 2023, published by Luke Freeman on December 8, 2023 on The Effective Altruism Forum.
We are excited to announce a match for donations made towards our operations at Giving What We Can! Starting December 1st, every dollar donated towards GWWC's operations will be matched 1:1 up to US$200,000 until the match has been exhausted, or until January 31st 2024, whichever comes first*.
We believe that GWWC is a great funding opportunity for those who believe in effective giving. Our most recent Impact Evaluation suggests that from 2020 to 2022:
- GWWC generated an additional $62 million in value for highly-effective charities.
- GWWC had a giving multiplier of 30x, meaning that for each $1 spent on our operations, we generated $30 of value to highly-effective charities on average. Please note that this isn't a claim that your additional dollar will have a 30x multiplier, even though we think it will still add a lot of value. Read more on how to interpret our results.
- Each new GWWC Pledge generates >$20,000 of value for highly-effective charities that would not have happened without GWWC.
Reaching our US$200K goal will fully unlock the matching funds, and with US$400K we will be close to filling our baseline funding for 2024, allowing us to revamp the How Rich Am I? Calculator, continue evaluating evaluators, launch in new markets, improve the donation platform (including likely reworking the checkout flow) and much more. We strongly recommend you read our case for funding to learn more about our plans, our impact and what your donation could help us achieve. This is a true, counterfactual match, and we will only receive the equivalent amount to what we can raise. Thank you to Meta Charity Funders for generously providing funding for this match.
*The following terms and conditions apply:
- The match will apply in a 1:1 ratio to donated funds. In other words, for every $1 you donate to GWWC's operations, the matching donors will give $1.
- The match will be applied to eligible donations from December 1st and will apply retroactively.
- The match will end once US$200,000 has been reached, or we reach January 31st 2024, whichever comes first. Once the matched funds have been exhausted, we will update this page.
- The match will be applied to both one-off and recurring donations that occur during the match period.
- Donors who have funded more than US$250,000 of GWWC's operations since Jan 1 2022 are not eligible for this match - if you'd like to clarify whether you are ineligible, please contact us at community@givingwhatwecan.org.
- The match will apply to the first US$50,000 per donor.
- Donations can be made through givingwhatwecan.org or through other pathways or entities that can receive donations for GWWC's operations (please contact us for other options, or if you're an Australian tax resident).
- Gift Aid payments will not be included in the match.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effektiv Spenden's Impact Evaluation 2019-2023 (exec. summary), published by Sebastian Schienle on December 1, 2023 on The Effective Altruism Forum.
effektiv-spenden.org is an effective giving platform in Germany and Switzerland that was founded in 2019. To reflect on our past impact, we examine Effektiv Spenden's cost-effectiveness as a "giving multiplier" from 2019 to 2022 in terms of how much money is directed to highly effective charities due to our work. We have two primary reasons for this analysis:
- To provide past and future donors with transparent information about our cost-effectiveness;
- To hold ourselves accountable, particularly in a situation where we are investing in further growth of our platform.
We provide both a simple multiple (or "leverage ratio") of donations raised for highly effective charities compared to our operating costs, as well as an analysis of the counterfactual (i.e. what would have happened had we never existed). Our analysis complements our Annual Review 2022 (in German) and builds on previous updates and annual reviews, such as, amongst others, our reviews of 2021 and 2019. In both instances, we also included initial perspectives on our counterfactual impact. Since then, the investigation of Founders Pledge into giving multipliers as well as Giving What We Can (GWWC)'s recent impact evaluation have provided further methodological refinements. In line with GWWC's approach, we shift to 3-year time horizons, which we feel better represents our impact over time and avoids short-term distortions. However, our attempt to quantify our "giving multiplier" deviates in some parts from the methodologies and assumptions applied by Founders Pledge and GWWC and is an initial, shallow analysis only that we intend to develop further in the future. Below, we share the key results of our analysis. We invite you to share any comments or takeaways you may have, either by directly commenting or by reaching out to sebastian.schienle@effektiv-spenden.org
Key results
- In 2022, we moved €15.3 million to highly effective charities, amounting to €37 million in total donations raised since Effektiv Spenden was founded in 2019.
- Our leverage ratio, i.e. the money moved to highly effective charities per €1 spent on our operations, was 55.7 and 40.8 for the 2019-2021 and 2020-2022 time periods respectively.[1]
- Our best-guess counterfactual giving multiplier is 17.9 and 13.0 for those two time periods, robustly exceeding 10x. This means that for every €1 spent on Effektiv Spenden between 2019-2022, we are confident to have facilitated more than €10 to support highly effective charities which would not have materialized had Effektiv Spenden not existed.
- Our conservative counterfactual giving multiplier is 10.4 for 2019-2021, and 7.5 for 2020-2022.
The decline of our multiplier over time is driven by the investment into our team. Over the last year, our team has grown substantially to enable further growth. While this negatively impacts our giving multiplier in the short term, we consider it a necessary prerequisite for further growth. Our ambition is to return to a best-guess counterfactual multiplier of at least 15x in the coming years. That said, ultimately our goal is not to maximize the multiplier, but to maximize counterfactually raised funds for highly effective charities.
(As long as our work remains above a reasonable cost-effectiveness bar.)
How to interpret our results
We consider our analysis an important stocktake of our impact, and a further contribution to the growing body of giving multiplier analyses in the effective giving space. That said, we also recognize the limitations of our approach and want to call out some caveats to guide interpretation of these results. Our analysis is largely retrospective, i.e. it compares our past money moved with operating ...
effektiv-spenden.org is an effective giving platform in Germany and Switzerland that was founded in 2019. To reflect on our past impact, we examine Effektiv Spenden's cost-effectiveness as a "giving multiplier" from 2019 to 2022 in terms of how much money is directed to highly effective charities due to our work. We have two primary reasons for this analysis:
- To provide past and future donors with transparent information about our cost-effectiveness;
- To hold ourselves accountable, particularly in a situation where we are investing in further growth of our platform.
We provide both a simple multiple (or “leverage ratio”) of donations raised for highly effective charities compared to our operating costs, as well as an analysis of the counterfactual (i.e. what would have happened had we never existed). Our analysis complements our Annual Review 2022 (in German) and builds on previous updates and annual reviews, such as, amongst others, our reviews of 2021 and 2019. [...]
---
Outline:
(02:07) Key results
(03:56) How to interpret our results
The original text contained 1 footnote which was omitted from this narration.
---
First published: December 1st, 2023
Source: https://forum.effectivealtruism.org/posts/wtjFne8WdcLJTpyWm/effektiv-spenden-s-impact-evaluation-2019-2023-exec-summary
---
Narrated by TYPE III AUDIO.
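Both the Effektiv Spenden and GWWC evaluations in this list reduce their headline claims to the same arithmetic: a simple leverage ratio and a counterfactually adjusted giving multiplier. Below is a minimal sketch of the difference in Python, using entirely hypothetical figures rather than the numbers reported above.

```python
# Hypothetical illustration of a "leverage ratio" versus a counterfactual
# "giving multiplier". Figures are made up and are NOT taken from the
# GWWC or Effektiv Spenden evaluations described above.

money_moved = 15_000_000        # donations directed to highly effective charities
operating_costs = 350_000       # the platform's own spending over the same period

# Simple leverage ratio: ignores what donors would have given anyway.
leverage_ratio = money_moved / operating_costs

# Counterfactual adjustment: assume some share of the donations would have
# reached effective charities even if the platform had never existed.
counterfactual_share = 0.6      # assumed here; real evaluations estimate this
additional_money = money_moved * (1 - counterfactual_share)

giving_multiplier = additional_money / operating_costs

print(f"Leverage ratio:    {leverage_ratio:.1f}x")
print(f"Giving multiplier: {giving_multiplier:.1f}x")
```

The counterfactual share is the quantity the real evaluations spend most of their effort estimating, typically via donor surveys, and the headline multiplier is only as credible as that estimate.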
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting Started with Impact Evaluation Surveys: A Beginner's Guide, published by Emily Grundy on November 14, 2023 on The Effective Altruism Forum.
In 2023, I provided research consulting services to help AI Safety Support evaluate their organisation's impact through a survey[1]. This post outlines a) why you might evaluate impact through a survey and b) the process I followed to do this. Reach out to myself or Ready Research if you'd like more insight on this process, or are interested in collaborating on something similar.
Epistemic status
This process is based on researching impact evaluation approaches and theory of change, reviewing what other organisations do, and extensive applied academic research and research consulting experience, including with online surveys (e.g., the SCRUB study). I would not call myself an impact evaluation expert, but thought outlining my approach could still be useful for others.
Who should read this?
Individuals / organisations whose work aims to impact other people, and who want to evaluate that impact, potentially through a survey. Examples of those who may find it useful include:
- A career coach who wants to understand their impact on coachees;
- A university group that runs fellowship programs, and wants to know whether their curriculum and delivery is resulting in desired outcomes;
- An author who has produced a blog post or article, and wants to assess how it affected key audiences.
Summary
Evaluating the impact of your work can help determine whether you're actually doing any good, inform strategic decisions, and attract funding. Surveys are sometimes (but not always) a good way to do this. The broad steps I suggest to create an impact evaluation survey are:
- Articulate what you offer (i.e., your 'services'): What do you do?
- Understand your theory of change: What impact do you hope it has, and how?
- Narrow in on the survey: How can a survey assess that impact?
- Develop survey items: What does the survey look like?
- Program and pilot the survey: Is the survey ready for data collection?
- Disseminate the survey: How do you collect data?
- Analyse and report survey data: How do you make sense of the results?
- Act on survey insights: What do you do about the results?
Why conduct an impact evaluation survey?
There are two components to this: 1) why evaluate impact and 2) why use a survey to do it.
Why evaluate impact?
This is pretty obvious: to determine whether you're doing good (or, at least, not doing bad), and how much good you're doing. Impact evaluation can be used to:
- Inform strategic decisions. Collecting data can help you decide whether doing something (e.g., delivering a talk, running a course) is worth your time, or what you should do more or less of.
- Attract funding. Being able to demonstrate (ideally good) impact to funders can strengthen applications and increase sustainability.
Impact evaluation is not just about assessing whether you're achieving your desired outcomes. It can also involve understanding why you're achieving those outcomes, and evaluating different aspects of your process and delivery. For example, can people access your service? Do they feel comfortable throughout the process? Do your services work the way you expect them to?
Why use a survey to evaluate impact?
There are several advantages of using surveys to evaluate impact:
- They are relatively low effort (e.g., compared to interviews);
- They can be easily replicated: you can design and program a survey that can be used many times over (either by you again, or by others);
- They can have a broad reach, and are low effort for participants to complete (which means you'll get more responses);
- They are structured and standardised, so it can be easier to analyse and compare data;
- They are very scalable, allowing you to collect data from hundreds or thousands of respond...
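As a rough illustration of the "analyse and report survey data" step in the guide above, here is a minimal sketch in Python of a simple pre/post comparison. The design, the 1-7 confidence scale, and the numbers are hypothetical and are not taken from the AI Safety Support survey the post describes.

```python
# Minimal pre/post analysis of hypothetical impact-survey responses.
# The scale, column meanings, and data are illustrative only.
from statistics import mean

responses = [
    # (self-reported confidence before the service, confidence after), 1-7 scale
    (3, 5), (2, 4), (4, 4), (1, 3), (5, 6), (3, 6), (2, 2),
]

before = [b for b, _ in responses]
after = [a for _, a in responses]
changes = [a - b for b, a in responses]

print(f"Responses analysed:   {len(responses)}")
print(f"Mean before:          {mean(before):.2f}")
print(f"Mean after:           {mean(after):.2f}")
print(f"Mean reported change: {mean(changes):+.2f}")
print(f"Share who improved:   {sum(c > 0 for c in changes) / len(changes):.0%}")
```

Whether a change like this can be attributed to the service, rather than to everything else going on in respondents' lives, is exactly the question the theory-of-change step is meant to confront.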
Summary
Given EA's history and values, I'd have expected impact evaluation to be a distinguishing feature of the movement. In fact, impact evaluation seems fairly rare in the EA space. There are some things specific actors could do for EA to get more of the benefits of impact evaluation. For example, organisations that don't already could carry out evaluations of their impact, and a well-suited individual could start an organisation to carry out impact evaluations and analysis of the EA movement. Overall I'm unsure to what extent more focus on impact evaluation would be an improvement. On the one hand, establishing impact is challenging for many EA activities and impact evaluation can be burdensome. On the other hand, an organisation's historic impact seems very action-relevant to its future activities and current levels of impact evaluation seem low.
What Is Impact Evaluation?
Over the last year I've been speaking to EA orgs about their impact [...]
---
Outline:
(00:59) What Is Impact Evaluation?
(01:22) Why is Impact Evaluation Important?
(02:17) I'd expect Impact Evaluation to be quite common in EA
(03:00) Impact evaluation is fairly rare in EA
(04:42) Potentially justified reasons for this
(05:58) Some low- to medium-cost opportunities
(08:11) Conclusion
The original text contained 2 footnotes which were omitted from this narration.
---
First published: October 26th, 2023
Source: https://forum.effectivealtruism.org/posts/hDNpHEA2Kn4xBoS8r/impact-evaluation-in-ea
---
Narrated by TYPE III AUDIO.
The post Ethos Investment Impact Evaluation appeared first on AIO Financial - Fee Only Financial Advisors.
Mark discusses a range of mixed methods evaluation designs that can help you collect data to evidence impacts arising from industry, policy, media, and public engagement. The methods are easy to use without any specialist training or experience, and can generate useful data in some of the trickiest areas of impact evaluation.
Find out more about the "postcard to your future self" method
Find out more about the Media Impact Guide and Toolkit
Read Mark's REF2021 case study in which he evidenced a range of policy impacts
You can download a written transcript of this episode here
Keywords: Innovation, Programme Lifecycle, Impact Evaluation, Partnerships, Social Enterprise, Winning Scotland, Charity, Social Change, Board Champions.
Description: Can you imagine crafting a future where children are not just surviving their circumstances but thriving in them? That's precisely what Zahra Hedges, the dynamic CEO of Winning Scotland, is striving to achieve. In our engrossing chat, Zahra opens up about the charity's ambitious mission to revolutionize Scotland's social fabric, focusing on empowering children and young people to develop resilience and confidence. We dissect their approach and discuss Zahra's experience of moving from running her own business to supporting social enterprises to being a charity CEO. We also discuss the importance of getting the most out of your board of trustees.
Zahra Hedges
Zahra Hedges is CEO of Winning Scotland. Before that she worked for the Scottish Government in children and young people's mental health, and for the CEIS Group, where she supported social enterprises. Zahra is also an advisor to Samtaler, which helps large companies to create social value, a board member of community justice organisation SACRO and a mentor with MCR Pathways and Pilotlight.
*
If you enjoy the podcast, please do follow us and leave a rating / review.
*
The Charity Impact Podcast aims to help you increase your charity's income and impact by sharing the experience and expertise of our guests. Whatever your role or level of experience, we think you'll be inspired and informed by our guests, who are absolutely the stars of the show! We aim to showcase a diverse range of guests, including people whose voices have been less heard as well as established leaders in our field. So, whether you are a CEO, fundraiser, trustee, manager, practitioner, funder, or any other flavour of social leader, welcome to the Charity Impact Podcast!
*
For episode notes with links to resources and organisations mentioned in this episode, please visit https://www.kedaconsulting.co.uk/charity-impact-podcast/
For the opportunity to submit questions to future guests, sign up to our e-mails via the banner at the top of the website. If you have any questions, feedback or enquiries regarding the podcast, you can reach us by e-mail at hello@kedaconsulting.co.uk
Follow the Charity Impact Podcast:
Twitter: @CharityImpactPd
LinkedIn: @Charity Impact Podcast
Facebook: @Charity Impact Podcast
Follow our host, Alex Blake:
Twitter: @alexblake_KEDA
LinkedIn: @Alex Blake
How do you design an impact evaluation? There is no blueprint, but in this episode, Mark gives you the tools you will need to choose an evaluation design with methods that can deliver convincing evidence while giving you win-wins for your research.
Evidencing impact paper: https://www.sciencedirect.com/science/article/pii/S0048733320302225?via%3Dihub
Decision tree: https://www.dropbox.com/s/pbupwthhgpj9yzv/Decision%20tree%20EDITED.pdf?dl=0
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC's 2020–2022 Impact evaluation (executive summary), published by Michael Townsend on March 31, 2023 on The Effective Altruism Forum.
Giving What We Can (GWWC) is on a mission to create a world in which giving effectively and significantly is a cultural norm. Our research recommendations and donation platform help people find and donate to effective charities, and our community — in particular, our pledgers — help foster a culture that inspires others to give. In this impact evaluation, we examine GWWC's cost-effectiveness from 2020 to 2022 in terms of how much money is directed to highly effective charities due to our work. We have several reasons for doing this:
- To provide potential donors with information about our past cost-effectiveness.
- To hold ourselves accountable and ensure that our activities are providing enough value to others.
- To determine which of our activities are most successful, so we can make more informed strategic decisions about where we should focus our efforts.
- To provide an example impact evaluation framework which other effective giving organisations can draw from for their own evaluations.
This evaluation reflects two months of work by the GWWC research team, including conducting multiple surveys and analysing the data in our existing database. There are several limitations to our approach — some of which we discuss below. We did not aim for a comprehensive or “academically” correct answer to the question of “What is Giving What We Can's impact?” Rather, in our analyses we are aiming for usefulness, justifiability, and transparency: we aim to practise what we preach and for this evaluation to meet the same standards of cost-effectiveness as we have for our other activities. Below, we share our key results, some guidance and caveats on how to interpret them, and our own takeaways from this evaluation. GWWC has historically derived a lot of value from our community's feedback and input, so we invite readers to share any comments or takeaways they may have on the basis of reviewing this evaluation and its results, either by directly commenting or by reaching out to sjir@givingwhatwecan.org.
Key results
Our primary goal was to identify our overall cost-effectiveness as a giving multiplier — the ratio of our net benefits (additional money directed to highly effective charities, accounting for the opportunity costs of GWWC staff) compared to our operating costs.
- We estimate our giving multiplier for 2020–2022 is 30x, and that we counterfactually generated $62 million of value for highly effective charities.
- We were also particularly interested in the average lifetime value that GWWC contributes per pledge, as this can inform our future priorities. We estimate we counterfactually generate $22,000 of value for highly effective charities per GWWC Pledge, and $2,000 per Trial Pledge.
We used these estimates to help inform our answer to the following question: In 2020–2022, did we generate more value through our pledges or through our non-pledge work?
- We estimate that pledgers donated $26 million in 2020–2022 because of GWWC.
- We also estimate GWWC will have caused $83 million of value from the new pledges taken in 2020–2022.
- We estimate GWWC caused $19 million in donations to highly effective charities from non-pledge donors in 2020–2022.
These key results are arrived at through dozens of constituent estimates, many of which are independently interesting and inform our takeaways below. We also provide alternative conservative estimates for each of our best-guess estimates.
How to interpret our results
This section provides several high-level caveats to help readers better understand what the results of our impact evaluation do and don't communicate about our impact.
We generally looked at average rather than marginal cost-effectiveness
Most of our ...
Graeme Blair talks about the effects of community policing in the Global South. “Community Policing Does Not Build Citizen Trust in Police or Reduce Crime in the Global South” by Graeme Blair, Jeremy M. Weinstein, Fotini Christia, Eric Arias, Emile Badran, Robert A. Blair, Ali Cheema, Thiemo Fetzer, Guy Grossman, Dotan Haim, Rebecca Hanson, Ali Hasanain, Ben Kachero, Dorothy Kronick, Benjamin Morse, Robert Muggah, Matthew Nanes, Tara Slough, Nico Ravanilla, Jacob N. Shapiro, Barbara Silva, Pedro C. L. Souza, Lily Tsai, and Anna Wilke. *** Probable Causation is part of Doleac Initiatives, a 501(c)(3) nonprofit. If you enjoy the show, please consider making a tax-deductible contribution. Thank you for supporting our work! *** OTHER RESEARCH WE DISCUSS IN THIS EPISODE: “Community Policing, Chicago Style” by Wesley G. Skogan and Susan M. Hartnett. “Impact Evaluation of the LAPD Community Safety Partnership” by Sydney Kahmann, Erin Hartman, Jorja Leap, and P. Jeffrey Brantingham. “Crime, Insecurity, and Community Policing: Experiments on Building Trust” by Graeme Blair, Fotini Christia, Jeremy M. Weinstein, Eric Arias, Emile Badran, Robert A. Blair, Ali Cheema, Thiemo Fetzer, Guy Grossman, Dotan Haim, Rebecca Hanson, Ali Hasanain, Ben Kachero, Dorothy Kronick, Benjamin Morse, Robert Muggah, Matthew Nanes, Tara Slough, Nico Ravanilla, Jacob N. Shapiro, Barbara Silva, Pedro C. L. Souza, Lily Tsai, and Anna Wilke. [Forthcoming book.]
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Let's Fund: Better Science impact evaluation. Registered Reports now available in Nature, published by Hauke Hillebrandt on February 26, 2023 on The Effective Altruism Forum.
Cross-posted from my blog - inspired by the recent call for more monitoring and evaluation.
Hi, it's Hauke, the founder of Let's Fund. We research pressing problems, like climate change or the replication crisis in science, and then crowdfund for particularly effective policy solutions. Ages ago, you signed up to my newsletter. Now I've evaluated the $1M+ in grants you donated, and they had a big impact. Below I present the Better Science / Registered Report campaign evaluation, but stay tuned for the climate policy campaign impact evaluation (spoiler: clean energy R&D increased by billions of dollars).
Let's Fund: Better Science
Chris Chambers giving a talk on Registered Reports
We crowdfunded ~$80k for Prof. Chambers to promote Registered Reports, a new publication format where research is peer-reviewed before the results are known. This fundamentally changes the way research is done across all scientific fields. For instance, one recent Registered Report studied COVID patients undergoing ventilation[1] (but there are examples in other areas including climate science,[2] development economics,[3] biosecurity,[4] farm animal welfare,[5] etc.).
Registered Reports have higher quality than normal publications,[6] because they:
- make science more theory-driven, open and transparent
- find methodological weaknesses and also potential biosafety failures of dangerous dual-use research prior to publication (e.g. gain of function research)[7]
- get more papers published that fail to confirm the original hypothesis
- increase the credibility of non-randomized natural experiments using observational data
If Registered Reports become widely adopted, it might lead to a paradigm shift and better science. 300+ journals have already adopted Registered Reports. And just last week Nature, the most prestigious academic journal, adopted it:
Chris Chambers on Twitter: "10 years after we created Registered Reports, the thing critics told us would never happen has happened: @Nature is offering them. Congratulations @Magda_Skipper & team. The @RegReports initiative just went up a gear and we are one step closer to eradicating publication bias."
This is big, and Registered Reports might soon become the gold standard. Why? Imagine you're a scientist with a good idea for an experiment with high value of information (think: a simple cure for depression). If that has a low chance of working out (say 1%), then previously you had little incentive to run it. Now, if your idea is really good, and based on strong theory, Registered Reports derisks running the experiment. You can first submit the idea and methodology to Nature and the reviewers might say: 'This idea is nuts, but we agree there's a small chance it might work, and we're really interested in whether it works. If you run the experiment, we'll publish this independent of results!' Now you can go ahead and spend a lot of effort on running the experiment, because even if it doesn't work, you still get a Nature paper (which you wouldn't with null results). This will lead to more high risk, high reward research (share this post or the tweet with academics! They might thank you for the Nature publication).
Many people were integral to this progress, but I think Chambers, the co-inventor and prime proponent of Registered Reports, deserves special credit. In turn he credited:
Chris Chambers @chrisdc77: 'You. That's right. Some of the most useful and flexible funding I've received has been donated by hundreds of generous members of public (& small orgs) via our @LetsFundOrg-supported crowd sourcing fund'
You may feel smug. If you want to make a bigger donation (>$1k), click here. There are proposals to improve Regis...
Gordon Buchanan meets two of the pioneers of CBNRM, or Community-Based Natural Resources Management, in this episode of Beneath the Baobab. Dr Brian Child and Dr Shylock Muyengwa have teamed up from their homes on other sides of the world for years, conducting fieldwork and research with communities to help develop increasingly sophisticated models and practices for wildlife conservation with people at their heart. Brian's childhood in Zimbabwe inspired a career defending the rights and wellbeing of rural people, and today he is Associate Professor at the University of Florida. Shylock has an enormous breadth of experience across Zimbabwe's agriculture, food security and livelihoods sector. He's Managing Director at the Center for Impact Evaluation and Research Design as well as CBNRM Manager for Resource Africa Southern Africa. Their work together on community governance in reinstating rights through participatory democracies continues to provide new insights for the future of conservation in communities living together with wildlife. They explain how the pioneering CAMPFIRE programme worked to devolve rights for the use, management, disposal of and benefit from wildlife resources, and how learnings have been built upon to build modern-day CBNRM. They also discuss the legacy of colonial land practices and laws in contemporary conservation and share ideas for overcoming this. Brian and Shylock discuss the social and practical aspects of this approach but also share details of the governance dashboard they developed with villagers to help them create participatory democracies for decision-making. Visit the website https://jammainternational.com to explore more international projects.
https://c4ierd.org
https://resourceafrica.net
https://twitter.com/africa_resource
Our GDPR privacy policy was updated on August 8, 2022. Visit acast.com/privacy for more information.
Host: Ben Banerjee & Sveta Banerjee
Topic: Are we measuring the Social Impact?
Guest: Juan J. Alarcon, Project Director of the Limmat Foundation
Juan J. Alarcon has a Master's degree in Economics (Barcelona, 1976) and in Financial Investment (Geneva, 1978). He also has a Diploma for Development Cooperation - NADEL of the ETH (Zurich, 2002) and completed the Advanced Management Program of the IESE Business School (Barcelona, 2008). From 1980-1985 he was the financial director of an international trading company in Geneva. From 1985-2018, J. Alarcon was the project director of the Limmat Foundation and directed more than 450 social projects in 40 different countries. He has lectured at numerous seminars in developing countries for managers and project developers from NGOs and development cooperation institutions. In 2002, he developed the methodology of the Socio-Economic Welfare Index (SEWI), applying and improving it within the Limmat Foundation. From 2012-2021 he was a Lecturer for Impact Evaluation at the CEPS (Center for Philanthropy Studies) at the University of Basel. Since 2013 he has been the CEO of Swissocial and is responsible for all operational activities.
The weekly show on how impactful investments and businesses are helping to implement the 17 UN SDGs worldwide to preserve the world for future generations. The Banerjees have enlightening and in-depth conversations with newsmakers, celebrities, thought leaders, entrepreneurs, project owners, investors, politicians and business leaders, and encourage them to act now.
Minnesota has been identified by several good-government organizations as one of the leaders in the nation on evidence-based policymaking. The Pew Charitable Trusts, for example, has noted that “Using evidence-based policymaking has enabled Minnesota…to provide better outcomes for residents, [and] improve the way research and evidence inform the budget and policymaking processes….” One important piece of […] The post How Minnesota’s impact evaluation unit supports evidence-based budgeting: An interview with Weston Merrick, Minnesota Management and Budget – Episode #195 appeared first on Gov Innovator podcast.
How do researchers assess the impact of peacebuilding interventions? And what can we learn from examining existing literature as a whole? In this episode, we speak with Ada Sonnenfeld, a former Evaluation Specialist with the International Initiative for Impact Evaluation (3ie). She talks about her work managing systematic reviews and evidence gap map projects, which can help policymakers make more informed decisions about how to use evidence – to make sense of what we know and learn from what has been done before. We discuss her recent review, where she and her colleagues synthesize evidence on programs that promote intergroup social cohesion in fragile contexts.
This podcast is produced in partnership with the Pearson Institute for the Study and Resolution of Global Conflicts. For more information, please visit their website at www.thepearsoninstitute.org
Access the study here: http://bit.ly/SocialCohesionSR46
Podcast Production Credits:
Interviewing: Reema Saleh and Mwangi Thuita
Editing: Aishwarya Kumar
Production: Reema Saleh
The post Ethos Investment Impact Evaluation appeared first on AIO Financial Advisors Fee Only Fiduciary.
The post Ethos Investment Impact Evaluation appeared first on Impact Financial Planners.
Originally recorded on March 12, 2021 Alix Zwane, Chief Executive Officer of the Global Innovation Fund, continued the discussion after a virtual CID Speaker Series event held on March 12, 2021 exploring their work further with CID Student Ambassador Sama Kubba. Successfully meeting international development goals in the post pandemic era calls for a renewed commitment to honesty both on a micro level and a macro level about what development assistance can and should seek to achieve. The debate about official assistance is often bookended by, at best, misplaced good intent and, at worst, falsehoods told to reinforce the status quo. Supporting innovation and R&D is at the heart of both an honest development agenda and the clearest path toward pushing decision-making more locally while still being true to our values around environmental, social, and governance standards such as gender equity and climate resilience. Alix Peterson Zwane is Chief Executive Officer of the Global Innovation Fund. She has 20 years of experience advancing the agenda of evidence-based aid and international development as an investor, a social entrepreneur, and an innovator herself. Alix has worked at the intersection of the evidence and innovation agendas from a diverse set of posts. She was the first employee and Executive Director at Evidence Action, a non-profit that develops service delivery models to scale evidence-based programs. Under Alix's leadership, Evidence Action catalyzed school-based deworming for hundreds of millions of children around the world, and safe drinking water for millions of people in four countries. Alix launched Evidence Action Beta, an incubator for innovations in development. Alix has also advocated for evidence-based philanthropy at the Bill & Melinda Gates Foundation and Google.org, where she set strategy and made investments to support new public service models that work for the poor and developed models for outcome-based grant-making. She began her career in management consulting and was a member of the faculty of the Agricultural and Resource Economics Department at University of California, Berkeley. Alix has published in Science, Proceedings of the National Academy of Sciences, the Quarterly Journal of Economics, and elsewhere. She previously served on the board of directors of Innovations for Poverty Action, the International Initiative for Impact Evaluation, and Evidence Action. She holds a Ph.D. in Public Policy from Harvard University and is a World Economic Forum Young Global Leader. Born and raised in Colorado, she divides her time between Washington, D.C. and London.
Can cash transfers reduce violence within the home, keeping women safe from intimate partner violence? This episode features IFPRI Senior Research Fellow Melissa Hidrobo (https://www.ifpri.org/profile/melissa-hidrobo) and Research Fellow Shalini Roy (https://www.ifpri.org/profile/shalini-roy) who, in a conversation with Sivan Yosef (https://www.ifpri.org/profile/sivan-yosef), tell the story of how development programs can sometimes have surprising impacts. When Melissa found that a cash transfer program in Ecuador reduced intimate partner violence, defined as physical, sexual, or emotional harm by a current or former partner or spouse, she and Shalini decided to team up and see whether the same results held in Bangladesh and Mali. Their work shows the vast potential of cash transfer programs, which are already used by many countries around the world, to reduce intimate partner violence at a large scale. Interviewees: Melissa Hidrobo; Shalini Roy Interviewer: Sivan Yosef Producer: Sivan Yosef Editor: Jennifer Weingart Promotions: Drew Sample To learn more: Blog: https://www.ifpri.org/blog/cash-transfers-and-intimate-partner-violence VoxDev - Op-ed: https://voxdev.org/topic/public-economics/cash-transfers-and-intimate-partner-violence TMRI Website: https://bangladesh.ifpri.info/tag/tmri/ Project Output: https://www.ifpri.org/cash-transfer-and-intimate-partner-violence-research-collaborative-project-outputs Donors & Partners: United States Agency for International Development, World Bank, Sexual Violence Research Initiative, PIM, CGIAR, International Initiative for Impact Evaluation, the Government of Mali, World Food Program
Today's special guest on Outcomes Rocket wanted to be a difference-maker, so she followed a path that put her in a position to help not just one person but a whole population. Dr. Tista Ghosh discusses how her company has democratized health care. Grand Rounds provides the best doctor-patient match at the individual provider level, and they use metrics to screen doctors and specialists. She talks about the outcomes in terms of healthcare and of reduced cost to the employer. Dr. Ghosh shares inspiring insights on her work, healthcare, learning to adapt quickly, and more. We truly enjoyed our interview with her, and we hope you will too! https://outcomesrocket.health/grandrounds/2020/08/
Can a smartphone camera help provide a safety net for smallholder farmers? This episode features IFPRI's Research Fellow Berber Kramer (https://www.ifpri.org/profile/berber-kramer), who, in a conversation with Sivan Yosef (https://www.ifpri.org/profile/sivan-yosef), shares the story of how IFPRI researchers came up with the idea of using smartphone pictures to help small farmers in India reap the benefits of crop insurance. Working together, IFPRI researchers are testing a suite of innovative financial instruments that have the potential to help farmers manage risk, with the aim of improving resilience and incomes, and making inroads against poverty.
To learn more:
- Blog post (https://www.ifpri.org/blog/picture-based-crop-insurance-it-feasible-it-sustainable)
- Journal Article (https://doi.org/10.1016/j.deveng.2019.100042)
- Project Website (https://www.ifpri.org/project/PBInsurance)
Donors & Partners:
- CGIAR Research Program on Policies, Institutions and Markets (http://pim.cgiar.org/)
- CGIAR Research Program on Climate Change, Agriculture and Food Security (https://ccafs.cgiar.org/)
- International Initiative for Impact Evaluation (3ie) (https://www.3ieimpact.org/)
- Borlaug Institute for South Asia (https://bisa.org/)
- HDFC ERGO General Insurance (https://www.hdfcergo.com/)
- CGIAR Platform for Big Data on Agriculture (https://bigdata.cgiar.org/)
- Centre for Agriculture and Biosciences International (https://www.cabi.org/)
- Manchester University
- Mahalanobis National Crop Forecast Centre (http://www.ncfc.gov.in/)
- Dvara E-Registry (https://dvaraeregistry.com/)
- Digital Credit Observatory (https://cega.berkeley.edu/initiative/digital-credit-observatory/)
Interviewee: Berber Kramer
Interviewer: Sivan Yosef
Producer: Sivan Yosef
Editor: Jennifer Weingart
Promotions: Smita Aggarwal
In this week's podcast we discuss ways that evaluation practitioners in the energy industry are thinking through program impacts in light of COVID-19. Listen in as Anne Dougherty talks with ILLUME's technical team as they discuss how massive shifts in energy use patterns have us rethinking everything from baselines to alternative metrics for 2020.
As technology improves organizations' ability to collect, manage, and analyze data, it's becoming easier to inform public policy decisions today in a range of areas, from health care to criminal justice, based on estimated risks in the future. On this episode of On the Evidence, I talk with three researchers who work with child welfare agencies in the United States to use algorithms—or, what they call predictive risk models—to inform decisions by case managers and their supervisors. My guests are Rhema Vaithianathan, Emily Putnam-Hornstein, and Beth Weigensberg. Vaithianathan is a professor of economics and director of the Centre for Social Data Analytics in the School of Social Sciences and Public Policy at Auckland University of Technology, New Zealand, and a professor of social data and analytics at the Institute for Social Science Research at the University of Queensland, Australia. Putnam-Hornstein is an associate professor of social work at the University of Southern California and the director of the Children's Data Network. Weigensberg is a senior researcher at Mathematica. Vaithianathan and Putnam-Hornstein have already worked with Allegheny County in Pennsylvania to implement a predictive risk model that uses hundreds of data elements to help the people screening calls about child abuse and neglect better assess the risk associated with each case of potential maltreatment. Now they are working with two more counties in Colorado to pilot a similar predictive risk model. Last year, they initiated a partnership with Mathematica to replicate and scale up their work by offering the same kind of assistance to states and counties around the country.
Find more information about Mathematica's partnership with the Centre for Social Data Analytics and the Children's Data Network here: https://www.mathematica.org/our-publications-and-findings/publications/predictive-risk-modeling-for-child-protection
Find The New York Times Magazine article about Allegheny County's use of algorithms in child welfare here: https://www.nytimes.com/2018/01/02/magazine/can-an-algorithm-tell-when-kids-are-in-danger.html
Find the publications page for the Centre for Social Data Analytics here: https://csda.aut.ac.nz/research/recent-publications
Find the results of an independent evaluation of the Allegheny County predictive risk model here: https://www.alleghenycountyanalytics.us/wp-content/uploads/2019/05/Impact-Evaluation-from-16-ACDHS-26_PredictiveRisk_Package_050119_FINAL-6.pdf
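For readers unfamiliar with the term, a predictive risk model of the kind discussed here is, at its core, a supervised classifier that turns administrative data elements into a risk score shown to a screener. The sketch below illustrates only that general idea, with synthetic data; it is not the Allegheny County model or the tool Mathematica and its partners are building.

```python
# Generic predictive risk scoring sketch with synthetic data.
# Illustrates the general technique only; it is NOT the model used by
# Allegheny County, the Colorado pilots, or Mathematica.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "administrative data elements" for past referrals
# (stand-ins for features such as prior referrals or service history).
n = 1000
X = rng.normal(size=(n, 5))
# Synthetic label: whether a past adverse outcome occurred.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 1).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new referral: the screener sees a probability (often banded into
# a score), alongside the rest of the case information.
new_case = rng.normal(size=(1, 5))
risk = model.predict_proba(new_case)[0, 1]
print(f"Estimated risk score: {risk:.2f}")
```

In practice, as the guests describe, such a score informs the screener's judgment rather than replacing it.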
In 2019 the Nobel prize for economics went to three economists who have promoted the use and importance of Randomised Controlled Trials (RCTs) in development economics and interventions. But how useful are RCTs in the real world of development assistance? And what more generally needs to be done to improve the quality and impact of impact evaluations, and to promote learning in aid?
Panellists:
The Hon Dr Andrew Leigh MP, Member for Fenner, ACT
Dr Lant Pritchett, Research Director, RISE Programme; Fellow, Blavatnik School of Government, Oxford University
Dr Jyotsna Puri, Head, Independent Evaluation Unit, Green Climate Fund
Professor Stephen Howes, Director, Development Policy Centre, ANU (Chair)
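For context on what an RCT actually delivers, the headline estimate is usually just the difference in mean outcomes between randomly assigned treatment and control groups. A minimal sketch with simulated data follows (illustrative only, unrelated to any study the panel discusses).

```python
# Difference-in-means treatment effect from a simulated RCT.
# Data are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n = 500                                           # households per arm
control = rng.normal(loc=100, scale=20, size=n)   # e.g. monthly consumption, control arm
treated = rng.normal(loc=106, scale=20, size=n)   # treatment arm, true effect of +6 built in

ate = treated.mean() - control.mean()             # average treatment effect estimate
t_stat, p_value = stats.ttest_ind(treated, control)

print(f"Estimated effect: {ate:.2f}")
print(f"p-value:          {p_value:.4f}")
```

Randomisation is what licenses reading that difference as causal; the panel's debate is largely about how far such internally valid estimates travel into the messier realities of development assistance.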
For people living in the Za'atari refugee camp in Jordan, there are few legally accessible work opportunities in and outside the camp. The Cash for Work activities currently being carried out in Za'atari provide income, increase household wealth, teach skills and improve well-being. But how effective is the intervention? As part of our Real Geek Series, Franziska and Simone speak to Nour and Teshome at Oxfam Jordan. They discuss and share the evidence and learning from the impact evaluation of the Cash for Work interventions, and consider how the findings can influence change. This episode will delve into how the project was implemented, how it was evaluated, the evaluation findings and what recommendations came out of it.
Read the full evaluation here: Livelihoods in the Za'atari Camp: Impact evaluation of Oxfam's Cash for Work activities in the Za'atari camp (Jordan)
Image: Oxfam works in Za'atari Camp in Districts 6, 7 and 8, providing safe drinking water and sanitation, such as toilets, showers, solid waste management and hygiene promotion. Credit: Adeline Guerra/Oxfam
Roots in Maradi, Niger – Impact evaluation of the Food Security and Development Support Project by IFAD Independent Office of Evaluation
According to Kenya's Social Protection Policy, poverty, disease, and ignorance were identified at the time of independence in 1963 as the critical challenges facing the new nation of Kenya. While some degree of success has been achieved in the area of education, progress in reducing poverty and providing healthcare has barely been made. 56 years after independence, "poverty and vulnerability remain major challenges, with almost one in every two Kenyans trapped in a long-term, chronic and inter-generational cycle of poverty." Our Constitution in Article 43 guarantees all Kenyans their economic, social, and cultural rights. It asserts the "right for every person...to social security and binds the state to provide appropriate social security to persons who are unable to support themselves and their dependents." This right is closely linked to other social protection rights, including the right to healthcare, human dignity, reasonable working conditions, and access to justice. Article 21 establishes the progressive realization of social and economic rights and obligates the state to "observe, respect, protect, promote, and fulfill the rights and fundamental freedoms in the Bill of Rights." We're joined by Pauline Vata, Executive Director of Hakijamii Trust, to discuss social protection in Kenya.
Resources:
- Kenya National Social Protection Policy (2012)
- Article 43, Constitution of Kenya (2010)
- National Social Security Fund Act (2013)
- National Hospital Insurance Fund Act (2013)
- Social Assistance Act (2013)
- Analytical Review of the Pension System in Kenya
- Social security reforms in Kenya: Towards a workerist or a citizenship-based system?
- Policy Brief on National Hospital Insurance Fund (NHIF)
- NHIF Strategic Plan 2014 - 2018: Sustainable Financing Towards Universal Health Coverage in Kenya
- HEALTHY AMBITIONS? KENYA'S NATIONAL HOSPITAL INSURANCE FUND (NHIF) MUST BECOME MORE TRANSPARENT IF IT IS TO ANCHOR UNIVERSAL HEALTH COVERAGE
- Extending Social Security and Fighting Poverty: Two reform proposals to extend social security in Kenya
- The Right to Social Security in Kenya: The gap between international human rights and domestic law and policy
- PARTICIPATION OF VULNERABLE POPULATIONS IN THEIR OWN PROGRAMMES: THE CASH TRANSFERS IN KENYA
- Political Economy of Cash Transfers In Kenya
- Kenya's Social Cash Transfer Program
- From Evidence to Action: The Story of Cash Transfers and Impact Evaluation in Sub-Saharan Africa
- The Short-term Impact of Unconditional Cash Transfers to the Poor: Experimental Evidence from Kenya
- The Long-Term Impact of Unconditional Cash Transfers: Experimental Evidence from Kenya
- Income Changes and Intimate Partner Violence: Evidence from Unconditional Cash Transfers in Kenya
- Scaling up Cash Transfer Programmes in Kenya
- The Evolution of the Government of Kenya Cash Transfer Programme for Vulnerable Children between 2002 to 2006 and prospects for nationwide scale-up
- Episode 44: The State of Kenya's Healthcare
- Episode 21: #LipaKamaTender
Image Credit: Business Insider
What's an Innovation Lab? What is Rapid Impact Assessment, and how is it different from Iteration? And how can acknowledging failure increase connectedness to the people affected, and bring a better outcome? More honest experiences from the field, on this edition of Innovate On Demand. An accessible version of this podcast can be found on the Canada School of Public Service's external-facing website.
Dr. Jorg Faust of the German Institute for Development Evaluation (DEval) discusses the power of impact evaluation in development cooperation. His discussion took place during IEU's LORTA workshop in Mannheim, Germany on April 15, 2019.
Learn more: About LORTA | DEval
Follow us on Twitter! @GCF_Eval
Dr. Edward Jackson of Carleton University and the Institute of Development Studies (IDS) looks at strategies for incorporating the private sector in impact evaluation. His discussion took place during IEU's LORTA workshop in Mannheim, Germany on April 16, 2019.
Learn more: Institute of Development Studies | About LORTA
Follow us on Twitter! @GCF_Eval
In this episode, Maria Cancian and Daniel Meyer discuss the Child Support Noncustodial Parent Employment Demonstration, or CSPED, a large, eight-state experiment that aimed to see if a different approach to child support could lead to better outcomes. Over the course of the episode, they talk about how the CSPED project came to be, what it looked like for child support offices to change their approach to child support services for this demonstration, and what they learned. Cancian is the Dean of the McCourt School of Public Policy at Georgetown University and an affiliate and former director of the Institute for Research on Poverty. Meyer is Professor of Social Work at the University of Wisconsin-Madison and an IRP affiliate.
Nathan Mallonee talks with Daudi Msseemmaa from Convoy of Hope about his experience during the organization's first impact evaluation. As the Regional Field Operations Director for Africa and Asia, Daudi had a unique perspective on what it was like to propose and help oversee an impact evaluation for a women's empowerment project in Ethiopia. He not only shares the results of the study, but also describes what it took to propose the idea to staff who were already stretched to capacity and perhaps didn't see the value of empirical studies. You can find out more about Convoy of Hope and its work with women's empowerment at www.convoyofhope.org.
I invite you to pause just for a second and take a moment to think about the last time you changed your mind about something. Specifically, I'd like for you to identify something that was either very important to you or your worldview, or something that you had taken for granted, that today you have either the complete opposite or at least a very different perspective on. Got it? Now ask yourself, what was it that made you change your mind? And, again specifically, what evidence did you unearth, or were you presented with, that made the case for changing your mind? For most of us, a profound change of mind doesn't happen very often, but when it does, the effects of such a change alter lives, communities, and entire belief systems. As a final step in this exercise, I'd like for you to think about the core beliefs you have about the work you do in the social impact sector, and what you expect that work will help achieve for people in need. Now, ask yourself, what would it take to alter those beliefs, even if it meant radically shifting the entire system for how you've expected to serve others? I wanted to start with this exercise because in today's 140th episode of the Terms of Reference Podcast, I'll be discussing the revolution being brought about through the practice of impact evaluation. Impact evaluation holds the promise of confirming, or refuting, the effectiveness of the practices, processes, and systems we rely upon in humanitarian and development programming to help those in need. My guest for this show, David Evans, knows a thing or two about impact evaluations. He is a Lead Economist in the Chief Economist's Office for the Africa Region of the World Bank, where he coordinates impact evaluation work across agriculture, education, health, and social protection in more countries than most people will visit in their lifetimes. I know you're going to love this show as we discuss how we can design evaluations to learn more, how to make evaluation real-time, and, ultimately, how you can create an evaluation that will succeed, even if you're working for a small NGO.
There is a good deal of energy in the development and humanitarian space focused on building an evidence base for what works, and what doesn't. Here on the Terms of Reference Podcast, we've talked with numerous individuals and organizations who are building data sets toward that end, and the International Initiative for Impact Evaluation, or 3ie, has been contributing to this conversation since its founding in 2008. To date, they've funded 146 impact evaluations, 33 systematic reviews, and 38 other studies in over 50 countries. But how do we properly reflect on and communicate about the evidence we've collected, and its resulting analysis, so that it can be used by development and humanitarian actors to design (and deliver) better programming? I discuss this and a host of other topics on the 114th episode of the Terms of Reference Podcast with my guest, Dr. Jyotsna Puri. Jo is the Deputy Executive Director and Head of Evaluation at 3ie and has more than 21 years of experience in policy research and development evaluation.
Is my program or initiative having a positive impact? It's a question organizational leaders may want hard evidence to answer, either to take stock and help improve program results, or to satisfy authorizers or funders who are asking for rigorous evidence of impact. Either way, how can you determine the impact of your program? And […] The post Determining if your program is having a positive impact (i.e., impact evaluation 101): An interview with David Evans, Senior Economist, The World Bank – Episode #122 appeared first on Gov Innovator podcast.
Courtney Tanenbaum (@courttanenbaum) is a senior researcher and the science, technology, engineering, and mathematics (STEM) marketing and research lead at AIR. She is a graduate of the Institute for Education Leadership's DC Education Policy Fellowship Program. Since joining AIR in May 2003, she has worked on several research and evaluation studies focused on federal policies and initiatives designed to improve the outcomes of disadvantaged students and underrepresented minorities, both in K-12 and higher education. Currently, Dr. Tanenbaum serves as the principal investigator for the National Study of the Alliances for Graduate Education and the Professoriate, through a grant from the National Science Foundation. Under this grant she is responsible for managing the project and writing data-driven issue briefs related to the participation of underrepresented minorities and women of all races and ethnicities in STEM. Most recently she contributed to an issue brief examining graduate student debt levels and another examining gender differences in the early career pathways of new STEM doctoral recipients. She also led a two-day symposium examining the implicit and explicit biases, barriers, and challenges that underrepresented groups in STEM encounter along their academic and career pathways, and how institutions of higher education and STEM academic departments can use this research to develop more effective recruitment and retention programs and practices. Under a previous grant from the National Science Foundation, Dr. Tanenbaum served as the task lead for the implementation analysis of the national evaluation of the grant program. As task lead, she conducted multiple site visits to institutions of higher education participating in the grant, during which she led interviews with college deans, grant program leadership, faculty, and undergraduate and graduate students. She also led the coding and analysis of the data collected during site visits to inform the implementation component of the evaluation. Dr. Tanenbaum also works on several studies of federal policy. She serves as a data collection and analysis task lead for the Impact Evaluation of Race to the Top and School Improvement Grant (SIG) programs. In this role, she has contributed to an evaluation brief examining school turnaround policies, practices, and strategies in SIG, as well as to the first- and second-year evaluation reports. She is also lead author of an evaluation brief examining state capacity to support school turnaround. Dr. Tanenbaum serves as the deputy project director for the Equitable Distribution of Effective Teachers study, for which she assists in the overall management of the project, leads the collection and analysis of data gathered through interviews with officials, and serves as a lead author of the final evaluation report. In addition, she leads the school-level data collection and analysis task for the Early Implementation of Elementary and Secondary Education Act (ESEA) Flexibility Study. In this role, Dr. Tanenbaum is responsible for producing a policy brief exploring school-level perspectives on the implementation of ESEA flexibility, which will be shared with U.S. Department of Education staff to inform future policymaking, and for contributing to a key highlights report that will be released to the public.
In this episode we discussed:
Science, Technology, Engineering and Math (STEM)
Stereotype Threat and Imposter Syndrome
Keeping Kids Interested in STEM with Comics
Resources:
American Institutes for Research
The Diamond Age by Neal Stephenson
It is one thing to have a nonprofit that is doing well and making a difference in the world. It is another to know just what that difference is. Results are great, but if you don't analyze them, you won't know how to move forward. Annette N. Brown is a Deputy Director for the International Initiative for Impact Evaluation (3ie) and heads its Washington office. Prior to joining 3ie, Brown held executive and senior management positions at several development implementers, for which she performed technical assistance and research in more than twenty countries across all regions. Earlier in her career, Brown was an Assistant Professor of economics at Western Michigan University and held research positions at the World Bank and the Stockholm Institute for Transition Economics. The International Initiative for Impact Evaluation (3ie) is a grant-making NGO dedicated to improving lives in the developing world by supporting the production and dissemination of evidence from impact evaluations and related research.
Dr Philip Davies, Deputy Director at the International Initiative for Impact Evaluation, gives a keynote talk for the Department of Social Policy and Intervention Graduate Research Student Conference on October 19, 2012.