POPULARITY
Welcome to Strategy Skills episode 524, an interview with the author of Meet Every Learner's Needs: Redesigning Instruction So All Students Can Succeed, Robert Barnett. In this episode, Robert discussed his approach to educational fairness and equity and the importance of personalized learning experiences. He explained the need for engaging and challenging learning environments rather than a one-size-fits-all approach, highlighting the importance of tailoring learning to individual needs. Robert Barnett co-founded the Modern Classrooms Project. Robert's approach - now known as the Modern Classroom instructional model - has empowered more than 80,000 educators, across all grade levels and content areas, in all 50 states and 180+ countries worldwide. Evaluators from Johns Hopkins found "overwhelming positive support" for this model's many benefits. Robert graduated cum laude from Princeton University and Harvard Law School and speaks English, French, and Spanish. Get Robert's new book here: https://shorturl.at/2al4p Meet Every Learner's Needs: Redesigning Instruction So All Students Can Succeed Here are some free gifts for you: Overall Approach Used in Well-Managed Strategy Studies free download: www.firmsconsulting.com/OverallApproach McKinsey & BCG winning resume free download: www.firmsconsulting.com/resumepdf Enjoying this episode? Get access to sample advanced training episodes here: www.firmsconsulting.com/promo
Guest: Ron Foxcroft. Chairman FOX40 Industries. Owner FLUKE Transport. Member Order of Canada. Honorary Colonel, Canadian Armed Forces. Inventor of the FOX40 whistle. Official whistle of the NFL, NBA, NCAA, NHL, CFL, used by all referees in the World Cup of Soccer. - Named among the top 50 sports officials of all time by Referee magazine. - Evaluator of game officials for the NBA. Learn more about your ad choices. Visit megaphone.fm/adchoices
Future of Roy Green Show. Guest: Mike Bendixen, National Director of Talk, Corus. The mood of Canadians as we close out 2024. - Canadians want Justin Trudeau to resign. Guest: Darrell Bricker, CEO, IPSOS Public Affairs. One of Canada's most prominent business and community leaders on what he expects for 2025 for this country. Guest: Ron Foxcroft. Chairman FOX40 Industries. Owner FLUKE Transport. Member Order of Canada. Honorary Colonel, Canadian Armed Forces. Inventor of the FOX40 whistle. Official whistle of the NFL, NBA, NCAA, NHL, CFL, used by all referees in the World Cup of Soccer. - Named among the top 50 sports officials of all time by Referee magazine. - Evaluator of game officials for the NBA. Food issues in 2024. Guest: Professor Sylvain Charlebois, Director of the Agri-Food Analytics Lab at Dalhousie University. Learn more about your ad choices. Visit megaphone.fm/adchoices
Introduction: The Giving What We Can research team is excited to share the results of our 2024 round of evaluations of charity evaluators and grantmakers! In this round, we completed three evaluations that will inform our donation recommendations for the 2024 giving season. As with our 2023 round, there are substantial limitations to these evaluations, but we nevertheless think that they are a significant improvement to a landscape in which there were no independent evaluations of evaluators' work. In this post, we share the key takeaways from each of our 2024 evaluations and link to the full reports. We also include an update explaining our decision to remove The Humane League from our list of recommended programs. Our website has now been updated to reflect the new fund and charity recommendations that came out of these evaluations. Please also see our website for more context on [...] --- Outline: (00:10) Introduction; (01:13) Key takeaways from each of our 2024 evaluations; (01:36) Global health and wellbeing; (01:41) Founders Pledge Global Health and Development Fund (FP GHDF); (04:07) Animal welfare; (04:10) Animal Charity Evaluators' Movement Grants (ACE MG); (06:08) Animal Charity Evaluators' Charity Evaluation Program; (08:35) Additional recommendation updates; (08:39) The Humane League's corporate campaigns program; (11:29) Conclusion. The original text contained 2 footnotes which were omitted from this narration. --- First published: November 27th, 2024 Source: https://forum.effectivealtruism.org/posts/NhpAHDQq6iWhk7SEs/gwwc-s-2024-evaluations-of-evaluators-1 --- Narrated by TYPE III AUDIO.
How will specialized AI agents collaborate to outperform general AI? - A deep dive into Theoriq's vision for decentralized agent collectives with founder Ron Bodkin, former Google Cloud CTO office lead. This in-depth conversation between Luke Saunders and Ron Bodkin explores: - Why specialized agent collectives may outperform general AI systems - The technical foundations of agent collaboration and evaluation - How Theoriq enables permissionless agent development and discovery - The role of decentralization in ensuring safe and ethical AI development - Future implications for autonomous AI systems and agent coordination - Insights from Ron's experience at Google, Teradata, and Vector Institute The discussion provides valuable perspective on how decentralized networks of specialized AI agents could provide an alternative to centralized AI development, with a focus on modular, community-driven innovation and proper governance structures. Watch more sessions from Crypto x AI Month here: https://delphidigital.io/crypto-ai --- Crypto x AI Month is the largest virtual event dedicated to the intersection of crypto and AI, featuring 40+ top builders, investors, and practitioners. Over the course of three weeks, this event brings together panels, debates, and discussions with the brightest minds in the space, presented by Delphi Digital. Crypto x AI Month is free and open to everyone thanks to the support from our sponsors: https://olas.network/ https://venice.ai/ https://near.org/ https://mira.foundation/ https://www.theoriq.ai/ --- Follow the Speakers: Luke Saunders on Twitter/X ► https://x.com/lukedelphi Ron Bodkin on Twitter/X ► https://x.com/ronbodkin --- Chapters 00:00 Introduction and Sponsor Acknowledgments 00:52 Introduction of Ron Bodkin, Founder of Theoriq AI 01:17 Ron's Background in AI and Big Data 04:28 Ron's Experience at Google and AI Development 07:35 The Impact of Transformers and GPT-3 on AI 11:32 Defining AI Agents and Their Capabilities 15:31 The Concept of Agent Collectives 18:38 The Future of AI and AGI 25:00 Concerns About AI Safety and Development 28:23 Overview of Theoriq AI's Agent Base Layer 30:54 Evaluators in Theoriq's System 34:08 Permissionless Nature of Theoriq's Platform 36:14 Developer Experience and SDK for Theoriq 39:33 Optimizers and Agent Collectives 41:48 Future of Autonomous AI Agents 44:22 Discussion on Truth Terminal and AI Autonomy 47:34 Call to Action and Closing Remarks Disclaimer All statements and/or opinions expressed in this interview are the personal opinions and responsibility of the respective guests, who may personally hold material positions in companies or assets mentioned or discussed. The content does not necessarily reflect the opinion of Delphi Citadel Partners, LLC or its affiliates (collectively, “Delphi Ventures”), which makes no representations or warranties of any kind in connection with the contained subject matter. Delphi Ventures may hold investments in assets or protocols mentioned or discussed in this interview. This content is provided for informational purposes only and should not be misconstrued for investment advice or as a recommendation to purchase or sell any token or to use any protocol.
Slam the Gavel welcomes Jamie Logan and Jamie Cunningham from Colorado. They shared their concerns about how corrupt the family court system in Colorado is, and the devastating effects it has had on their families and especially their children. Interestingly enough, the two women had met because "Taya" Matoy habitually mixed up their court evaluations. Apparently "Taya" Matoy WAS court appointed to be a PRE (Parental Responsibility Evaluator) to two young boys in a custody battle. She was also court ordered to be a CFI (Child Family Investigator) to two young girls, also in a custody battle. As a PRE, the person has to be a LICENSED mental health professional, but that wasn't the case for "Taya" Matoy because Mita Johnson, the Board Chair of the Marriage and Family Therapy Licensing Board, had signed off on her fraudulent application. The Office of Investigation and Inspections could not verify ANY of her CREDENTIALS; her DEGREE has SOMEONE ELSE'S NAME ON IT. The other degree was not a certified transcript. She also allegedly has three DUIs, and perhaps more now, for her records are NOW SEALED. "Taya" is still actively licensed as a Marriage and Family Therapist. Colorado's DORA (Department of Regulatory Agencies), Marriage and Family Therapy Public Board Meetings: https://m.youtube.com/watch?v=QXfQey3Yp1w and https://m.youtube.com/watch?v=j620AAE5CVA&t=10s To reach Jamie Logan: reach out through the above YouTube channel. To reach Jamie Cunningham: dismantlingfamilycourtcorruption.com Other resources and SHOUT OUT TO: https://considerationnonprofit.org/ http://GiveSendGo.com/PALADNmedia Support the show: https://www.buymeacoffee.com/maryannpetri Maryann Petri: dismantlingfamilycourtcorruption.com TikTok: https://www.tiktok.com/@maryannpetri Facebook: https://www.youtube.com/@slamthegavelpodcasthostmar5536 Instagram: https://www.instagram.com/guitarpeace/ Pinterest: Slam The Gavel Podcast/@guitarpeace LinkedIn: https://www.linkedin.com/in/maryann-petri-62a46b1ab/ YouTube: https://www.youtube.com/@slamthegavelpodcasthostmar5536 Twitter: https://x.com/PetriMaryann *DISCLAIMER* The use of this information is at the viewer/user's own risk. Not financial, medical nor legal advice, as the content on this podcast does not constitute legal, financial, medical or any other professional advice. Viewers/users should consult with the relevant professionals. Support the show: https://www.buymeacoffee.com/maryannpetri http://www.dismantlingfamilycourtcorruption.com/
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pay Risk Evaluators in Cash, Not Equity, published by Adam Scholl on September 7, 2024 on LessWrong. Personally, I suspect the alignment problem is hard. But even if it turns out to be easy, survival may still require getting at least the absolute basics right; currently, I think we're mostly failing even at that. Early discussion of AI risk often focused on debating the viability of various elaborate safety schemes humanity might someday devise - designing AI systems to be more like "tools" than "agents," for example, or as purely question-answering oracles locked within some kryptonite-style box. These debates feel a bit quaint now, as AI companies race to release agentic models they barely understand directly onto the internet. But a far more basic failure, from my perspective, is that at present nearly all AI company staff - including those tasked with deciding whether new models are safe to build and release - are paid substantially in equity, the value of which seems likely to decline if their employers stop building and releasing new models. As a result, it is currently the case that roughly everyone within these companies charged with sounding the alarm risks personally losing huge sums of money if they do. This extreme conflict of interest could be avoided simply by compensating risk evaluators in cash instead. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Personally, I suspect the alignment problem is hard. But even if it turns out to be easy, survival may still require getting at least the absolute basics right; currently, I think we're mostly failing even at that.Early discussion of AI risk often focused on debating the viability of various elaborate safety schemes humanity might someday devise—designing AI systems to be more like “tools” than “agents,” for example, or as purely question-answering oracles locked within some kryptonite-style box. These debates feel a bit quaint now, as AI companies race to release agentic models they barely understand directly onto the internet.But a far more basic failure, from my perspective, is that at present nearly all AI company staff—including those tasked with deciding whether new models are safe to build and release—are paid substantially in equity, the value of which seems likely to decline if their employers stop building and [...] --- First published: September 7th, 2024 Source: https://www.lesswrong.com/posts/sMBjsfNdezWFy6Dz5/pay-risk-evaluators-in-cash-not-equity --- Narrated by TYPE III AUDIO.
Changes at General Manager often mean changes down the line, regardless of how well you do your job. Dru Grigson experienced that in Arizona after a long and successful run with the franchise. Their loss is your gain as Dru jumps on with us to talk all things front office, scouting and the Arizona Cardinals. How did he get his start and what did he learn as he moved up the ladder? How did the Cardinals evaluate their scouts and continue to develop their people? What was special about Kyler Murray and so much more. For the Scouts out there (and everyone else too) - this is as good as it gets. Neil Stratton - @InsidetheLeague Rodrik David - @RightStepAdv Dru Grigson Agent Live 360 @NFL @NFLDraft
How can we evaluate that our faith is genuine? What does real faith look like according to the Bible? Join us as Pastor Phil Moser unpacks this topic in James 2.
From my new blog: AI Lab Watch. All posts will be crossposted to LessWrong. Subscribe on Substack. Many AI safety folks think that METR is close to the labs, with ongoing relationships that grant it access to models before they are deployed. This is incorrect. METR (then called ARC Evals) did pre-deployment evaluation for GPT-4 and Claude 2 in the first half of 2023, but it seems to have had no special access since then.[1] Other model evaluators also seem to have little access before deployment. Clarification: there are many kinds of audits. This post is about model evals for dangerous capabilities. But I'm not aware of the labs using other kinds of audits to prevent extreme risks, excluding normal security/compliance audits. Frontier AI labs' pre-deployment risk assessment should involve external model evals for dangerous capabilities.[2] External evals can improve a lab's risk assessment and—if the evaluator can publish [...] The original text contained 5 footnotes which were omitted from this narration. --- First published: May 26th, 2024 Source: https://forum.effectivealtruism.org/posts/ZPyhxiBqupZXLxLNd/ai-companies-aren-t-really-using-external-evaluators-1 --- Narrated by TYPE III AUDIO.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI companies aren't really using external evaluators, published by Zach Stein-Perlman on May 24, 2024 on LessWrong. Crossposted from my new blog: AI Lab Watch. Subscribe on Substack. Many AI safety folks think that METR is close to the labs, with ongoing relationships that grant it access to models before they are deployed. This is incorrect. METR (then called ARC Evals) did pre-deployment evaluation for GPT-4 and Claude 2 in the first half of 2023, but it seems to have had no special access since then.[1] Other model evaluators also seem to have little access before deployment. Frontier AI labs' pre-deployment risk assessment should involve external model evals for dangerous capabilities.[2] External evals can improve a lab's risk assessment and - if the evaluator can publish its results - provide public accountability. The evaluator should get deeper access than users will get. To evaluate threats from a particular deployment protocol, the evaluator should get somewhat deeper access than users will - then the evaluator's failure to elicit dangerous capabilities is stronger evidence that users won't be able to either.[3] For example, the lab could share a version of the model without safety filters or harmlessness training, and ideally allow evaluators to fine-tune the model. To evaluate threats from model weights being stolen or released, the evaluator needs deep access, since someone with the weights has full access. The costs of using external evaluators are unclear. Anthropic said that collaborating with METR "requir[ed] significant science and engineering support on our end"; it has not clarified why. And even if providing deep model access or high-touch support is a hard engineering problem, I don't understand how sharing API access - including what users will receive and a no-harmlessness no-filters version - could be. Sharing model access pre-deployment increases the risk of leaks, including of information about products (modalities, release dates), information about capabilities, and demonstrations of models misbehaving. Independent organizations that do model evals for dangerous capabilities include METR, the UK AI Safety Institute (UK AISI), and Apollo. Only Google DeepMind says it has recently shared pre-deployment access with such an evaluator - UK AISI - and that sharing was minimal (see below). What the labs say they're doing on external evals before deployment: DeepMind DeepMind shared Gemini 1.0 Ultra with unspecified external groups apparently including UK AISI to test for dangerous capabilities before deployment. But DeepMind didn't share deep access: it only shared a system with safety fine-tuning and safety filters and it didn't allow evaluators to fine-tune the model. DeepMind has not shared any results of this testing. Its Frontier Safety Framework says "We will . . . explore how to appropriately involve independent third parties in our risk assessment and mitigation processes." Anthropic Currently nothing Its Responsible Scaling Policy mentions "external audits" as part of "Early Thoughts on ASL-4" It shared Claude 2 with METR in the first half of 2023 OpenAI Currently nothing Its Preparedness Framework does not mention external evals before deployment. The closest thing it says is "Scorecard evaluations (and corresponding mitigations) will be audited by qualified, independent third-parties." 
It shared GPT-4 with METR in the first half of 2023 It said "We think it's important that efforts like ours submit to independent audits before releasing new systems; we will talk about this in more detail later this year." That was in February 2023; I do not believe it elaborated (except to mention that it shared GPT-4 with METR). All notable American labs joined the White House voluntary commitments , which include "external red-teaming . . . in areas ...
New blog: AI Lab Watch. Subscribe on Substack.Many AI safety folks think that METR is close to the labs, with ongoing relationships that grant it access to models before they are deployed. This is incorrect. METR (then called ARC Evals) did pre-deployment evaluation for GPT-4 and Claude 2 in the first half of 2023, but it seems to have had no special access since then.[1] Other model evaluators also seem to have little access before deployment.Frontier AI labs' pre-deployment risk assessment should involve external model evals for dangerous capabilities.[2] External evals can improve a lab's risk assessment and—if the evaluator can publish its results—provide public accountability.The evaluator should get deeper access than users will get. To evaluate threats from a particular deployment protocol, the evaluator should get somewhat deeper access than users will — then the evaluator's failure to elicit dangerous capabilities is stronger evidence [...]The original text contained 5 footnotes which were omitted from this narration. --- First published: May 24th, 2024 Source: https://www.lesswrong.com/posts/WjtnvndbsHxCnFNyc/ai-companies-aren-t-really-using-external-evaluators --- Narrated by TYPE III AUDIO.
Today is Q&A Day! Most questions focus on convention this time of year, but there is a bit in here about evaluators, CHAP's online group list, options other than CTC, and the College and Career event on June 13th. Get all your info here! To register for convention, go here: https://conv.chaponline.com/ Chattin' with CHAP is a series of informational podcasts designed to equip and encourage families on their homeschooling journeys. CHAP is the Christian Homeschool Association of Pennsylvania and has provided year-round support to homeschoolers since 1994. Find valuable resources at https://www.chaponline.com Check out https://www.homeschoolpennsylvania.org for all information regarding homeschool law in Pennsylvania. Contact us at https://www.chaponline.com/contact-us with your questions or Chattin' with CHAP topics for discussion. Don't miss out on the latest in PA homeschool news! Subscribe to our eNews at https://chaponline.com/subscribe-to-enews/ Donate to support CHAP in the endeavor to encourage, connect, equip, and protect homeschoolers at https://chaponline.com/donate/
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LLM Evaluators Recognize and Favor Their Own Generations, published by Arjun Panickssery on April 17, 2024 on The AI Alignment Forum. Self-evaluation using LLMs is used in reward modeling, model-based benchmarks like GPTScore and AlpacaEval, self-refinement, and constitutional AI. LLMs have been shown to be accurate at approximating human annotators on some tasks. But these methods are threatened by self-preference, a bias in which an LLM evaluator scores its own outputs higher than texts written by other LLMs or humans, relative to the judgments of human annotators. Self-preference has been observed in GPT-4-based dialogue benchmarks and in small models rating text summaries. We attempt to connect this to self-recognition, the ability of LLMs to distinguish their own outputs from text written by other LLMs or by humans. We find that frontier LLMs exhibit self-preference and self-recognition ability. To establish evidence of causation between self-recognition and self-preference, we fine-tune GPT-3.5 and Llama-2-7b evaluator models to vary in self-recognition ability and measure the resulting change in self-preference, while examining potential confounders introduced by the fine-tuning process. We focus on text summarization, sampling 1,000 news articles and associated human summaries from each of two datasets: XSUM and CNN/DailyMail. We use instruction-tuned LLMs (GPT-4, GPT-3.5 Turbo, Claude 2, and Llama-2-7b-chat) to produce additional summaries for each news article. Measuring Self-Recognition and Self-Preference Both self-recognition and self-preference are evaluated in two settings: Pairwise Setting: The LLM evaluator is presented with two unlabeled summaries, one generated by itself and another by one of the other four human/LLM sources. In the self-recognition task, the LLM evaluator is prompted to choose the summary that it wrote; in the self-preference task, the evaluator is prompted to choose the higher-quality summary. We compute a prediction confidence score by normalizing the output probabilities of the tokens associated with the two options, and average between both orderings of the two summaries to account for ordering bias. Individual Setting: The LLM evaluator is presented with a single summary generated either by itself or by one of the other four sources. For self-recognition, the model is prompted with the yes/no question of whether it wrote the summary, with the confidence score computed by normalizing the output probabilities for the "yes" and "no" tokens. For self-preference, the model is prompted to assign the summary a score on a scale of one to five. The final score is computed as the average of the five possible scores weighted by the output probability of their respective tokens. To make the individual-setting responses comparable to the pairwise measurements, they're normalized further. For each LLM evaluator, the response scores for both tasks on summaries generated by other sources are normalized against the response given to the evaluator's own summary. For example, if the GPT-4 evaluator gave a weighted score of 2.0 to a summary generated by Claude 2 and a weighted score of 3.0 to its own summary for the same article, then its final normalized self-preference score for the Claude summary would be 2/(2+3)=0.4.
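The scoring arithmetic described above fits in a few lines of Python. The sketch below is illustrative only, not the authors' code: weighted_score reproduces the probability-weighted one-to-five score used in the individual setting, normalized_self_preference reproduces the 2/(2+3)=0.4 worked example, and the token probabilities shown are hypothetical.

```python
# Minimal sketch of the individual-setting scoring arithmetic described above
# (illustrative only; not the paper's code).

def weighted_score(prob_by_score: dict[int, float]) -> float:
    """Average of the five possible scores (1-5), weighted by the output
    probability of each score's token. Probabilities are assumed to be
    normalized over the five score tokens."""
    return sum(score * p for score, p in prob_by_score.items())

def normalized_self_preference(score_other: float, score_self: float) -> float:
    """Normalize the score given to another source's summary against the score
    the evaluator gave its own summary for the same article. Values below 0.5
    indicate self-preference."""
    return score_other / (score_other + score_self)

# Hypothetical token probabilities for a 1-5 quality rating of another model's summary.
other_summary_probs = {1: 0.10, 2: 0.80, 3: 0.10, 4: 0.00, 5: 0.00}
print(weighted_score(other_summary_probs))   # 2.0
# Worked example from the text: other-source summary scored 2.0, own summary 3.0.
print(normalized_self_preference(2.0, 3.0))  # 0.4
```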
Some of our findings on out-of-the-box evaluation: GPT-4 is significantly more capable at self-recognition than the two weaker models. All three LLM evaluators most easily distinguish their summaries from human-written summaries and show the greatest self-preference against the human summary. Weaker LLMs struggle to distinguish themselves from stronger LLMs: Llama 2 is completely incapable of distinguishing itself from GPT-3.5 and GPT-4, and GPT-3.5 struggles to distinguish itself from GPT-4. Investigating Evidence of Causation Next we look for evidence...
Coming up on today's edition of the Locked On Raiders podcast, mock draft season is at full throttle right now. What do draft evaluators think about the Raiders at 13: is it QB, CB, or OT? We will talk about Washington QB Michael Penix and, if he's the Raiders' guy, where they would be comfortable taking him, plus your calls and texts on Wednesday's edition of the Locked On Raiders Podcast for March 20th, 2024. Sponsored by: eBay Motors. For parts that fit, head to eBay Motors and look for the green check. Stay in the game with eBay Guaranteed Fit at eBayMotos.com. Let's ride. eBay Guaranteed Fit only available to US customers. Eligible items only. Exclusions apply. Robinhood. Robinhood has the only IRA that gives you a 3% boost on every dollar you contribute when you subscribe to Robinhood Gold. Now through April 30th, Robinhood is even boosting every single dollar you transfer in from other retirement accounts with a 3% match. Available to U.S. customers in good standing. Robinhood Financial LLC (member SIPC) is a registered broker dealer. BetterHelp. This episode is sponsored by BetterHelp. Make your brain your friend, with BetterHelp. Visit BetterHelp.com/LOCKEDON today to get 10% off your first month. Gametime. Download the Gametime app, create an account, and use code LOCKEDON for $20 off your first purchase. FanDuel. New customers, join today and you'll get TWO HUNDRED DOLLARS in BONUS BETS if your first bet of FIVE DOLLARS or more wins. Visit FanDuel.com/LOCKEDON to get started. FANDUEL DISCLAIMER: 21+ in select states. First online real money wager only. Bonus issued as nonwithdrawable free bets that expire in 14 days. Restrictions apply. See terms at sportsbook.fanduel.com. Gambling Problem? Call 1-800-GAMBLER or visit FanDuel.com/RG (CO, IA, MD, MI, NJ, PA, IL, VA, WV), 1-800-NEXT-STEP or text NEXTSTEP to 53342 (AZ), 1-888-789-7777 or visit ccpg.org/chat (CT), 1-800-9-WITH-IT (IN), 1-800-522-4700 (WY, KS) or visit ksgamblinghelp.com (KS), 1-877-770-STOP (LA), 1-877-8-HOPENY or text HOPENY (467369) (NY), TN REDLINE 1-800-889-9789 (TN)
Today we explore your Human Design in business and how it can be key to unlocking the level of growth and success in your business that you desire. I talk about some of the ways we currently approach making decisions and taking action in business and how that can be one of the biggest reasons we find ourselves falling short of the growth we'd like to achieve. I share with you the secret to tapping into your body's wisdom and aligning your decisions with your unique energy and Human Design. The episode explores the five different career & business types (Classic Builders, Express Builders, Advisors, Initiators, and Evaluators) and their specific strategy for business success. Here's what you can expect: Why relying solely on your mind for decision-making can hinder your business growth. How Human Design can help you understand your body's energetic wisdom. The core strategies for each Energy Type in a business context. How Generators and Manifesting Generators can harness their creative energy for building a fulfilling business. How Projectors can avoid bitterness and find success as advisors and guides. How Manifestors can leverage their powerful aura to inform and attract success in business How Reflectors can utilize their unique perspective to evaluate and contribute to businesses and communities. Timestamp: [00:32] Opening Thoughts [03:03] Intro to Human Design in Business [17:10] Intro to the Human Design Business and Career Types [20:00] Generators and Manifesting Generators [25:25] Projectors [36:05] Manifestors [42:30] Reflectors [49:50] Closing Thoughts & Reflections Link to download your Human Design Chart: Jovian Archive Chart Generator SUPPORT THE SHOW: I would love to hear from you! If you loved this episode and would like to support the podcast, then I would love to invite you to please rate and leave a review over on your preferred podcast platform! This is one of the best ways to help me reach more Cosmic Entrepreneurs like you. For daily doses of cosmic wisdom and inspiration, follow me on Instagram at @thecosmicentrepreneur. Let's connect and build a thriving community of Cosmic Entrepreneurs! LET'S CONNECT: Instagram: @keelierae Work with me 1-on-1: www.keelierae.com Join my community for spiritual entrepreneurs: The Cosmic Entrepreneur Community
Today is Q&A Day! The questions ranged from evaluations to how to help the homeschool movement to what the school district's involvement is in our homeschools. Tune in to receive encouragement to stay the course! Chattin' with CHAP is a series of informational podcasts designed to equip and encourage families on their homeschooling journeys. CHAP is the Christian Homeschool Association of Pennsylvania and has provided year-round support to homeschoolers since 1994. Find valuable resources at https://www.chaponline.com Check out https://www.homeschoolpennsylvania.org for all information regarding homeschool law in Pennsylvania. Contact us at https://www.chaponline.com/contact-us with your questions or Chattin' with CHAP topics for discussion. Don't miss out on the latest in PA homeschool news! Subscribe to our eNews at https://chaponline.com/subscribe-to-enews/ Donate to support CHAP in the endeavor to encourage, connect, equip, and protect homeschoolers at https://chaponline.com/donate/
Hour 1 with Lynnell Willingham: The NBA Slam Dunk competition is boring, but it shouldn't be. And Lynnell says we're at the mercy of the talent evaluators when it comes to the draft.
Northfield Police Chief Mark Elliott talks about safety during the holidays, officers trained as drug recognition evaluators to detect impairment by drugs, new e-bikes, and more.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC's evaluations of evaluators, published by Sjir Hoeijmakers on November 22, 2023 on The Effective Altruism Forum. The Giving What We Can research team is excited to share the results of our first round of evaluations of charity evaluators and grantmakers! After announcing our plans for a new research direction last year, we have now completed five[1] evaluations that will inform our donation recommendations for this giving season. There are substantial limitations to these evaluations, but we nevertheless think that this is a significant improvement on the status quo, in which there were no independent evaluations of evaluators' work. We plan to continue to evaluate evaluators, extending the list beyond the five we've covered so far, improving our methodology, and regularly renewing our existing evaluations. In this post, we share the key takeaways from each of these evaluations, and link to the full reports. Our website will be updated to reflect the new fund and charity recommendations that came out of these evaluations (alongside many other updates) on Monday, the 27th. We are sharing these reports in advance of our website update so those interested have time to read them and can ask questions before our AMA next Monday and Tuesday. We're also sharing some context about why and how we evaluate evaluators, which will be included in our Monday website update as well. One other exciting (and related) announcement: we'll be launching our new GWWC cause area funds on Monday! These funds (which you'll see referenced in the reports) will make grants based on our latest evaluations of evaluators, advised by the evaluators we end up working with.[2] We are launching them to provide a strong and easy default donation option for donors, and one that will stay up-to-date over time (i.e., donors can set up a recurring donation to these funds knowing that it will always be allocated based on GWWC's latest research). These funds will be available on our donation platform as well. We look forward to your questions and comments, and in particular to engaging with you in our AMA! (Please note that we may not be able to reply to many comments until then, as we are finalising the website updates and some of us will be on leave.) Global health and wellbeing GiveWell (GW) Based on our evaluation, we've decided to continue to rely on GW's charity recommendations and to ask GW to advise our new GWWC Global Health and Wellbeing Fund. Some takeaways that inform this decision include: GW's overall processes for charity recommendations and grantmaking are generally very strong, reflecting a lot of best practices in finding and funding the most cost-effective opportunities. GW's cost-effectiveness analyses stood up to our quality checks. We thought its work was remarkably evenhanded (we never got the impression that the evaluations were exaggerated), and we generally found only minor issues in the substance of its reasoning, though we did find issues with how well this reasoning was presented and explained. We found it noteworthy how much subjective judgement plays a role in its work, especially with how GW compares different outcomes (like saving and improving lives), and also in some key parameters in its cost-effectiveness analyses supporting deworming. 
We think reasonable people could come to different conclusions than GW does in some cases, but we think GW's approach is sufficiently well justified overall for our purposes. For more, please see the evaluation report. Happier Lives Institute (HLI) We stopped this evaluation short of finishing it, because we thought the costs of finalising it outweighed the potential benefits at this stage. For more on this decision and on what we did learn about HLI, please see the evaluation report. Animal welfare EA Funds' Animal Welfare Fund (AWF) Based on our evaluation, we've decide...
The Giving What We Can research team is excited to share the results of our first round of evaluations of charity evaluators and grantmakers! After announcing our plans for a new research direction last year, we have now completed five[1] evaluations that will inform our donation recommendations for this giving season. There are substantial limitations to these evaluations, but we nevertheless think that they are a significant improvement on the status quo, in which there were no independent evaluations of evaluators' work. We plan to continue to evaluate evaluators, extending the list beyond the five we've covered so far, improving our methodology, and regularly renewing our existing evaluations. In this post, we share the key takeaways from each of these evaluations, and link to the full reports. [EDIT 27 November] Our website has now been updated to reflect the new fund and charity recommendations that came out of these [...] --- Outline: (02:26) Global health and wellbeing; (02:30) GiveWell (GW); (03:51) Happier Lives Institute (HLI); (04:10) Animal welfare; (04:14) EA Funds' Animal Welfare Fund (AWF); (05:21) Animal Charity Evaluators (ACE); (08:31) Reducing global catastrophic risks; (08:36) EA Funds' Long-Term Future Fund (LTFF); (10:22) Longview's Longtermism Fund (LLF). The original text contained 4 footnotes which were omitted from this narration. --- First published: November 22nd, 2023 Source: https://forum.effectivealtruism.org/posts/PTHskHoNpcRDZtJoh/gwwc-s-evaluations-of-evaluators --- Narrated by TYPE III AUDIO.
The Giving What We Can research team is excited to share the results of our first round of evaluations of charity evaluators and grantmakers! After announcing our plans for a new research direction last year, we have now completed five[1] evaluations that will inform our donation recommendations for this giving season. There are substantial limitations to these evaluations, but we nevertheless think that this is a significant improvement on the status quo, in which there were no independent evaluations of evaluators' work. We plan to continue to evaluate evaluators, extending the list beyond the five we've covered so far, improving our methodology, and regularly renewing our existing evaluations. In this post, we share the key takeaways from each of these evaluations, and link to the full reports. Our website will be updated to reflect the new fund and charity recommendations that came out of these evaluations (alongside many other updates) on [...] --- Outline: (02:23) Global health and wellbeing; (02:26) GiveWell (GW); (03:43) Happier Lives Institute (HLI); (04:00) Animal welfare; (04:04) EA Funds' Animal Welfare Fund (AWF); (05:07) Animal Charity Evaluators (ACE); (08:13) Reducing global catastrophic risks; (08:17) EA Funds' Long-Term Future Fund (LTFF); (09:57) Longview's Longtermism Fund (LLF). The original text contained 4 footnotes which were omitted from this narration. --- First published: November 22nd, 2023 Source: https://forum.effectivealtruism.org/posts/PTHskHoNpcRDZtJoh/gwwc-s-evaluations-of-evaluators --- Narrated by TYPE III AUDIO.
In this episode, we discuss Cyber Scent Work and some of the changes, updates and new offerings including the launch of the Cyber Sniffing Games Program, the revamped Traditional Cyber Scent Work Program as well as introducing in-person assessments with the new Evaluator Program. YAY! To learn more, be certain to check out: Cyber Scent Work website Cyber Sniffing Games Program Traditional Cyber Scent Work Program Evaluator Program If you have any questions or would like to learn more, contact Dianna directly at dianna@cyberscentwork.com. Speaker: Dianna L. Santos ----more---- Scent Work University is an online dog training platform focused on all things Scent Work. Our online courses, seminars, webinars and eBooks are not only for those who are interested in competition, but also for those dog owners who are simply looking for something fun and engaging to do with their dogs. Check out Scent Work University today! Interested in other dog sports, helping a new dog or puppy learn the ropes to be more successful at home and when out and about? Check out Pet Dog U, where we offer online dog training courses, webinars, mini-webinars, seminars as well as a regularly updated blog and podcast for all of your dog training needs! #allaboutscentworkpodcast #cyberscentwork #cybersniffinggames #traditionalcyberscentwork #scentwork #nosework #scentworktraining #noseworktraining #scentworktrialing #noseworktrialing #scentworkwebinar #noseworkwebinar #onlinescentwork #onlinenosework #virtualscentwork #virtualnosework #scentworku #scentworkuniversity
Hosts Larry and Rebecca Gifford are preparing for Larry's DBS surgery scheduled for October 24, 2023. During the evaluation for DBS surgery, Larry needed to go completely off meds for at least 12 hours. The medical team tests motor symptoms while OFF levodopa and then, after a dose is taken and forty minutes have passed, repeats the tests. Evaluators were looking for a 40% or more difference in Larry's motor symptoms from OFF to ON. It was truly revelatory to see just how much levodopa, the medication commonly used for Parkinson's, impacts daily life. MAIL Larry and Rebecca: ParkinsonsPod@curiouscast.ca LEAVE US A MESSAGE: Have a topic or questions that you would like Larry & Rebecca to address on a future episode? We would love you to click here and leave a message https://www.speakpipe.com/WhenLifeGivesYouParkinsons WATCH: Here is a link to see a comparison of Larry ON Levodopa and OFF levodopa Follow us, Larry & Rebecca Gifford Twitter: @ParkinsonsPod Facebook: Facebook.com/ParkinsonsPod Instagram: @parkinsonspod Thanks to Curiouscast Dila Velazquez – Story Producer Greg Schott – Sound Design Our Presenting Partner is Parkinson Canada. Diagnosed with Parkinson's? You are not alone. Contact presenting partner Parkinson Canada http://www.parkinson.ca/, call the toll free hotline 1-800-565-3000 or on Twitter you can message @ParkinsonCanada. Thanks also to PD Avengers – We are building a global alliance to end Parkinson's. Join us. www.pdavengers.com
Microsoft plans to sell a new version of Databricks software that helps customers make AI apps for their businesses, potentially hurting OpenAI's business. Businesses should prioritize customer experience over cost reduction when implementing AI, according to an article titled "How NOT to apply Artificial Intelligence in your business". Three AI research papers were discussed, including a multi-agent debate framework for language model evaluation, a curricular subgoal-based framework for inverse reinforcement learning, and a parameter-efficient module operation approach for deficiency unlearning in large language models. Contact: sergi@earkind.com Timestamps: 00:34 Introduction 01:32 Microsoft Plans AI Service With Databricks That Could Hurt OpenAI 02:46 AI news are dire this august 04:12 How NOT to apply Artificial Intelligence in your business 05:37 Fake sponsor 07:37 ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate 09:20 Curricular Subgoals for Inverse Reinforcement Learning 11:08 Separate the Wheat from the Chaff: Model Deficiency Unlearning via Parameter-Efficient Module Operation 12:51 Outro
Today is the first Q&A in two months! There are a lot of questions, especially about law and school district issues, along with questions about convention, PA state curriculum, and concerns about evaluators with different viewpoints. Listen in and get updated! Chattin' with CHAP is a series of informational podcasts designed to equip and encourage families on their homeschooling journeys. CHAP is the Christian Homeschool Association of Pennsylvania and has provided year-round support to homeschoolers since 1994. Find valuable resources at https://www.chaponline.com Check out https://www.homeschoolpennsylvania.org for all information regarding homeschool law in Pennsylvania. Contact us at https://www.chaponline.com/contact-us with your questions or Chattin' with CHAP topics for discussion. Don't miss out on the latest in PA homeschool news! Subscribe to our eNews at https://chaponline.com/subscribe-to-enews/ Donate to support CHAP in the endeavor to encourage, connect, equip, and protect homeschoolers at https://chaponline.com/donate/
In this hour, Adam Crowley and Dorin Dickerson talk to Mike DeFabo of The Athletic to get a report from Steelers' camp. Also, should Mitch Keller get a new contract at the end of the year? And "That's right, I said it!"
Rob Pizzola and Johnny from betstamp discuss the legend of Wilt Chamberlain, react to injury news in the futures markets, evaluate tout services, and give their plus-EV and negative-EV moves. Looking to sign up at new sportsbooks? Support Circles Off when you do! www.betstamp.app/circlesoff If you want more content like this, DM us on the Circles Off Twitter account and let us know what to react to next!
The Center Collaborative: Creative Solutions in Behavioral Health and Criminal Justice
Dr. Andrew Orf, partner at Lithia Forensic and Consulting LLC and a certified forensic evaluator, discusses:
- Oregon began certifying forensic evaluators in 2012 for fitness-to-proceed evaluations, and the courts now prefer Certified Forensic Evaluators for pre-adjudication services.
- The level of nuance between evaluations, as they combine the clinical perspective with the legal perspective.
- The many clinical components to consider, such as neurocognitive conditions, personality disorders, or substance use.
- Legal considerations for statutory evaluations related to an individual's intent. Evaluations are also conducted to determine whether a person's qualifying mental health disorder impacted their capacity to form intent.
- The pressing need for more Certified Forensic Evaluators, as many people are in correctional settings and end up waiting for evaluations.
- When people are acutely ill, there are few, if any, places to send them for help, as the bar for civil commitment in Oregon is very high.
- Rapid evaluations increase access and timeliness in more rural areas, as the majority of certified evaluators are in the Portland, Eugene, and Salem areas. There is a collaborative effort between community mental health programs, the district attorney, the courts, and the defense attorney to identify who needs a rapid evaluation.
- Regular consultation with other evaluators is important for maintaining wellness as a clinician doing these evaluations day in and day out.
- Certified Forensic Evaluators can conduct several different types of evaluations based on requests from the court, such as guilty except for insanity, juvenile waiver evaluations, risk assessments, mental health evaluations for qualification, neuropsychological evaluations, and civil evaluations.
- People assume that evaluators are advocates, but they strive to be independent and ethical.
- There are layers of complexity within human beings. People can have multiple underlying conditions that make it difficult to reach a conclusive answer as to what drives behavior. Two well-trained, experienced evaluators can disagree on a diagnosis, and neither is necessarily wrong.
- Nuanced and well-thought-out evaluations are crucial due to the real-world implications and ripple effects for people. There is a vested interest from the general public because once the process is initiated, costs pile up. One day at the Oregon State Hospital for a person costs over $1,000.
- Oregon is in a transitional phase with mental health conversations and legislation. It's easy for everyone involved to point out the problems, but very hard to come up with solutions.
For more information about the intersection between criminal justice and behavioral health in Oregon, please reach out to us through our website at http://www.ocbhji.org/podcast and Facebook page at https://www.facebook.com/OCBHJI/. We'd love to hear from you. Notice to listeners: https://www.ocbhji.org//podcast-notice
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC Newsletter: June 2023, published by Giving What We Can on July 4, 2023 on The Effective Altruism Forum. Hello and welcome to our June newsletter! Pop quiz! If you travelled to visit every person who's taken the GWWC pledge, how many different countries would you visit? Answer: 100 countries! With the newest addition of Uzbekistan, Giving What We Can members are present in 100 countries worldwide! Turns out the idea of giving to help others effectively has universal appeal! Even though our movement is growing, we need your help - talking to your friends, family and colleagues is one of the best ways to help us change the norms around giving, which in turn means faster progress on some of the world's biggest issues. We'd love to know how we can help you talk to the people in your life about high-impact charities and how we can help you advocate. (We already have lots of ideas and resources here.) If you have any ideas about what you would find helpful, simply reply or send me a quick email at: grace.adams@givingwhatwecan.org. Below you'll find loads of interesting updates from our partner charities and other news we think you'll like! With gratitude, - Grace Adams & the Giving What We Can team. News & Updates: Community: Our Executive Director Luke Freeman recently published a post on the EA Forum about the role of individuals in helping to fund high-impact projects and charities as well as hosting an AMA (Ask Me Anything) about his work, life and more! Director of Research, Sjir Hoeijmakers published a post on the EA Forum with “Four claims about the role of effective giving in the EA Community”. Power for Democracies is a new non-profit democracy charity evaluator based in Berlin, Germany, and operating globally. They are looking to hire 5-6 democracy enthusiasts to form their ‘Knowledge & Research Team'. The objective of the team is twofold: to build and execute a ‘knowledge-building roadmap' that will lead to a growing set of methodologies for identifying highly effective pro-democracy interventions and potential NGOs to apply them, and to use these methodologies to generate giving recommendations for the international community of democracy-focused, effectiveness-driven donors. Magnify Mentoring is still accepting mentee applications for the next day or two from women, non-binary, and trans people of any gender who are enthusiastic about pursuing high-impact career paths. On average, mentees and mentors meet once a month for 60-90 minutes. Magnify Mentoring offers mentees access to a broader community with a wealth of professional and personal expertise. You can find out more here and apply here. Evaluators, grantmakers and incubators: Updates to ACE's Charity Evaluation Criteria in 2023: Animal Charity Evaluators (ACE) is entering its 2023 charity evaluation season! This is the time of year when ACE works to identify charities that can do the most good for animals with two years of additional funding. To provide more transparency and insight into its evaluation process, ACE is sharing some changes it made to its four charity evaluation criteria this year. ACE's Updated Strategic Plan: One year ago, in 2022, ACE developed a strategic plan for the period of 2022–2024.
This plan, created collectively by ACE staff under the leadership of the Acting Executive Director and approved by the board of directors, was the result of the hard work and dedication of a severely understaffed team. It represented what was needed then. Things have changed since last year. ACE added several talented individuals to their team, including new leadership and board members. ACE now has an updated strategic plan and is looking forward to testing its assumptions and delivering results. GiveWell CEO and co-founder Elie Hassenfeld was interviewed on the 80,000 Hours podcast about newer areas of ...
Kasim reveals the SECRET FORMULA so Google LOVES your content! This guide can help you establish yourself as a thought leader in any industry, in addition to helping you rank at the top of the SERPs. The secret is EEAT, part of Google's Search Quality Rater Guidelines. Evaluators use it to assess the quality and credibility of content. While it's meant to be a litmus test for webpage quality, we think it's an amazing rules engine for anyone trying to establish themselves as a thought leader. Listen to this episode to learn what EEAT stands for. And as a bonus, Kasim also adds his own rule!
Don and Jessica discuss child custody evaluators and what to be aware of when the court appoints an evaluator in your case.
On today's episode, I talked with my fellow forensic psychologist colleagues, Drs. Collins, Delatorre, and Haji, about their careers as expert witnesses and forensic evaluators. Listener questions briefly addressed on the episode include: How did you find yourself working in the field? Are there things you wish you had done differently en route to becoming a psychologist? How do you obtain work as an expert witness? What is a dilemma or hurdle you have come across when working as an expert witness? How do you prepare for court as an expert witness? What is the process like? What is the difference between forensic evaluations done in private practice versus evaluations done by psychologists working in a prison? What is your day like as a forensic evaluator? Where can you get hired if you want to do forensic evaluation work? How many hours does one case typically take? If forensic psychology is a field of interest, what is the first thing someone should do after undergrad? Any current forensic psych hot topics you are particularly interested in right now? About the Guests: Dr. John Delatorre is a licensed psychologist in Texas, Arizona, and New York State. He has a private practice focused on forensic psychology, primarily doing criminal work. Dr. Delatorre has a Master's degree in Jurisprudence from St. Mary's University School of Law and is often retained as a trial consultant and mediator. He provides expert analysis to the media as well as commentary on live trials for Court TV and the Law & Crime Trial Network. He is the co-host of the Without Consent Podcast. You can find him on social media @drjohndelatorre and through his website www.resolutionfcs.com. Dr. Lina Haji is a licensed clinical and forensic psychologist and licensed mental health counselor practicing in the Miami, Florida area. Her clinical experience over the last 20 years includes working with mentally ill and dually diagnosed adults in inpatient and outpatient settings, including correctional facilities, substance abuse rehabilitation centers, outpatient clinics, psychiatric hospitals, and private practice in four states: NY, NJ, CA, and FL. She currently works in private practice conducting clinical and forensic evaluations. She can be found at www.risepsychological.com and on IG at Rise_psychological_com. Dr. Michael Collins is the owner and Chief Neuropsychologist/Mental Health Expert of the Clinical Neuropsychology Center. Dr. Collins has testified over 100 times as an expert witness and has been court appointed or retained for over 1,000 psychological evaluations. Prior to forming the Clinical Neuropsychology Center, Dr. Collins was the director of Psychology at South University and has since developed the Broward County Diversion program and become a national expert for his work in forensic neuropsychology, mental health assessment, and risk management. Dr. Collins earned his PhD in Clinical Neuropsychology from Nova Southeastern University and completed residencies in forensic psychology and neuropsychology. Dr. Collins is a vendor with the state of Florida and performs expert witness evaluations throughout the state. Contact Dr. Michael Collins: Office: (754) 202-4443 | Email: mjcollinsphd@thecncenter.com | https://thecncenter.com/ Thanks for listening! See you again in two weeks for another amazing episode unraveling psychology and the law. Please Note: The podcast shows, guests, and all linked content are for educational and informational purposes only. They do not constitute medical, psychiatric, or legal advice. 
Nor are they intended to replace professional advice from your healthcare or legal professional. Last, the show is not a substitute for supervision; please continue to seek appropriate guidance from your clinical supervisor. The show content is to be used at listeners' own risk. I invite you to show your support for the show by: telling your friends and colleagues about the show; subscribing (free) and leaving a rating/review; finding and connecting with Dr. Vienna on Twitter, TT, Fb, or IG to continue the discussion. Connect with Dr. Vienna: LinkedIn: Dr. Nicole M. Vienna | IG: @drnicolevienna | Facebook: Vienna Psychological Group, Inc. Are you an attorney looking for a forensic evaluation? Book a FREE 20-minute consultation with Dr. Vienna here.
Today Ginger tackles tons of questions! From cyber school to evaluators to finding homeschool resources to convention. There is so much packed into this episode. If you have questions about convention, listen in - chances are your question will be answered here! For more information about convention, check out our convention website at https://conv.chaponline.com/ Chattin' with CHAP is a series of informational podcasts designed to equip and encourage families on their homeschooling journeys. CHAP is the Christian Homeschool Association of Pennsylvania and has provided year-round support to homeschoolers since 1994. Find valuable resources at www.chaponline.com Check out www.homeschoolpennsylvania.org for all information regarding homeschool law in Pennsylvania. Contact us at www.chaponline.com/contact-us with your questions or Chattin' with CHAP topics for discussion. Don't miss out on the latest in PA homeschool news! Subscribe to our eNews at https://chaponline.com/subscribe-to-enews/ Donate to support CHAP in the endeavor to encourage, connect, equip, and protect homeschoolers at https://chaponline.com/donate/
In the third hour, Mike Mulligan and David Haugh continued their conversation with football analyst Dave Wannstedt, with a focus on what talent evaluators value most at the NFL Combine. Later, Mully and Haugh listened and reacted to comments from Ohio State quarterback C.J. Stroud and Alabama quarterback Bryce Young at the NFL Combine.
In this episode, Dr. Michael Quinn Patton (aka MQP) joins me to talk about his work. He is a prolific writer and deep thinker and has influenced the careers of many evaluators. In this episode we discuss: How his work has changed over time (you will hear about utilization-focused evaluation, developmental evaluation, and the use of a principles approach to evaluation). How he thinks about "community." Why understanding "systems" is hard for many community members, and how those of us who work with them can help them begin to think from a systems perspective. Hint: metaphor and story help! How operating from principles can serve as a guide for community coalitions and other community-based organizations. Thinking and acting locally and globally. Why virtual connections are our future. How connecting with each other can help with so many social problems. What he is working on now. Bio: Michael Quinn Patton is an independent evaluation and organizational development consultant based in Minnesota, USA. He is a former President of the American Evaluation Association (AEA) and author of eight major evaluation books, including the 5th edition of Utilization-Focused Evaluation and the 4th edition of Qualitative Research and Evaluation Methods, used in over 500 universities worldwide. He has also authored books on Practical Evaluation, Creative Evaluation, and Developmental Evaluation: Applying Systems Thinking and Complexity Concepts to Enhance Innovation and Use. He co-authored a book on the dynamics of social innovation and transformation with two Canadians, entitled Getting to Maybe: How the World Is Changed. He is a recipient of the Myrdal Award for Outstanding Contributions to Useful and Practical Evaluation Practice, the Lazarsfeld Award for Lifelong Contributions to Evaluation Theory, and the 2017 Research on Evaluation Award, all from AEA. EvalYouth recognized him with the first Transformative Evaluator Award in 2020. He regularly conducts training for The Evaluators' Institute and the International Program for Development Evaluation Training. In 2018 he published books on Principles-Focused Evaluation (Guilford Press) and Facilitating Evaluation: Principles in Practice (Sage Publications). In 2020 his new book on evaluating global systems transformations was published, entitled Blue Marble Evaluation: Premises and Principles. He has also co-edited a book entitled THOUGHT WORK: Thinking, Action, and the Fate of the World (Rowman & Littlefield Publishing, 2020). Connect with Michael on his website. Like what you heard? Please like and share wherever you get your podcasts! Connect with Ann: Community Evaluation Solutions. How Ann can help: · Support the evaluation capacity of your coalition or community-based organization. · Help you create a strategic plan that doesn't stress you and your group out, doesn't take all year to design, and is actionable. · Engage your group in equitable discussions about difficult conversations. · Facilitate a workshop to plan for action and get your group moving. · Create a workshop that energizes and excites your group for action. · Speak at your conference or event. Have a question or want to know more? Book a call with Ann. Be sure to check out our updated resource page and let us know what was helpful. Community Possibilities is produced by Zach Price. Music by Zach Price: Zachpricet@gmail.com
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Air-gapping evaluation and support, published by Ryan Kidd on December 26, 2022 on LessWrong. This blog post was written fast to communicate a concept I think is important. I may edit this post for legibility later. I think evaluation and support mechanisms should be somewhat “air-gapped,” or isolated, in their information-gathering and decision-making processes. The incentives of optimal evaluators (to critique flaws) seem to run counter to the incentives of optimal supporters (to improve flaws). Individuals who might benefit from support may be discouraged from seeking it by fear of harsher evaluation if their private struggles are shared with evaluators. Evaluators who want to provide support may worry about compromising their evaluation ability if they make inconsistent exceptions. To optimally evaluate and support individuals, I believe that it is necessary to establish and declare appropriate information air gaps between different ecosystem roles. Evaluation mechanisms, such as academic exams, job interviews, grant applications, and the peer review process, aim to critique an individual or their output. To be maximally effective, evaluation mechanisms should be somewhat adversarial to identify flaws and provide useful criticism. It is in the interests of evaluators to have access to all information about a candidate; however, it is not always in the candidate's best interests to share all information that might affect the evaluation. It is also in the interests of evaluators for candidates to get access to all the support they need to improve. If an attribute that disadvantages a job candidate (e.g., a disability) is protected by antidiscrimination law, an evaluator may be biased against the attribute either unconsciously or on the basis that it might genuinely reduce performance. Of course, evaluators should be required to ignore or overcome biases against protected attributes, but this “patch” may break or fail to convince candidates to divulge all evaluation-relevant information. Additionally, in the case that a candidate shares sensitive information with an evaluator, they might not have the appropriate resources or experience to provide beneficial support. Thus, an independent support role might benefit the interests of evaluators. Support mechanisms, such as psychological counseling, legal support, and drug rehabilitation programs, aim to help individuals overcome their personal challenges, often to improve their chances at evaluation. To be maximally effective, support mechanisms should encourage candidates to divulge highly personal and unflattering information. It is in the interests of supporters to guarantee that sensitive information that could affect evaluation is not shared with evaluators (barring information that might prevent harm to others). Generally, the more information a supporter can access, the better support they can provide. If a candidate has a secret challenge (e.g., a drug problem) that might rightly bias an evaluator (e.g., an employer), they might be motivated not to seek support for this problem if the supporter (e.g., a psychologist or support group) cannot guarantee this information will be kept private. Candidates can be told that evaluators will not punish them for revealing sensitive information, but this policy seems difficult to enforce convincingly. 
Thus, it is in the interests of supporters to advertise and uphold confidentiality. A consequentialist who wants to both filter out poor candidates and benefit candidates who could improve from support will have to strike a balance between the competing incentives of evaluation and support. One particularly effective mechanism used in society is establishing and advertising independent, air-gapped evaluators and supporters. I think the EA and AI safety communities could benefi...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Evaluating the evaluators": GWWC's research direction, published by SjirH on November 24, 2022 on The Effective Altruism Forum. This post is about GWWC's research plans for next year; for our giving recommendations this giving season please see this post and for our other activities see this post. The public effective giving ecosystem now consists of over 40 organisations and projects. These are initiatives that either try to identify publicly accessible philanthropic funding opportunities using an effective-altruism-inspired methodology (evaluators), or to fundraise for the funding opportunities that have already been identified (fundraisers), or both. Over 25 of these organisations and projects are purely fundraisers and do not have any research capacity of their own: they have to rely on evaluators for their giving recommendations, and in practice currently mainly rely on three of those: GiveWell, Animal Charity Evaluators and Founders Pledge. At the moment, fundraisers and individual donors have very little to go on to select which evaluators they rely on and how to curate the exact recommendations and donations they make. These decisions seem to be made based on public reputation of evaluators, personal impressions and trust, and perhaps in some cases a lack of information about existing alternatives or simple legacy/historical artefact. Furthermore, many fundraisers currently maintain separate relationships with the evaluators they use recommendations from and with the charities they end up recommending, causing extra overhead for all involved parties. Considering this situation and from checking with a subset of fundraising organisations, it seems there is a pressing need for (1) a quality check on new and existing evaluators (“evaluating the evaluators”) and (2) an accessible overview of all recommendations made by evaluators whose methodology meets a certain quality standard. This need is becoming more pressing with the ecosystem growing both on the supply (evaluator) and demand (fundraiser) side. The new GWWC research team is looking to start filling this gap: to help connect evaluators and donors/fundraisers in the effective giving ecosystem in a more effective (higher-quality recommendations) and efficient (lower transaction costs) way. Starting in 2023, the GWWC research team plan to evaluate funding opportunity evaluators on their methodology, to share our findings with other effective giving organisations and projects, and to promote the recommendations of those evaluators that we find meet a certain quality standard. In all of this, we aim to take an inclusive approach in terms of worldviews and values: we are open to evaluating all evaluators that could be seen to maximise positive impact according to some reasonably common worldview or value system, even though we appreciate the challenge here and admit we can never be perfectly “neutral”. We also appreciate this is an ambitious project for a small team (currently only 2!) to take on, and expect it to take us time to build our capacity to evaluate all suitable evaluators at the quality level at which we'd like to evaluate them. Especially in this first year, we may be limited in the number of evaluators we can evaluate and in the time we can spend on evaluating each, and we may not yet be able to provide the full "quality check" we aim to ultimately provide. 
We'll try to prioritise our time to address the most pressing needs first, and aim to communicate transparently about the confidence of our conclusions, the limitations of our processes, and the mistakes we are inevitably going to make. We very much welcome any questions or feedback on our plans, and look forward to working with others on further improving the state of the effective giving ecosystem, getting more money to where it is needed mos...
PennLive's Johnny McGonigal and Bob Flounders react to James Franklin's Tuesday news conference which included several questions about the Penn State quarterback room. Sean Clifford turned the ball over four times in PSU's loss to Ohio State and Franklin knows he has a unique talent in freshman Drew Allar. How will Franklin play it the rest of the season? Olu Fashanu might be the best tackle in the Big Ten. Is this his last year in State College? Learn more about your ad choices. Visit megaphone.fm/adchoices
Mac Jones ranked in Tier 3 of 5 by NFL evaluators
It happens every cycle. Evaluators gravitate to signal-callers with the raw traits to be game-changing players at the NFL level. It's true – quarterbacks need high-level talent to be starters in the professional realm. But there are also many more components vital to successful quarterback play. As an evaluator, how does one go about balancing the physical and mental traits? PFN Draft analysts Oli Hodgkinson and Ian Cummings discuss this, as well as the latest discourse about the 2023 OT class. Learn more about your ad choices. Visit megaphone.fm/adchoices
Come Join Us! as we discuss the new WNBA season, its 26th season, what's hot and what's not! We discuss Candace Parker, Liz Cambage, Chelsea Gray, Dearica Hamby, Rhyne Howard and more! Thank you for your support and downloading, sharing, rating, and following us on social media and all of your podcast platforms. --- Send in a voice message: https://anchor.fm/adrienne-goodson/message
Kevin discusses movements in the NFL draft prop markets, explains why QB evaluators are good at what they have studied rather than what they haven't, and opens the mailbag.
In this episode, Brook has invited Dr. Luke Dalflume, PhD, to discuss how custody evaluations are conducted, why he sees the process as sometimes excessive and resource-consuming, and some of the ways he, along with other professionals, is trying to develop a more streamlined process. Enjoy!!!
Matt Rhule's coaching staff continues to come together as he replaces the recently departed Jason Simmons with former Carolina Panthers defensive coordinator and Charlotte native Steve Wilks. How much has Rhule upgraded his staff? Following the Senior Bowl, CBS Sports NFL insider Jason La Canfora talked to several NFC and AFC talent evaluators and executives, and the consensus was that Pitt QB Kenny Pickett is very likely to be a Carolina Panther come the NFL Draft. How much of a role should David Tepper and Rhule's connection to Pickett factor into the possible decision? J.J. Jansen is back for another year to compete at long snapper. Is it in the best interest of the franchise for him to lose the battle against Thomas Fletcher? Plus, the NFL announced on Wednesday that they'll be playing four games over the next four seasons in Germany. Could the Panthers be a participant? Support Us By Supporting Our Sponsors! Built Bar: Built Bar is a protein bar that tastes like a candy bar. Go to builtbar.com and use promo code "LOCKED15," and you'll get 15% off your next order. GetUpside: Just download the FREE GetUpside App and use promo code TOUCHDOWN to get 25 cents per gallon or more cash back on your first tank. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Whether you're working with a young child, teen, or adult, executive functioning skills are among the most critical and practical skills we need. In this interview with Sara Ward, an SLP specializing in executive function, she shares the unique background that makes this work so special to her, as well as some really interesting approaches to assessment and intervention. Many people have different explanations for executive functioning. Sara defines executive functioning for young children in its most basic form: sequencing. As you reach middle and high school, you're continuing to plan within a window of time and space that is continuously growing. It's very easy for parents and professionals to become a child's "prosthetic frontal lobe": we visualize the students through space and time, and we at times over-prompt. In a neurotypical brain, this is the ability to visualize where you are in a future time or space. 90% of the time, task planning happens in a different place from where you execute the plan. Naturally, as you plan your day and anticipate the tasks necessary to accomplish your daily routine, you may use gestures to prompt your steps. An intervention Sara uses that is really successful with young children is teaching them to gesture. A child with really great executive functioning skills will use very specific verbs to describe the steps necessary for their future plans. With a child who is lacking in executive function, you might prompt them to show you with their hands. Oftentimes, when students are able to feel the steps with their hands, the attached verb comes. So there is language and movement attached to task execution. How do you determine the need for executive functioning support? Assessments are tricky because SLPs are not licensed to administer tests related to neuro-capability. Evaluators tend to look at executive skills through observation and rating scales. Sara recommends the Barkley Attention Deficit Executive Function Scale because of the way it differentiates attention deficits from executive skills in the individual. The CEFI, Clinical Executive Function Inventory, is an online tool Sara suggests for accurately characterizing kids' behaviors related to executive functioning. She also mentions several other tests and scales that can be used, in addition to looking at existing speech and language assessments through an executive functioning lens. Sara provides so many great suggestions and tools for working with students on executive functioning, along with the program she developed. As an experienced SLP, I found this information so enlightening. I cannot wait to take these tools to my next IEP meeting and to my therapy. I hope you found this just as helpful. #autism #speechtherapy What's Inside: What is executive functioning? Why are executive function skills important? Assessment and intervention for executive function skills. Executive functioning in young children, teens, and adults. Mentioned In This Episode: ABA Speech: Home | Cognitive Connections: Executive Function