You know when a company brags about its “great culture,” but the employees look dead inside? That's because culture isn't what leaders say it is — it's what customers feel. In this episode, Mark Rampolla, founder of ZICO Coconut Water and managing partner at Ground Force Capital, tells us how culture quietly shapes every customer interaction. From Liquid Death's branding genius to why “culture fit” hiring is a terrible idea, Mark breaks down what it really takes to build a company people actually want to engage with. We also dive into the “need behind the need” (AKA why customers don't buy what you're selling but what it does for them). Mark shares how ZICO won over yoga studios by solving problems beyond hydration and why understanding where your customers make their money is the key to selling. If your culture, hiring, or customer experience feels off, this conversation holds the solution you're looking for. Key Moments: 00:00 Who is Mark Rampolla, founder of ZICO & managing partner at GroundForce Capital? 01:02 Building a Movement 02:13 Why Company Culture Matters 03:35 Culture in Action: Real-World Examples 08:27 Hiring a Culture Add & Customer Obsession 17:36 Assessing and Evolving Company Culture 24:42 Understanding the Need Behind the Need 25:06 Real-World Examples of Customer Empathy 26:17 Building Relationships with Yoga Studios 29:31 Marketing Strategies and ROI 31:34 Hypothesis Testing & the Opportunities in Operational Failure 38:17 Active Listening and Empathy in Business 41:24 Impressive Brand Experiences 43:18 Mark's Key Advice for CX Leaders – Are your teams facing growing demands? Join CX leaders transforming their strategies with Agentforce. Start achieving your ambitious goals. Visit salesforce.com/agentforce. Mission.org is a media studio producing content alongside world-class clients. Learn more at mission.org
Title: Episode 6 - Hypothesis Testing (e.g., Type I and Type II Errors, P-values) Target Audience: This activity is directed to physicians who take care of hospitalized children, medical students, nurse practitioners, and physician assistants working in the emergency room, intensive care unit, or hospital wards. Objectives: Upon completion of this activity, participants should be able to: 1. Discuss the definition and relevance of p-values. 2. Discuss Type I vs. Type II errors. 3. Discuss statistical significance and what it means. Course Directors: Tony R. Tarchichi MD — Associate Professor, Department of Pediatrics, Children's Hospital of Pittsburgh of the University of Pittsburgh Medical Center (UPMC), Paul C. Gaffney Division of Pediatric Hospital Medicine. No relationships with industry relevant to the content of this educational activity have been disclosed. Jenna Carlson PhD - University of Pittsburgh - Assistant Professor of Human Genetics and Biostatistics in the School of Public Health. No relationships with industry relevant to the content of this educational activity have been disclosed. Conflict of Interest Disclosure: No other planners, members of the planning committee, speakers, presenters, authors, content reviewers and/or anyone else in a position to control the content of this education activity have relevant financial relationships to disclose. Accreditation Statement: In support of improving patient care, the University of Pittsburgh is jointly accredited by the Accreditation Council for Continuing Medical Education (ACCME), the Accreditation Council for Pharmacy Education (ACPE), and the American Nurses Credentialing Center (ANCC) to provide continuing education for the healthcare team. The University of Pittsburgh School of Medicine designates this enduring material activity for a maximum of 0.5 AMA PRA Category 1 Credits™. Physicians should only claim credit commensurate with the extent of their participation in the activity. Other health care professionals will receive a certificate of attendance confirming the number of contact hours commensurate with the extent of participation in this activity. Disclaimer Statement: The information presented at this activity represents the views and opinions of the individual presenters and does not constitute the opinion or endorsement of, or promotion by, the UPMC Center for Continuing Education in the Health Sciences, UPMC / University of Pittsburgh Medical Center or Affiliates, and the University of Pittsburgh School of Medicine. Reasonable efforts have been taken to present the educational subject matter in a balanced, unbiased fashion and in compliance with regulatory requirements. However, each program attendee must always use his/her own personal and professional judgment when considering further application of this information, particularly as it may relate to patient diagnostic or treatment decisions including, without limitation, FDA-approved uses and any off-label uses. Released 2/20/2025, Expires 2/20/2028. The direct link to the course is provided below: https://cme.hs.pitt.edu/ISER/app/learner/loadModule?moduleId=25580&dev=true
David Morton is a technologist with extensive experience across various sectors, including retail, finance, consulting, energy, and commodities trading. David has successfully contributed to companies of all sizes, from small startups to large enterprises with up to 60,000 employees. Renowned for his ability to simplify complex concepts and solutions, he believes in using the most effective tools to address challenges efficiently and elegantly. Topics of Discussion: [2:41] David Morton's background and early career. [5:30] What is a data scientist? [7:35] Data Science vs. Software Engineering. [12:08] Hypothesis Testing and Model Building. [12:49] David explains the concept of a model in data science, using the metaphor of how a grandmother thinks about someone. [13:04] How models are mathematical representations of the real world, used for prediction and analysis. [15:06] Data science models vs. a GPT model. [18:08] The importance of using the right tool for the job. [26:10] The operational side of data science and the role of machine learning. [35:56] Practical examples of Data Science applications. Mentioned in this Episode: Clear Measure Way; Architect Forum; Software Engineer Forum; Programming with Palermo — New Video Podcast! Email us at programming@palermo.net. Clear Measure, Inc. (Sponsor); .NET DevOps for Azure: A Developer's Guide to DevOps Architecture the Right Way, by Jeffrey Palermo — Available on Amazon! Jeffrey Palermo's Twitter — Follow to stay informed about future events! David Morton LinkedIn; David Morton GitHub. Want to Learn More? Visit AzureDevOps.Show for show notes and additional episodes.
Bayesian statistics allows combining prior information about a population with the current sample from an experiment to create stronger inferences. Dr. Taylor Winter, Senior Lecturer in Mathematics and Statistics at the University of Canterbury, uses Bayesian methods to investigate a range of societal and group factors (Social Psychology). Dr. Winter takes us through some of the basic ideas around Bayesian statistics and how it differs from traditional methods of hypothesis testing in research. We discuss examples from his work on authoritarianism and social identity theory, as well as learn the differences between his time working in industry vs. academia. Lastly, we discuss his culture-focused projects, including Dungeons and Dragons and how Māori culture can manifest behavioural change. Support the show. Support us and reach out! https://smoothbrainsociety.com Instagram: @thesmoothbrainsociety TikTok: @thesmoothbrainsociety Twitter/X: @SmoothBrainSoc Facebook: @thesmoothbrainsociety Merch and all other links: Linktree Email: thesmoothbrainsociety@gmail.com
Let's examine a handful of parametric and non-parametric comparison tools, including various hypothesis tests. The post Fundamentals of Hypothesis Testing appeared first on Accendo Reliability.
Jeff Wetzler, co-CEO of Transcend, brings over 25 years of expertise in learning and human potential. With a background spanning business and education, he's served as a management consultant to leading corporations, facilitated learning for global leaders, and held the role of Chief Learning Officer at Teach For America. Jeff holds a doctorate in adult learning and leadership from Columbia University and a bachelor's in psychology from Brown University. He is also the author of Ask: Tap Into the Hidden Wisdom of People Around You For Unexpected Breakthroughs in Leadership and Life. ___ Get your copy of Personal Socrates: Better Questions, Better Life. Connect with Marc >>> Website | LinkedIn | Instagram | Twitter. Drop a review and let me know what resonates with you about the show! Thanks as always for listening and have the best day yet! *Behind the Human is proudly recorded in a Canadian-made Loop Phone Booth* Special props
To explain the concepts of hypothesis testing (null and alternative hypothesis statements), I use an example of James Bond. His famous line when ordering a martini is "shaken, not stirred." In a hypothesis test, we would set the null (default) hypothesis to say that he cannot tell the difference. The alternative hypothesis would be supported if the data showed he could tell the difference (better than guessing or chance). Hope this helps you understand the concept of hypothesis testing a little more clearly. Are you interested? Contact me at brion@biz-pi.com Links Need help in your organization? Let's talk! Schedule a free support call Podcast Sponsor: Creative Safety Supply is a great resource for free guides, infographics, and continuous improvement tools. I recommend starting with their 5S guide. It includes breakdowns of the five pillars, ways to begin implementing 5S, and even organization tips and color charts. From red tags to floor marking, it's all there. Download it for free at creativesafetysupply.com/5S BIZ-PI.com LeanSixSigmaDefinition.com Have a question? Submit a voice message at Podcasters.Spotify.com --- Send in a voice message: https://podcasters.spotify.com/pod/show/leansixsigmabursts/message
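A tiny numerical sketch of that setup (the trial counts below are invented purely for illustration): under the null hypothesis Bond is just guessing, so his number of correct calls follows a Binomial(n, 0.5) distribution, and a one-sided binomial test tells us how surprising his score would be if he really could not tell the difference.

```python
# Hypothetical example: Bond correctly identifies the preparation in 13 of 16 blind trials.
# H0: he cannot tell the difference (success probability = 0.5, pure guessing)
# H1: he does better than guessing (success probability > 0.5)
from scipy.stats import binomtest

result = binomtest(k=13, n=16, p=0.5, alternative="greater")
print(f"one-sided p-value = {result.pvalue:.4f}")
# A small p-value (below the chosen alpha, e.g. 0.05) means we reject H0 and
# conclude the data support the claim that Bond can tell shaken from stirred.
```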
In this episode, my guest is Chris Voss, a former Federal Bureau of Investigation (FBI) agent who was the lead negotiator in many high-risk, high-consequence cases. Chris has taught negotiation courses at Harvard and Georgetown Universities and is the author of the book “Never Split the Difference.” We discuss how to navigate difficult conversations of all kinds, including in business, romance and romantic breakups, job firings and tense conversations with family and friends. Chris explains how to navigate online, in person and in written negotiations, the red flags to watch out for and how to read body and voice cues in face-to-face and phone conversations. He explains how to use empathy, certain key questions, proactive listening, emotional processing and more to ensure you reach the best possible outcome in any hard conversation. This episode ought to be of interest to anyone looking to improve their interpersonal abilities and communication skills and for those who want to be able to keep a level head in heated discussions. For show notes, including referenced articles and additional resources, please visit hubermanlab.com. Thank you to our sponsors AG1: https://drinkag1.com/huberman Plunge: https://plunge.com/huberman ROKA: https://roka.com/huberman InsideTracker: https://insidetracker.com/huberman InsideTracker Giveaway: http://fitnessfuelslongevity.com Momentous: https://livemomentous.com/huberman Timestamps (00:00:00) Chris Voss (00:02:18) Sponsors: Plunge & ROKA (00:04:59) Negotiation Mindset, Playfulness (00:11:41) Calm Voice, Emotional Shift, Music (00:18:59) “Win-Win”?, Benevolent Negotiations, Hypothesis Testing (00:28:38) Generosity (00:32:46) Sponsor: AG1 (00:33:44) Hostile Negotiations, Internal Collaboration (00:39:40) Patterns & Specificity; Internet Scams, “Double-Dip” (00:48:15) Urgency, Cons, Asking Questions (00:54:46) Negotiations, Fair Questions, Exhausting Adversaries (01:01:09) Sponsor: InsideTracker (01:02:18) “Vision Drives Decision”, Human Nature & Investigation (01:07:47) Lying & Body, “Gut Sense” (01:15:42) Face-to-Face Negotiation, “738” & Affective Cues (01:20:39) Online/Text Communication; “Straight Shooters” (01:26:47) Break-ups (Romantic & Professional), Firing, Resilience (01:32:16) Ego Depletion, Negotiation Outcomes (01:37:35) Readiness & “Small Space Practice”, Labeling (01:45:17) Venting, Emotions & Listening; Meditation & Spirituality (01:51:41) Physical Fitness, Self-Care (01:57:01) Long Negotiations & Recharging (02:02:40) Hostages, Humanization & Names (02:08:50) Tactical Empathy, Compassion (02:15:27) Tool: Mirroring Technique (02:22:20) Tool: Proactive Listening (02:29:48) Family Members & Negotiations (02:35:21) Self Restoration, Humor (02:39:01) Fireside, Communication Courses; Rapport; Writing Projects (02:47:45) “Sounds Like…” Perspective (02:50:54) Zero-Cost Support, Spotify & Apple Reviews, Sponsors, YouTube Feedback, Momentous, Social Media, Neural Network Newsletter Title Card Photo Credit: Mike Blabac Disclaimer
Getting kids enthusiastic about science and technology is a challenge. Are physics and math boring? How can you make them exciting? At the Eindhoven Romanian school, Vlad Niculescu-Dinca goes a step further: bringing a scientific mindset to young kids is not as hard as it may seem. Kids can understand what hypothesis testing is. Just let them find out whether some assumption is crazy or is in fact true. As the Brainport region is asking schools and universities to make a scale jump in educating engineers, Jean-Paul Linnartz interviewed Vlad about his experience in making science attractive and in promoting a curious and investigative mindset. Photo: Jodie Trimble, 10 July 2023 --- Send in a voice message: https://podcasters.spotify.com/pod/show/podcasts-4-brainport/message
In today's episode, we welcome special guest Dr. Elizabeth Sweeney, a biostatistician and Assistant Professor of Biostatistics at Penn Medicine. Elizabeth discusses the topic of hypothesis testing alongside one of our hosts, Dr. Alonso Carrasco-Labra. Elizabeth and Alonso have a very informative discussion, including how hypothesis testing can be applied on a daily basis, along with examples of common misconceptions about hypothesis testing. To view this episode's corresponding video of "Statistics with Hans & Hera," please visit the following link: https://youtu.be/XdmuqrroRSA
All rights belong to J.K. Rowling. This is a Harry Potter fan fiction written by Eliezer Yudkowsky. I am Jack Voraces, a professional audiobook narrator: https://www.audible.com/search?searchNarrator=Jack+Voraces I do not intend to make any money from this podcast. It is a free audiobook for anyone to listen to, and it is my hope that it will eventually evolve into a dream I have had for a while: the 500-hour audiobook. I would like to create an audiobook that is 500 hours long, totally free, and available in multiple formats. The author has given permission for this recording, and if you enjoyed Mother of Learning, you will likely enjoy this too. Each chapter is recorded live on Discord on Mondays at 20:00 GMT: https://discord.gg/6B5hJdx
A reflection on the value of self-referential coding in informing your research - when handled cautiously. --- Send in a voice message: https://podcasters.spotify.com/pod/show/corina-paraschiv65/message
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.03.31.535153v1?rss=1 Authors: Koenig, S. D., Safo, S., Miller, K. J., Herman, A. B., Darrow, D. P. Abstract: Background: Time series analysis is critical for understanding brain signals and their relationship to behavior and cognition. Cluster-based permutation tests (CBPT) are commonly used to analyze a variety of electrophysiological signals including EEG, MEG, ECoG, and sEEG data without a priori assumptions about specific temporal effects. However, two major limitations of CBPT include the inability to directly analyze experiments with multiple fixed effects and the inability to account for random effects (e.g. variability across subjects). Here, we propose a flexible multi-step hypothesis testing strategy using CBPT with Linear Mixed Effects Models (LMEs) and Generalized Linear Mixed Effects Models (GLMEs) that can be applied to a wide range of experimental designs and data types. Methods: We first evaluate the statistical robustness of LMEs and GLMEs using simulated data distributions. Second, we apply a multi-step hypothesis testing strategy to analyze ERPs and broadband power signals extracted from human ECoG recordings collected during a simple image viewing experiment with image category and novelty as fixed effects. Third, we assess the statistical power differences between analyzing signals with CBPT using LMEs compared to CBPT using separate t-tests run on each fixed effect through simulations that emulate broadband power signals. Finally, we apply CBPT using GLMEs to high-gamma burst data to demonstrate the extension of the proposed method to the analysis of nonlinear data. Results: First, we found that LMEs and GLMEs are robust statistical models. In simple simulations LMEs produced highly congruent results with other appropriately applied linear statistical models, but LMEs outperformed many linear statistical models in the analysis of suboptimal data and maintained power better than analyzing individual fixed effects with separate t-tests. GLMEs also performed similarly to other nonlinear statistical models. Second, in real world human ECoG data, LMEs performed at least as well as separate t-tests when applied to predefined time windows or when used in conjunction with CBPT. Additionally, fixed effects time courses extracted with CBPT using LMEs from group-level models of pseudo-populations replicated latency effects found in individual category-selective channels. Third, analysis of simulated broadband power signals demonstrated that CBPT using LMEs was superior to CBPT using separate t-tests in identifying time windows with significant fixed effects especially for small effect sizes. Lastly, the analysis of high-gamma burst data using CBPT with GLMEs produced results consistent with CBPT using LMEs applied to broadband power data. Conclusions: We propose a general approach for statistical analysis of electrophysiological data using CBPT in conjunction with LMEs and GLMEs. We demonstrate that this method is robust for experiments with multiple fixed effects and applicable to the analysis of linear and nonlinear data. Our methodology maximizes the statistical power available in a dataset across multiple experimental variables while accounting for hierarchical random effects and controlling FWER across fixed effects. This approach substantially improves power and accuracy leading to better reproducibility. 
Additionally, CBPT using LMEs and GLMEs can be used to analyze individual channels or pseudo-population data for the comparison of functional or anatomical groups of data. Copy rights belong to original authors. Visit the link for more info Podcast created by Paper Player, LLC
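Since the abstract describes the recipe but no code appears here, the following is a much-simplified sketch of the general idea: fit a linear mixed-effects model (random intercept per subject) at each timepoint, form clusters of consecutive supra-threshold statistics, and compare the observed maximum cluster mass against a permutation null. The simulated data, the cluster-forming threshold of 2.0, and the within-subject trial-label shuffling are my assumptions for illustration, not the authors' implementation.

```python
# Much-simplified illustration of CBPT with per-timepoint LMEs, using simulated data.
import warnings
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

warnings.filterwarnings("ignore")          # silence LME convergence warnings in the demo
rng = np.random.default_rng(0)
n_subjects, n_trials, n_time = 8, 10, 20

# Simulate trials with a random intercept per subject; the condition effect
# is only present in timepoints 8..13.
rows = []
for subj in range(n_subjects):
    subj_offset = rng.normal(0, 0.5)
    for trial in range(n_trials):
        cond = trial % 2
        signal = rng.normal(0, 1, n_time) + subj_offset
        signal[8:14] += 0.8 * cond
        rows += [(subj, trial, cond, t, signal[t]) for t in range(n_time)]
data = pd.DataFrame(rows, columns=["subject", "trial", "condition", "time", "y"])

def timepoint_stats(df):
    """Fit y ~ condition with a random intercept per subject at every timepoint;
    return the t statistic of the 'condition' fixed effect."""
    out = np.zeros(n_time)
    for t in range(n_time):
        sub = df[df["time"] == t]
        fit = smf.mixedlm("y ~ condition", sub, groups=sub["subject"]).fit()
        out[t] = fit.tvalues["condition"]
    return out

def max_cluster_mass(stats, threshold=2.0):
    """Largest summed |t| over a run of consecutive supra-threshold timepoints."""
    best = current = 0.0
    for s in np.abs(stats):
        current = current + s if s > threshold else 0.0
        best = max(best, current)
    return best

observed = max_cluster_mass(timepoint_stats(data))

# Permutation null: shuffle condition labels across trials within each subject,
# keeping every trial's timepoints together, then recompute the cluster statistic.
n_perm = 100                               # kept small so the sketch runs quickly
trial_table = data[["subject", "trial", "condition"]].drop_duplicates()
null = np.zeros(n_perm)
for i in range(n_perm):
    permuted = trial_table.copy()
    permuted["condition"] = permuted.groupby("subject")["condition"].transform(
        lambda c: rng.permutation(c.values))
    shuffled = data.drop(columns="condition").merge(permuted, on=["subject", "trial"])
    null[i] = max_cluster_mass(timepoint_stats(shuffled))

p_value = (1 + np.sum(null >= observed)) / (1 + n_perm)
print(f"observed cluster mass = {observed:.2f}, permutation p = {p_value:.3f}")
```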
A quiz: What statistical concept is used in these design activities?
- DOE (design of experiments)
- Sampling
- Test results analysis
- Gage R&R studies
- Test method validations
- SPC (with some rules in monitoring changes in a process)
If you answered "Hypothesis Testing", you're right on! Even if you're not really into statistics, there are still basics that we should learn and understand, especially in design engineering. One of those basics is hypothesis testing. So many of our decisions about whether a design is good or not good (or good enough) are based on this concept. It will fundamentally change how you look at design activities – so much so, it's a responsibility to understand it. The podcast blog for this episode includes links to videos to introduce or refresh you on this fundamental topic. Support the show **FREE RESOURCES** Quality during Design engineering and new product development is actionable. It's also a mindset. Subscribe for consistency, inspiration, and ideas at www.qualityduringdesign.com. About me: Dianna Deeney helps engineers work with their cross-functional team to reduce concept design time and increase product success, using quality and reliability methods. She founded Quality during Design through her company Deeney Enterprises, LLC. Her vision is a world of products that are easy to use, dependable, and safe – possible by using Quality during Design engineering and product development.
Why switching to thinking about hypothesis testing is valuable --- Send in a voice message: https://podcasters.spotify.com/pod/show/david-nishimoto/message
Finding deviations from the mean in data --- Send in a voice message: https://anchor.fm/david-nishimoto/message
The outcomes of probability --- Send in a voice message: https://anchor.fm/david-nishimoto/message
In this episode: Research methods students, international headquarters, podcast statistics, student mental health challenges, the risks of armchair quarterbacking, the formulation of theories, including Freud and Maslow, falsifiability and hypothesis testing, Rob teaches philosophy, the MBTI
We talk about hypothesis testing in UX. It's fun! You should try it.
Dr. David Brodbeck's Psychology Lectures from Algoma University
As we continue our look at stuff you already ought to know, let's talk hypothesis testing. Music: "Toys On a Shelf" by A Step Behind
Rouven built sales at bexio and is now leading marketing, customer success, and sales. He has some ideas on how to align marketing and sales and, even more importantly, improve his organisation through constant testing. The salespeople, for instance, test at least 3 hypotheses every year; it is one of their key results. In this episode, Rouven explains what such a test can look like. Happy learning!
Jingyi Jessica Li | Statistical Hypothesis Testing versus Machine Learning Binary Classification Jingyi Jessica Li (UCLA) discusses her paper "Statistical Hypothesis Testing versus Machine Learning Binary Classification". Jingyi noticed several high-impact cancer research papers using multiple hypothesis testing for binary classification problems. Concerned that these papers had no guarantee on their claimed false discovery rates, Jingyi wrote a perspective article about clarifying hypothesis testing and binary classification to scientists. #datascience #science #statistics 0:00 – Intro 1:50 – Motivation for Jingyi's article 3:22 – Jingyi's four concepts under hypothesis testing and binary classification 8:15 – Restatement of concepts 12:25 – Emulating methods from other publications 13:10 – Classification vs hypothesis test: features vs instances 21:55 - Single vs multiple instances 23:55 - Correlations vs causation 24:30 - Jingyi's Second and Third Guidelines 30:35 - Jingyi's Fourth Guideline 36:15 - Jingyi's Fifth Guideline 39:15 – Logistic regression: An inference method & a classification method 42:15 – Utility for students 44:25 – Navigating the multiple comparisons problem (again!) 51:25 – Right side, show bio-arxiv paper
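As a concrete illustration of the multiple-testing side of the distinction Jingyi draws (my own toy example, not taken from the paper): many features are tested at once, and the Benjamini-Hochberg procedure controls the false discovery rate among the resulting discoveries, a guarantee that a binary classifier's positive predictions do not automatically carry.

```python
# Simulated "omics"-style setting: 1000 features, only the first 50 truly differ.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_features, n_per_group = 1000, 20
true_effect = np.zeros(n_features)
true_effect[:50] = 1.0

group_a = rng.normal(0.0, 1.0, (n_per_group, n_features))
group_b = rng.normal(true_effect, 1.0, (n_per_group, n_features))

# One two-sample t-test per feature gives 1000 p-values (multiple hypothesis testing)
pvals = stats.ttest_ind(group_a, group_b, axis=0).pvalue

# Benjamini-Hochberg keeps the expected fraction of false discoveries below 10%
reject, _, _, _ = multipletests(pvals, alpha=0.10, method="fdr_bh")
discoveries = np.flatnonzero(reject)
false_discoveries = np.sum(discoveries >= 50)
print(f"{discoveries.size} discoveries, {false_discoveries} of them false")
```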
Francis Corrigan is Director of Decision Intelligence at Target Corporation. Embedded within the Global Supply Chain, Decision Intelligence combines data science with model thinking to help decision-makers solve problems. 00:00 Intro 01:21 Data Science applications in Logistics and Supply Chain, Cost and Performance trade-off 03:21 Amazon vs Target fulfillment Model, Owning vs Coordinating with Last Mile companies e.g. FedEx 08:36 Suez Canal Container Blockage, Fallback plan at Target 10:37 Predicting products to Stock in Bottleneck Scenarios 12:42 Air Freight vs Sea Shipments Costs, Ideal vs Real World Deliveries 15:48 Lack of Good Data and Prediction Challenges 18:00 Managing Expectations as Head of Analytics, Importance of Communicating 20:11 Stakeholder Management & Data Science Newsletter 23:39 Technical and Non-technical Teams Coordination, Speed Reading 26:36 Data Stories and Visualizations 29:47 Reporting Pipelines vs Story Narration 31:37 Time Series, Prophet, Flourish and Hans Rosling 35:28 Economist turned Data Scientist, Embarrassment as Motivation 38:20 Lack of Practical Skills of Data Science at University 41:18 Employer's Perspectives on Data Science Talent 45:24 What Causes Data Teams Failure 48:40 COVID-19 and Time Series Corruption, Anomaly Detection 56:15 Toilet Paper Demand Scenario, Commodity Pricing Alerts 59:50 Automating Alerts for Panic Situation 01:02:10 Pandemic as a Blessing for Digital Business, Exponential Growth Rates and Tuition Fee Reimbursement for Employees 01:06:06 Data as Decision Support System, Strategic Decision Indicators 01:08:08 Capital in 21st Century, Thomas Piketty and Free Markets 01:11:31 Failures of Capitalist Societies on Individual Front and Socialist Aversion of Wealth Generation 01:15:15 UBI, Interventions, and CEO to Lowest Paid Worker Ratio 01:18:25 Career Blunders and Regrets 01:22:12 Psychometric Tests for Intellect Filtering, Behavioral Stability and Creativity Trade-off 01:24:08 Target's Epic Failure in Canada, What Data Science could have Prevented 01:25:08 Gameplan to Compete with Walmart and Amazon 01:28:00 SARIMAX, ARIMAX and Volatility Management, Planning vs Forecasting 01:31:33 Deep NNs or Lack thereof, Explainability and Monte Carlo as Alternative 01:34:00 Model Parsimony in Time Series, Baseline Models in Excel 01:37:50 R vs Python, Specific Use Cases 01:40:25 Delegating and Element of Trust 01:43:20 Time and Space Complexity of Models, Netflix and Deployments at Target 01:46:00 Political Impacts on Shipments, Narratives and Hypothesis Testing 01:48:00 Nate Silver, Nassim Taleb, and Early Inspirations 01:52:05 Work-life Balance
This talk was given by Matthew Brensilver on 2021.08.10 at the Insight Meditation Center in Redwood City, CA. ******* For more talks like this, visit AudioDharma.org ******* If you have enjoyed this talk, please consider supporting AudioDharma with a donation at https://www.audiodharma.org/donate/. ******* This talk is licensed by a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License
After completely blowing an ambitious summer schedule of weekly episodes, Patrick and Greg sit poolside and shoot the breeze about the fascinating history of quantitative methods. No, for real; it's actually fascinating. Anyway, they start with some of the old gamblers from the 1600s and work their way through the early decades of the 20th century. This discussion sets the stage nicely for a future summer episode, at least in theory given how long it took to get this one out. Happy Summer!
Health Tech Matters: Talks About Healthcare Products and Design
In this episode, we are discussing nutrition trends, hypothesis testing, and primary market research. Our guest: Josh Hix, CEO at Season, ex-founder of a meal delivery startup Plated. Plated was acquired by a grocery store chain Albertsons Companies for $300 million. Website: https://helloseason.com/ ______________ How to find me? Maria Borysova, healthcare product designer: https://www.linkedin.com/in/maria-borysova/ ______________ Quotes: On nutrition nowadays "It's inevitable that we get to a state where nutrition is integrated into healthcare because it's just such a foundational part of treatment, especially of chronic illness, which seems to be pretty universally accepted as the most expensive part of the system. You have this intervention that not only works from a clinical standpoint and from a cost standpoint but also makes the patient feel better. Unlike some other kinds of interventions, there's a quality of life improvement." ______________ On starting a new company after selling Plated "I don't think most people are actually very happy doing nothing. Just sitting on the beach literally, or metaphorically is not a particularly appealing option. I didn't want to go be a full-time investor, obviously, lots of entrepreneurs do or so. And I liked building products and companies. It feels like the best way to use my time. Both selfishly from a personal fulfillment and enjoyment standpoint and also hopefully from a positive impact on other people standpoint." ______________ On a personal message "All of us, myself included to a very real degree live in our own echo chambers. It makes it hard to listen to each other." ______________ On timing and market "The old cliche of being early is the same as being wrong. Like forget about being unpleasant and not fun, it's not effective to work on something where the market's not there yet. Whether it's the technology or the regulatory environment or consumer adoption, or some combination of all three of them, you've got to have the right market conditions. Otherwise, you're just pushing on a rope. It's not going to go anywhere and that's not effective for anybody. It's not a good use of time or capital or anything else." --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app
https://astralcodexten.substack.com/p/two-unexpected-multiple-hypothesis I. Start with Lior Pachter's Mathematical analysis of "mathematical analysis of a vitamin D COVID-19 trial". The story so far: some people in Cordoba did a randomized controlled trial of Vitamin D for coronavirus. The people who got the Vitamin D seemed to do much better than those who didn't. But there was some controversy over the randomization, which looked like this. Remember, we want to randomly create two groups of similar people, then give Vitamin D to one group and see what happens. If the groups are different to start with, then we won't be able to tell if the Vitamin D did anything or if it was just the pre-existing difference. In this case, they checked for fifteen important ways that the groups could be different, and found they were only significantly different on one - blood pressure. Jungreis and Kellis, two scientists who support this study, say that shouldn't bother us too much. They point out that because of multiple testing (we checked fifteen hypotheses), we need a higher significance threshold before we care about significance in any of them, and once we apply this correction, the blood pressure result stops being significant. Pachter challenges their math - but even aside from that, come on! We found that there was actually a big difference between these groups! You can play around with statistics and show that ignoring this difference meets certain formal criteria for statistical good practice. But the difference is still there and it's real. For all we know it could be driving the Vitamin D results.
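For readers who want to see the correction being argued about in numbers, here is a quick sketch; the fifteen p-values are hypothetical placeholders, not the Cordoba trial's actual baseline comparisons.

```python
# With 15 baseline comparisons, a Bonferroni-style threshold is 0.05 / 15 ≈ 0.0033,
# so a single variable with (hypothetical) p = 0.01 stops counting as "significant"
# after correction, even though the group difference itself is still there.
from statsmodels.stats.multitest import multipletests

baseline_pvals = [0.45, 0.80, 0.31, 0.66, 0.09, 0.52, 0.73, 0.28,
                  0.61, 0.94, 0.37, 0.22, 0.55, 0.84, 0.01]   # 15th: "blood pressure"

significant_raw = sum(p < 0.05 for p in baseline_pvals)
reject_bonf, p_adjusted, _, _ = multipletests(baseline_pvals, alpha=0.05,
                                              method="bonferroni")

print("significant before correction:", significant_raw)         # 1
print("significant after Bonferroni: ", int(reject_bonf.sum()))  # 0
print("per-test threshold after correction:", 0.05 / len(baseline_pvals))
```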
Health Tech Matters: Talks About Healthcare Products and Design
In this episode, we are discussing how artificial intelligence can be used for mental health treatment, pricing models and hypothesis testing. Quotes from the episode: On AI in mental healthcare I realized that some of the cognitive restructuring exercises that I found in self-help models online were very good. So the initial idea was: "Can we make AI listen, sympathetically?". When you're typing out what you're feeling, you're accepting that feeling. Then you're learning to say. And Wysa asks some really nice questions "Do you want to get back control in the situation?", "How can you be a good friend to yourself?". It forces you to think about it, but you do all the work. ___________ On B2B vs B2C healthcare products My hypothesis is that there is a path because healthcare is so difficult to break into and it takes so long to break into and they want evidence and they want some validation. It's much, much easier to start in B2C. I know that B2B is easier than B2C. Because you can do an iterative experiment in B2C, which you cannot do in B2B because they need you to have proven everything before they even start working with you. You start in B2C, you get that fast cycle of iteration that a product manager absolutely needs to get their product right. You get to hear directly from the users. You get to hold on to your mission and your sense of purpose, which in B2B really vanishes fast when you're not working directly with users. I think B2C is easier in some ways if you get product-market fit and once B2B takes off, revenue is easier, but getting the product right in B2B is very hard. ___________ About energy The construct that we're trying to manage time and there's not enough is such a lie. We are trying to manage energy. And if you have energy, then everything feels good. If you don't have energy, all the time in the world will not yield anything. ___________ On the future of mental health applications I'm seeing a convergence of biomarkers where people are tracking their own mental health or physical activity or heart rates and stress levels. So there's definitely a quantified self that is coming into mental health. ___________ Our guest: Jo Aggarwal - Founder/CEO at Wysa - clinically approved AI chatbot for mental health, 3 million users. Her LinkedIn: https://www.linkedin.com/in/joaggarwal/ Wysa: https://wysa.io/ ___________ How to find me? Maria Borysova, healthcare product designer: https://www.linkedin.com/in/maria-borysova/ --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app
Welcome to the About Practice podcast, a show about bridging the gap between research and practice in education. In this episode Josh and Ryan talk about why they wanted to do a show like this, how they do research for their projects, the practical problems of consuming research, and why the Allen Iverson Talking About Practice interview is a surprisingly good analogy for education research. References: Barry, John M. The Great Influenza: The Story of the Deadliest Pandemic in History. Penguin Books, 2005. Aschwanden, Christie. “Science Isn't Broken.” Fivethirtyeight, 19 Aug. 2015, https://fivethirtyeight.com/features/science-isnt-broken/ Iverson, Allen. Interview, https://youtu.be/tknXRyUEJtU Sudeikis, Jason, performer. Ted Lasso, season 1, episode 6, https://youtu.be/wYS6cRizP-M Twitter: Joshua Rosenberg: jrosenberg6432 Ryan Estrellado: RyanEs
Cancelled for social justice? Abby was a PhD student wh [...]
This episode discusses hypothesis testing orientation
In social science we estimate the probability of wrongly rejecting the null hypothesis even though it might be true. Should we reject the null hypothesis in favour of the alternative hypothesis, or retain it? This is an essential idea in inferential statistics. All slides for the entire series can be downloaded for free here: https://armintrost.de/en/professor/digital/social-research-methods/
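A short simulation makes the point concrete (illustration only, not material from the lecture): when the null hypothesis is in fact true, a test run at alpha = 0.05 will still reject it in roughly 5% of experiments. That false-rejection rate is exactly the Type I error risk we accept when we decide against the null.

```python
# Illustration: both groups are drawn from the same distribution, so the null
# hypothesis ("no difference") is true in every simulated experiment below.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha, n_experiments, n_per_group = 0.05, 5000, 30

false_rejections = 0
for _ in range(n_experiments):
    a = rng.normal(0, 1, n_per_group)
    b = rng.normal(0, 1, n_per_group)
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_rejections += 1   # rejected H0 even though H0 is true: a Type I error

print(f"false rejection rate = {false_rejections / n_experiments:.3f}")  # close to alpha
```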
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.08.02.232710v1?rss=1 Authors: Farine, D. R., Carter, G. G. Abstract: Generating insights about a null hypothesis requires not only a good dataset, but also statistical tests that are reliable and actually address the null hypothesis of interest. Recent studies have found that permutation tests, which are widely used to test hypotheses when working with animal social network data, can suffer from high rates of type I error (false positives) and type II error (false negatives). Here, we first outline why pre-network and node permutation tests have elevated type I and II error rates. We then propose a new procedure, the double permutation test, that addresses some of the limitations of existing approaches by combining pre-network and node permutations. We conduct a range of simulations, allowing us to estimate error rates under different scenarios, including errors caused by confounding effects of social or non-social structure in the raw data. We show that double permutation tests avoid elevated type I errors, while remaining sufficiently sensitive to avoid elevated type II errors. By contrast, the existing solutions we tested, including node permutations, pre-network permutations, and regression models with control variables, all exhibit elevated errors under at least one set of simulated conditions. Type I error rates from double permutation remain close to 5% in the same scenarios where type I error rates from pre-network permutation tests exceed 30%. The double permutation test provides a potential solution to issues arising from elevated type I and type II error rates when testing hypotheses with social network data. We also discuss other approaches, including restricted node permutations, testing multiple null hypotheses, and splitting large datasets to generate replicated networks, that can strengthen our ability to make robust inferences. Finally, we highlight ways that uncertainty can be explicitly considered during the analysis using permutation-based or Bayesian methods. Copy rights belong to original authors. Visit the link for more info
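For readers unfamiliar with the baseline being improved upon, here is a bare-bones node permutation test on a toy weighted network; this is my own minimal illustration of the standard node-permutation approach, not the authors' double permutation procedure.

```python
# Toy node permutation test: is a binary node trait associated with weighted degree
# ("strength") in a simulated association network? All values are simulated.
import numpy as np

rng = np.random.default_rng(7)
n_nodes = 40

# Symmetric weighted network and a binary node trait
upper = np.triu(rng.exponential(1.0, (n_nodes, n_nodes)), 1)
weights = upper + upper.T
trait = rng.integers(0, 2, n_nodes)

strength = weights.sum(axis=1)                 # weighted degree of each node
observed = strength[trait == 1].mean() - strength[trait == 0].mean()

# Node permutation: shuffle the trait labels across nodes, keep the network fixed
n_perm = 10_000
null = np.empty(n_perm)
for i in range(n_perm):
    shuffled = rng.permutation(trait)
    null[i] = strength[shuffled == 1].mean() - strength[shuffled == 0].mean()

p_value = np.mean(np.abs(null) >= abs(observed))
print(f"observed difference = {observed:.3f}, node-permutation p = {p_value:.3f}")
```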
Audio Only Version of Intro to Hypothesis Testing Concept Video For More info: https://blogs.lt.vt.edu/jmrussell/topics/ --- Support this podcast: https://anchor.fm/john-russell10/support
Audio Only of Hypothesis Testing In-Depth Video Notes can be found here: https://blogs.lt.vt.edu/jmrussell/topics/ --- Support this podcast: https://anchor.fm/john-russell10/support
In statistical hypothesis testing, a result has statistical significance when it is very unlikely to have occurred given the null hypothesis.
Dr. Jerz's lecture on using Excel for one-sample hypothesis testing of the mean.
Dr. Jerz's lecture on one-sample hypothesis testing of the mean.
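The lecture works the example in Excel; for comparison, here is the same kind of one-sample test of the mean as a minimal Python sketch, where the sample values and the hypothesized mean of 50 are made up for illustration.

```python
# Hypothetical sample; H0: the population mean equals 50.
from scipy import stats

sample = [51.2, 49.8, 52.4, 50.9, 48.7, 53.1, 50.2, 51.8, 49.5, 52.0]
t_stat, p_value = stats.ttest_1samp(sample, popmean=50.0)

print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# Reject H0 only if p falls below the chosen significance level, e.g. alpha = 0.05.
```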
Editorials
This episode is the fifth in a series of episodes where Dr. Brown's college students were asked to reflect on the importance of having a basic understanding of the scientific method and hypothesis testing in understanding today's COVID-19 (coronavirus) pandemic. If a person does not currently possess such a scientific understanding, is it their duty to acquire this knowledge? No politics, no spin, no editing... just authentic scientific reflection and education at work. Listen to see what my students have to say! Below is the prompt my students were given in constructing their responses: Based on what you learned from the materials provided in class and from your prior knowledge, please answer all the below questions in regard to the current social distancing measures that almost all citizens in the USA (and the world) are currently being asked to follow because of the COVID-19 (coronavirus) epidemic. - What is the null hypothesis in this situation? - What is the alternative hypothesis that is being proposed? - What is the independent variable in this experiment? - What are some dependent variables? (certainly there are more than one that will be measured… list at least three… be specific) - What are some standardized variables? - Predicting ahead, what would the data have to look like in the end for us to reject the null hypothesis? (and accept the alternative hypothesis as a tentative truth) (What assumptions would we likely have to make in this situation?) - Predicting ahead, what would the data have to look like in the end for us to fail to reject our null hypothesis? (What assumptions would we likely have to make in this situation?) - Why is having a basic understanding of the scientific method and hypothesis testing important for you during this COVID-19 (coronavirus) time? Do you have a duty as a citizen of the world to acquire these basic understandings? (These are open-ended questions, so please answer them in a manner that resonates with you. Say what you really believe, not what you think I want you to say. Be specific. Apply it to your life right now. Please don’t get political – just answer the questions as they apply to you.) Notes: - Intro Music: freemusicarchive.org/music/Lee_Rose…_For_Podcasts/ “Let’s Start at the Beginning” & “Glass Android” - Student Voice: Sebastian - Learn more about Dr. Brown at www.ericbrownphd.com/
Using the IBM HR Analytics Employee Attrition & Performance dataset from Kaggle, I put what I've learned about hypothesis testing into practice to retain the concepts. Using Python, Numpy, and Pandas, I explored applying this method to the IBM dataset in order to test Job Satisfaction vs. Longevity. I have always assumed that if an employee is with a company for an extended period of time, they are satisfied with the company. Join me as I discuss what I've learned about hypothesis testing and discover the results! Link to my GitHub Link to my Website Email me tlkdata2me@gmail.com --- Send in a voice message: https://anchor.fm/tlkdata2me/message
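Here is a sketch of one way the satisfaction-vs-longevity question can be framed as a two-sample test. The file and column names follow the public Kaggle IBM HR dataset, but the choice of a Welch t-test and the 10-year tenure cutoff are my assumptions, not necessarily what the episode uses.

```python
# Column and file names follow the Kaggle "IBM HR Analytics Employee Attrition &
# Performance" dataset; the Welch t-test and 10-year cutoff are illustrative choices.
import pandas as pd
from scipy import stats

df = pd.read_csv("WA_Fn-UseC_-HR-Employee-Attrition.csv")   # download from Kaggle first

# H0: mean JobSatisfaction is the same for long-tenured and short-tenured employees
long_tenure = df.loc[df["YearsAtCompany"] >= 10, "JobSatisfaction"]
short_tenure = df.loc[df["YearsAtCompany"] < 10, "JobSatisfaction"]

t_stat, p_value = stats.ttest_ind(long_tenure, short_tenure, equal_var=False)
print(f"Welch t = {t_stat:.3f}, p = {p_value:.4f}")
# A large p-value gives no grounds to reject H0, i.e. no evidence that longer
# tenure goes together with higher reported job satisfaction.
```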
Professor Saba Balasubramanian talks about Hypothesis Testing. Brought to you by CRAMSURG www.cramsurg.org Our tune is "Inspiring Optimistic Upbeat Energetic Guitar Rhythm" by Free Music | https://soundcloud.com/fm_freemusic Music promoted by https://www.free-stock-music.com Creative Commons Attribution 3.0 Unported License https://creativecommons.org/licenses/by/3.0/deed.en_US
This episode is the fourth in a series of episodes where Dr. Brown's college students were asked to reflect on the importance of having a basic understanding of the scientific method and hypothesis testing in understanding today's COVID-19 (coronavirus) pandemic. If a person does not currently possess such a scientific understanding, is it their duty to acquire this knowledge? No politics, no spin, no editing... just authentic scientific reflection and education at work. Listen to see what my students have to say! Below is the prompt my students were given in constructing their responses: Based on what you learned from the materials provided in class and from your prior knowledge, please answer all the below questions in regard to the current social distancing measures that almost all citizens in the USA (and the world) are currently being asked to follow because of the COVID-19 (coronavirus) epidemic. - What is the null hypothesis in this situation? - What is the alternative hypothesis that is being proposed? - What is the independent variable in this experiment? - What are some dependent variables? (certainly there are more than one that will be measured… list at least three… be specific) - What are some standardized variables? - Predicting ahead, what would the data have to look like in the end for us to reject the null hypothesis? (and accept the alternative hypothesis as a tentative truth) (What assumptions would we likely have to make in this situation?) - Predicting ahead, what would the data have to look like in the end for us to fail to reject our null hypothesis? (What assumptions would we likely have to make in this situation?) - Why is having a basic understanding of the scientific method and hypothesis testing important for you during this COVID-19 (coronavirus) time? Do you have a duty as a citizen of the world to acquire these basic understandings? (These are open-ended questions, so please answer them in a manner that resonates with you. Say what you really believe, not what you think I want you to say. Be specific. Apply it to your life right now. Please don’t get political – just answer the questions as they apply to you.) Notes: - Intro Music: freemusicarchive.org/music/Lee_Rose…_For_Podcasts/ “Let’s Start at the Beginning” & “Glass Android” - Student Voice: Elizabeth - Learn more about Dr. Brown at www.ericbrownphd.com/
This episode is the third in a series of episodes where Dr. Brown's college students were asked to reflect on the importance of having a basic understanding of the scientific method and hypothesis testing in understanding today's COVID-19 (coronavirus) pandemic. If a person does not currently possess such a scientific understanding, is it their duty to acquire this knowledge? No politics, no spin, no editing... just authentic scientific reflection and education at work. Listen to see what my students have to say! Below is the prompt my students were given in constructing their responses: Based on what you learned from the materials provided in class and from your prior knowledge, please answer all the below questions in regard to the current social distancing measures that almost all citizens in the USA (and the world) are currently being asked to follow because of the COVID-19 (coronavirus) epidemic. - What is the null hypothesis in this situation? - What is the alternative hypothesis that is being proposed? - What is the independent variable in this experiment? - What are some dependent variables? (certainly there are more than one that will be measured… list at least three… be specific) - What are some standardized variables? - Predicting ahead, what would the data have to look like in the end for us to reject the null hypothesis? (and accept the alternative hypothesis as a tentative truth) (What assumptions would we likely have to make in this situation?) - Predicting ahead, what would the data have to look like in the end for us to fail to reject our null hypothesis? (What assumptions would we likely have to make in this situation?) - Why is having a basic understanding of the scientific method and hypothesis testing important for you during this COVID-19 (coronavirus) time? Do you have a duty as a citizen of the world to acquire these basic understandings? (These are open-ended questions, so please answer them in a manner that resonates with you. Say what you really believe, not what you think I want you to say. Be specific. Apply it to your life right now. Please don’t get political – just answer the questions as they apply to you.) Notes: - Intro Music: freemusicarchive.org/music/Lee_Rose…_For_Podcasts/ “Let’s Start at the Beginning” & “Glass Android” - Student Voice: Brayan - Learn more about Dr. Brown at www.ericbrownphd.com/
Consider a scenario where a group of agents, each receiving partially informative private signals, aim to learn the true underlying state of the world that explains their collective observations. These agents might represent a group of individuals interacting over a social network, a team of autonomous robots tasked with detection, or even a network of processors trying to collectively solve a statistical inference problem. To enable such agents to identify the truth from a finite set of hypotheses, we propose a distributed learning rule that differs fundamentally from existing approaches, in that it does not employ any form of "belief-averaging". Instead, agents update their beliefs based on a min-rule. Under standard assumptions on the observation model and the network structure, we establish that each agent learns the truth asymptotically almost surely. As our main contribution, we prove that with probability 1, each false hypothesis is ruled out by every agent exponentially fast, at a network-independent rate that strictly improves upon existing rates. We then consider a scenario where certain agents do not behave as expected, and deliberately try to spread misinformation. Capturing such misbehavior via the Byzantine adversary model, we develop a computationally-efficient variant of our learning rule that provably allows every regular agent to learn the truth exponentially fast with probability 1. About the speaker: Aritra Mitra received the B.E. degree from Jadavpur University, Kolkata, India, and the M.Tech. degree from the Indian Institute of Technology Kanpur, India, in 2013 and 2015, respectively, both in electrical engineering. He is currently working toward the Ph.D. degree in electrical engineering at the School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA. His current research interests include the design of distributed algorithms for estimation, inference and learning; networked control systems; and secure control. He was a recipient of the University Gold Medal at Jadavpur University and the Academic Excellence Award at IIT Kanpur.
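A toy sketch of the flavor of update the abstract describes (my paraphrase, not the speaker's exact algorithm): each agent performs a local Bayesian update on its private signal and then replaces each hypothesis's belief with the minimum over its neighbors' beliefs before renormalizing, with no belief averaging anywhere.

```python
# Toy illustration of a "min-rule" belief update over a small network of agents.
# The network, likelihood models, and number of steps are all invented for the demo.
import numpy as np

rng = np.random.default_rng(3)
n_agents, n_hypotheses, true_h = 4, 3, 0

# Each agent's likelihood of a binary private signal under each hypothesis
# (rows sum to 1; signals are only partially informative).
likelihoods = rng.uniform(0.2, 0.8, (n_agents, n_hypotheses, 2))
likelihoods /= likelihoods.sum(axis=2, keepdims=True)

neighbors = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2, 3], 3: [2, 3]}   # includes self
beliefs = np.full((n_agents, n_hypotheses), 1.0 / n_hypotheses)

for step in range(400):
    # 1) Local Bayesian update on a fresh private signal drawn under the true state
    for a in range(n_agents):
        signal = int(rng.random() < likelihoods[a, true_h, 1])
        beliefs[a] *= likelihoods[a, :, signal]
        beliefs[a] /= beliefs[a].sum()
    # 2) Min-rule: for each hypothesis take the minimum belief among neighbors,
    #    then renormalize (no averaging of beliefs anywhere).
    new_beliefs = np.empty_like(beliefs)
    for a in range(n_agents):
        new_beliefs[a] = beliefs[neighbors[a]].min(axis=0)
        new_beliefs[a] /= new_beliefs[a].sum()
    beliefs = new_beliefs

print(np.round(beliefs, 3))   # beliefs typically concentrate on the true hypothesis (index 0)
```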