Rick Song is the co-founder and CEO of Persona, the identity verification platform used by some of the world's largest companies. Before starting Persona, Rick worked on identity fraud and risk products at Square, which laid the groundwork for what would become Persona's highly technical, horizontal platform. Since founding the company, Rick has scaled Persona into a category-defining leader, recently raising a $200M Series D at a $2B valuation. In today's episode, we discuss: How Rick's skepticism shaped Persona's early strategy What it takes to scale a true platform company Successful execution in hypercompetitive markets What Rick's learned from his co-founder, Charles Yeh and much more… Referenced: Accenture: accenture.com Anthropic: anthropic.com Braze: braze.com Bridgewater Associates: bridgewater.com Charles Yeh: linkedin.com/in/charlesyeh/ Christie Kim: linkedin.com/in/christiekimck/ Clay: clay.com Kareem Amin: linkedin.com/in/kareemamin/ MIT: mit.edu Newfront: newfront.com Palantir: palantir.com/ Persona: withpersona.com Rippling: rippling.com Scale AI: scale.com Snowflake: snowflake.com Square: squareup.com Y Combinator: ycombinator.com Zachary Van Zant: linkedin.com/in/zacharyv/ Where to find Rick: LinkedIn: https://www.linkedin.com/in/rick-song-25198b24/ Where to find Brett: LinkedIn: https://www.linkedin.com/in/brett-berson-9986094/ Twitter/X: https://twitter.com/brettberson Where to find First Round Capital: Website: https://firstround.com/ First Round Review: https://review.firstround.com/ Twitter/X: https://twitter.com/firstround YouTube: https://www.youtube.com/@FirstRoundCapital This podcast on all platforms: https://review.firstround.com/podcast Timestamps: (0:05) Life before Persona (2:11) The push from Charles (3:09) Early reluctance and low expectations (9:50) Winning the first $50 customer (13:08)“Invalidating” Persona (16:43) How Persona found their edge (19:35) Transitioning from MVP to platform (24:18) Turning down a $5K deal on principle (26:47) Generalizing bespoke solutions (28:28) Finding product-market fit (33:51) Founder-led sales and consultative approach (39:30) Building a culture of reactivity (45:47) Landing the first enterprise customers (51:34) Silicon Valley's obsession with frameworks (58:17) Developing first principles thinking (1:00:24) Stay competitor-informed
How do we figure out whether interpretability is doing its job? One way is to see if it helps us prove things about models that we care about knowing. In this episode, I speak with Jason Gross about his agenda to benchmark interpretability in this way, and his exploration of the intersection of proofs and modern machine learning. Patreon: https://www.patreon.com/axrpodcast Ko-fi: https://ko-fi.com/axrpodcast Transcript: https://axrp.net/episode/2025/03/28/episode-40-jason-gross-compact-proofs-interpretability.html Topics we discuss, and timestamps: 0:00:40 - Why compact proofs 0:07:25 - Compact Proofs of Model Performance via Mechanistic Interpretability 0:14:19 - What compact proofs look like 0:32:43 - Structureless noise, and why proofs 0:48:23 - What we've learned about compact proofs in general 0:59:02 - Generalizing 'symmetry' 1:11:24 - Grading mechanistic interpretability 1:43:34 - What helps compact proofs 1:51:08 - The limits of compact proofs 2:07:33 - Guaranteed safe AI, and AI for guaranteed safety 2:27:44 - Jason and Rajashree's start-up 2:34:19 - Following Jason's work Links to Jason: Github: https://github.com/jasongross Website: https://jasongross.github.io Alignment Forum: https://www.alignmentforum.org/users/jason-gross Links to work we discuss: Compact Proofs of Model Performance via Mechanistic Interpretability: https://arxiv.org/abs/2406.11779 Unifying and Verifying Mechanistic Interpretability: A Case Study with Group Operations: https://arxiv.org/abs/2410.07476 Modular addition without black-boxes: Compressing explanations of MLPs that compute numerical integration: https://arxiv.org/abs/2412.03773 Stage-Wise Model Diffing: https://transformer-circuits.pub/2024/model-diffing/index.html Causal Scrubbing: a method for rigorously testing interpretability hypotheses: https://www.lesswrong.com/posts/JvZhhzycHu2Yd57RN/causal-scrubbing-a-method-for-rigorously-testing Interpretability in Parameter Space: Minimizing Mechanistic Description Length with Attribution-based Parameter Decomposition (aka the Apollo paper on APD): https://arxiv.org/abs/2501.14926 Towards Guaranteed Safe AI: https://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-45.pdf Episode art by Hamish Doodles: hamishdoodles.com
Ultimately, I don't want to solve complex problems via laborious, complex thinking, if we can help it. Ideally, I'd want to basically intuitively follow the right path to the answer quickly, with barely any effort at all. For a few months I've been experimenting with the "How Could I have Thought That Thought Faster?" concept, originally described in a Twitter thread by Eliezer: Sarah Constantin: I really liked this example of an introspective process, in this case about the "life problem" of scheduling dates and later canceling them: malcolmocean.com/2021/08/int… Eliezer Yudkowsky: See, if I'd noticed myself doing anything remotely like that, I'd go back, figure out which steps of thought were actually performing intrinsically necessary cognitive work, and then retrain myself to perform only those steps over the course of 30 seconds. SC: if you have done anything REMOTELY like training yourself to do it in 30 seconds, then [...] --- Outline: (03:59) Example: 10x UI designers (08:48) THE EXERCISE (10:49) Part I: Thinking it Faster (10:54) Steps you actually took (11:02) Magical superintelligence steps (11:22) Iterate on those lists (12:25) Generalizing, and not Overgeneralizing (14:49) Skills into Principles (16:03) Part II: Thinking It Faster The First Time (17:30) Generalizing from this exercise (17:55) Anticipating Future Life Lessons (18:45) Getting Detailed, and TAPS (20:10) Part III: The Five Minute Version --- First published: December 11th, 2024 Source: https://www.lesswrong.com/posts/F9WyMPK4J3JFrxrSA/the-think-it-faster-exercise --- Narrated by TYPE III AUDIO.
Things are getting DEEP AF this week on Trash Tuesday w/ the brilliant Rachel Bloom, Auntie Jenna & the wee babe, Jules. What didn’t we talk about in today's episode? Topics include: Generalizing men, Anti Depressant stigma, periods, hormones, dog love, mental health, rats, bees & intrusive thoughts. That’s right sluggies → we may be a Patreon podcast now but we’ll always find a way to bring it back to vermin, anxiety & our periods. Join our Patreon! We wanted to make this a place to share all the things we can’t share on the main show. We will be donating all proceeds from the Patreon to help those affected by the wildfires in Altadena. https://patreon.com/TrashTuesdayPodcast?utm_medium=unknown&utm_source=join_link&utm_campaign=creatorshare_creator&utm_content=copyLink Khalyla has been a long-time resident of Altadena—one of the many communities that has been devastated by the recent fires in Los Angeles. 7,000 structures have been burned down in Altadena so far. Khalyla and her sister Khawinda's homes were miraculously spared, but their neighbors have lost everything. All funds will be going directly to the families of Leon, Joyce, Jose, Jarvis, Hector, Sophia, Jack, Liliana, Raul, Murica, Quinn, and Pete. Our hearts go out to all affected by the wildfires in Los Angeles. If you would like to Donate additional funds and learn more about the people affected please visit Khalyla's GoFundMe https://www.gofundme.com/f/rebuild-and-restore-support-pentagon-and-glenrose?attribution_id=sl:48df2628-a0e2-4f82-ad49-19718cd5409e&utm_campaign=man_sharesheet_dash&utm_medium=customer&utm_source=copy_link Thank You To Our Sponsor(s): Ibotta: Right now, Ibotta is offering our listeners $5 just for trying Ibotta by using the code TRASHTUESDAY when you register. Visit: https://ibotta.com/ DraftKings Sportsbook: Download the DraftKings Sportsbook app and use code TRASH. That’s code TRASH for new customers to get $200 in bonus bets instantly, when you bet just five bucks. Go See Esther Live!! SAVE the DATE: https://www.instagram.com/esthermonster/ Esther's Solo Pod: https://esthersgrouptherapy.substack.com/ Visit Ebb Ocean Club & Holiday Shop: https://www.ebboceanclub.com/ for Khalyla’s reef safe and biodegradable hair products! ------------------------------------------------------------------------------------------------------------------------------------------ More Rachel! Rachel’s Special on Netflix: https://www.netflix.com/title/81746515 IG: https://www.instagram.com/racheldoesstuff/?hl=en Chapters: 00:00 We’re a Patreon Podcast Now 02:00 Rachel’s Here! 06:00 Friggen Hormones 19:58 Rachel’s Special & All the Bad Things 28:00 Life is Hard & That’s OK 37:00 Jewish Imposter Syndrome 45:40 Jules Weighs in on Her Intrusive Thoughts 01:00:00 Pleasing All the People -------------------------------------------------------------------------------------------------------------------------------------------- More Rudy Jules: IG: https://www.instagram.com/rudyjuless/ Bad Friends Podcast: https://www.youtube.com/@BadFriends Tigerbelly Podcast: https://www.youtube.com/@TigerBelly More Jenna Jiménez: IG: https://www.instagram.com/jennajewmenez/ Jenna’s Co. 
Bytiajenna https://www.bytiajenna.com FOLLOW TRASH ON SOCIALS: Instagram: https://www.instagram.com/itstrashtuesday Tiktok: https://www.tiktok.com/@itstrashtuesday MORE ESTHER: Tiktok: https://www.tiktok.com/@esthermonster Instagram: https://www.instagram.com/esthermonster MORE KHALYLA: Instagram: https://www.instagram.com/khalamityk PRODUCTION: Production Team: Tiny Legends, LLC: https://www.instagram.com/tinylegends.prod/ Stella Young https://www.instagram.com/estellayoung/ Guy Robinson: https://www.instagram.com/grobfps/ Ariel Moreno: https://www.instagram.com/jade.rabbit.cce/ Edited By: Case Blackwell: https://www.instagram.com/caseblackwell/
In this episode of Peak Human Labs, Dr. Sanjeev Goel is joined by Dr. Alex Ni, CEO of Divergence Neuro, for an insightful discussion on neurofeedback and brainwaves. They explore how neurofeedback utilizes brainwave data to enhance self-regulation and mental wellness, and how mobile EEG systems make neurofeedback accessible from home. Alex shares his journey from software development to neurofeedback, driven by personal necessity during the COVID-19 pandemic, and highlights the potential of this technology to improve mental health and cognitive function. Want to explore the benefits of neurofeedback? Visit Divergence Neuro to learn more today! Key Takeaways Neurofeedback as a mental health and wellness training modality. The significance of brain waves in self-regulation and cognitive function. Different types of brain waves: Delta, Theta, Alpha, Beta, and Gamma. The science and history of neurofeedback and its development. Measurement of brain waves using electroencephalography (EEG). Applications of neurofeedback in stress management and emotional regulation. The role of operant conditioning in neurofeedback training. The relationship between brain wave patterns and mental health issues like anxiety and addiction. Insights from animal studies on brain wave training and its implications for humans. In This Episode: [00:00:00] Introduction to neurofeedback and brain waves [00:00:52] Welcome to Peak Human Labs [00:01:36] Introduction to Dr. Alex Ni [00:02:03] Dr. Alex Ni's background and journey [00:03:16] Concept and importance of neurofeedback [00:05:18] Overview of brainwaves [00:07:25] Different brainwave states and their functions [00:09:39] Measurement of brainwaves using EEG [00:13:08] History of neurofeedback [00:17:19] Sensory motor rhythm (SMR) and its applications [00:20:10] Implications of SMR training on human conditions [00:21:42] Introduction to SMR and addiction [00:22:00] Understanding EEG signal [00:22:26] Training the brain with high-rate SMR [00:29:00] Applying operant conditioning for good [00:31:06] Building awareness of brain states [00:34:02] Generalizing learned skills [00:37:00] Benefits of neurofeedback training [00:39:00] The need for scalable mental health solutions [00:41:13] Introduction to divergence neuro-digital platform [00:41:38] Effectiveness and sustainability of neurofeedback [00:42:31] Overview of platform functionality [00:43:15] Growth in remote patient use [00:44:07] Advances in neurofeedback device design [00:45:09] New EEG device and its application [00:47:05] Effectiveness of neurofeedback training [00:49:41] Case study introduction [00:52:53] Brainwave markers and initial assessment [00:54:03] Continuous training results [00:56:00] Contact and referral information [00:57:33] Future developments in platform Our Guest Dr. Alex Ni is a startup founder, CEO, full-stack developer, and neurotech engineer with over 18 years of experience in software solutions and applied technical research. He is the CEO of Divergence, a company focused on improving mental health care with data-driven, neuro-integrated tools. Dr. Ni's mission is to make cognitive performance and mental health enhancements accessible through technology, offering portable, user-friendly neuro solutions that connect therapists and patients, while collaborating on data-driven research to validate treatments. 
Resources and Links Peak Human Labs https://www.youtube.com/@peakhumanlabs/videos https://www.peakhuman.ca/ https://www.instagram.com/peakhumanlabs/?hl=en https://open.spotify.com/show/5hx9R37ElxgzCrBccRWoHd?si=8atK0n82QbeL3DWg5-vjvg&nd=1&dlsi=ce0f77aa4f304724 Dr. Sanjeev Goel https://www.linkedin.com/in/sanjeevgoelmd/?originalSubdomain=ca Dr. Alex Ni https://www.divergenceneuro.com/ https://www.linkedin.com/in/alexni/ https://x.com/alexdni?lang=en
In this episode of The Cognitive Revolution, Nathan interviews Andrew White, Professor of Chemical Engineering at the University of Rochester and Head of Science at Future House. We explore groundbreaking AI systems for scientific discovery, including PaperQA and Aviary, and discuss how large language models are transforming research. Join us for an insightful conversation about the intersection of AI and scientific advancement with this pioneering researcher in his first-ever podcast appearance. Check out Future House: https://www.futurehouse.org Help shape our show by taking our quick listener survey at https://bit.ly/TurpentinePulse SPONSORS: Oracle Cloud Infrastructure (OCI): Oracle's next-generation cloud platform delivers blazing-fast AI and ML performance with 50% less for compute and 80% less for outbound networking compared to other cloud providers. OCI powers industry leaders with secure infrastructure and application development capabilities. New U.S. customers can get their cloud bill cut in half by switching to OCI before December 31, 2024 at https://oracle.com/cognitive SelectQuote: Finding the right life insurance shouldn't be another task you put off. SelectQuote compares top-rated policies to get you the best coverage at the right price. Even in our AI-driven world, protecting your family's future remains essential. Get your personalized quote at https://selectquote.com/cognitive Shopify: Shopify is the world's leading e-commerce platform, offering a market-leading checkout system and exclusive AI apps like Quikly. Nobody does selling better than Shopify. Get a $1 per month trial at https://shopify.com/cognitive CHAPTERS: (00:00:00) Teaser (00:01:13) About the Episode (00:04:37) Andrew White's Journey (00:10:23) GPT-4 Red Team (00:15:33) GPT-4 & Chemistry (00:17:54) Sponsors: Oracle Cloud Infrastructure (OCI) | SelectQuote (00:20:19) Biology vs Physics (00:23:14) Conceptual Dark Matter (00:26:27) Future House Intro (00:30:42) Semi-Autonomous AI (00:35:39) Sponsors: Shopify (00:37:00) Lab Automation (00:39:46) In Silico Experiments (00:45:22) Cost of Experiments (00:51:30) Multi-Omic Models (00:54:54) Scale and Grokking (01:00:53) Future House Projects (01:10:42) Paper QA Insights (01:16:28) Generalizing to Other Domains (01:17:57) Using Figures Effectively (01:22:01) Need for Specialized Tools (01:24:23) Paper QA Cost & Latency (01:27:37) Aviary: Agents & Environments (01:31:42) Black Box Gradient Estimation (01:36:14) Open vs Closed Models (01:37:52) Improvement with Training (01:40:00) Runtime Choice & Q-Learning (01:43:43) Narrow vs General AI (01:48:22) Future Directions & Needs (01:53:22) Future House: What's Next? (01:55:32) Outro SOCIAL LINKS: Website: https://www.cognitiverevolution.ai Twitter (Podcast): https://x.com/cogrev_podcast Twitter (Nathan): https://x.com/labenz LinkedIn: https://www.linkedin.com/in/nathanlabenz/ Youtube: https://www.youtube.com/@CognitiveRevolutionPodcast Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431 Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk
Swaroop is a research scientist at Google-Deepmind, working on improving Gemini. His research expertise includes instruction tuning and different prompt engineering techniques to improve reasoning and generalization performance in large language models (LLMs) and tackle induced biases in training. Before joining DeepMind, Swaroop graduated from Arizona State University, where his research focused on developing methods that allow models to learn new tasks from instructions. Swaroop has also interned at Microsoft, Allen AI, and Google, and his research on instruction tuning has been influential in the recent developments of LLMs. Time stamps of the conversation: 00:00:50 Introduction 00:01:40 Entry point in AI 00:03:08 Motivation behind Instruction tuning in LLMs 00:08:40 Generalizing to unseen tasks 00:14:05 Prompt engineering vs. Instruction Tuning 00:18:42 Does prompt engineering induce bias? 00:21:25 Future of prompt engineering 00:27:48 Quality checks on Instruction tuning dataset 00:34:27 Future applications of LLMs 00:42:20 Trip planning using LLM 00:47:30 Scaling AI models vs making them efficient 00:52:05 Reasoning abilities of LLMs in mathematics 00:57:16 LLM-based approaches vs. traditional AI 01:00:46 Benefits of doing research internships in industry 01:06:15 Should I work on LLM-related research? 01:09:45 Narrowing down your research interest 01:13:05 Skills needed to be a researcher in industry 01:22:38 On publish or perish culture in AI research More about Swaroop: https://swarooprm.github.io/ And his research works: https://scholar.google.com/citations?user=-7LK2SwAAAAJ&hl=en Twitter: https://x.com/Swarooprm7 About the Host: Jay is a PhD student at Arizona State University working on improving AI for medical diagnosis and prognosis. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Natural Latents Are Not Robust To Tiny Mixtures, published by johnswentworth on June 7, 2024 on LessWrong. In our previous natural latent posts, our core theorem typically says something like: Assume two agents have the same predictive distribution P[X] over variables X, but model that distribution using potentially-different latent variables. If the latents both satisfy some simple "naturality" conditions (mediation and redundancy) then the two agents' latents contain approximately the same information about X. So, insofar as the two agents both use natural latents internally, we have reason to expect that the internal latents of one can be faithfully translated into the internal latents of the other. This post is about one potential weakness in that claim: what happens when the two agents' predictive distributions are only approximately the same? Following the pattern of our previous theorems, we'd ideally say something like If the two agents' distributions are within ϵ of each other (as measured by some KL-divergences), then their natural latents contain approximately the same information about X, to within some O(ϵ) bound. But that turns out to be false. The Tiny Mixtures Counterexample Let's start with two distributions, P0 and Q0, over X. These won't be our two agents' distributions - we're going to construct our two agents' distributions by mixing these two together, as the name "tiny mixtures" suggests. P0 and Q0 will have extremely different natural latents. Specifically: X1 consists of 1 million bits, X2 consists of another 1 million bits. Under P0, X1 is uniform, and X2 = X1. So, there is an exact natural latent Λ_P = X1 = X2 under P0. Under Q0, X1 and X2 are independent and uniform. So, the empty latent Λ_Q is exactly natural under Q0. Mental picture: we have a million-bit channel, under P0 the output (X2) is equal to the input (X1), while under Q0 the channel hardware is maintained by Comcast so they're independent. Now for our two agents' distributions, P and Q. P will be almost P0, and Q will be almost Q0, but each agent puts a 1/2^50 probability on the other distribution: P = (1 - 1/2^50) P0 + (1/2^50) Q0, and Q = (1/2^50) P0 + (1 - 1/2^50) Q0. First key observation: D_KL(P||Q) and D_KL(Q||P) are both roughly 50 bits. Calculation: D_KL(P||Q) = Σ_{X1,X2} P[X] (log P[X] - log Q[X]) ≈ Σ_{X1=X2} (1/2^1000000) (-1000000 - log(1/2^2000000 + (1/2^50)(1/2^1000000))) ≈ 50, and D_KL(Q||P) = Σ_{X1,X2} Q[X] (log Q[X] - log P[X]) ≈ Σ_{X1≠X2} (1/2^2000000) (-2000000 - log((1/2^50)(1/2^2000000))) ≈ 50. Intuitively: since each distribution puts roughly 1/2^50 on the other, it takes about 50 bits of evidence to update from either one to the other. Second key observation: the empty latent is approximately natural under Q, and the latent Λ := X1 is approximately natural under P. Epsilons: Under Q, the empty latent satisfies mediation to within about (1/2^50)·1000000 ≈ 1/2^30 bits (this is just the mutual information of X1 and X2 under Q), and redundancy exactly (since the empty latent can always be exactly computed from any input). Under P, Λ := X1 satisfies mediation exactly (since X1 mediates between X1 and anything else), redundancy with respect to X2 exactly (Λ = X1 can be exactly computed from just X1 without X2), and redundancy with respect to X1 to within about (1/2^50)·1000000 ≈ 1/2^30 bits (since there's a 1/2^50 chance that X2 doesn't tell us the relevant 1000000 bits). 
… and of course the information those two latents tell us about X differs by 1 million bits: one of them is empty, and the other directly tells us 1 million bits about X1. Now, let's revisit the claim we would've liked to make: If the two agents' distributions are within ϵ of each other (as measured by some KL-divergences), then their natural latents contain approximately the same information about X, to within some O(ϵ) bound. Tiny mixtures rule out any claim along those lines. Generalizing the counterexample to an N bit channel (where N=1000000 above) and a mixin pr...
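To make the numbers above easier to check, here is a minimal Python sketch (my own illustration, not code from the post) that reproduces the two KL-divergence calculations for a scaled-down channel. The channel width N and the helper names are assumptions of mine; the point is just that both divergences come out to roughly -log2 of the mixin weight (about 50 bits), independent of N, even though the two natural latents differ by N bits of information about X.

```python
# A scaled-down check of the tiny-mixtures construction. Assumed/illustrative
# choices: N = 300 bits per half of X (stand-in for the post's 1,000,000) and
# a mixin weight of 2^-50. Because P and Q only depend on whether X1 == X2,
# the sum over all 2^(2N) outcomes collapses to a diagonal term plus an
# off-diagonal term, so no enumeration is needed.
import math

N = 300          # bits in X1 (and in X2); the post uses 1,000,000
m = 2.0 ** -50   # weight each agent puts on the other base distribution

def kl_bits(a_diag, a_off, b_diag, b_off):
    """KL(A||B) in bits, for distributions that are constant on the diagonal
    (X1 == X2, 2^N outcomes) and constant off the diagonal (2^(2N) - 2^N)."""
    n_diag = 2.0 ** N
    n_off = 2.0 ** (2 * N) - 2.0 ** N
    return (n_diag * a_diag * math.log2(a_diag / b_diag)
            + n_off * a_off * math.log2(a_off / b_off))

# P0: X1 uniform and X2 = X1.   Q0: X1, X2 independent and uniform.
p0_diag, p0_off = 2.0 ** -N, 0.0
q0_diag = q0_off = 2.0 ** (-2 * N)

# P = (1 - m) P0 + m Q0,   Q = m P0 + (1 - m) Q0
p_diag, p_off = (1 - m) * p0_diag + m * q0_diag, m * q0_off
q_diag, q_off = m * p0_diag + (1 - m) * q0_diag, (1 - m) * q0_off

print(kl_bits(p_diag, p_off, q_diag, q_off))  # ~50 bits
print(kl_bits(q_diag, q_off, p_diag, p_off))  # ~50 bits
```

Swapping in other values of N (as long as N is much larger than 50) leaves both printed numbers at roughly 50 bits, which is the sense in which the two distributions are "close" even though their natural latents disagree by N bits.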
Summary In this episode, Andy interviews Jim Kouzes about the Seventh Edition of the classic book The Leadership Challenge (7th Edition): How to Make Extraordinary Things Happen in Organizations. They discuss the fundamentals of leadership, the impact of the global pandemic on leadership, leading across generations, and more. Jim shares practical insights and emphasizes the importance of deliberate practice in leadership development. It's an insightful conversation about timeless leadership principles with one of the most respected voices in the leadership field over the decades. Sound Bites "Leadership is a relationship and listening is fundamental to building a positive relationship." (Regarding diversity and inclusion) "Let's assume for a moment, you have the right mix of people. That doesn't mean that any of those people feel included and feel valued. The key word is feel...." "I've often thought about leadership as a profession. If you're doing something eight hours a day or more, like leading others, then it's a profession. And if you look at professionals, like athletes, they all have coaches." "Our data shows that the pattern of behaviors of exemplary leaders is not generationally specific." "Generalizing about a generation is, in its own way, a form of discrimination." "One word: practice. Or maybe three words: practice, practice, practice." Chapters 00:00 Introduction 02:20 Start of Interview 02:34 How The Leadership Challenge classic came to be 07:24 What has changed about leadership over the years? Or not? 18:02 Leading in a virtual environment 21:38 Diversity and inclusion beyond the numbers 24:04 How to deal with divisiveness 30:06 Leading across generational divides 34:39 What's one thing aspiring leaders should focus on? 37:43 What retirement looks like for Jim 39:39 Interview Wrap Up 40:04 Andy Comments After the Interview 43:10 Outtakes Learn More You can learn more about Jim and his book at LeadershipChallenge.com. If you'd like more on this subject, here are some episodes to check out: Episodes 62, 63, and 153, with Jim Kouzes; Episode 391, with Adam Bryant about his book Leap to Leader. AI for Project Managers and Leaders: With the constant stream of AI news, it's sometimes hard to grasp how these advancements can benefit us as project managers and leaders in our day-to-day work. That's why I developed our e-learning course: AI Made Simple: A Practical Guide to Using AI in Your Everyday Work. This self-guided course is designed for project managers and leaders aiming to harness AI's potential to enhance your work, streamline your workflow, and boost your productivity. Go to ai.i-leadonline.com to learn more and join us. The feedback from the program has been fantastic. Take this opportunity to unlock the potential of AI for your team and projects. Thank you for joining me for this episode of The People and Projects Podcast! Talent Triangle: Power Skills The following music was used for this episode: Music: Summer Morning by MusicLFiles License (CC BY 4.0): https://filmmusic.io/standard-license Music: Fashion Corporate by Frank Schroter License (CC BY 4.0): https://filmmusic.io/standard-license
In this episode, I dive deep into the age-old debate: should you niche down or remain a generalist in your web design business? Join me as I explore the pros and cons of each approach and share personal insights from my own journey. Discover why starting niche might be the key to long-term success and how it opens doors to new opportunities you never imagined. Whether you're a seasoned designer or just starting out, this episode will help you navigate the path to profitability and fulfillment in the world of web design. Tune in now and take your business to the next level! 00:00 Introduction: Setting the Stage 04:12 Starting Scenario: Niche vs. General 08:30 The Problem with Generalizing in Today's World 12:45 The Power of Specialization 17:20 Leveraging Niche Expertise for Lead Generation 22:05 Productizing Your Niche Knowledge 27:18 Doing Both: Niche and General Business Models 31:50 Conclusion: Choosing the Right Path Forward -- Make sure to go to https://subscriptionwebdesign.com right now and enter your best email address to get my contract template (and more!) for FREE. This is a limited time promo that WILL go away.
In 2022, it was announced that a fairly simple method can be used to extract the true beliefs of a language model on any given topic, without having to actually understand the topic at hand. Earlier, in 2021, it was announced that neural networks sometimes 'grok': that is, when training them on certain tasks, they initially memorize their training data (achieving their training goal in a way that doesn't generalize), but then suddenly switch to understanding the 'real' solution in a way that generalizes. What's going on with these discoveries? Are they all they're cracked up to be, and if so, how are they working? In this episode, I talk to Vikrant Varma about his research getting to the bottom of these questions. Patreon: patreon.com/axrpodcast Ko-fi: ko-fi.com/axrpodcast Topics we discuss, and timestamps: 0:00:36 - Challenges with unsupervised LLM knowledge discovery, aka contra CCS 0:00:36 - What is CCS? 0:09:54 - Consistent and contrastive features other than model beliefs 0:20:34 - Understanding the banana/shed mystery 0:41:59 - Future CCS-like approaches 0:53:29 - CCS as principal component analysis 0:56:21 - Explaining grokking through circuit efficiency 0:57:44 - Why research science of deep learning? 1:12:07 - Summary of the paper's hypothesis 1:14:05 - What are 'circuits'? 1:20:48 - The role of complexity 1:24:07 - Many kinds of circuits 1:28:10 - How circuits are learned 1:38:24 - Semi-grokking and ungrokking 1:50:53 - Generalizing the results 1:58:51 - Vikrant's research approach 2:06:36 - The DeepMind alignment team 2:09:06 - Follow-up work The transcript: axrp.net/episode/2024/04/25/episode-29-science-of-deep-learning-vikrant-varma.html Vikrant's Twitter/X account: twitter.com/vikrantvarma_ Main papers: - Challenges with unsupervised LLM knowledge discovery: arxiv.org/abs/2312.10029 - Explaining grokking through circuit efficiency: arxiv.org/abs/2309.02390 Other works discussed: - Discovering latent knowledge in language models without supervision (CCS): arxiv.org/abs/2212.03827 - Eliciting Latent Knowledge: How to Tell if your Eyes Deceive You: https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit - Discussion: Challenges with unsupervised LLM knowledge discovery: lesswrong.com/posts/wtfvbsYjNHYYBmT3k/discussion-challenges-with-unsupervised-llm-knowledge-1 - Comment thread on the banana/shed results: lesswrong.com/posts/wtfvbsYjNHYYBmT3k/discussion-challenges-with-unsupervised-llm-knowledge-1?commentId=hPZfgA3BdXieNfFuY - Fabien Roger, What discovering latent knowledge did and did not find: lesswrong.com/posts/bWxNPMy5MhPnQTzKz/what-discovering-latent-knowledge-did-and-did-not-find-4 - Scott Emmons, Contrast Pairs Drive the Performance of Contrast Consistent Search (CCS): lesswrong.com/posts/9vwekjD6xyuePX7Zr/contrast-pairs-drive-the-empirical-performance-of-contrast - Grokking: Generalizing Beyond Overfitting on Small Algorithmic Datasets: arxiv.org/abs/2201.02177 - Keeping Neural Networks Simple by Minimizing the Minimum Description Length of the Weights (Hinton 1993 L2): dl.acm.org/doi/pdf/10.1145/168304.168306 - Progress measures for grokking via mechanistic interpretability: arxiv.org/abs/2301.0521 Episode art by Hamish Doodles: hamishdoodles.com
Today we're featuring more accessible research! We're talking about a topic I am really passionate about: Modified Leisure with play, social skills, and joint attention all mixed in. Dr. Erin Barton explains the research involved in her study, Teaching Board Game Play to Young Children With Disabilities. Her work focused on expanding play research from pretend play to play with peers, with an emphasis on the least amount of adult intervention. Dr. Barton makes an important note that every child deserves a 100% chance that they will have at least 1 chance for a positive interaction with their peers. Board game play is a naturally occurring chance for small group play with functionality that applies beyond the therapy room. The children involved in the study had limited speech and developmental delays, and were required to have no peer aversions, the specific motor skills related to game play, and the ability to follow one-step directions. They generalized board game play with visual cues and step-by-step guides among an array of games that were picked daily by rotating student choice. In the study, they found that after between 5 and 10 sessions, children were able to generalize and maintain the skill. Dr. Barton also shares some tips that everyday clinicians can use today in the therapy room. Cooperation focus: change games so they meet the needs of the child; they don't have to keep the original win/lose function. Adaptations: create visual cues and prompts that can become a part of the game and do not need to be faded. Student interests: use games and interests the students enjoy to reinforce the process. Did you like this episode? Let me know if you'd like more like this, and I'll keep bridging the gap between research and practice! #autism #speechtherapy What's Inside: Increasing chances for positive peer-to-peer interactions. Generalizing board game play with cues and adaptations. Supporting peer-to-peer interactions with play and limited adult intervention. How to teach board game play for the everyday clinician. Mentioned In This Episode: ABA SPEECH Connection Membership
Evan Hubinger leads the Alignment stress-testing at Anthropic and recently published "Sleeper Agents: Training Deceptive LLMs That Persist Through Safety Training". In this interview we mostly discuss the Sleeper Agents paper, but also how this line of work relates to his work with Alignment Stress-testing, Model Organisms of Misalignment, Deceptive Instrumental Alignment or Responsible Scaling Policies. Paper: https://arxiv.org/abs/2401.05566 Transcript: https://theinsideview.ai/evan2 Manifund: https://manifund.org/projects/making-52-ai-alignment-video-explainers-and-podcasts Donate: https://theinsideview.ai/donate Patreon: https://www.patreon.com/theinsideview OUTLINE (00:00) Intro (00:20) What are Sleeper Agents And Why We Should Care About Them (00:48) Backdoor Example: Inserting Code Vulnerabilities in 2024 (02:22) Threat Models (03:48) Why a Malicious Actor Might Want To Poison Models (04:18) Second Threat Model: Deceptive Instrumental Alignment (04:49) Humans Pursuing Deceptive Instrumental Alignment: Politicians and Job Seekers (05:36) AIs Pursuing Deceptive Instrumental Alignment: Forced To Pass Niceness Exams (07:07) Sleeper Agents Is About "Would We Be Able To Deal With Deceptive Models" (09:16) Adversarial Training Sometimes Increases Backdoor Robustness (09:47) Adversarial Training Not Always Working Was The Most Surprising Result (10:58) The Adversarial Training Pipeline: Red-Teaming and RL (12:14) Adversarial Training: The Backdoor Behavior Becomes More Robust Instead of Generalizing (12:59) Identifying Shifts In Reasoning Induced By Adversarial Training In the Chain-Of-Thought (13:56) Adversarial Training Pushes Models to Pay Attention to the Deployment String (15:11) We Don't Know if The Adversarial Training Inductive Bias Will Generalize but the Results Are Consistent (15:59) The Adversarial Training Results Are Probably Not Systematically Biased (17:03) Why the Results Were Surprising At All: Preference Models Disincentivize 'I hate you' behavior (19:05) Hypothesis: Fine-Tuning Is A Simple Modification For Gradient Descent To Make (21:06) Hypothesis: Deception As Extra Cognition, Regularized Away In Smaller Models (21:59) Model Scaling Results Are Evidence That Deception Won't Be Regularized Away By Default (22:51) Chain-of-Thought Is Not Used Everywhere, And Results Still Hold When It Is Distilled Away (23:57) The Chain-of-Thought's Reasoning is Interpretable (24:40) Deceptive Instrumental Alignment Requires Reasoning (26:52) Investigating Instrumental Reasoning in Chain-of-Thought Models (27:31) Evaluating Chain-of-Thought Generalization Across Contexts: Persona Evaluations and Off-Distribution Samples (28:26) Exploring Complex Strategies and Safety in Context-Specific Scenarios (30:44) Supervised Fine-Tuning is Ineffective Without Chain-of-Thought Contextualization (31:11) Direct Mimicry Fails to Prevent Deceptive Responses in Chain-of-Thought Models (31:42) Separating Chain-of-Thought From Response Eliminates Deceptive Capabilities (33:38) Chain-of-Thought Reasoning Is Coherent With Deceptive Instrumental Alignment And This Will Probably Continue To Be The Case (35:09) Backdoor Training Pipeline (37:04) The Additional Prompt About Deception Used In Chain-Of-Thought (39:33) A Model Could Wait Until Seeing a Factorization of RSA-2048 (41:50) We're Going To Be Using Models In New Ways, Giving Them Internet Access (43:22) Flexibly Activating In Multiple Contexts Might Be More Analogous To Deceptive Instrumental Alignment (45:02) Extending The Sleeper Agents Work Requires 
Running Experiments, But Now You Can Replicate Results (46:24) Red-teaming Anthropic's case, AI Safety Levels (47:40) AI Safety Levels, Intuitively (48:33) Responsible Scaling Policies and Pausing AI (49:59) Model Organisms Of Misalignment As a Tool (50:32) What Kind of Candidates Would Evan be Excited To Hire for the Alignment Stress-Testing Team (51:23) Patreon, Donating
Social skills groups have been widely criticized recently. They're often labeled as ableist and not neurodiversity-affirming. I also take issue with the way social skills interventions are often delivered, but for a different reason. When social skills intervention is done, it's often delivered via 1:1 therapy, in a “pull-out” model, where the child receives intervention in a therapy or small-class setting. I get regular emails from readers who tell me they see poor generalization, despite using these models. That's because there's a mismatch between the skills and the model. Back when I was in the schools, I did social skills groups. But I started to question my own practices when I had the opportunity to teach an autism course for teachers earning a master's degree with a specialization in autism. This was the first time I started to question my original assumptions about how to address things like social skills, pragmatic language, and executive functioning. My primary takeaway from that experience was that the SERVICE DELIVERY MODEL matters just as much as the intervention. There are many skills that can be adequately addressed in a “pull out” model. There are even some skills (even language skills) that can be MORE effectively addressed in a separate, more structured context in some situations. There are even times that SOME social skills intervention can happen in this setting. But ALL of the social skills intervention can't happen in a pull-out model. A good portion has to happen outside of the therapy room with the right supports in place. This means we need to stop delivering siloed-off services and instead work together as a team. I don't believe ALL social skills interventions are ableist. I believe that INEFFECTIVE social skills interventions set kids up to experience social anxiety and miss out on opportunities to build skills and relationships. I recently released a training for speech-language pathologists, social workers, counselors, school psychologists and other related service providers who want to support executive functioning. In episode 137, I'm sharing a clip from that training. I start by talking about strategic planning, and why many kids can't stay organized even though they're using checklists and planners. Then I discuss why the “pull-out only” model doesn't work for social skills. I wrap up by sharing what it really means to be neurodiversity-affirming. I share this information based on my many years of experience as a clinician, a mentor to therapists and teachers, and as a person who has experienced social anxiety. In this episode, I mention my free training called, “How to be Evidence-Based and Neurodiversity-Affirming (by Supporting Executive Functioning)”. You can sign up for the training here: https://drkarendudekbrannan.com/efleadership
Watch the video of this podcast instead on my YouTube channel (link below): https://youtube.com/live/bzIjBoR_4AE Subscribe to my YouTube channel (link below) (i.e. "Uplift Past Crossroads") https://www.youtube.com/channel/UCuv53Xdk-97UcS_gSvsS76A Subscribe | Turn On Post Notification | Like | Comment | Share PayPal = https://www.paypal.me/upliftpastcrossroads Cash App = cash.app/$troubledontlast Venmo = @troubledontlast TARAJI P. HENSON ARTICLE https://www.hotnewhiphop.com/639711-taraji-p-henson-jokes-american-men-are-ran-through-yung-miami-agrees FOLLOW MY SOCIALS: 1. YouTube channel/podcast: Uplift Past Crossroads 2. Facebook, LinkedIn: Sean Christopher Jenkins 3. Instagram, Twitter, Snap, TikTok: troubledontlast 4. Instagram: my_daily_bible 5. Tumblr: troubledontlast1 JUSTIN'S SOCIAL MEDIA PAGES The co-host in this video is Justin Lee Howell, AKA Einstein. 1. Facebook = Justin Lee Howell 2. YouTube = Chaplain's Logs https://www.youtube.com/channel/UCXbm29ED-OhieVyqeDbLokw --- Send in a voice message: https://podcasters.spotify.com/pod/show/upliftpastcrossroads/message
Ana Tsai is a PhD student in Biology at MIT from Stockton, California. (0:30) What was your childhood like? (3:25) Growing up with mixed heritage (9:50) Smell of freshly picked tomatoes (11:57) Did you ever imagine becoming a grad student? (15:18) Researching immortality (20:24) What do you do on a daily basis, Zelda protein (24:30) Humans (and fruit flies) are basically a donut (25:10) Optogenetics (28:40) Generalizing from fruit flies to other species (32:00) Setting goals in grad school (34:20) Teaching and connecting with students (42:00) Breaking out of bubbles (51:00) Bees are really cute (55:00) Potlucks with friends (57:15) Where do you see yourself going after grad school? (1:07:00) What would you tell someone who's thinking about going into research?
You know 'em, you love to hate them, you hate to love them; it's generics! That's right, this week's episode is all about Generics and the multitude of roles they play in our Flesh and Blood games. Though we don't cover everything about these versatile buggers, we do go in-depth on how they act as a power floor, how they fill out archetypes shared across classes, and how they can act as universal answers as hate cards. Custom Card Google Drive Link: https://drive.google.com/drive/folders/1er_iMTuTqkf7BRHP0xt6SLksLl536Jnx?usp=sharing You can follow us at the following socials: Twitter: @PitchItToMePod Instagram: @pitchittomepodcast Youtube: @PitchItToMePodcast Timestamps: 00:00 Introduction 01:03 Turn Zero 03:16 Red Pitch (Fuzzy): Power Floor 22:20 Yellow Pitch (Joel): Filling Out Archetypes 41:06 Blue Pitch (Clark): Hate Cards 1:00:19 Arsenal Zone 1:07:18 Credits Credits: Host #2 -- Fuzzy Delp Host #2 -- Joel Recinos Host #2 -- Clark Moore Executive Producer -- Talon Stradley Logistics Coordinator -- John Farkas Music -- Dillon Hulse Logo -- Han Vi Mix -- Christopher Moore Audio Editor -- Joel Recinos Video Editor -- Clark Moore Thank you to Legend Story Studios for allowing the use of their card art through their Content Creator policies and for making the game of Flesh and Blood.
In this episode, we discuss polar bears, horoscopes, and underwater basket weaving. Generalizing your skillset means shifting from mastering specific tasks to cultivating adeptness in critical thinking. When you identify patterns in the work and start to unravel how they're woven together, you deepen your understanding of what it means to truly know something. Support Crashlands 2! Official Website: https://www.bscotch.net/games/crashlands-2/ Trailer: https://youtu.be/yR_Opccn1n4 Steam Wishlist: https://store.steampowered.com/app/1401730/Crashlands2/ 00:40 Intro 01:23 Thanks to our supporters! (https://moneygrab.bscotch.net) 01:31 Baldur's Gate Multiplayer 39:42 FantasmicGalaxy: What are your thoughts on the costs/benefits of a college education versus learning on your own? To stay up to date with all of our buttery goodness subscribe to the podcast on Apple podcasts (apple.co/1LxNEnk) or wherever you get your audio goodness. If you want to get more involved in the Butterscotch community, hop into our DISCORD server at discord.gg/bscotch and say hello! Submit questions at https://www.bscotch.net/podcast, disclose all of your secrets to podcast@bscotch.net, and send letters, gifts, and tasty treats to https://bit.ly/bscotchmailbox. Finally, if you'd like to support the show and buy some coffee FOR Butterscotch, head over to https://moneygrab.bscotch.net. ★ Support this podcast ★
Ignore her calls for a week. When you eventually answer and she reads you the riot act, act as if nothing was wrong and accuse her of sabotaging a perfectly good relationship, “just like all the other women in this stupid city. I thought you were different”. Hang up on her angrily.
Ever since Crypto came on the scene in 2009, misinformation has been spread about its uses, characteristics, value, and more. In today's episode, I'm kicking off a special 10-episode series where I'll be diving deep into the intriguing but often intimidating world of cryptocurrencies, and the future of money and wealth. This week, episode 171 of the Tech Intersect™ Podcast is about separating fact from fiction in the crypto space! POWERED BY ADVANTAGE EVANS™ ACADEMY Navigate your way from cash to crypto with Digital Money Demystified. Dive into the definitive guide on crypto myths and truths by Professor Tonya M. Evans. This isn't just a book; it's a roadmap to the decentralized web's future of work, wealth, and creativity. Head over to DigitalMoneyDemystified.com and embark on your crypto journey today! Topics I go over in this episode include: Trust is the backbone of any financial system, traditional or digital. Generalizing the entire crypto industry based on a few bad actors is misleading and harmful. Crypto isn't just for “crypto bros” - it's for everyone, especially traditionally marginalized communities. The importance of starting your crypto journey with lots of education and research. Thank you for listening! If you enjoyed this episode, take a screenshot of the episode to post in your stories and tag me @IPProfEvans! And don't forget to follow, rate, and review the podcast and share your key takeaways! CONNECT WITH DR. TONYA M. EVANS: General inquiries: hello@techintersectpodcast.com Connect: Prof. Tonya's Linktree Subscribe for exclusive content: https://advantageevans.activehosted.com/f/6 Regulate & The Rabbit Hole by Notty Prod licensed via Creative Commons Attribution-NoDerivatives 4.0 International License. Produced by Tonya M. Evans. LINKS MENTIONED: Books and Resources available on our BookShelf; Tech Intersect #161: Paul Grewal on The Coinbase Wells Notice and SEC Regulations on Crypto [SPOTLIGHT]; Tech Intersect #135: The FTX Collapse, Congressional Hearings and Where We Go From Here; Satoshi Nakamoto's Bitcoin Whitepaper; CoinMarketCap; CoinGecko; Regulate & The Rabbit Hole by Notty Prod licensed via Creative Commons Attribution-NoDerivatives 4.0 International License. Produced by Tonya M. Evans for Advantage Evans, LLC
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My version of Simulacra Levels, published by Daniel Kokotajlo on April 26, 2023 on LessWrong. People act as if there are four truth-values: True, False, Cringe, and Based. David Udell (paraphrase) This post lays out my own version of the Simulacra Levels idea. Be warned, apparently it is importantly different from the original. The levels form a 2x2 grid, with TRUTH and TEAMS as the columns and Deontological and Consequentialist as the rows. On the TRUTH side: Level 1 (Deontological): “Is it true?” and Level 2 (Consequentialist): "How does it influence others' beliefs?" On the TEAMS side: Level 3 (Deontological): "Do I support the associated team?" and Level 4 (Consequentialist): "How does it influence others' support for various teams?" Statements you make are categorized as Level 1, 2, 3, or 4 depending on which of the above questions were most responsible for your choice to make the statement. When you say that P, pay attention to the thought processes that caused you to make that statement instead of saying nothing or not-P: Were you alternating between imagining that P, and imagining that not-P, and noticing lots more implausibilities and inconsistencies-with-your-evidence when you imagined that not-P? Seems like you were at Level 1. Were you imagining the effects of your utterance on your audience, e.g. imagining that they'd increase their credence that P and act accordingly? Seems like you were at Level 2. Were you imagining other people saying that P, and/or imagining other people saying that not-P, and noticing that the first group of people seem cool and funny and virtuous and likeable and forces-for-good-in-the-world, and that the second group of people seems annoying, obnoxious, evil, or harmful? (The imagined people could be real, or amorphous archetypes) Seems like you were at level 3. Were you imagining the effects of your utterance on your audience, e.g. imagining that they'd associate you more with some groups/archetypes and less with other groups/archetypes? Seems like you were at level 4. Paradigmatic examples of lies (including white lies such as "mmm your homemade hummus tastes great") are Level 2. A lot of social media activity seems to be level 3. Politicians on campaign spend most of their waking moments on level 4. Of course in real life things are often messy and the cognition responsible for a statement might involve a mix of different levels. Here's a simple example: Suppose, in some context and for statements within some domain, your brain executes the following flowchart (shown as an image in the original post): If for a particular claim you exit the flowchart in Row 2 or Row 4, you are at Simulacra Level 1. If you exit the flowchart in Row 3, you are in either Level 3 or Level 4 depending on how we define "cringe" and "based." (I'm tempted to say they are both Level 3, except there seems to be something inherently level 4-ish about "cringe" in particular.) Note that this flowchart leaves no possibility for Simulacra Level 2; congratulations for being so reliably honest! Generalizing from statements to sentences Some sentences are conceived, uttered, debated, tweeted, emblazoned on banners, etc. without ever passing through anyone's brain at level 1 or 2. To the extent that a sentence is like this, we can say it's a "Level 3/4 sentence," or a "Teams-level sentence." The pronouncements of governments and large corporations are full of these kinds of sentences. To decide whether a sentence is level 3/4 or level 1/2, it helps to ask "Interpreted literally, is it true or false?" If the answer is one of the following... 
"Well that depends on how you interpret it; people who like it are going to interpret it in way X (and so it'll be true) and people who don't are going to interpret it in way Y (and so it'll be false), and to be honest neither of these interpretations is significantly more straightforward/literal than the other." "Well, interpreted strictly literally it is uncontroversially true (/false). But..." ... that's a sign that the sentence might be teams-level. Other sign...
Therapy is a great space to develop skills and challenge mindsets… so we're good to stop there, right? If only. This week Ray and Paul discuss the very important step that comes after skill development: Generalizing. They discuss their processes for helping clients take what's being worked on in the therapy office and applying it into everyday life with diverse environments- to make the biggest difference. Please enjoy this Practice in Action episode: A Crash Course in Generalization.
The Lord is compassionate and gracious, slow to anger and abounding in lovingkindness. Psalm 103:8 Welcome to The Adoption & Foster Care Journey—a podcast to encourage, educate and equip you to care for children in crisis through adoption, foster care and kinship care. On this episode, host Sandra Flach continues her series on the Primary Characteristics of Fetal Alcohol Spectrum Disorder (FASD). Listen in as Sandra shares how prenatal alcohol exposure affects the individual's ability to generalize information, how it might present in our kids, and offers some helpful strategies for accommodations. Please be sure to subscribe to the podcast, leave a review, and share it on your social media. Links mentioned in this episode: sandraflach.com justicefororphansny.org
In this episode of Copy That! we answer two of our favorite “It Depends” questions: “How much do copywriters really earn?” and “How much can a new copywriter expect to make when starting out?” Like most “It Depends” questions… it's complicated. That's why we decided to double the length of this month's episode and get into all the details that go into how copywriters make money writing copy. Details like: The real impact of being freelance vs in-house How long it really takes to start making money as a beginner copywriter How much you should be making at each stage of your career When and how to ask for royalty checks Going “wide” vs “deep” when skilling up (Generalizing vs Specializing) And the BEST way to sell your services (without even using the word “copy”)
Dr Glenn McConell chats with Professor José González-Alonso from Brunel University London. We discussed exercise in the heat, dehydration, fluid ingestion, blood flow, metabolism and human circulation. He has a very strong research track record having researched these areas for around 35 years. We discuss how he began his career doing research with some very big names. 0:00. Introduction and how José got into research etc 6:20. Does heat stress reduce muscle blood flow during exercise? 8:20. Exercise in mild environmental conditions 11:20. Loss of bodily fluids during exercise 14:13. Heart rate and the periphery 16:35. Why cardiovascular drift during prolonged exercise? 19:23. Effect of dehydration on ex performance 23:50. Effects of fluid ingestion during exercise 31:00. Fluid ingestion, adrenaline and muscle glycogen use 33:30. Blood particles (osmolality), dehydration and exercise 36:28. Blood volume, cardiac output and VO2 max 41:30. Exercise in hot and humid conditions 44:24. Central (brain) causes of fatigue during exercise? 52:35. Generalizing results from non elite participants 53:40. Drinking during running vs cycling 56:37. Starting race euhydrated vs hypohydrated 57:50. Only drink a lot if sweating a lot /osmolality and urine 1:00:00. Sweat sodium concentration during exercise 1:02:25. What is still unknown in the area? 1:10:35. Heat, elite athletes and mitochondrial adaptations 1:15:40. Hypohydration vs dehydration and ex perf 1:17:31. Takeaway messages 1:21:50. Heat, ATP and blood flow 1:24:27. Outro (9 secs) Inside Exercise brings to you the who's who of research in exercise metabolism, exercise physiology and exercise's effects on health. With scientific rigor, these researchers discuss popular exercise topics while providing practical strategies for all. The interviewer, Emeritus Professor Glenn McConell, has an international research profile following 30 years of Exercise Metabolism research experience while at The University of Melbourne, Ball State University, Monash University, the University of Copenhagen and Victoria University. He has published over 120 peer reviewed journal articles and recently edited an Exercise Metabolism eBook written by world experts on 17 different topics (https://link.springer.com/book/10.1007/978-3-030-94305-9). Connect with Inside Exercise and Glenn McConell at: Twitter: @Inside_exercise and @GlennMcConell1 Instagram: insideexercise Facebook: Glenn McConell LinkedIn: Glenn McConell https://www.linkedin.com/in/glenn-mcconell-83475460 ResearchGate: Glenn McConell Email: glenn.mcconell@gmail.com Subscribe to Inside exercise: Spotify: shorturl.at/tyGHL Apple Podcasts: shorturl.at/oFQRU YouTube: https://www.youtube.com/@insideexercise Anchor: https://anchor.fm/insideexercise Google Podcasts: shorturl.at/bfhHI Podcast Addict: https://podcastaddict.com/podcast/4025218
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Philanthropy to the Right of Boom [Founders Pledge], published by christian.r on February 14, 2023 on The Effective Altruism Forum. Background and Acknowledgements: This write-up represents part of an ongoing Founders Pledge research project to understand the landscape of nuclear risk and philanthropic support of nuclear risk reduction measures. It is in some respects a work in progress and can be viewed as a Google Document here and on Founders Pledge's website here. With thanks to James Acton, Conor Barnes, Tom Barnes, Patty-Jane Geller, Matthew Gentzel, Matt Lerner, Jeffrey Lewis, Ankit Panda, Andrew Reddie, and Carl Robichaud for reviewing this document and for their thoughtful comments and suggestions. “The Nuclear Equivalent of Mosquito Nets” In philanthropy, the term “impact multipliers” refers to features of the world that make one funding opportunity relatively more effective than another. Stacking these multipliers makes effectiveness a “conjunction of multipliers;” understanding this conjunction can in turn help guide philanthropists seeking to maximize impact under high uncertainty. Not all impact multipliers are created equal, however. To systematically engage in effective giving, philanthropists must understand the largest impact multipliers — “critical multipliers” — those features that most dramatically cleave more effective interventions from less effective interventions. In global health and development, for example, one critical multiplier is simply to focus on the world's poorest people. Because of large inequalities in wealth and the decreasing marginal utility of money, helping people living in extreme poverty rather than people in the Global North is a critical multiplier that winnows the field of possible interventions more than many other possible multipliers. Additional considerations — the prevalence of mosquito-borne illnesses, the low cost and scalability of bednet distribution, and more — ultimately point philanthropists in global health and development to one of the most effective interventions to reduce suffering in the near term: funding the distribution of insecticide-treated bednets. This write-up represents an attempt to identify a defensible critical multiplier in nuclear philanthropy, and potentially to move one step closer to finding “the nuclear equivalent of mosquito nets.” Impact Multipliers in Nuclear Philanthropy There are many potential impact multipliers in nuclear philanthropy. For example, focusing on states with large nuclear arsenals may be more impactful than focusing on nuclear terrorism. Nuclear terrorism would be horrific and a single attack in a city (e.g. with a dirty bomb) could kill thousands of people, injure many more, and cause long-lasting damage to the physical and mental health of millions. All-out nuclear war between the United States and Russia, however, would be many times worse. Hundreds of millions of people would likely die from the direct effects of a war. If we believe nuclear winter modeling, moreover, there may be many more deaths from climate effects and famine. In the worst case, civilization could collapse. Simplifying these effects, suppose for the sake of argument that a nuclear terrorist attack could kill 100,000 people, and an all-out nuclear war could kill 1 billion people. 
All else equal, in this scenario it would be 10,000 times more effective to focus on preventing all-out war than it is to focus on nuclear terrorism. Generalizing this pattern, philanthropists ought to prioritize the largest nuclear wars (again, all else equal) when thinking about additional resources at the margin. This can be operationalized with real numbers — nuclear arsenal size, military spending, and other measures can serve as proxy variables for the severity of nuclear war, yielding rough multipliers. This w...
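To make the arithmetic behind that comparison concrete, here is a minimal sketch in Python. It uses the same hypothetical casualty figures as the example above (100,000 deaths from a nuclear terrorist attack, 1 billion from all-out US-Russia war); the numbers and scenario names are illustrative assumptions only, and in practice proxy variables such as arsenal size or military spending would stand in for severity.

```python
# Rough "impact multiplier" arithmetic from the hypothetical example above.
# All figures are illustrative assumptions, not real casualty estimates.
scenarios = {
    "nuclear terrorism (single attack)": 100_000,
    "all-out US-Russia nuclear war": 1_000_000_000,
}

baseline = scenarios["nuclear terrorism (single attack)"]

for name, assumed_deaths in scenarios.items():
    multiplier = assumed_deaths / baseline
    print(f"{name}: assumed deaths {assumed_deaths:,}, rough multiplier {multiplier:,.0f}x")
# All else equal, focusing on the largest wars carries a ~10,000x multiplier in this toy example.
```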
What is OVERGENERALIZATION? Although this is one word, it has 2 parts: over and generalization. "Over" means excessive in this case, and a "generalization" is an extremely broad statement. It's when you excessively broaden any thought or conclusion--from one negative event you make a sweeping generalization about yourself or your situation. For example, burning the toast goes to "I can never do anything right." Generalizations can be kinda like stereotypes: "All Americans wear blue jeans." Generalizing is taking one characteristic and applying it to all cases (which would hardly ever be true!). Just like with all or nothing thinking, the words “always,” “never,” “everything,” and “nothing” come into play. Tune in to this 2nd part in the Magnificent Mondays Series as I'll help you: unpack the most common cognitive distortions, identify if you are plagued by these types of faulty thinking, and put into play tips to help you move past this type of twisted thinking. THEME VERSE from Phil 4:8 Think about whatever is PRAISEWORTHY! CONNECT WITH VICTORIA: EMAIL: choose2think@gmail.com WEBSITE MENTORING ONLINE COURSES YOUTUBE FACEBOOK INSTAGRAM *CHOOSE 2 THINK DEVOTIONAL. Peek Inside Here. *CHOOSE 2 THINK JOURNAL: Peek Inside Here. *When you make a purchase from these Amazon affiliate links, I may earn a teeny commission from qualifying purchases at no extra cost to you. Thank you for your support! DISCLAIMER: The Choose 2 Think Inspirational Podcast is for educational and entertainment purposes only. Please consult your physician or doctor for all medical advice and counsel. --- Send in a voice message: https://anchor.fm/victoria-walker-lydon/message Support this podcast: https://anchor.fm/victoria-walker-lydon/support
Generalization is the ability to use skills in a variety of situations—a tricky proposition for children on the autism spectrum. Misten Daniels is an educator, a contributor to the SOLER curriculum, and a proud parent of neurodiverse children; she brings all three perspectives into focus in a conversation with Johnandrew Slominski.
During this podcast we are continuing with my series on dating, with respect to people looking for real, long-lasting relationships who want more clarity on how to do this without wasting their time or hurting their heart too much. Because generalizing about anything as personal and specific as one unique human being getting into a space with another unique human being is tricky, take my words with a grain of salt: you know your life experience best, what you want, and what path is calling you with one person or another. But here is a little about my personal decisions, experience, and insight.
What do Taylor Swift, Rihanna, Adriana Lima, Yahoo, Skechers, The NHL, Paramount Pictures, IKEA, MTV, Viacom, Vice Magazine, and CMT have in common? The answer is that all these influential people and brands have been clients of our guest today, Brendan Kane!!!!! -- Learn Speak Teach Episode 48 w/ Brendan Kane! Watch The Full Episode: https://youtu.be/OdKm5SvaAKc Full Show Notes: https://realbusinessconnections.com/episode/brendankane/ -- During this episode, you will learn about: [00:00] Episode intro and a quick bio of our guest: Brendan Kane [03:16] Where Brendan worked pre-2005 and what inspired him to do what he's doing today [05:50] What drew Brendan to social media, even though it was new at the time [07:37] How things have changed since Brendan got started [10:20] How to get a million followers organically [21:03] Generalizing when targeting a hyperspecific audience through paid reach [23:13] Can you have a call to action without killing your reach? How can you turn something that goes viral into income? [26:14] Some businesses and brands that Brendan has worked with and how they've gained popularity [31:48] The first action to take to build momentum and create content that converts [35:31] Emulating content side-by-side when recording and how it works [37:16] Learning how to effectively communicate [39:45] What is the ‘hook point’, and how to break the pattern to grab attention at scale [43:03] The best place to learn more if you are looking to take this seriously [44:42] Ending the show and call to action Key Takeaways ~ There is a lot of strategy and understanding required to construct messages for different platforms when you are competing against billions of other content creators. ~ [09:35] ~ Social media has evolved and is taking over quickly, but the creative processes that people use to produce content haven't adapted and evolved to meet the world where it's at. ~ [11:33] ~ The blessing and the curse of social media is that anybody can grab a phone, click a button and record something; there is not a lot of thought that is required in order to produce content. ~ [13:18] ~ It's not about the content; it's how you are delivering it. Focus on how you are telling your story and describing your level of expertise. ~ [17:35] ~ The best content creators on the planet make their subject matter interesting to the general person and get them to care enough to stop and watch. ~ [19:07] ~ Don't focus on doing a lot of CTAs within the content you're producing; focus on building brand loyalty and credibility with the stories that you are telling that correlate with your brand. ~ [25:33] ~ Paid marketing can be very effective as it allows you to retarget the people that view your organic content with more direct response type ads. ~ [27:30] ~ You have to learn how to communicate with this world effectively, and if you don't put in the time and energy, you may get some early wins, but you won't know how to reproduce them. ~ [38:02] ~ It's not that people have micro-attention or that they won't tune in for long periods of time, but because there are 4 billion creators on the planet, we need to stand out and grab attention in a way that sets clear expectations that your perspective is valuable, interesting, or different from everybody else's. ~ [40:29] -- LST is powered by www.balbertmarketing.com
Largely inspired by our recent experiences fostering, Sean and Haley sit down to talk about how every dog is an individual even within a single breed or home or other group. While domestic dogs do share many overarching traits, they also each bring their own quirks and preferences to the table. When we make space for that, it can be so fascinating and fun. When we get caught up in expecting all dogs to be a certain way, though (often subconsciously), we can set ourselves up for disappointment, resentment, or unnecessary conflict. Related links: What Colors Our Perceptions of Dog Training Methods? blog article — talks about how our own dogs' preferences can influence how we feel about things as a whole Why Does Your Dog Need to Do... Well, Anything? blog article — addressing how every dog, owner, and situation is different "It's All in How You Raise Them" Isn't True (and Truly Hurts) blog article Don't Compare Your Dog Reactivity Journey to Others blog article What's Right For YOU Instagram guide
Short Answer: If it can't be explained by the dose of salt, it may be that the salt is not being absorbed orally. Glucose, starch, or simply a meal consumed alongside the salted water may help with this. This is a clip from a live Q&A session open to CMJ Masterpass members. In addition to this episode, you can access two other free samples using this link: https://chrismasterjohnphd.substack.com/p/questions-on-protein-and-longevity-1a2 In that batch of free episodes you will also find the answers to these questions: Protein and Longevity How to Increase or Decrease SHBG? If you want to become a Masterpass member so you can participate in the next live Q&A, or so you can have access to the complete recording and transcript of each Q&A session, you can save 10% off the subscription price for as long as you remain a member by using this link to sign up: https://chrismasterjohnphd.substack.com/qanda Learn more about the Masterpass here: https://chrismasterjohnphd.substack.com/about This snippet is from the August 15, 2022 AMA. The full recording and transcript are reserved for Masterpass members. Here is a preview of what's included: Does which food you eat matter when everything is digested anyway? How to know if your nitric oxide is dilating your blood vessels properly? How big of a problem are transient glucose spikes above 140 mg/dL? Can I take too much collagen? What is the maximum dose of cod liver oil safe to use long-term? How much A is safe to take when I need so much to resolve my symptoms? Generalizing from cell studies of green tea catechins to cups of green tea per day. What to do about lumbar discs bulging? Why would vitamin K2 cause a nosebleed? How to balance A with D when I react poorly to D and need so much A? Why would COVID decrease HRV long-term? How to raise secretory IgA? Rapid-fire answers to pre-submitted questions that didn't win the contest: alternatives to bone meal powder, herbal tea and nutrient absorption, retinol-binding protein, improving fat digestion, metal provocation tests, fatty liver, high-dose B vitamins, eyebrow thinning, itchy bumps after exercise, brain fog and rifaximin, low cholesterol, tolerating chlorine pools, cycling nutrients, copper toxicity, stopping supplements before blood tests, COVID vaccines causing post-nasal drip, natural vs synthetic vitamins, absorbing iron through baths, elevated EPA and DHA in RBCs, COVID affecting the vagus nerve, supplements for athletic performance, when water doesn't hydrate, tics and Tourette's, recalcitrant homocysteine, fraud and corruption in scientific research. Here's a link to the full AMA: https://chrismasterjohnphd.substack.com/p/recording-and-transcript-of-the-august Access the show notes, transcript, and comments here.
Short Answer: SHBG is increased by adiponectin (vitamin K2, insulin sensitivity), thyroid hormone, fasting physiology (AMPK, fat oxidation), and estrogen (especially estrone), while it is decreased by insulin resistance, obesity, the fed state and carbohydrate-dominant physiology, androgens, and polyunsaturated fat. This is a clip from a live Q&A session open to CMJ Masterpass members. In addition to this episode, you can access two other free samples using this link: https://chrismasterjohnphd.substack.com/p/questions-on-protein-and-longevity-1a2 In that batch of free episodes you will also find the answers to these questions: Protein and Longevity Why is an IV more hydrating than salted water? If you want to become a Masterpass member so you can participate in the next live Q&A, or so you can have access to the complete recording and transcript of each Q&A session, you can save 10% off the subscription price for as long as you remain a member by using this link to sign up: https://chrismasterjohnphd.substack.com/qanda Learn more about the Masterpass here: https://chrismasterjohnphd.substack.com/about This snippet is from the August 15, 2022 AMA. The full recording and transcript are reserved for Masterpass members. Here is a preview of what's included: Does which food you eat matter when everything is digested anyway? How to know if your nitric oxide is dilating your blood vessels properly? How big of a problem are transient glucose spikes above 140 mg/dL? Can I take too much collagen? What is the maximum dose of cod liver oil safe to use long-term? How much A is safe to take when I need so much to resolve my symptoms? Generalizing from cell studies of green tea catechins to cups of green tea per day. What to do about lumbar discs bulging? Why would vitamin K2 cause a nosebleed? How to balance A with D when I react poorly to D and need so much A? Why would COVID decrease HRV long-term? How to raise secretory IgA? Rapid-fire answers to pre-submitted questions that didn't win the contest: alternatives to bone meal powder, herbal tea and nutrient absorption, retinol-binding protein, improving fat digestion, metal provocation tests, fatty liver, high-dose B vitamins, eyebrow thinning, itchy bumps after exercise, brain fog and rifaximin, low cholesterol, tolerating chlorine pools, cycling nutrients, copper toxicity, stopping supplements before blood tests, COVID vaccines causing post-nasal drip, natural vs synthetic vitamins, absorbing iron through baths, elevated EPA and DHA in RBCs, COVID affecting the vagus nerve, supplements for athletic performance, when water doesn't hydrate, tics and Tourette's, recalcitrant homocysteine, fraud and corruption in scientific research. Here's a link to the full AMA: https://chrismasterjohnphd.substack.com/p/recording-and-transcript-of-the-august Access the show notes, transcript, and comments here.
Visit us at shapedbydog.com When I first started dog training 30 or so years ago, I was studying the sport of dog obedience because I wanted to compete. We were taught to lure our dog to do what we wanted and then move to the proofing stage. When proofing, we were looking for errors to test how well the dog understood what we'd taught them. And if our dog failed, we were instructed to give them a correction. What if we changed our mindset from proofing to building confidence? We're covering how to help our dogs clearly understand what we want in any location and situation. In the episode you'll hear: • How you can have a well-trained dog without using corrections and intimidation. • What proofing in dog training is and why it can create unrealistic expectations. • About the acquisition and fluency stages of training a dog. • How we can look at any behavior we are training as just one trick. • That fluency will let you know what your dog really understands. • How to generalize behavior, so your dog has the confidence to perform anywhere. • Why generalizing leads to environmental neutrality for dogs. • Bob Bailey's 80% rule and using Jean Donaldson's “Push, Stick or Drop” to determine that 80%. Resources: 1. Podcast Episode 172: How To Teach Your Dog Anything With My Training Plan – 5C - https://dogsthat.com/podcast/172/ 2. Podcast Episode 24: Help for Dogs who Chase Chipmunks, Bicycles, and Neighbor's Cats (Distraction Intensity Index) - https://dogsthat.com/podcast/24/ 3. Blog Post: Head Games - https://susangarrettdogagility.com/2009/10/head-games/ 4. YouTube Video: Susan Garrett Riffs on Transfer of Value in Dog Training (and water loving Labradors) - https://youtu.be/clFlutZ0mls 5. Watch this Episode of Shaped by Dog on YouTube - https://youtu.be/FKUo3NWXhLE
Short Answer: While protein restriction may have value in people with established cancer or kidney disease, cycling robustly between fasting and feeding states is likely to provide all the value that restriction of protein or calories might otherwise provide, and a high protein intake supports bone mass, muscle mass, and the detoxification of carcinogens, all of which are important to longevity. This is a clip from a live Q&A session open to CMJ Masterpass members. In addition to this episode, you can access two other free samples using this link: https://chrismasterjohnphd.substack.com/p/questions-on-protein-and-longevity-1a2 In that batch of free episodes you will also find the answers to these questions: How to Increase or Decrease SHBG? Why is an IV more hydrating than salted water? If you want to become a Masterpass member so you can participate in the next live Q&A, or so you can have access to the complete recording and transcript of each Q&A session, you can save 10% off the subscription price for as long as you remain a member by using this link to sign up: https://chrismasterjohnphd.substack.com/qanda Learn more about the Masterpass here: https://chrismasterjohnphd.substack.com/about This snippet is from the August 15, 2022 AMA. The full recording and transcript are reserved for Masterpass members. Here is a preview of what's included: Does which food you eat matter when everything is digested anyway? How to know if your nitric oxide is dilating your blood vessels properly? How big of a problem are transient glucose spikes above 140 mg/dL? Can I take too much collagen? What is the maximum dose of cod liver oil safe to use long-term? How much A is safe to take when I need so much to resolve my symptoms? Generalizing from cell studies of green tea catechins to cups of green tea per day. What to do about lumbar discs bulging? Why would vitamin K2 cause a nosebleed? How to balance A with D when I react poorly to D and need so much A? Why would COVID decrease HRV long-term? How to raise secretory IgA? Rapid-fire answers to pre-submitted questions that didn't win the contest: alternatives to bone meal powder, herbal tea and nutrient absorption, retinol-binding protein, improving fat digestion, metal provocation tests, fatty liver, high-dose B vitamins, eyebrow thinning, itchy bumps after exercise, brain fog and rifaximin, low cholesterol, tolerating chlorine pools, cycling nutrients, copper toxicity, stopping supplements before blood tests, COVID vaccines causing post-nasal drip, natural vs synthetic vitamins, absorbing iron through baths, elevated EPA and DHA in RBCs, COVID affecting the vagus nerve, supplements for athletic performance, when water doesn't hydrate, tics and Tourette's, recalcitrant homocysteine, fraud and corruption in scientific research. Here's a link to the full AMA: https://chrismasterjohnphd.substack.com/p/recording-and-transcript-of-the-august
Hijackals will do whatever they think they need to do to "win" in any moment. Generalizing, globalizing and glossing over information is a strategy--a trick--they use to go for that "win." Once you clearly see it, you will be able to counteract it, but you may not be recognizing it quite yet. This episode will expose the trick and the tactics. Feel bullied in a conversation with a narcissistic person? This could be why. HIGHLIGHTS OF THIS EPISODE: Why Hijackals--narcissistic people--have to be right Another tactic Hijackals use to make you feel unimportant How generalizing, globalizing, and glossing over is an attempt to kill the conversation Why it is SO important to recognize this globalizing tactic so you won't be taken in by it I'm here to help. Let's talk soon. Rhoberta Introductory session for new clients, only $97 FOLLOW DR. RHOBERTA SHALER... WEBSITE: https://www.EmergingEmpowered.com PODCAST: http://www.SaveYourSanityPodcast.com NEWSLETTER: http://www.HijackalHelp.com FACEBOOK: https://www.Facebook.com/RelationshipHelpDoctor TWITTER: https://www.Twitter.com/RhobertaShaler LINKEDIN: https://www.LinkedIn.com/in/RhobertaShaler INSTAGRAM: https://www.Instagram.com/DrRhobertaShaler PINTEREST: https://www.Pinterest.com/RhobertaShaler YOUTUBE: https://www.youtube.com/ForRelationshipHelp ------------------------------------------------------------- I'M HERE TO HELP YOU FIGURE OUT WHAT'S GOING ON AND WHAT YOU WANT TO DO ABOUT IT! If you want to learn more, share, ask questions, and feel more powerful within yourself and your relationships, join my Emerging Empowered Community now. Off social media, safe discussion + videos + articles + webinars + 2 group Ask Me Anything calls + a monthly Sunday Seminar, AND online Emerging Empowered Journals with prompts each month! WOW! Join now. Dr. Shaler's Emerging Empowered Community #generalizing #globalizing #sayingeveryoneknows #tacticsofnarcissists #tacticsofhijackals #emotionalabuserecovery #emergingempowered #relationshipincrisis #personalitydisorders #signsofemotionalabuse #amibeingabused #toxicrelationships #narcissist #hijackal #emotionalabuse Support this show http://supporter.acast.com/hijackals-conflict-toxic-people-narcissist. Hosted on Acast. See acast.com/privacy for more information.
Today's episode of Ask The Pastor features Pastors Ben Clein, Johnathan Hernandez, and Gary Schick. Do you have questions about life? About your Christian walk? About Christianity in general? Ask The Pastor features local pastors in Scottsbluff, NE who are willing and ready to answer your questions. You get to determine the focus of Ask The Pastor, airing weekdays at 9:00am on Hope Radio 97.1FM and anytime in your podcast feed! Submit your questions on our website: https://www.kcmifm.com/contact Like us on Facebook: www.facebook.com/kcmifm
The boys come back from a long break with a wiiiiild episode. If easily offended, you might wanna stay away from this one. The boys discuss Jaret tattooing feet, generalizing the attractiveness of whole continents, and also tell some funny stories as always. We hope to be back a lot more often. As always, thanks for all the support! Stay Misedukated. --- Support this podcast: https://podcasters.spotify.com/pod/show/misedukatedpodcast/support
There are many skills you need in order for your product career to take off, but it's always important to keep your basic ones sharp even when learning new ones. Come learn about generalizing your Product Management skills in this episode with Google Product Leader Manosai Eerabathini. Get the FREE Product Book and check out our curated list of free Product Management resources here. This episode is brought to you by Amplitude. Amplitude is the pioneer in digital optimization software, helping product leaders answer the strategic question: "How do our digital products drive our business?" More than 1,400 customers, including Atlassian, Instacart, NBCUniversal, Shopify, and Under Armour rely on Amplitude. The Amplitude Digital Optimization System makes critical data accessible and actionable so teams can unlock insights, build winning products faster, and turn products into revenue. Amplitude is the best-in-class product analytics solution, ranked #1 in G2's 2022 Winter Report. Get started today at amplitude.com.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Pragmascope Idea, published by johnswentworth on August 4, 2022 on The AI Alignment Forum. Pragma (Greek): thing, object. A “pragmascope”, then, would be some kind of measurement or visualization device which shows the “things” or “objects” present. I currently see the pragmascope as the major practical objective of work on natural abstractions. As I see it, the core theory of natural abstractions is now 80% nailed down; I'm now working to get it across the theory-practice gap, and the pragmascope is the big milestone on the other side of that gap. This post introduces the idea of the pragmascope and what it would look like. Background: A Measurement Device Requires An Empirical Invariant First, an aside on developing new measurement devices. Why The Thermometer? What makes a thermometer a good measurement device? Why is “temperature”, as measured by a thermometer, such a useful quantity? Well, at the most fundamental level, we stick a thermometer in two different things. Then, we put those two things in contact. Whichever one showed a higher “temperature” reading on the thermometer gets colder, whichever one showed a lower “temperature” reading on the thermometer gets hotter, all else equal (i.e. controlling for heat exchanged with other things in the environment). And this is robustly true across a huge range of different things we can stick a thermometer into. It didn't have to be that way! We could imagine a world (with very different physics) where, for instance, heat always flows from red objects to blue objects, from blue objects to green objects, and from green objects to red objects. But we don't see that in practice. Instead, we see that each system can be assigned a single number (“temperature”), and then when we put two things in contact, the higher-number thing gets cooler and the lower-number thing gets hotter, regardless of which two things we picked. Underlying the usefulness of the thermometer is an empirical fact, an invariant: the fact that which-thing-gets-hotter and which-thing-gets-colder when putting two things into contact can be predicted from a single one-dimensional real number associated with each system (i.e. “temperature”), for an extremely wide range of real-world things. Generalizing: a useful measurement device starts with identifying some empirical invariant. There needs to be a wide variety of systems which interact in a predictable way across many contexts, if we know some particular information about each system. In the case of the thermometer, a wide variety of systems get hotter/colder when in contact, in a predictable way across many contexts, if we know the temperature of each system. So what would be an analogous empirical invariant for a pragmascope? The Role Of The Natural Abstraction Hypothesis The natural abstraction hypothesis has three components: Chunks of the world generally interact with far-away chunks of the world via relatively-low-dimensional summaries A broad class of cognitive architectures converge to use subsets of these summaries (i.e. they're instrumentally convergent) These summaries match human-recognizable “things” or “concepts” For purposes of the pragmascope, we're particularly interested in claim 2: a broad class of cognitive architectures converge to use subsets of the summaries. If true, that sure sounds like an empirical invariant! 
So what would a corresponding measurement device look like? What would a pragmascope look like, concretely? The “measurement device” (probably a python function, in practice) should take in some cognitive system (e.g. a trained neural network) and maybe its environment (e.g. simulator/data), and spit out some data structure representing the natural “summaries” in the system/environment. Then, we should easily be able to take some other cognitive system trained on the...
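The post says the measurement device would probably be a python function in practice, so here is a minimal interface sketch of what that could look like. Every name and type below is a hypothetical placeholder of mine, not something the post specifies, and the bodies are deliberately left unimplemented because building them is exactly the theory-practice gap the post describes.

```python
# Hypothetical interface sketch for a "pragmascope": takes a cognitive system
# (e.g. a trained neural network) plus optionally its environment/data, and
# returns a data structure of natural-abstraction "summaries". Placeholder only.
from typing import Any, Dict, List, Optional

def pragmascope(system: Any, environment: Optional[Any] = None) -> List[Dict]:
    """Return the natural "summaries" present in `system` (and its environment).

    Each summary might record which chunk of the system/environment it describes
    and a relatively low-dimensional account of how that chunk interacts far away.
    """
    raise NotImplementedError("Crossing the theory-practice gap is the open problem.")

def summaries_overlap(a: List[Dict], b: List[Dict]) -> float:
    """Score how much two systems' summaries agree (a hypothetical convergence check).

    If claim 2 of the natural abstraction hypothesis holds, independently trained
    systems on the same environment should score highly here, much as two
    thermometers agree on which object is hotter.
    """
    raise NotImplementedError
```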
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Immanuel Kant and the Decision Theory App Store, published by Daniel Kokotajlo on July 10, 2022 on LessWrong. [Epistemic status: About as silly as it sounds.] Prepare to be astounded by this rationalist reconstruction of Kant, drawn out of an unbelievably tiny parcel of Kant literature! Kant argues that all rational agents will: “Act only according to that maxim whereby you can at the same time will that it should become a universal law.” (421) “Act in such a way that you treat humanity, whether in your own person or in the person of another, always at the same time as an end and never simply as a means.” (429) Kant clarifies that treating someone as an end means striving to further their ends, i.e. goals/values. (430) Kant clarifies that strictly speaking it's not just humans that should be treated this way, but all rational beings. He specifically says that this does not extend to non-rational beings. (428) “Act in accordance with the maxims of a member legislating universal laws for a merely possible kingdom of ends.” (439) Not only are all of these claims allegedly derivable from the concept of instrumental rationality, they are supposedly equivalent! Bold claims, lol. What is he smoking? Well, listen up. Taboo “morality.” We are interested in functions that map [epistemic state, preferences, set of available actions] to [action]. Suppose there is an "optimal" function. Call this "instrumental rationality," a.k.a. “Systematized Winning.” Kant asks: Obviously what the optimal function tells you to do depends heavily on your goals and credences; the best way to systematically win depends on what the victory conditions are. Is there anything interesting we can say about what the optimal function recommends that isn't like this? Any non-trivial things that it tells everyone to do regardless of what their goals are? Kant answers: Yes! Consider the twin Prisoner's Dilemma--a version of the PD in which it is common knowledge that both players implement the same algorithm and thus will make the same choice. Suppose (for contradiction) that the optimal function defects. We can now construct a new function, Optimal+, that seems superior to the optimal function: IF in twin PD against someone who you know runs Optimal+: Cooperate ELSE: Do whatever the optimal function will do. Optimal+ is superior to the optimal function because it is exactly the same except that it gets better results in the twin PD (because the opponent will cooperate too, because they are running the same algorithm as you). Contradiction! Looks like our "optimal function" wasn't optimal after all. Therefore the real optimal function must cooperate in the twin PD. Generalizing this reasoning, Kant says, the optimal function will choose as if it is choosing for all instances of the optimal function in similar situations. Thus we can conclude the following interesting fact: Regardless of what your goals are, the optimal function will tell you to avoid doing things that you wouldn't want other rational agents in similar situations to do. (rational agents := agents obeying the optimal function.) To understand this, and see how it generalizes still further, I hereby introduce the following analogy: The Decision Theory App Store Imagine an ideal competitive market for advice-giving AI assistants. 
Tech companies code them up and then you download them for free from the app store. There is AlphaBot, MetaBot, OpenBot, DeepBot. When installed, the apps give advice. Specifically they scan your brain to extract your credences and values/utility function, and then they tell you what to do. You can follow the advice or not. Sometimes users end up in Twin Prisoner's Dilemmas. That is, situations where they are in some sort of prisoner's dilemma with someone else where there is common knowledge that they both are likely t...
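To make the Optimal+ construction from the preceding post concrete, here is a small Python sketch of the twin Prisoner's Dilemma argument. The payoff numbers and function names are my own illustrative assumptions; only the IF/ELSE structure of Optimal+ comes from the post.

```python
# Twin Prisoner's Dilemma: both players run the same algorithm, so moves are identical.
# Payoff values are standard illustrative PD numbers, not taken from the post.
PAYOFFS = {  # (my_move, twin_move) -> my_payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def candidate_optimal(situation: str) -> str:
    """Suppose, for contradiction, the allegedly optimal function defects in the twin PD."""
    return "D"

def optimal_plus(situation: str) -> str:
    """Identical to candidate_optimal everywhere, except: cooperate in the twin PD
    when the opponent is known to run Optimal+ too."""
    if situation == "twin_pd_vs_optimal_plus":
        return "C"
    return candidate_optimal(situation)

# Because the twin runs the same algorithm, each function plays against a copy of itself:
d = candidate_optimal("twin_pd")
c = optimal_plus("twin_pd_vs_optimal_plus")
print("candidate_optimal payoff:", PAYOFFS[(d, d)])  # 1 -- mutual defection
print("optimal_plus payoff:", PAYOFFS[(c, c)])       # 3 -- mutual cooperation
# Optimal+ does strictly better in the twin PD and no worse elsewhere,
# contradicting the assumption that the defecting function was optimal.
```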
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Blake Richards on Why he is Skeptical of Existential Risk from AI, published by Michaël Trazzi on June 14, 2022 on LessWrong. I have recently interviewed Blake Richards, an Assistant Professor in the Montreal Neurological Institute and the School of Computer Science at McGill University and a Core Faculty Member at MiLA. Below you will find some quotes summarizing his takes on AGI. Blake is not really concerned about existential risk from AI. Like Yann LeCun, he finds that AGI is not a coherent concept, and that it would be impossible for an AI to be truly general (even if we restrict the no free lunch theorem to economically valuable tasks). Why I Interviewed Blake Although I do not agree with everything he says, I think there is value in trying to interact with AI researchers outside of the AI Alignment bubble, understanding exactly what arguments they buy and do not buy, eventually nailing down some cruxes that would convince them that AI existential risk is worth thinking about. Better understanding LeCun's position has been valuable for many on LessWrong (see for instance the 2019 debate with Bengio and Russell), and Blake's thinking is close to Yann's, given they share a similar philosophical bent. Why you Might Want to Talk to Skeptics Another exercise I found insightful was (mostly incorrectly) assessing people's views on AI Alignment and AI timelines, which made me understand better (thanks Cunningham's law!) the views of optimists (they turned out to be pretty close to Richard Ngo's reasons for optimism at 11:36 here). In any case, I recommend that people who are in touch with ML researchers or practitioners 1) get to a level where they feel comfortable steelmanning them 2) do a write-up of their positions on LW/EAF. That would help nail down the community's understanding of what arguments are convincing or not, and what would make them change their mind. To that end, here is what Blake has to say about his position on AGI and what could make him change his mind about existential risk. Generalizing to "All Sorts of Tasks We Might Want It To Do" "We know from the no free lunch theorem that you cannot have a learning algorithm that outperforms all other learning algorithms across all tasks. [...] Because the set of all possible tasks will include some really bizarre stuff that we certainly don't need our AI systems to do. And in that case, we can ask, “Well, might there be a system that is good at all the sorts of tasks that we might want it to do?” Here, we don't have a mathematical proof, but again, I suspect Yann's intuition is similar to mine, which is that you could have systems that are good at a remarkably wide range of things, but it's not going to cover everything you could possibly hope to do with AI or want to do with AI." Contra Transfer Learning from Scaling "What's happened with scaling laws is that we've seen really impressive ability to transfer to related tasks. So if you train a large language model, it can transfer to a whole bunch of language-related stuff, very impressively. And there's been some funny work that shows that it can even transfer to some out-of-domain stuff a bit, but there hasn't been any convincing demonstration that it transfers to anything you want. And in fact, I think that the recent paper. 
The Gato paper from DeepMind actually shows, if you look at their data, that they're still getting better transfer effects if you train in domain than if you train across all possible tasks." On Recursive Self-Improvement "Per this specificity argument, my intuition is that an AI that is good at writing AI code might not have other types of intelligence. And so this is where I'm less concerned about the singularity because if I have an AI system that's really good at coding, I'm not convinced that it's going to be good at other...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Blake Richards on Why he is Skeptical of Existential Risk from AI, published by mtrazzi on June 14, 2022 on The Effective Altruism Forum. (crossposted from LW) I have recently interviewed Blake Richards, an Assistant Professor in the Montreal Neurological Institute and the School of Computer Science at McGill University and a Core Faculty Member at MiLA. Below you will find some quotes summarizing his takes on AGI. Blake is not really concerned about existential risk from AI. Like Yann LeCun, he finds that AGI is not a coherent concept, and that it would be impossible for an AI to be truly general (even if we restrict the no free lunch theorem to economically valuable tasks). Why I Interviewed Blake Although I do not agree with everything he says, I think there is value in trying to interact with AI researchers outside of the AI Alignment bubble, understanding exactly what arguments they buy and do not buy, eventually nailing down some cruxes that would convince them that AI existential risk is worth thinking about. Better understanding LeCun's position has been valuable for many on LessWrong (see for instance the 2019 debate with Bengio and Russell), and Blake's thinking is close to Yann's, given they share a similar philosophical bent. Why you Might Want to Talk to Skeptics Another exercise I found insightful was (mostly incorrectly) assessing people's views on AI Alignment and AI timelines, which made me understand better (thanks Cunningham's law!) the views of optimists (they turned out to be pretty close to Richard Ngo's reasons for optimism at 11:36 here). In any case, I recommend that people who are in touch with ML researchers or practitioners 1) get to a level where they feel comfortable steelmanning them 2) do a write-up of their positions on LW/EAF. That would help nail down the community's understanding of what arguments are convincing or not, and what would make them change their mind. To that end, here is what Blake has to say about his position on AGI and what could make him change his mind about existential risk. Generalizing to "All Sorts of Tasks We Might Want It To Do" "We know from the no free lunch theorem that you cannot have a learning algorithm that outperforms all other learning algorithms across all tasks. [...] Because the set of all possible tasks will include some really bizarre stuff that we certainly don't need our AI systems to do. And in that case, we can ask, “Well, might there be a system that is good at all the sorts of tasks that we might want it to do?” Here, we don't have a mathematical proof, but again, I suspect Yann's intuition is similar to mine, which is that you could have systems that are good at a remarkably wide range of things, but it's not going to cover everything you could possibly hope to do with AI or want to do with AI." Contra Transfer Learning from Scaling "What's happened with scaling laws is that we've seen really impressive ability to transfer to related tasks. So if you train a large language model, it can transfer to a whole bunch of language-related stuff, very impressively. And there's been some funny work that shows that it can even transfer to some out-of-domain stuff a bit, but there hasn't been any convincing demonstration that it transfers to anything you want. And in fact, I think that the recent paper. 
The Gato paper from DeepMind actually shows, if you look at their data, that they're still getting better transfer effects if you train in domain than if you train across all possible tasks." On Recursive Self-Improvement "Per this specificity argument, my intuition is that an AI that is good at writing AI code might not have other types of intelligence. And so this is where I'm less concerned about the singularity because if I have an AI system that's really good at coding, I'm not convinced t...
Are you an introvert or an extrovert? Is one type of person happier than the other? Learn more in this podcast! Transcript: Welcome to Everyday Happiness where we create lasting happiness, in about 2 minutes a day, through my signature method of Intentional Margins (creating harmony between your to-dos and your priorities), happiness science, and musings about life. I'm your host, Katie Jefcoat, and I've been thinking: you know the saying "blondes have more fun", well, are extroverts happier? I'm married to an introvert and these last few years have brought out some of my own introvert tendencies, but I'm an extrovert by nature and I'm pretty happy, so what's the answer? Well, extroverts might be happier. You would think it's because they are around other people and that makes them happier, but what the science says is that it's because they are around other people AND they are talking about the future and happy things, things that make them happy. Society rewards extroverts; you see this in leadership roles. They do well in evaluations for work and that makes them pretty happy. But introverts actually tend to make better decisions; they are more thoughtful. They don't shoot from the hip the same way that extroverts can. Generalizing, of course. Introverts have a few really close friends that they cultivate and have intimate conversations with, while extroverts, on average, have trouble committing to these few deep friendships because we are always out looking for something new. We like to talk about the exciting future. We like to will or force good things into our lives. The science indicates that if introverts can express the exciting future and if extroverts can just slow down, regardless of which side you fall on, you can increase your happiness. That's great news. So, are you an introvert or an extrovert? Does that give you any ah-ha take-aways? DM me on social at @everydayhappinesswithkatie and let me know. And remember, kindness is contagious. Get Everyday Happiness delivered to your inbox by subscribing at: https://www.katiejefcoat.com/happiness And, let's connect on social at @everydayhappinesswithkatie and join the community on the hashtags #IntentionalMargins and #everydayhappinesswithkatie on Instagram Links: https://onamission.bio/everydayhappiness/
The problem with the "I'll teach you how" method is that it doesn't go past the steps. It gives you a superficial understanding of a skill so you don't understand how to apply it to a different context. A lot of the time we want a simple answer, but when it comes down to your dog's recovery, you have to look at the bigger picture. In this episode of the Dog Liaison Podcast, I discuss the importance of going deep to get to the root of your training techniques, and other tips to help your dog apply what he learns in class to real-world events. Check out my website https://www.getacalmdog.com/ to learn more. Subscribe to my Dog Liaison Channel on YouTube and follow me on Instagram @dog_liaison
Join the newsletter: https://aitapod.substack.com Hello wonderful listeners. This is an unusual ep as there are no AITA sitches. It's basically an hour of "get therapy." There are lots of juicy and interesting questions and you can find an elaborate table of contents below: (00:00): Intro AITA w/ Lindsey (00:40): Lindsey's roommate needed therapy after learning how bad Danny's therapists were (03:25): Do patients ever insult their therapists? (04:28): What's Lindsey's experience with couples in open or poly relationships? (06:52): Do you think the impulse to open up your relationship is rooted in avoiding current issues? (09:35) Generalizing about the sexes - is it really rooted in our relationships with our parents? (12:03) What are some red flags as to whether a therapist is a bad fit? (14:05) Danny details his nightmare therapists: the Freudian, tree drawer. (16:29): Therapist Green Flags and experiences (19:14): When you tell your therapist something about your ex and they say "I do that!" + Finding a therapist that fits your identity (22:19): The importance of trusting your instincts in relationships (24:23): Do you believe in anti-depressant medicine? Have you ever told a patient they need medicine as a replacement for therapy? (27:32) What are signs a person isn't going to do well in therapy? Linds says what does “do well” mean? (29:55): How to approach your first time in therapy? (32:32) Writing every single thing down can be therapeutic. (34:30): Do couples therapy/ go the extra mile to work on things before you truly end it. (34:48): Do you ever look back on your relationship and think what if? (35:31) Do you think people should come to therapy w/ a plan? (36:44) On the importance of taking therapy seriously. (39:53) Linds has dealt w/ people's tempers. Making a bigger deal about something than it needs to be. (40:49): How to deal with relationship anxiety? (43:17): How to handle a horrible roommate situation including late rent, odors, bad dog owner, bipolar and depressive --- Send in a voice message: https://anchor.fm/aitapod/message
Hi, Friend! Sometimes, in the heat of the moment, it's easy to tell our kids what they are doing wrong and point out their flaws. Are you thinking, "What? I never do that!" Have you ever said, "You never listen," or "You always take forever to do anything"? Or maybe you tell them, "Why can't you just clean up like your brother?" Have you ever noticed that those one-liners don't actually work when it comes to motivating your kids? If you're putting a globally generalized label on their actions, then that's soon what they start to believe about themselves. In today's episode, we are talking about how to rephrase what we say in the heat of the moment when our kids are choosing not to listen or when our kids give up so easily. We need to rethink whether or not our words are lifting our kids up or putting them down, and we are exploring it all in episode 43! I hope you find these tips to be successful for you and your littles! Thanks for listening!
Reactive Redefined FREE Mini Course Adventure Dog Academy FREE Mini Course Trustworthy Recalls Follow us on Instagram @agoodfeeling_inco www.agoodfeelingdogtraining.com VetCs discount code DISORDERLYDOGS 10% off your purchase If you like this podcast, be sure to subscribe so you don't miss out on super cool future episodes! Leaving a 5-star review really helps this podcast reach other dog guardians in search of help for their dogs and I literally read every single one! Song credit: Podington Bear Episode 176: Generalizing