Is it possible to steer other people's emotions? Leon and Atze explore why empathy could be a gateway into other people's emotional worlds, and which steps help us shape feelings beyond our own psyche. Feel well looked after, Leon & Atze. Instagram: https://www.instagram.com/leonwindscheid/ https://www.instagram.com/atzeschroeder_offiziell/ You can find more about our advertising partners here: https://linktr.ee/betreutesfuehlen Tickets: Atze: https://www.atzeschroeder.de/#termine Leon: https://leonwindscheid.de/tour/ Advance sales (VVK) Münster 2025: https://betreutes-fuehlen.ticket.io/ Today's topic starts at: 05:02 min. Sources: Paper on the extended process model of interpersonal emotion regulation: Nozaki, Y., & Mikolajczak, M. (2020). Extrinsic emotion regulation. Emotion, 20(1), 10. https://doi.org/10.1037/emo0000636 A review on "empathy and interpersonal emotion regulation" can be found here: Zaki, J. (2020). Integrating empathy and interpersonal emotion regulation. Annual Review of Psychology, 71(1), 517–540. https://doi.org/10.1146/annurev-psych-010419-050830 Studies from Heidelberg and Yale on empathy and interpersonal emotion regulation: Geiger, E. J., Pruessner, L., Barnow, S., & Joormann, J. (2024). Empathy is associated with interpersonal emotion regulation goals in everyday life. Emotion, 24(4), 1092–1108. https://doi.org/10.1037/emo0001332 Geiger, E. J., Pruessner, L., Barnow, S., & Joormann, J. (2025). What empathizers do: Empathy and the selection of everyday interpersonal emotion regulation strategies. Journal of Affective Disorders, 370, 76–89. https://doi.org/10.1016/j.jad.2024.10.056 TEDx Talk: Daryl Davis – "Why I, as a black man, attend KKK rallies" https://www.youtube.com/watch?v=ORp3q1Oaezw Editing: Julia Ditzer Production: Murmel Productions
Welcome to Episode #148 of Praestabilis: Excellence in Marketing. Today I'm discussing "Motivation: Intrinsic or Extrinsic?" I believe that our businesses ... The post Praestabilis – Excellence in Marketing – 148 first appeared on Connie Ragen Green Podcast.
This time, Ty and HB are heading downtown to check out the depths of hell. Episode theme: Go to Hell. Games played in the show: Four Last Things, Pinstripe, REPOSE, ENA: Dream BBQ, Death's Door. Ty, HB, Moonshot Network. Edited by Wheels
Sign up for our weekly newsletter here! Listen to Part Two of the Double Expresso here! In this episode of the Fueling Creativity in Education Podcast, hosts Dr. Matthew Worwood and Dr. Cyndi Burnett engage in an enlightening conversation with Dr. Teresa Amabile, a world-renowned expert in creativity research. Teresa shares fascinating insights from her impressive 50-year career, discussing her journey and the many chapters of her groundbreaking work. The conversation begins with her early interests in childhood creativity, sparked during her time in kindergarten, and how these experiences led her to study motivation and its effects on creativity. Teresa emphasizes the importance of intrinsic motivation and reflects on how extrinsic factors can sometimes bolster creativity, sharing practical tips for teachers and administrators. The episode also highlights Teresa's reflections on creativity within educational environments, stressing that creativity is not solely an individual trait but is significantly influenced by context. The discussion touches on her book "Creativity in Context" and explores how school environments can either nurture or stifle creative potential. Teresa advocates for a flexible, personalized approach to education, where students are encouraged to explore and play without the constraints of rigid, standardized assessments. Throughout the episode, Matthew and Cyndi explore these themes with Teresa, drawing valuable connections between research and practical application in educational settings. Creativity in Context About Dr. Teresa Amabile: Dr. Teresa Amabile is a world-renowned expert in creativity research, with 50 years of groundbreaking work in the field. She is the Edsel Bryant Ford Professor, Emerita, at Harvard Business School and originally trained as a chemist before earning her Ph.D. in psychology from Stanford University.
Her research has explored the intersection of creativity, motivation, and the work environment, shaping how we understand and foster innovation. Dr. Amabile is the author of several influential books, including Growing Up Creative, Creativity in Context, The Progress Principle, and most recently, Retiring: Creating a Life that Works for You, as well as over 100 research articles and scholarly chapters. Her work continues to inspire educators, leaders, and organizations to cultivate environments that nurture creativity and innovation. Eager to bring more creativity into your school district? Check out our sponsor Curiosity2Create.org and join their Creativity Network for Educators at Curiosity2Connect! Check out our Podcast Website to dive deeper into Creativity in Education! For more information on Creativity in Education, check out: Matt's Website: Worwood Classroom Cyndi's Website: Creativity and Education
Peer pressure: in simple terms, peer pressure is the influence that individuals from the same age group or social circle exert on one another. https://links.artisanenglish.jp/PeerPressure Thanks for visiting ArtisanEnglish.jp's The Posts – The Podcast today. These podcasts and posts are created to help our students, and anyone who wants to access them, improve their English vocabulary. Take the first step to perfecting your English ability: take a FREE TRIAL LESSON with me, David, at https://www.artisanenglish.jp/contact/ https://links.artisanenglish.jp/TrialLesson I provide 100% error correction, fantastic discussion topics and detailed after-lesson written feedback. Here's another term from today's episode that may have been new to you. Extrinsic motivation: a powerful concept influencing how we act and achieve our goals. It refers to the drive to do something because of external rewards or pressures. https://links.artisanenglish.jp/ExtrinsicMotivation Website: https://www.artisanenglish.jp Facebook: https://www.facebook.com/artisanenglish.jp Instagram: https://www.instagram.com/david.artisanenglish.jp/ X: https://x.com/ArtisanEnglish YouTube: https://www.youtube.com/@Artisanenglish Spotify Podcast: https://podcasters.spotify.com/pod/show/artisanenglishjp
This lecture text explores contract interpretation, discussing how courts determine the meaning of agreements using the plain meaning rule and extrinsic evidence, such as course of performance, course of dealing, and usage of trade, while also considering the parol evidence rule. It then differentiates performance obligations under common law and the U.C.C., contrasting substantial performance with the perfect tender rule, and introducing the concept of conditions. The material further explains breach, including material versus minor breaches and anticipatory repudiation, before outlining the rights of third parties through assignment, delegation, and third-party beneficiary contracts, finally addressing ways performance may be excused due to impossibility, impracticability, or frustration of purpose. This conversation delves into the complexities of contract law, focusing on the stages beyond formation, including interpretation, performance, conditions, breach, and third-party rights. The discussion emphasizes the importance of understanding the intent behind contracts, the standards for performance under common law and the UCC, and the implications of breaches. It also covers the roles of conditions, anticipatory repudiation, and the rights of third parties in contractual agreements, concluding with the circumstances under which performance may be excused.
Understanding contract law goes beyond just formation. Contract interpretation focuses on the parties' intent. Extrinsic evidence plays a crucial role in ambiguous contracts. Substantial performance is key in common law contracts. The UCC applies a stricter perfect tender rule for goods. Conditions can be express or implied and affect performance duties. Material breaches excuse the non-breaching party from performance. Anticipatory repudiation allows immediate action against a breaching party. Third parties can gain rights through assignment, delegation, or as beneficiaries. Excuses for non-performance include impossibility and frustration of purpose.
According to the plain meaning rule, courts interpret unambiguous contract language according to its ordinary meaning, without considering outside evidence. If contract language is ambiguous, courts may consider extrinsic evidence such as prior negotiations, drafts, industry standards, or other contemporaneous writings to determine the parties' intent. Course of performance refers to the parties' behavior under the current contract, while course of dealing refers to their conduct in previous contracts. Both provide insight into the parties' understanding of terms. The parol evidence rule's purpose is generally to prevent parties from using prior or contemporaneous oral or written statements to contradict or change the terms of a complete and final written contract. Common law substantial performance allows enforcement if the essential purpose is met with minor deviations, while the U.C.C.'s perfect tender rule requires goods to conform exactly to contract terms for the buyer to be obligated to accept them. Under the perfect tender rule, a seller might satisfy their obligation despite nonconforming goods by exercising their right to "cure" the defective tender within the contract performance period. A condition precedent is an event that must occur before a party is obligated to perform. An example from the source is a loan disbursement being conditioned on providing proof of income. A material breach is a serious violation going to the essence of the contract that excuses the non-breaching party's performance, while a minor breach is less significant and only entitles the injured party to damages. Upon anticipatory repudiation, the non-breaching party can treat it as a breach and sue immediately, suspend performance and wait, or urge performance and await retraction. An assignment is a transfer of rights under a contract, while a delegation is a transfer of duties. In a delegation, the original party typically remains liable.
Today, we talk about happiness. Is happiness a choice? Is it a skill? Is it a set of practices? Is it all those things? What's the difference between happiness and fulfilment or well-being? I am thrilled to welcome Dr. Mark Fabian to the show. Mark is a professor of public policy at the University of Warwick in the United Kingdom. He was previously a Fulbright scholar. He studies well-being from an interdisciplinary lens. Mark is also the author of a fabulous new book called Beyond Happy - How to Rethink Happiness and Find Fulfilment. So, we talk all about that in this episode, including what happiness is, the foundations of happiness, success and its hidden costs, relationships, happiness and the decisions we make, and something very important today - happiness and confronting nihilism. Show notes: Mark Fabian Beyond Happy - How to Rethink Happiness and Find Fulfilment. Theory of Subjective Well-being Hedonic treadmill Introjection Intrinsic motivation Extrinsic motivation Moneyball Sonja Lyubomirsky Epicureanism Laurie Santos Jonathan Haidt Cyberpunk _ _ _ _ _ _ _ _ _ _ Learn more about The Decision-Making Studio: https://thedecisionmaking.studio/ All our podcast episodes are here: https://thedecisionmaking.studio/podcast Our latest newsletter: https://us19.campaign-archive.com/?u=f19fc74942b40b513cf66af32&id=cbd8d34efe Get in touch: https://thedecisionmaking.studio/contact-us
The psychology of mental strength and how to achieve your goals. The path to extraordinary achievement isn't paved with enjoyment. It's built on something far more powerful: alignment with our deepest values. Sustainable motivation comes not from the pleasure of the journey but from how we connect our actions to what matters most to us. This isn't about willpower or discipline—it's about meaning-making. When we frame our challenges within the context of our personal values, we transform mundane or difficult tasks into meaningful steps toward who we wish to become. The student studying for exams, the entrepreneur facing rejection, the artist pushing through creative blocks—all can tap into this deeper well of motivation by connecting present struggles to future purpose. Topics: The real source of mental strength 4 exercises to build the right type of motivation How to live closer to your values What if your next challenge became not a test of endurance but an expression of who you are becoming? Related Episodes 3 habits of mental strength Psychology of Fear How to build emotional intelligence Upgrade to Premium:
If you're a millennial, chances are you've got boomer parents. Parenting with reward and punishment. Sticks and carrots. Extrinsic motivation. And this might lead to a career choice based on reward and punishment. A well-paying "prestigious" job. Implied negative consequences if you didn't go to college. Safe professions. And the result? A draining job where work is exhausting. Days drag on forever. And by Friday, you're counting the minutes till the weekend. What if you could achieve the opposite? An energising job, where you look forward to going to work, and don't care if you're working on Friday night, let alone the weekend. In this happiness episode, you're going to find out how. - If you're enjoying this podcast, don't forget to follow us on your favourite app. - Join the Escape the nine to five MOVEMENT: https://www.facebook.com/groups/etntf - To contact Steve directly, visit: https://www.nextstepcareers.nz/ Or email: steve@nextstepcareers.nz Or check out his socials: https://www.instagram.com/steveoehley/ https://www.linkedin.com/in/steve-oehley/ - Topics: Escape the 9 to 5, Escape the 9-5, Career Coach, Career Advice, Career Guidance, Career Transition, Career Change, Next Step Careers Limited, Auckland, Wellington, Christchurch, New Zealand
Stain, Stain, Go Away: 8 Causes of Extrinsic Tooth Staining By Kaitlyn Machado, RDH, BS, MEd, FADHA Original article published on Today's RDH: https://www.todaysrdh.com/stain-stain-go-away-8-causes-of-extrinsic-tooth-staining/ Need CE? Start earning CE credits today at https://rdh.tv/ce Get daily dental hygiene articles at https://www.todaysrdh.com Follow Today's RDH on Facebook: https://www.facebook.com/TodaysRDH/ Follow Kara RDH on Facebook: https://www.facebook.com/DentalHygieneKaraRDH/ Follow Kara RDH on Instagram: https://www.instagram.com/kara_rdh
In this episode of The Health Coach Show, we discuss how rewards can be useful to help clients change their behaviours. The big question is: 'Are rewards helpful, and do they help with long-term behaviour change?' How rewards can support behaviour and habit change: Rewards provide positive reinforcement. Rewards can give us that little motivation boost on days when we don't feel like it - especially useful during the early stages of change when motivation is low. Rewards bring benefits into the present, providing immediate feedback that the behaviour is worth doing - particularly useful when changes may not be noticed for a few weeks. Examples of effective rewards in health coaching: Celebrating small milestones/consistency - reward yourself with a new book, water bottle or workout mat. Replacing alcohol with zero-alcohol drinks. Buying the expensive chocolate (and therefore eating less). Non-food rewards - a bath, listening to a podcast, going to a movie, reading a book/magazine for pleasure, permission to do nothing. When rewards can be unhelpful: Extrinsic vs. intrinsic motivation: when we rely on external rewards - i.e. the only reason we engage in a behaviour is to get the reward - we're extrinsically motivated; once the reward is removed, the motivation to do the behaviour stops. Risks of relying on rewards: Short-term compliance without long-term behaviour change. Creating dependency on rewards rather than fostering internal motivation. Reward chasing: constantly using rewards can reduce their effectiveness. We can lose interest in routine rewards and look for additional rewards to stay motivated. Using unhealthy rewards - e.g. going out for a big breakfast after a gym session, or rewarding a long day with a glass of wine - can undo the good work of the behaviour itself. In coaching, we can use both intrinsic and extrinsic rewards - the extrinsic can be really valuable in the early stages of change - while connecting to intrinsic ones over time.
This might look like: Encouraging self-chosen rewards aligned with values. Social recognition (sharing progress with friends/family). Celebrating achievement with meaningful experiences (e.g., self care/connecting - a client rewarding themselves with a massage after sticking to a new routine for a week) Self-reward through positive self-talk and reflection on progress Connecting to how it makes you feel immediately or later that day - eg more energy, better sleep. Mini rewards to help stick to a behaviour Alternative treats Tracking progress can serve as a motivating reward: Visual progress (habit trackers, fitness apps) provides immediate feedback. Self-monitoring increases self-awareness and accountability. To learn more about health coaching, access free resources or to book one of our upcoming courses, visit our website: www.accreditedhealthcoaching.com.au
Solar Probe, Beach Waves, UFOs, Extrinsic memory, Sand Worms, and crossed eyes.
In this episode, my guest is Dr. Laurie Santos, Ph.D., a professor of psychology and cognitive science at Yale University and a leading researcher on happiness and fulfillment. We discuss what truly increases happiness, examining factors such as money, social comparison, free time, alone time versus time spent with others, pets, and the surprising positive impact of negative visualizations. We also explore common myths and truths about introverts and extroverts, the science of motivation, and how to adjust your hedonic set point to experience significantly more joy in daily life. Throughout the episode, Dr. Santos shares science-supported strategies for enhancing emotional well-being and cultivating a deeper sense of meaning and happiness. Read the full show notes at hubermanlab.com. Thank you to our sponsors AG1: https://drinkag1.com/huberman Eight Sleep: https://eightsleep.com/huberman ExpressVPN: https://expressvpn.com/huberman Function: https://functionhealth.com/huberman LMNT: https://drinklmnt.com/huberman David: https://drinklmnt.com/huberman Timestamps 00:00:00 Dr. Laurie Santos 00:02:52 Sponsors: Eight Sleep & ExpressVPN 00:06:00 Happiness, Emotion & Cognition; Emotional Contagion 00:11:18 Extrinsic vs. 
Intrinsic Rewards 00:14:43 Money, Comparison & Happiness 00:21:39 Tool: Increase Social Connection; Real-Time Communication 00:32:16 Sponsor: AG1 00:33:47 Technology, Information, Social Interaction 00:39:22 Loneliness, Youth, Technology 00:42:16 Cravings, Sustainable Actions, Dopamine 00:47:01 Social Connection & Predictions; Introverts & Extroverts 00:57:22 Sponsors: Function & LMNT 01:00:41 Social Connection & Frequency; Tools: Fun; “Presence” & Technology 01:07:53 Technology & Negative Effects; Tool: Senses & Grounding; Podcasts 01:15:11 Negativity Bias, Gratitude, Tool: “Delight” Practice & Shifting Emotions 01:25:01 Sponsor: David 01:26:17 Importance of Negative Emotions; Judgements about Happiness 01:34:16 Happiness & Cultural Differences, Tool: Focus on Small Pleasures 01:41:00 Dogs, Monkeys & Brain, “Monkey Mind” 01:47:40 Monkeys, Perspective, Planning 01:53:58 Dogs, Cats, Dingos; Pets & Happiness 02:00:49 Time Famish; Tools: Time Affluence Breaks; Time Confetti & Free Time 02:07:46 Hedonic Adaptation; Tool: Spacing Happy Experiences 02:15:27 Contrast, Comparison & Happiness; Tool: Bronze Lining, Negative Visualization 02:24:08 Visualization, Bannister Effect; Tool: Imagine Obstacles 02:29:12 Culture; Arrival Fallacy, Tool: Journey Mindset 02:37:11 Mortality, Memento Mori, Tool: Fleeting Experiences & Contrast 02:44:33 Awe 02:48:15 Timescales; Community Engagement & Signature Strengths; Tool: Job Crafting 02:56:55 Strength Date, Leisure Time; Tool: Doing for Others, Feel Good Do Good 03:01:42 Tool: Asking for Help 03:05:32 Zero-Cost Support, YouTube, Spotify & Apple Follow & Reviews, Sponsors, YouTube Feedback, Social Media, Protocols Book, Neural Network Newsletter Disclaimer & Disclosures
Michelle Merring, actress and Castability user, shares with us how she went from resenting self tape requests, getting so in her head that she struggled to even turn them in, to finding true confidence and joy in front of the camera. Through consistent self tape practice, she details exactly how she shifted her mindset and fell in love with self tapes, attracting more auditions and callbacks along the way. Michelle is the winner of last year's Self Tape May challenge, after completing 16+ tapes on the app during Audrey Helps Actors' Self Tape May challenge. Check out her open scenes in the winners gallery in the Castability App to see her creativity in action. - Creativity in Self Taping and Finding Joy in the Process - Building Confidence Through Practice - Shift in Mindset: Auditioning as an Art - Role of Community in Self Taping - Overcoming Ego and Embracing Feedback - The Importance of Vulnerability and Learning - Reframing Rejection and Mistakes - Setting Intentions and Goals - Intrinsic vs Extrinsic Motivations Links: Winners Gallery in-app, Audrey Helps Actors Self Tape May, Four Agreements book, Extrinsic vs Intrinsic motivations, Self Compassion by Kristin Neff, Michelle's website, Michelle's IMDB
In this episode, Dr. Grajdek delves into the psychology of workplace motivation, differentiating between intrinsic and extrinsic motivators and exploring strategies for fostering motivation. Aligning tasks with employee strengths, creating recognition programs, and addressing sources of demotivation are discussed. Tune in to learn more. Check out Stress-Free With Dr G on YouTube: https://youtube.com/channel/UCxHq0osRest0BqQQRXfdjiQ The Stress Solution: Your Blueprint For Stress Management Mastery: https://a.co/d/07xAdo7l
Insurance policies aren't always beacons of clarity. Industry jargon and technical terms can make them murky and ambiguous. This week, Robert Sallander discusses how to use extrinsic evidence, like industry custom and practice, to interpret insurance policies. Have a topic you'd like Bob to cover? Submit it to questions@gpsllp.com, or connect with Bob directly on LinkedIn. And if you'd like to know more about GPSL, check out our website. You can also find us on LinkedIn, X, and Facebook.
Slam the Gavel welcomes Katherine Moore, JD, MS and CFE, and Carol Moore to the podcast. Katherine is part of a group of women in North Carolina, even in different districts, all engaged in protracted family court cases. Their concern is the patterns they are seeing in the family court system. Katherine discussed patterns in her own case that correlate with Joan Meier's studies of how family court proceeds. Katherine became more interested in the study, how it pertains to mothers and fathers, and what other information is out there on how family court is proceeding. Katherine looked into Joan Meier's study on women, where 28% of "credited" mothers have some success in court and are not summarily dismissed. However, in the 76% that are dismissed, the courts are not recognizing some of the patterns of extrinsic fraud intended specifically to discredit accounts of abuse. This happened in Katherine's case: the evidence is not being entered; it is being suppressed and disregarded. The mothers and fathers are smart; they are bringing in evidence and data, and it is being discredited. That is where the 76% comes into play, and it is called intrinsic fraud. In Carol Moore's case, she was in district court for 12 years, made 15 appearances, and believed the district court was going to resolve the issues. This didn't happen, and Carol started to see that when she came in with evidence, her evidence was not ruled on. The judge filed a gatekeeper order against her due to a request for a change of venue and called her frivolous. Her evidence and facts were being concealed. Denied the chance to present her evidence in the courtroom, Carol made the decision to go Pro Se into Federal Court, Middle District, and presented her case under 42 U.S. Code Section 1983; she was in Federal Court for 14 months. Her case was dismissed in October due to Rooker-Feldman.
Carol wanted a review and submitted evidence and a brief to the Fourth Circuit, wanting answers as to why her evidence wasn't considered. Carol encourages others: "Pro Se people should never be afraid and back down from Federal Court. I don't think that's an option; it's a pretty gutsy thing to do." To reach Katherine Moore and Carol Moore: kmoore@protectivemoms.net ******** Support the show: https://www.buymeacoffee.com/maryannpetri Maryann Petri: dismantlingfamilycourtcorruption.com TikTok: https://www.tiktok.com/@maryannpetri Facebook: https://www.youtube.com/@slamthegavelpodcasthostmar5536 Instagram: https://www.instagram.com/guitarpeace/ Pinterest: Slam The Gavel Podcast/@guitarpeace LinkedIn: https://www.linkedin.com/in/maryann-petri-62a46b1ab/ YouTube: https://www.youtube.com/@slamthegavelpodcasthostmar5536 Twitter: https://x.com/PetriMaryann *DISCLAIMER* The use of this information is at the viewer/user's own risk. This is not financial, medical or legal advice, as the content on this podcast does not constitute legal, financial, medical or any other professional advice. Viewers/users should consult with the relevant professionals. Reproduction, distribution, performing, publicly displaying and making a derivative of the work is explicitly prohibited without permission.
Stephanie Harrison had a dream job in New York City, a beautiful apartment, and all the signs of success. But deep down, she felt empty. To find happiness, she worked hard at perfecting herself and achieving more, but all she found was loneliness, depression, and a lack of fulfillment. After going through a breakdown, she started studying the psychology of happiness and made changes that transformed her life. Taking what she learned, she founded The New Happy, a movement that has helped thousands of people find fulfillment. In this episode, Stephanie explains how living authentically, building connections, and focusing on giving back can lead to a happier life, even under the pressures of building a business. In this episode, Hala and Stephanie will discuss: (00:00) Introduction (02:24) The Old Model of Happiness and Its Lies (04:00) The Trap of Chasing Perfection (06:43) Her Journey to Understanding Happiness (10:56) Unhappiness in America (12:20) Entrepreneurship and Mental Health (13:00) The Real Cost of Capitalism on Well-Being (15:00) What is 'The New Happy' Philosophy? (18:00) Self-Worth: Finding Value Beyond Achievement (21:59) Extrinsic vs. Intrinsic Goals: The Happiness Divide (29:59) Practical Steps to Living Authentically (30:00) A Daily Practice for Happiness (34:00) Loneliness: A Lack of Giving, Not Just Receiving (36:22) The Power of Gratitude (49:19) Understanding Self-Worth (55:05) The Key to Long-Term Happiness Stephanie Harrison is the founder of The New Happy. With a Master's in Positive Psychology from the University of Pennsylvania, she also developed well-being programs as Director of Learning at Thrive Global. She's the host of The New Happy Podcast and author of New Happy, where she debunks myths about success and shares a fresh, science-backed approach to joy. Stephanie's work has reached millions through social media, her book, and major platforms like Forbes and CNBC. 
She regularly speaks to leaders at Fortune 500 companies about creating supportive environments. Sponsored By: Fundrise - Add the Fundrise Flagship Fund to your portfolio in minutes at https://fundrise.com/PROFITING Found - Try Found for FREE at https://found.com/profiting Mint Mobile - To get a new 3-month premium wireless plan for just 15 bucks a month, go to https://mintmobile.com/profiting Working Genius - Get 20% off the $25 Working Genius assessment at https://www.workinggenius.com/ with code PROFITING at checkout Shopify - Sign up for a one-dollar-per-month trial period at https://youngandprofiting.co/shopify Indeed - Get a $75 job credit at https://indeed.com/profiting Teachable - Claim your free month of their Pro paid plan at https://teachable.com/ with code PROFITING Airbnb - Your home might be worth more than you think. Find out how much at airbnb.com/host Connect with Stephanie: Stephanie's Website: https://www.stephanielharrison.com/ Stephanie's LinkedIn: https://www.linkedin.com/in/stephanieleighharrison Stephanie's Instagram: https://www.instagram.com/stephaniehson/ Stephanie's TikTok: https://www.tiktok.com/@stephaniehson Resources Mentioned: The New Happy: https://www.thenewhappy.com/ Stephanie's Book, New Happy: Getting Happiness Right in a World That's Got It Wrong: https://www.amazon.com/New-Happy-Getting-Happiness-Right/dp/0593541383 LinkedIn Secrets Masterclass, Have Job Security For Life: Use code ‘podcast' for 30% off at yapmedia.io/course. Top Tools and Products of the Month: https://youngandprofiting.com/deals/ More About Young and Profiting Download Transcripts - youngandprofiting.com Get Sponsorship Deals - youngandprofiting.com/sponsorships Leave a Review - ratethispodcast.com/yap Watch Videos - youtube.com/c/YoungandProfiting Follow Hala Taha LinkedIn - linkedin.com/in/htaha/ Instagram - instagram.com/yapwithhala/ TikTok - tiktok.com/@yapwithhala Twitter - twitter.com/yapwithhala Learn more about YAP Media's Services - yapmedia.io/
The Intuitive Customer - Improve Your Customer Experience To Gain Growth
If there is one thing that academics know how to do, it's publish new research. It seems that umpteen studies are published every hour, and it can be overwhelming to keep up with it all. So, we undertook to help you in this week's episode. We explore three fascinating studies in the realm of consumer behavior with insights from Dr. Morgan Ward, a Professor of Consumer Behavior at Emory University. From the influence of sound on social status to the role of streaks in motivating behavior and even how firms should use AI to deliver news to customers, this episode provides a wealth of information for businesses looking to understand and serve their customers better than they do today. Social Status and Product Sound Dr. Ward's research delves into how consumers choose products based on the sounds they emit, linking these choices to social status. The study finds that people often seek status through two main channels: dominance and prestige. Some customers buy noisy products, like a Harley Davidson, to assert dominance, while others opt for quiet, high-end products, such as Dyson fans, to signify prestige. Ward emphasizes that understanding the status-seeking motivations of your target audience can help businesses design products that appeal to specific social power desires. Key Takeaway: Customizing product sounds can signal social power, appealing to customers' status-seeking behavior. The Gamification of Behavior Through Streaks Ward also discusses the role of streaks in consumer behavior, particularly how brands use streak-based incentives to encourage continued engagement. Apps like Duolingo, Snapchat, and Headspace all capitalize on the idea of maintaining streaks to motivate daily usage. However, there are tradeoffs. Extrinsic motivators like streaks can sometimes overshadow intrinsic motivators, leading to a decrease in overall enjoyment and, ultimately, participation.
Ward warns businesses to consider carefully when streaks are appropriate and to balance the motivations that drive consumer behavior.
Key Takeaway: Streak-based gamification can motivate, but businesses should carefully balance extrinsic and intrinsic motivations.
AI vs. Human Interaction
Finally, Ward shares research on when companies should use AI versus humans for customer interactions, particularly around delivering good or bad news. Interestingly, the research suggests that AI should handle bad news because customers perceive it as more impartial. In contrast, good news is best delivered by a human to create a personal connection. These findings affect how businesses structure customer service strategies and use AI.
Key Takeaway: AI is better suited for delivering bad news, while human interaction is more impactful for delivering good news.
Additional things you'll learn in this episode:
How cultural differences affect consumer status-seeking behavior.
The psychological impact of losing a streak and how it influences future behavior.
Why some products benefit more from gamification than others.
The future implications of AI-human interaction in customer service.
In today's episode we dive deep into the topic of self worth as Andie shares what self worth is, why it's such a confusing concept, and what you need to do to actually build self worth for yourself. If you are struggling with self-sabotage, feeling worthy as you are, feeling confident and at peace in your life, or simply wanting to live a more interesting life... this is the episode for you.
What we cover:
What self worth is
The difference between Extrinsic & Intrinsic self worth
The connection between purpose, living an interesting life, and self worth
The three-step method to building a rock-solid sense of self worth
And more...
Join Building Self Worth, the Extended Training, TODAY
FREE & LOW COST RESOURCES FOR YOU: Get the free journal here and join the email list here! Check out my website here! Follow on Instagram @andiecolleen and TikTok @andie.colleen for more mini-trainings, tips, and advice.
SUPPORT THE SHOW: Please subscribe, rate, and review over on Apple Podcasts and Spotify to help support Mindset Magic! Follow along on Instagram and TikTok for updates, giveaways, and more inspo!
Mental skills are so important to train to assist us in our races, but oftentimes we leave them out. Included in this episode are some things you can work on to help you in your daily runs and life.
My notes:
Mentally preparing for a race
Goals: A, B, C. Write them down and share them. Revisit them from time to time. Are they realistic? Extrinsic & intrinsic.
Whys: These are yours. You decide if you want to share them, depending on how personal they are. If strong enough, you may want to share them with your crew & pacers. Revisit & revise (make them as strong as possible).
Foreseen obstacles: Think of troubles you've had in the past. How would you resolve them now? Give yourself multiple resolutions. Ask others what they have encountered. How did they resolve them? Would they do anything differently? How would you resolve them? Watch videos & read race reports. What were the biggest challenges? How did they overcome them?
Practice your focus: Zoom in/out - what helps you zoom out? Music? Talking? Staying present. Avoiding negative thoughts by acknowledging them, letting go, and asking: what good does that thought do me? Learn to ask: What do I need right now? What can I do to resolve this? Chunking. Triggers.
Episodes I referred to:
Values - https://runningislifepodcast.podbean.com/e/aligning-your-values-with-your-running-episode-206/
Leadville - https://runningislifepodcast.podbean.com/e/my-race-recap-of-the-leadville-100-mile-race-episode-184/
Aaron's information: If you'd like to learn more about Patreon or to donate, please visit https://www.patreon.com/RunningIsLife
My Socials, Channels, & Newsletter:
https://www.facebook.com/MRRUNNINGPAINSCOACHING
https://www.instagram.com/runningislifecoaching/
https://www.youtube.com/channel/UCQ6J512qA34z_N0KJSU4jfw
https://www.strava.com/athletes/18431982
Email - coachsaft@gmail.com
Thanks to all of you for listening! Please share the podcast and please leave a review, rate, & subscribe if you haven't done so already! THANK YOU!
Aaron Saft, Running Is Life Coaching & Podcast. I'd be remiss not to say thanks to my Patrons for their continued support!
In Episode 154, we are recasting our episode with Dan Abrahams, Sports Psychologist, best-selling author of four books, Founder of the Dan Abrahams Soccer Academy and The Sport Psych Show Podcast, and former pro golfer, who talks with Phil about his sports psychology work, his books, and his podcast, the concepts of “Teamship” and motivational climate, extrinsic vs. intrinsic motivation, pre-failing, self-leadership, his personal why, how we can coach difficult players by asking great questions, and whether we can recreate the pressure of a penalty kick outside of a match. Specifically, Dan discusses: · His story, his work with Premier League and other football/soccer teams, his books, his podcast, how he developed his passion for soccer and leadership, and how he got to where he is today (3:31) · What excites him about the increase of awareness of the importance of mindset and sports psychology in sports over the past several years (13:53) · In which sport the mental game is more important: soccer or golf (20:04) · The concept of “Teamship” and why it is important (26:49) · The Motivational Climate of a Team, what it is, and what it has to do with a team's culture and burnout (30:13) · Extrinsic vs. Intrinsic Motivation, get-to vs. 
have-to mentality, and how they are related to short- and long-term performance (43:07) · The concept of pre-failing (49:01) · Why self-leadership and a leader's mindset are important (53:13) · His personal why and how it is playing out in his life (59:05) · “Not” coaching difficult players, but teaching them to coach themselves through questions (1:01:23) · How personality styles are related to the sports psychology work (1:05:45) · Whether it is possible to recreate the pressure of a penalty kick in a training environment (1:09:31) · How he uses lessons learned in soccer in his family relationships (1:14:16) · His recommendation, which is very personal to him (1:15:44) Resources and Links from this Episode Dan's Website The Sport Psych Show Podcast Uncut Video of the Episode HSEL Facebook Group Coaching the Bigger Game information Warrior Way information Phil Darke's email address Soccer Tough: Simple Football Psychology Techniques to Improve Your Game, by Dan Abrahams Soccer Tough 2: Advanced Psychology Techniques for Footballers, by Dan Abrahams Soccer Brain: The 4C Coaching Model for Developing World Class Player Mindsets and a Winning Football Team, by Dan Abrahams Golf Tough: Practice, Prepare, Perform, and Progress, by Dan Abrahams The Inner Game of Tennis: The Classic Guide to the Mental Side of Peak Performance, by W. Timothy Gallwey
In this episode, I dive into the difference between intrinsic vs. extrinsic motivation, especially for those of us with ADHD. Learn how to build and nurture your own intrinsic motivation to get things done in a way that feels good and sustainable. Whether you're tired of relying on external rewards or struggling to tap into your inner drive, this one's for you!
How To Foster Intrinsic Self Worth Not Extrinsic Self Worth Gita 18 14 by Exploring mindfulness, yoga and spirituality
Are you confused by how you think about your own worth? Maybe one part of you feels like you have things to offer, that you have value to add, but another can't quite believe the idea that you matter to other people... Third Culture Kids often experience this paradox: "I am someone, but I'm not someone to others". I explore this experience here as a tension between experiences of intrinsic and extrinsic value, and I'd love to hear your thoughts! Hosted on Acast. See acast.com/privacy for more information.
Looking for motivation? Go intrinsic first. Extrinsic second. And you'll find some motivation that stands up against any adversity. Ready to do the work? Start an exercise on the Mojo Crowe App: https://mojocrowe.com/app Not quite ready to begin the journey? All good, we'll be here when you're ready. Come hang out with us on our socials in the meantime. https://www.instagram.com/mojocrowe/ https://www.linkedin.com/company/mojo-crowe/
Read our book, The Score That Matters https://amzn.to/3XxHi7p Full show notes at www.LearningLeader.com Notes: Arthur grew up with one goal: to be the world's greatest French horn player. He learned that striving for something was fungible across all fields of life. It was a great laboratory for learning. Intrinsic vs. extrinsic motivation: intrinsic motivation comes from an internal desire to accomplish a goal, while extrinsic motivation comes from external rewards and praise. "Misery comes from excessive auto-focus." Misery comes from thinking about yourself too much and not enough about helping others. The curse of the strive... All happiness comes from progress. The arrival is not the goal. How to be happy while striving: Be grateful. Write it down. Do it daily. Always look to help others. "All research is 'me-search.'" The Four Idols: Money, Power, Pleasure, and Prestige/Fame. We talked through ours… What are yours? The four focus areas to help with happiness: Faith, Family, Friendship, and Serving Others. Define your purpose. Write it down. Understand why you're here. Mine = "To inspire others to value and pursue excellence." Too many people are OK with mediocrity. We should strive for more. Oprah Winfrey is the same person everywhere she goes. She is genuine and authentic with everyone. Arthur's column helped Oprah stay positive and happy through the pandemic. So much so that she called him and asked to meet, and eventually, to write a book together. That book became a #1 best-seller. #1 life hack: "Don't lie, ever." Arthur is jacked (in great shape). Taking care of your body helps fight unhappiness. Wake up 1.5 hours before dawn. Work out hard. Lift weights. Do challenging cardio. Life/career advice: Don't worry too much about the first job out of college. Don't sacrifice relationships. Bring love to every relationship and be great at what you do. Be excellent. Emanate love and show excellence.
Have you ever wondered if the time you spend nurturing your soul is truly worth it? In this episode, Soul Care Guide Bonnie explores the difference between extrinsic and intrinsic goals and how they impact our happiness and fulfillment. Learn to identify and invest in the activities that bring you joy, peace, and comfort: those small beginnings that God values deeply. Tune into this episode to discover how nurturing your intrinsic goals, instead of striving toward extrinsic ones, can lead to greater fulfillment, and how Jesus views our seemingly small efforts as significant.
Key Takeaways:
- Understand the difference between extrinsic and intrinsic goals.
- Learn why intrinsic goals lead to greater happiness and fulfillment.
- Discover how Jesus values our small beginnings.
- Practice a powerful breath prayer to nurture your soul.
Breath Prayer: (inhale) Do not despise these small beginnings / (exhale) the Lord rejoices to see the work begin
Scripture: "Do not despise these small beginnings, for the Lord rejoices to see the work begin." Zech. 4:10
LINKS & RESOURCES
- Take the FREE Soul Care Quiz at soulcarequiz.com – your wellness assessment!
- Ask Bonnie questions at https://bit.ly/askbreathe
- Get Bonnie's bestseller book "Breathe: 21 Days to Stress Less": https://amzn.to/4azae1K
- Join Soul Care School: http://mysoulcareschool.com
- Join the Breathe Newsletter! https://thebonniegray.com/subscribe/
- Follow Bonnie at www.instagram.com/thebonniegray & www.facebook.com/thebonniegray
Discover more Christian podcasts at lifeaudio.com and inquire about advertising opportunities at lifeaudio.com/contact-us.
Social loafing is a phenomenon that is becoming more prevalent in today's workplace. In this episode, I discuss social loafing and provide mitigating tips.
References
Aggarwal, P., & O'Brien, C. L. (2008). Social loafing on group projects: Structural antecedents and effect on student satisfaction. Journal of Marketing Education, 30(3), 255-264.
Alnuaimi, O. A., Robert, L. P., & Maruping, L. M. (2010). Team size, dispersion, and social loafing in technology-supported teams: A perspective on the theory of moral disengagement. Journal of Management Information Systems, 27(1), 203-230.
Bennett, N., & Naumann, S. E. (2005). Understanding and preventing shirking, job neglect, social loafing, and free riding. In R. E. Kidwell & C. L. Martin (Eds.), Managing Organizational Deviance (Vol. 1, pp. 113-130). Sage.
Chidambaram, L., & Tung, L. L. (2005). Is out of sight, out of mind? An empirical study of social loafing in technology-supported groups. Information Systems Research, 16(2), 149-168.
George, J. M. (1992). Extrinsic and intrinsic origins of perceived social loafing in organizations. Academy of Management Journal, 35(1), 191-202.
Jia, H., Jia, R., & Karau, S. (2013). Cyberloafing and personality: The impact of the Big Five traits and workplace situational factors. Journal of Leadership & Organizational Studies, 20(3), 358-365.
Karau, S. J., & Williams, K. D. (1993). Social loafing: A meta-analytic review and theoretical integration. Journal of Personality and Social Psychology, 65(4), 681-706.
Liden, R. C., Wayne, S. J., Jaworski, R. A., & Bennett, N. (2004). Social loafing: A field investigation. Journal of Management, 30(2), 285-304.
Monzani, L., Ripoll, P., Peiró, J. M., & Van Dick, R. (2014). Loafing in the digital age: The role of computer-mediated communication in the relation between perceived loafing and group affective outcomes. Computers in Human Behavior, 33, 279-285.
Mulvey, P. W., & Klein, H. J. (1998). The impact of perceived loafing and collective efficacy on group goal processes and group performance. Organizational Behavior and Human Decision Processes, 74(1), 62-87.
Pearsall, M. J., Christian, M. S., & Ellis, A. P. J. (2010). Motivating interdependent teams: Individual rewards, shared rewards, or something in between? Journal of Applied Psychology, 95(1), 183-191.
Price, K. H., Harrison, D. A., & Gavin, J. H. (2006). Withholding inputs in team contexts: Member composition, interaction processes, evaluation structure, and social loafing. Journal of Applied Psychology, 91(6), 1375-1384.
Send us a Text Message.
Hi there, friends! We're back with an episode featuring the much-asked-for, long and patiently waited for, Taggart Notes!
Credit goes to: The Taggart Notes (PDF), National Board Examination Review Book for Students of Funeral Service Education/Mortuary Science: Sciences, by Dr. Thomas R. Taggart.
The Taggart Notes for Sciences look like they hold some very useful information for preparing yourself for the Science NBE, but I will be skipping all of the out-of-date content for you. I highly recommend looking into David's MUCH better, and far more extensive, study guide books he has done for Arts and Sciences, as they literally bounce off of The Taggart Notes and expand upon them whilst updating them to match today's proper terminology and definitions. (Just remember, use MEG25 and save $25 off of EACH book if you do get one, or both!)
In this episode I cover the following topics:
Intrinsic / Extrinsic Factors in Case Analysis
Dilution and Drainage
Discolorations
Blood Discolorations
Fluid Accumulation
PLEASE understand that as wonderful as his notes are, they are incredibly outdated. We're talking 2005. The man who created the D.E.A.D Program made an updated set of study guides to replace the Taggart Notes, and if you go to the D.E.A.D website and hit Resource Materials, you can buy one or both. Enter MEG25 and you will get a nice $25 discount off of EACH book!
I have also started a YouTube channel to coincide with this podcast that I highly suggest following and subscribing to if you have not already! I will be offering in-depth visual aids of the various places of worship, casket construction and parts, etc.! The link to my channel is: https://www.youtube.com/@MeghanOpocensky
As always, thank you all so much for your continued support and I immensely appreciate your continued patience! :)
This upcoming week David and I intend to blow through those Conference Glossary Terms y'all enjoy so much and get those knocked out for you.
We just did Pathology AND Microbiology and I will upload them both once he sends them to me! So stay tuned, things will be getting good! I will also be revisiting Arts here and there, as I want to offer visual aids for a few things, as I know David wants to do for certain Science topics. So please look at my "seasons" as really more of an "Arts" and "Sciences" divider to help with any confusion, ok?
Season 1: Arts - Season 2: Sciences - maybe a Season 3 for random funeral professionals to share their wisdom, practices, etc., but that's a ways away!
THE BIGGEST THANK YOU TO MY SUBSCRIBERS: DANIELLA G. & SARAH J. :)
Check out my Patreon to support my show, too: https://www.patreon.com/MeghanOpocensky?utm_campaign=creatorshare_creator
I'll be updating it later! As always, thank you all so much for your continued support! I will continue looking at what to do next that I have yet to touch on, and I appreciate your continued patience!
To support this show through the channel platform, please visit: https://www.buzzsprout.com/2355990/supporters/new
If you're interested in supporting the show but don't want to subscribe, my info is as follows: @MegOpocensky / $MegOpocensky
Support the Show.
Wrinkles are an inevitable part of aging, and while genetics play a role, external factors can accelerate this process. But the truth is, you have more control over their appearance than you might think. Your lifestyle and skincare choices can make a significant difference. In this solo episode of the Biohacking Beauty podcast, I talk about wrinkle formation, the science behind it, actionable tips and strategies to help combat it, and more!
Listen as I discuss:
(00:02) Biohacking strategies for wrinkle prevention
(02:16) Extrinsic factors and oxidative stress
(07:55) Antioxidants for healthy skin
(12:50) Ingredients that are incredible for cellular turnover
(15:06) A holistic approach to skin care
To learn more about Young Goose: Use code PODCAST10 to get 10% off your first purchase, and if you're a returning customer use the code PODCAST5 to get 5% off at https://www.younggoose.com/ Instagram: @young_goose_skincare
In this episode of The Addicted Mind Plus, Duane and Eric Osterlind dive into the difference between extrinsic and intrinsic goals and how they affect our well-being. Have you ever felt the high of achieving a big goal, only to have that happiness fade away? This episode explores why that happens and introduces the concept of the "hedonic treadmill." You'll learn how extrinsic goals, like money and status, give short-term happiness but don't last. In contrast, intrinsic goals, like personal growth and meaningful relationships, bring deeper and more lasting joy. Duane and Eric share practical tips on how to shift your focus to these intrinsic goals, cultivate gratitude, and build stronger, more fulfilling connections. They also discuss the importance of mindfulness, personal growth, and serving others in achieving true contentment. Tune in to discover how you can step off the hedonic treadmill and find real, sustainable happiness in your life. Download: INTRINSIC VS. EXTRINSIC GOALS Join Our Deep Dive, where we discuss this episode in depth. Register Here: https://theaddictedmind.com/deepdive Click Here to Join the TAM + Community Waitlist. Get the support you need. Key Topics The difference between extrinsic and intrinsic goals Understanding the hedonic treadmill How extrinsic goals lead to temporary happiness The importance of intrinsic goals for lasting fulfillment Practical tips to shift focus and cultivate intrinsic goals Timestamp List [00:01:06] Introduction to the topic: Extrinsic vs. Intrinsic Goals [00:03:08] Explanation of the hedonic treadmill [00:04:20] The impact of extrinsic goals on happiness [00:07:33] Defining and understanding intrinsic goals [00:11:00] How to step off the hedonic treadmill [00:16:04] Practical tips for cultivating intrinsic goals [00:19:00] Summary and closing thoughts Learn more about your ad choices. Visit megaphone.fm/adchoices
Explore the essence of true happiness with Stephanie Harrison, the visionary behind The New Happy. In this enlightening episode, she and Dr. Stephanie dissect the flawed beliefs of individualism and relentless achievement that often define our pursuit of happiness. Stephanie Harrison introduces a transformative approach that emphasizes community, self-awareness, and the celebration of our innate talents.
Episode Overview:
0:00 Intro/Teaser
4:09 Welcome Stephanie Harrison
4:21 Pursuit of Happiness in All Walks of Life
6:38 The Fallacy of Being Happy All the Time
8:59 Deconstructing the Old Happy vs. New Happy
10:47 Lie #1: You're Not Good Enough
16:24 Pursuit of Happiness and Societal Conditioning
23:06 The Tyranny of the Perfect Self
28:08 Learning from Mistakes and Frustration
30:43 Lie #2: You'll Be Happy When
33:01 Extrinsic vs. Intrinsic Goals
37:36 Intrinsic Goal Examples
50:01 Lie #3: You Have to Do It Alone
1:05:11 Sharing Your Gifts
1:10:58 Wisdom and Experience
1:14:06 Impactful Realizations
1:16:48 Visual Learning Insights
Resources mentioned in the episode:
Self-Acceptance and Interdependence Promote Longevity: Evidence From a 20-Year Prospective Cohort Study - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7460297/
Positive psychological constructs and association with reduced risk of mild cognitive impairment and dementia in older adults: A systematic review and meta-analysis - https://www.sciencedirect.com/science/article/abs/pii/S1568163722000368
Social relationship satisfaction and accumulation of chronic conditions and multimorbidity: a national cohort of Australian women - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9950967/
Stephanie's book - https://www.thenewhappy.com/
Stephanie's podcast - https://podcasts.apple.com/us/podcast/the-new-happy/id1551211138
Bio: Stephanie Harrison is the creator of the New Happy philosophy. Her work has been featured in publications such as CNBC, Fast Company, Forbes, and Harvard Business Review.
She is the founder of The New Happy, a company helping individuals, companies, and communities apply this philosophy in their lives. The New Happy's art, newsletter, podcast, and programs reach millions of people around the world every month. She has a master's degree in positive psychology from the University of Pennsylvania. Previously, she was the Director of Learning at Thrive Global.
We are grateful to our sponsors:
EQUIP: You can use this grass-fed collagen daily to take care of your hair, skin, nails, joints, and gut after resistance training workouts, or you can even bake with it because it tastes like dessert, not beef. Go to https://equipfoods.com/better and use the code BETTER for 20% off.
ONESKIN: It's time for a radical shift in skin care. This year I am doubling down on skin health, and OneSkin is one of the ways I am getting my 40-year-old skin to behave like 20-year-old skin. Visit https://www.oneskin.co/?utm_source=partner&utm_medium=podcast&utm_campaign=BETTER and use the code BETTER at checkout to save 15%.
BEAM MINERALS: Beam Minerals contains every single mineral that you lose during perimenopause and menopause, in a meaningful dose with close to 100% bioavailability. All you have to do is take a shot of liquid every morning to replenish your mineral stores and ease the symptoms that you might be experiencing. Head over to https://beamminerals.com/better and use the code BETTER for 20% off.
Welcome to the Social-Engineer Podcast: The Doctor Is In Series – where we will discuss understandings and developments in the field of psychology. In today's episode, Chris and Abbie are discussing Intrinsic and Extrinsic Motivation. They will talk about the differences your source of motivation can have on your behavior and state of mind. [June 3, 2024] 00:00 - Intro 00:18 - Dr. Abbie Maroño Intro 00:35 - Intro Links - Social-Engineer.com - http://www.social-engineer.com/ - Managed Voice Phishing - https://www.social-engineer.com/services/vishing-service/ - Managed Email Phishing - https://www.social-engineer.com/services/se-phishing-service/ - Adversarial Simulations - https://www.social-engineer.com/services/social-engineering-penetration-test/ - Social-Engineer channel on SLACK - https://social-engineering-hq.slack.com/ssb - CLUTCH - http://www.pro-rock.com/ - innocentlivesfoundation.org - http://www.innocentlivesfoundation.org/ 03:35 - The Topic of the Day: Intrinsic vs Extrinsic Motivators 05:19 - Foundational Differences 07:19 - The Pitfalls of Extrinsic Motivation 09:29 - The Overjustification Effect 13:29 - The Intrinsic Difference 16:47 - Where Passion Lies 19:43 - Wellbeing is Intrinsic 22:07 - Situational Influence 27:57 - Passion and Warfare 30:04 - The Maladaptive Difference 33:02 - Avoidance 35:29 - High Risk! 38:31 - Self-reflection 40:31 - Smash That Extrinsic Button! 44:08 - ...A Life Well Lived 46:11 - We Should Grow! 49:15 - Wrap Up 49:40 - Next Month: Psychological Myths 50:06 - Outro - www.social-engineer.com - www.innocentlivesfoundation.org Find us online: - Twitter: @DrAbbieofficial - LinkedIn: linkedin.com/in/dr-abbie-maroño-phd - Instagram: @DoctorAbbieofficial - Twitter: @humanhacker - LinkedIn: linkedin.com/in/christopherhadnagy References: Amabile, T. M. (1993). Motivational synergy: Toward new conceptualizations of intrinsic and extrinsic motivation in the workplace. Human Resource Management Review, 3(3), 185-201. 
https://doi.org/10.1016/1053-4822(93)90012-S Baum, J. R., & Locke, E. A. (2004). The relationship of entrepreneurial traits, skill, and motivation to subsequent venture growth. Journal of Applied Psychology, 89(4), 587-598. https://doi.org/10.1037/0021-9010.89.4.587 Curran, T., Hill, A. P., & Appleton, P. R. (2015). The mediating role of psychological need satisfaction in relationships between types of passion for sport and athlete burnout. Journal of Sports Sciences, 33(6), 597-606. https://doi.org/10.1080/02640414.2014.951952 Deci, E. L., Koestner, R., & Ryan, R. M. (1999). A meta-analytic review of experiments examining the effects of extrinsic rewards on intrinsic motivation. Psychological Bulletin, 125(6), 627-668. https://doi.org/10.1037/0033-2909.125.6.627 Forest, J., Mageau, G. A., Sarrazin, C., & Morin, E. M. (2011). "Work is my passion": The different affective, behavioural, and cognitive consequences of harmonious and obsessive passion toward work. Canadian Journal of Administrative Sciences/Revue Canadienne des Sciences de l'Administration, 28(1), 27-40. https://doi.org/10.1002/cjas.170 Ho, V. T., & Pollack, J. M. (2014). Passion isn't always a good thing: Examining entrepreneurs' network centrality and financial performance with a dualistic model of passion. Journal of Management Studies, 51(3), 433-459. https://doi.org/10.1111/joms.12062 Kohn, A. (1993). Punished by rewards: The trouble with gold stars, incentive plans, A's, praise, and other bribes. Boston, MA: Houghton Mifflin. Ryan, R. M., & Deci, E. L. (2000). Intrinsic and extrinsic motivations: Classic definitions and new directions. Contemporary Educational Psychology, 25(1), 54-67. https://doi.org/10.1006/ceps.1999.1020 Vallerand, R. J., Blanchard, C., Mageau, G. A., Koestner, R., Ratelle, C., Léonard, M., ... & Marsolais, J. (2003). Les passions de l'âme: On obsessive and harmonious passion. Journal of Personality and Social Psychology, 85(4), 756-767.
https://doi.org/10.1037/0022-3514.85.4.756
Thank you for supporting Scholastic Answers. The next part in my series on the Development of Doctrine. NEW AQUINAS ACADEMY Link: https://www.christianbwagner.com/newaquinasacademy Discord: https://aquinas.cc/la/en/~DePrinNat.C1 Donate: https://www.patreon.com/newaquinasacademy FURTHER RESOURCES To get Tutoring: https://www.christianbwagner.com/book-online Annotated Thomist: https://www.christianbwagner.com/annotated-thomist Scholastic Courses: https://www.christianbwagner.com/courses SPONSOR Use the code "Militant" for 20% off to learn Greek here: https://fluentgreeknt.com/ MUSIC https://youtu.be/ePYe3lqsu-g https://youtu.be/Hi5YgbiNB1U SUPPORT Subscribe: https://www.youtube.com/channel/UCQ5DQ8zCOmeAqOcKTbSb7fg Become a Patron: https://www.patreon.com/MilitantThomist Donate: https://www.paypal.com/donate/?business=9XM8FACTLFDW2&no_recurring=0&item_name=Support+my+Apostolate&currency_code=USD SubscribeStar: https://www.subscribestar.com/militant-thomist FOLLOW Website: https://www.christianbwagner.com/ Facebook: https://www.facebook.com/MilitantThomist Facebook Group: https://www.facebook.com/groups/543689120339579 Twitter: https://twitter.com/MilitantThomist Instagram: https://www.instagram.com/militantthomist/ WATCH https://www.youtube.com/channel/UCQ5DQ8zCOmeAqOcKTbSb7fg LISTEN Podcast: https://www.christianbwagner.com/podcast Spotify: https://open.spotify.com/show/0exZN1vHDyLuRjnUI3sHXt?si=XHs8risyS1ebLCkWwKLblQ Apple Podcasts: https://podcasts.apple.com/us/podcast/militant-thomist/id1603094572 Anchor: https://anchor.fm/militantthomist SHOP Book Store: https://www.christianbwagner.com/shop Merch: https://www.christianbwagner.com/merch
Do you ever think about what truly drives us to achieve our goals and dreams? Join us as we explore the age-old debate between intrinsic and extrinsic motivation and uncover the secrets behind what really fuels our actions. Get ready to gain valuable insights that might just change the way you approach your students, and possibly your own motivations.
Topics Discussed:
The difference between these two types of rewards
How to foster motivation at the middle school level
Finding balance
Resources: https://thecoloradoclassroom.com/product/pat-on-the-back-recognition-cards
Please subscribe on your favorite platform so you don't miss an episode. Whether it's Spotify, Apple Podcasts, Google Podcasts, or some other listening app, we encourage you to take a moment to subscribe to The Teaching Toolbox. And if you feel so inclined, we would love a review at Apple or Spotify to help other listeners find us just like you did.
Let's Connect:
To stay up to date with episodes, check out our Facebook page or follow us on Instagram.
Join Brittany's 6th Grade Teacher Success group on Facebook.
Join Ellie's Middle School Math Chats group on Facebook.
Brittany's resources can be found on her website or on TPT.
Ellie's resources can be found on her website or on TPT.
Mentioned in this episode: Looking for support with Classroom Management? Grab a digital mental health check-in for free and receive classroom management support from The Colorado Classroom: https://thecoloradoclassroom.com/product/mental-health-digital-check-in-form Then take a look at these classroom management resources to see what is a good fit for your students: https://www.teacherspayteachers.com/Store/The-Colorado-Classroom/Category/999965039-CLASSROOM-MANAGEMENT-999965039-1263189
This podcast uses the following third-party services for analysis: Chartable - https://chartable.com/privacy
We will learn about leadership from a global executive who taught 60,000 students around the world. My guest today is Elizabeth Campbell Pagé. She is an entrepreneur, manager, and global executive in international organizations, including The World Bank, the Inter-American Development Bank (IDB), and The Royal Society. Elizabeth teaches cross-sectoral teams in person and online. She was also the founder, publisher, and editor of a global, peer-reviewed journal of economic and environmental sciences, Ecodecision, featuring Francophone and Commonwealth articles and distributed to over 100 countries. https://ecppage.com/

Elizabeth C. Pagé shares her childhood influences, early adventures, and curiosity. She discusses the challenges and rewards of living in different places and straddling cultures. Elizabeth talks about her work in leadership development and the importance of leading remote and hybrid teams. She explores the redefinition of success and failure and the need for a broader perspective. Elizabeth emphasizes the value of process and long-term thinking and the importance of intrinsic motivation in leadership. In this conversation, we discuss the redefinition of leadership in the face of disruption and the importance of creating more leaders. We explore the need for meaningful connections and creating environments where people can challenge ideas. We also delve into the impact of AI on different professions and the need for re-skilling. The power of storytelling and trust in business is highlighted, along with the intrinsic value of stories and the human element. The conversation concludes with a reflection on the seismic shift in the definition of success.

Chapters:
05:00 Introduction and Childhood Influences
08:13 Early Adventures and Curiosity
11:02 Living in Different Places and Straddling Cultures
14:27 The Work of Elizabeth C. Pagé
18:38 Leading Remote and Hybrid Teams
19:29 Creating a Safe Space for Voices to Be Heard
27:26 The Importance of Process and Long-Term Thinking
33:12 The Nuances of Success and Failure
34:31 The Shift from Extrinsic to Intrinsic Motivation
41:36 Redefining Leadership and Creating More Leaders
44:07 The Shift in Leadership and the Need for Meaningful Connections
45:09 Creating an Environment for Challenging Ideas
47:20 Preparing for the Disruption of AI
49:18 The Impact of AI on Different Professions
53:45 Aligning Values and Investing for Generational Wealth
56:54 The Power of Storytelling and Trust in Business
01:01:01 The Intrinsic Value of Stories and the Human Element
01:04:31 Insight, Hindsight, and Foresight in Success
01:07:54 Leadership as Guiding and Empowering Others
01:14:06 The Seismic Shift in the Definition of Success

Blue Infinitas Capital, LLC is a registered investment adviser and the opinions expressed by the Firm's employees and podcast guests on this show are their own and do not reflect the opinions of Blue Infinitas Capital, LLC. All statements and opinions expressed are based upon information considered reliable although it should not be relied upon as such. Any statements or opinions are subject to change without notice. Information presented is for educational purposes only and does not intend to make an offer or solicitation for the sale or purchase of any specific securities, investments, or investment strategies. Investments involve risk and unless otherwise stated, are not guaranteed. Information expressed does not take into account your specific situation or objectives, and is not intended as recommendations appropriate for any individual. Listeners are encouraged to seek advice from a qualified tax, legal, or investment adviser to determine whether any information presented may be suitable for their specific situation. Past performance is not indicative of future performance.
--- Send in a voice message: https://podcasters.spotify.com/pod/show/talking-billions/message
Donofriend, or Donovan Taylor Hall, joins Ashanti Branch, a master educator with 20+ years of experience in schools, to talk about what they are reflecting on as educators and what they're seeing in the halls of modern-day schools. Donovan is an experienced teacher and positive youth development expert who travels around the country speaking at schools and pushing into classrooms. Learn more about him in episode 152. Topics include: video games, core values, extrinsic/intrinsic motivation, "call-out" vs. "call-in", "book smart" vs. "street smart", being a warm demander, and shared humanity among teachers and students.

(0:00) Welcome!
(1:30) Donovan and Ashanti introduce themselves.
(4:00) Student behavior, engagement, and distractions
(7:00) Video games and what they teach us about students
(11:00) Making education entertaining
(15:00) Extrinsic vs. intrinsic motivation
(17:00) Core values
(25:00) Safe vs. unsafe schools; accountability
(30:00) Warm demander - running out of warm
(34:00) When to "call out" students, and the desire for attention
(40:00) "Book smart" vs. "street smart"
(43:00) Getting teachers more classroom management tools
(45:00) Teachers showing humanity; dropping ego

--- Connect with Donovan Taylor Hall: Website: www.donovantaylorhall.com Instagram, TikTok, Youtube: @donofriend Join our 5k Challenge: https://charity.pledgeit.org/20thAnnualEF5KChallenge Create your own mask anonymously at https://millionmask.org/ Email us questions and comments at totmpod100@gmail.com --- Connect with Ashanti Branch: Instagram: https://www.instagram.com/branchspeaks/ Facebook: https://www.facebook.com/BranchSpeaks Twitter: https://twitter.com/BranchSpeaks LinkedIn: https://www.linkedin.com/in/ashantibranch/ Website: https://www.branchspeaks.com/ --- Support the podcast and the work of the Ever Forward Club: https://podcasters.spotify.com/pod/show/branch-speaks/support --- Connect with Ever Forward Club: Instagram: https://www.instagram.com/everforwardclub Facebook: https://www.facebook.com/everforwardclub Twitter: https://twitter.com/everforwardclub LinkedIn: https://www.linkedin.com/company/the-ever-forward-club/
Did you know that sun exposure can not only cause burns, but also damage your DNA? Genomic instability is like rust that builds up and damages the core functions of your body's DNA. External aggressors, like the sun, can build up genomic instability, causing your DNA to mutate more rapidly and speeding up your skin's aging process.

In this bonus episode of the Biohacking Beauty podcast, Amitay, CEO of Young Goose, discusses genomic instability and how it accelerates aging at the cellular level, leading to visible signs like wrinkles and loss of elasticity. Learn about the innovative tools that are available to combat the visible signs of aging. Master the art of aging gracefully with a thoughtful dialogue on the importance of smart sun exposure, the power of NAD precursors, and skin health from the cellular level up!

What we discuss:
(1:02) An introduction to genomic instability
(2:50) Extrinsic aging and its effect on the skin
(4:24) Biohacking strategies for saving your skin from extrinsic aging and external aggressors
(6:11) Genoprotectants: what they are and how you can use them to protect your cells from damage
(7:54) The repair process, and why autophagy is so important

To learn more about Young Goose: Use code PODCAST10 to get 10% off your first purchase, and if you're a returning customer use the code PODCAST5 to get 5% off at https://www.younggoose.com/ Instagram: @young_goose_skincare

Resources:
Spermidine research: https://www.longevitybioresearch.org/
Spermidine in health and disease: https://www.science.org/doi/10.1126/science.aan2788
Today on the Brain Booster we have a live recording of a webinar I conducted with Raymond Prior. The event was oversubscribed, with people trying to get into the Zoom call after it was full.

BUILDING STABLE CONFIDENCE – THE NEW PROGRAM
To get your copy of the brand new program with Raymond Prior and Karl Morris. It will truly make a BIG difference to your development. Link to buy: Building Stable Confidence

Raymond and I collaborated on what we believe to be a groundbreaking project called 'Building Stable Confidence'. It is a program that has had wonderful reviews and could be a wonderful assistance this year to help you get the best from your game. During the webinar we discussed the key concept of 'Separating SELF from CRAFT':

How we can develop the capability to create some space between our capability as a golfer and our self-image as a human being
How being 'concerned' about our value as a human is tied to our evolutionary development
The early messages we receive as youngsters from important people around us, and how this can weave into our identity
The need to develop a 'SAFE PLACE' to play from
How our performance can become a THREAT to our identity
How this leads to unstable confidence
Why your relationship to the game can become so toxic
The necessity to ask yourself the question 'Why do I play golf?', not taking your initial answer but looking deeper to what is REALLY important to you
What are your INTRINSIC and SELF-DIRECTED values to play the game?
Challenging your core BELIEFS to see if they REALLY work for you
The fragility of EXTRINSIC values, and why you can never 'get enough'
The need to JOURNAL your experiences, and why that is so important
What am I committed to TODAY?
Think about WHY you play golf WHEN you play golf
Being INTENTIONAL and on purpose

This is a wonderful session with Raymond and Karl as they go into some real detail about how to get the very best from your game. Take ACTION and get yourself a copy of the Stable Confidence program.

BUILDING STABLE CONFIDENCE – THE NEW PROGRAM
To get your copy of the brand new program with Raymond Prior and Karl Morris. It will truly make a BIG difference to your development. Link to buy: Building Stable Confidence
Is Motivation Real? Introducing "The Overweight Mind" podcast, where the power of positive psychology, mindset, and personal development converge to transform lives. Join us on this uplifting journey as we explore the secrets to creating a lifestyle that leads to happiness, health, and wealth. Discover how the science of mindset can help you achieve your weight loss goals and unlock your full potential. It's where the magic of the mind meets the path to a healthier, wealthier you!

Show Sponsor: SpaStar www.spastar.net The Get Ready Wrap™ is spa-inspired and made for virtually every body. Designed with comfort and eco-elegance in mind - no velcro, snaps, or bulky material that takes up too much room in your closet, laundry, or suitcase. This luxury spa wrap is perfect for spa treatments, beauty rituals, special occasions, getting ready, makeup tutorials, travel, by the pool, and the gym bag. Discount Code to Save 15% Off Your Get Ready Wrap: THRIVE15

Do You Really Need Motivation? In this episode of The Overweight Mind, I discuss all things motivation: what motivation is (the two types), whether it's real, whether it's necessary for success, and how you can create it when it's gone.

What Is Motivation, and Do We Really Need It? Motivation refers to the driving force behind our actions, desires, and behaviors. It is the internal or external factor that stimulates us to pursue a particular goal, task, or outcome. Motivation is what energizes us, directs our behavior, and sustains our efforts over time. There are two main types of motivation:

Intrinsic motivation: This type of motivation comes from within oneself. It involves engaging in an activity because it is inherently rewarding, enjoyable, or fulfilling. Examples of intrinsic motivation include pursuing a hobby because it brings personal satisfaction or studying a subject out of genuine interest.

Extrinsic motivation: Extrinsic motivation, on the other hand, comes from external factors such as rewards, punishments, or social pressures. It involves engaging in an activity to earn a reward or avoid a negative consequence. Examples of extrinsic motivation include studying to earn good grades, working to receive a paycheck, or exercising to win a competition.

Motivation can be influenced by various factors, including:
*Personal goals and values
*Expectations of success or failure
*Past experiences and achievements
*Social and cultural norms
*Environmental factors
*Emotional states, such as excitement, fear, or enthusiasm

Why we lose motivation and how to get it back: Understanding motivation is essential for achieving personal and professional goals, as it plays a crucial role in determining our persistence, effort, and overall performance in pursuing desired outcomes. Losing motivation can happen for various reasons, including burnout, lack of clear goals, feeling overwhelmed, or experiencing setbacks.

5 steps to help you regain motivation:

#1 Reflect on your goals ~ Why did you start? Take some time to reassess your short-term and long-term goals. Are they still meaningful to you? Are they achievable? Adjust your goals to align with your current circumstances and aspirations if necessary.

#2 Break tasks into smaller steps ~ Rocks vs. boulders: Feeling overwhelmed can lead to a loss of motivation. Break down your tasks into smaller, manageable steps. This makes them feel less daunting and allows you to make progress incrementally, which can boost your motivation as you see yourself moving forward.

#3 Find inspiration ~ Manufacture urgency: Seek out sources of inspiration that resonate with you. This could be reading books, watching motivational videos, listening to podcasts, or talking to mentors or peers who inspire you. Surround yourself with positivity and motivation to reignite your own enthusiasm.

#4 Create a supportive environment ~ Who's influencing you? Evaluate your environment and identify any factors that may be contributing to your lack of motivation. These could include distractions, negative influences, or disorganization. Make changes to your environment that support your goals and encourage productivity.

#5 Take care of yourself ~ How are you treating yourself? Remember to prioritize your health and life, as physical and mental well-being play a significant role in motivation. Ensure you get enough sleep, eat healthily, exercise regularly, and manage stress effectively. Taking care of yourself gives you more energy and resilience to tackle challenges and stay motivated.

By implementing these steps, you can gradually regain your motivation and momentum toward achieving your goals. Remember that motivation fluctuates naturally, so be patient with yourself and celebrate your progress along the way.

If you love the show, it would mean the world to me if you'd like it, share it, and review it. That's the only way I can help more people and continue to fulfill my mission of helping others overcome pain and start living a life of peaceful passion.

Links: Thrive Forever Fit Supplements, Thrive Forever Fit Coaching Program, Thrive Fitness Studio
FREE Facebook Group: Wellness Lab
Best Selling Books: The Overweight Mind, The Purpose of Pain
In this robust episode, we break the anatomy of your dream private practice down into three parts. Are you willing to bet on yourself as a practice owner? Drop your fixed ideas and open your mind to the successful actions of the top 10% of clinics nationwide. Follow this blueprint to go from startup to success!

Episode at a glance:
Part 1: Extrinsic factors
Part 2: Financial factors
Part 3: Operational factors
I'm Annette Leonard of https://www.annetteleonard.com find me on Instagram https://www.instagram.com/theannetteleonard I recently read Emily Henry's "The People We Meet on Vacation." She did a great job putting to language the ways we are confident and insecure, and how relationships expose us to new layers of ourselves. In the book, one of her characters said that the more someone got to know her, the more she was afraid they'd discover the ugly and unlovable in her. I highlighted that passage and returned to it because I think that for any of us who carry shame or worthiness wounds, that's what's at the heart of it: that when all the trappings are stripped away, we'll come up short and be rejected. That anxiety is often too difficult to bear, and so we hustle, people-please, distance, get addicted, hurt others first, and fall into many more habits of self-defense. What's also interesting about this is that it's a commentary on the other person, too. It suggests that I'm so skilled at maintaining a façade that you can't see through it, but if you did, you wouldn't like what you'd see... So it also suggests that the other person (whose opinion we seem to care a lot about) is dupable or dim. Perhaps in a life without many obstacles, it might be easy to never have to examine these questions. However, chronic pain and illness stop you short and require facing these kinds of questions. This weekend I was talking with a friend of a friend who had a life-altering surgery. They're questioning things like "What is my worth?" "What is my value if I'm not in the world of work?" "Will my partner stay if I'm significantly transformed?" These aren't simple questions. In a world where productivity is tied to output, it's difficult not to see that as a commentary on our worth. While it's so simple for me to see that YOU have intrinsic worth, your value is your existence, your value is your YOUNESS.
But, it's much more difficult for me (on a hard day) to say it about myself -- but that is the goal. Do you struggle with these questions? This is the Chronic Wellness Podcast. I'm Annette Leonard, speaker, coach, and sick person who believes that my illnesses do not define me. If health is the absence of disease and wellness is the presence of wholeness, then no matter what your disease status, we can work toward your wellness, your wholeness. Whether or not you are ever "healthy" on paper, you can be well. Join me and others on the path back to wholeness at AnnetteLeonard.com. Whether you are a person experiencing chronic illness or are someone who loves or serves people with chronic illness I have great resources here on this channel or on my website for you.
Speaker CFPs and Sponsor Guides are now available for AIE World's Fair — join us on June 25-27 for the biggest AI Engineer conference of 2024!

Soumith Chintala needs no introduction in the ML world — his insights are incredibly accessible across Twitter, LinkedIn, podcasts, and conference talks (in this pod we'll assume you'll have caught up on the History of PyTorch pod from last year and cover different topics). He's well known as the creator of PyTorch, but he's more broadly the Engineering Lead on AI Infra, PyTorch, and Generative AI at Meta. Soumith was one of the earliest supporters of Latent Space (and more recently AI News), and we were overjoyed to catch up with him on his latest SF visit for a braindump of the latest AI topics, reactions to some of our past guests, and why Open Source AI is personally so important to him.

Life in the GPU-Rich Lane

Back in January, Zuck went on Instagram to announce their GPU wealth: by the end of 2024, Meta will have 350k H100s. By adding all their GPU clusters, you'd get to 600k H100-equivalents of compute. At FP16 precision, that's ~1,200,000 PFLOPS. If we used George Hotz's (previous guest!) "Person of Compute" measure, Meta now has 60k humans of compute in their clusters. Occasionally we get glimpses into the GPU-rich life; on a recent ThursdAI chat, swyx prompted PaLM tech lead Yi Tay to write down what he missed most from Google, and he commented that UL2 20B was trained by accidentally leaving the training job running for a month, because hardware failures are so rare in Google.

Meta AI's Epic LLM Run

Before Llama broke the internet, Meta released an open source LLM in May 2022, OPT-175B, which was notable for how "open" it was - right down to the logbook!
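As an aside, the compute arithmetic in the GPU-rich section can be sanity-checked in a few lines. The per-unit numbers are assumptions on our part, not vendor specs: roughly 2 PFLOPS of dense FP16 per H100-equivalent, and roughly 20 PFLOPS for Hotz's "Person of Compute" unit.

```python
# Back-of-envelope check of Meta's compute figures.
# Assumed constants (rough, not official specs):
#   ~2 PFLOPS dense FP16 per H100-equivalent
#   ~20 PFLOPS per "Person of Compute" (George Hotz's unit)
H100_EQUIVALENTS = 600_000
FP16_PFLOPS_PER_H100 = 2.0
PFLOPS_PER_PERSON_OF_COMPUTE = 20.0

total_pflops = H100_EQUIVALENTS * FP16_PFLOPS_PER_H100
persons_of_compute = total_pflops / PFLOPS_PER_PERSON_OF_COMPUTE

print(f"{total_pflops:,.0f} PFLOPS")        # 1,200,000 PFLOPS
print(f"{persons_of_compute:,.0f} persons") # 60,000 persons of compute
```

Both results match the ~1,200,000 PFLOPS and 60k humans of compute quoted above.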
They used only 16 NVIDIA V100 GPUs and Soumith agrees that, with hindsight, it was likely under-trained for its parameter size. In Feb 2023 (pre Latent Space pod), Llama was released, with a 7B version trained on 1T tokens alongside 65B and 33B versions trained on 1.4T tokens. The Llama authors included Guillaume Lample and Timothée Lacroix, who went on to start Mistral. July 2023 was Llama2 time (which we covered!): 3 model sizes, 7B, 13B, and 70B, all trained on 2T tokens. The three models accounted for a grand total of 3,311,616 GPU hours of pre-training work. CodeLlama followed shortly after, a fine-tune of Llama2 specifically focused on code generation use cases. The family had models in the 7B, 13B, 34B, and 70B sizes, all trained with 500B extra tokens of code and code-related data, except for the 70B, which was trained on 1T. All of this on top of other open sourced models like Segment Anything (one of our early hits!), Detectron, Detectron 2, DensePose, and Seamless. In one year, Meta transformed from a company people made fun of for its "metaverse" investments to one of the key players in the AI landscape, and its stock has almost tripled since (about $830B in market value created in the past year).

Why Open Source AI

The obvious question is why Meta would spend hundreds of millions on its AI efforts and then release them for free. Zuck has addressed this in public statements. But for Soumith, the motivation is even more personal:

“I'm irrationally interested in open source. I think open source has that fundamental way to distribute opportunity in a way that is very powerful. Like, I grew up in India… And knowledge was very centralized, but I saw that evolution of knowledge slowly getting decentralized. And that ended up helping me learn quicker and faster for like zero dollars. And I think that was a strong reason why I ended up where I am.
So like that, like the open source side of things, I always push regardless of like what I get paid for, like I think I would do that as a passion project on the side…

…I think at a fundamental level, the most beneficial value of open source is that you make the distribution to be very wide. It's just available with no friction and people can do transformative things in a way that's very accessible. Maybe it's open source, but it has a commercial license and I'm a student in India. I don't care about the license. I just don't even understand the license. But like the fact that I can use it and do something with it is very transformative to me…

…Like, okay, I again always go back to like I'm a student in India with no money. What is my accessibility to any of these closed source models? At some scale I have to pay money. That makes it a non-starter and stuff. And there's also the control issue: I strongly believe if you want human aligned AI, you want all humans to give feedback. And you want all humans to have access to that technology in the first place. And I actually have seen, living in New York, whenever I come to Silicon Valley, I see a different cultural bubble.”

We like the way Soumith put it last year: Closed AI “rate-limits against people's imaginations and needs”!

What It Takes For Open Source AI to Win

However, Soumith doesn't think Open Source will simply win by popular demand. There is a tremendous coordination problem with the decentralized nature of open source AI development right now: nobody is collecting the valuable human feedback in the way that OpenAI or Midjourney are doing.

“Open source in general always has a coordination problem. If there's a vertically integrated provider with more resources, they will just be better coordinated than open source. And so now open source has to figure out how to have coordinated benefits. And the reason you want coordinated benefits is because these models are getting better based on human feedback.
And if you see with open source models, like if you go to the /r/localllama subreddit, like there's so many variations of models that are being produced from, say, Nous research. I mean, like there's like so many variations built by so many people. And one common theme is they're all using these fine-tuning or human preferences datasets that are very limited and they're not sufficiently diverse. And you look at the other side, say front-ends like Oobabooga or like Hugging Chat or Ollama, they don't really have feedback buttons. All the people using all these front-ends, they probably want to give feedback, but there's no way for them to give feedback… So we're just losing all of this feedback. Maybe open source models are being as used as GPT is at this point in like all kinds of, in a very fragmented way, like in aggregate all the open source models together are probably being used as much as GPT is, maybe close to that. But the amount of feedback that is driving back into the open source ecosystem is like negligible, maybe less than 1% of like the usage. 
So I think like some, like the blueprint here I think is you'd want someone to create a sinkhole for the feedback… I think if we do that, if that actually happens, I think that probably has a real chance of the open source models having a runaway effect against OpenAI. I think like there's a clear chance we can take at truly winning open source.”

If you're working on solving open source coordination, please get in touch!

Show Notes
* Soumith Chintala Twitter
* History of PyTorch episode on Gradient Podcast
* The Llama Ecosystem
* Apple's MLX
* Neural ODEs (Ordinary Differential Equations)
* AlphaGo
* LMSys arena
* Dan Pink's "Drive"
* Robotics projects:
* Dobb-E
* OK Robot
* Yann LeCun
* Yangqing Jia of Lepton AI
* Ed Catmull
* George Hotz on Latent Space
* Chris Lattner on Latent Space
* Guillaume Lample
* Yannic Kilcher of OpenAssistant
* LMSys
* Alex Atallah of OpenRouter
* Carlo Sferrazza's 3D tactile research
* Alex Wiltschko of Osmo
* Tangent by Alex Wiltschko
* Lerrel Pinto - Robotics

Timestamps
* [00:00:00] Introductions
* [00:00:51] Extrinsic vs Intrinsic Success
* [00:02:40] Importance of Open Source and Its Impact
* [00:03:46] PyTorch vs TinyGrad
* [00:08:33] Why PyTorch is the Switzerland of frameworks
* [00:10:27] Modular's Mojo + PyTorch?
* [00:13:32] PyTorch vs Apple's MLX
* [00:16:27] FAIR / PyTorch Alumni
* [00:18:50] How can AI inference providers differentiate?
* [00:21:41] How to build good benchmarks and learnings from AnyScale's
* [00:25:28] Most interesting unexplored ideas
* [00:28:18] What people get wrong about synthetic data
* [00:35:57] Meta AI's evolution
* [00:38:42] How do you allocate 600,000 GPUs?
* [00:42:05] Even the GPU Rich are GPU Poor
* [00:47:31] Meta's MTIA silicon
* [00:50:09] Why we need open source
* [00:59:00] Open source's coordination problem for feedback gathering
* [01:08:59] Beyond text generation
* [01:15:37] Osmo and the Future of Smell Recognition Technology

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast.
This is Alessio, partner and CTO in residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:15]: Hey, and today we have in the studio Soumith Chintala, welcome.

Soumith [00:00:17]: Thanks for having me.

Swyx [00:00:18]: On one of your rare visits from New York where you live. You got your start in computer vision at NYU with Yann LeCun. That was a very fortuitous start. I was actually listening to your interview on the Gradient podcast. So if people want to know more about the history of Soumith, history of PyTorch, they can go to that podcast. We won't spend that much time there, but I was just marveling at your luck, or I don't know if it's your luck or your drive, to find AI early and then find the right quality mentor, because I guess Yann really sort of introduced you to that world.

Soumith [00:00:51]: Yeah, I think you're talking about extrinsic success, right? A lot of people just have drive to do things that they think are fun, and a lot of those things might or might not be extrinsically perceived as good and successful. I think I just happened to like something that is now one of the coolest things in the world or whatever. The first thing I tried to become was a 3D VFX artist, and I was really interested in doing that, but I turned out to be very bad at it. So I ended up not doing that further. But even if I was good at that, whatever, and I ended up going down that path, I probably would have been equally happy. It's just like maybe the perception of, oh, is this person successful or not might be different. I think like after a baseline, your happiness is probably more correlated with your intrinsic stuff.

Swyx [00:01:44]: Yes. I think Dan Pink has this book on drive that I often refer to, about the power of intrinsic motivation versus extrinsic and how long extrinsic lasts. It's not very long at all. But anyway, now you are an investor in Runway, so in a way you're working on VFX.
Yes.

Soumith [00:02:01]: I mean, in a very convoluted way.

Swyx [00:02:03]: It reminds me of Ed Catmull. I don't know if you guys know, but he actually tried to become an animator in his early years and failed, or didn't get accepted by Disney, and then went and created Pixar, and then got bought by Disney and created Toy Story. So you joined Facebook in 2014 and eventually became a creator and maintainer of PyTorch. And there's this long story there you can refer to on the Gradient. I think maybe people don't know that you were also involved in more sort of hardware and cluster decisions. And we can dive into more details there because we're all about hardware this month. Yeah. And then finally, I don't know what else, like what else should people know about you on a personal side or professional side?

Soumith [00:02:40]: I think open source is definitely a big passion of mine and probably forms a little bit of my identity at this point. I'm irrationally interested in open source. I think open source has that fundamental way to distribute opportunity in a way that is very powerful. Like, I grew up in India. I didn't have internet for a while. In college, actually, I didn't have internet except for GPRS or whatever. And knowledge was very centralized, but I saw that evolution of knowledge slowly getting decentralized. And that ended up helping me learn quicker and faster for zero dollars. And I think that was a strong reason why I ended up where I am. So the open source side of things, I always push regardless of what I get paid for, like I think I would do that as a passion project on the side.

Swyx [00:03:35]: Yeah, that's wonderful. Well, we'll talk about the challenges as well that open source has, open models versus closed models. Maybe you want to touch a little bit on PyTorch before we move on to the sort of Meta AI in general.

PyTorch vs TinyGrad tradeoffs

Alessio [00:03:46]: Yeah, we kind of touched on PyTorch in a lot of episodes.
So we had George Hotz from TinyGrad. He called PyTorch a CISC and TinyGrad a RISC. I would love to get your thoughts on PyTorch's design direction as far as, I know you talk a lot about kind of having a happy path to start with and then making complexity hidden away, but then available to the end user. One of the things that George mentioned is I think you have like 250 primitive operators in PyTorch, I think TinyGrad is four. So how do you think about some of the learnings that maybe he's going to run into that you already had in the past seven, eight years almost of running PyTorch?

Soumith [00:04:24]: Yeah, I think there's different models here, but I think it's two different models that people generally start with. Either they go like, I have a grand vision and I'm going to build a giant system that achieves this grand vision and maybe one is super feature complete or whatever. Or other people say they will get incrementally ambitious, right? And they say, oh, we'll start with something simple and then we'll slowly layer out complexity in a way that optimally applies Huffman coding or whatever. Like where the density of users are and what they're using, I would want to keep in the easy, happy path, and where the more niche advanced use cases are, I'll still want people to try them, but they need to take additional frictional steps. George, I think just like we started with PyTorch, George started with the incrementally ambitious thing. I remember TinyGrad used to be limited to a thousand lines of code, and I think now it's at 5,000. So I think there is no real magic to why PyTorch has the kind of complexity it has. I think it's probably partly necessitated and partly because we built with the technology available under us at that time. PyTorch is like 190,000 lines of code or something at this point. I think if you had to rewrite it, we would probably think about ways to rewrite it in a vastly simplified way for sure.
But a lot of that complexity comes from the fact that, in a very simple, explainable way, you have memory hierarchies. The CPU has three levels of caches, and then you have DRAM and SSD, and then you have the network. Similarly, the GPU has several levels of memory, and then you have different levels of network hierarchies, NVLink plus InfiniBand or RoCE or something like that, right? And the way the flops are available on your hardware, they are available in a certain way, and your computation is in a certain way, and you have to retrofit your computation onto both the memory hierarchy and the flops available. When you're doing this, it is actually a fairly hard mathematical problem to do this setup, like you find the optimal thing. And what is optimal depends on the input variables themselves. So like, okay, what is the shape of your input tensors and what is the operation you're trying to do and various things like that. Finding that optimal configuration and writing it down in code is not the same for every input configuration you have. Like for example, just as the shape of the tensors change, let's say you have three input tensors into a sparse matrix product or something like that. The shape of each of these input tensors will vastly change how you optimally place this operation onto the hardware in a way that will get you maximal throughput. So a lot of our complexity comes from writing out hundreds of configurations for each single PyTorch operator and templatizing these things and symbolically generating the final CUDA code or CPU code. There's no way to avoid it, because mathematically we haven't found symbolic ways to do this that also keep compile time near zero. You can write a very simple framework, but then you also should be willing to eat the long compile time of searching for that optimal performance at runtime. That's the trade off.
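To make the shape-dependence concrete, here is a toy sketch. This is entirely hypothetical, not PyTorch's actual dispatcher or its real heuristics; the point is only that the best implementation of one logical operation is a function of the input configuration, not just of the operation itself.

```python
# Hypothetical illustration of shape-dependent kernel selection.
# PyTorch's real machinery templatizes hundreds of configurations per
# operator; this toy just shows that one logical matmul maps to
# different strategies depending on the input configuration.
def pick_matmul_kernel(m: int, n: int, k: int, sparse: bool = False) -> str:
    if sparse:
        return "sparse_kernel"      # sparsity changes the code path entirely
    if m == 1 or n == 1:
        return "gemv"               # matrix-vector case: bandwidth-bound
    if min(m, n, k) < 16:
        return "simple_loop"        # too small to amortize tiling setup cost
    if m * n * k > 1 << 30:
        return "tiled_split_k"      # large problem: tile for memory reuse
    return "tiled"                  # default blocked kernel

print(pick_matmul_kernel(1, 4096, 4096))     # gemv
print(pick_matmul_kernel(4096, 4096, 4096))  # tiled_split_k
```

Same logical operation, several different strategies chosen purely from the shapes; multiply that by hundreds of operators and real memory/flop constraints and the line count Soumith describes follows.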
There's no, I don't think George's vision is achievable unless we have great breakthroughs. He should be thinking about a narrower problem, such as, I'm only going to make this work for self-driving car convnets, or I'm only going to make this work for LLM transformers of the Llama style. If you start narrowing the problem down, you can make a vastly simpler framework. But if you don't, if you need the generality to power all of the AI research that is happening and keep zero compile time and all these other factors, I think it's not easy to avoid the complexity.

PyTorch vs Mojo

Alessio [00:08:33]: That's interesting. And we kind of touched on this with Chris Lattner when he was on the podcast. If you think about frameworks, they have the model target, they have the hardware target, they have different things to think about. He mentioned that when he was at Google, TensorFlow was trying to be optimized to make TPUs go brr, and go as fast as possible. I think George is trying to make especially the AMD stack be better than ROCm. How come PyTorch has been such a Switzerland versus just making Meta hardware go brr?

Soumith [00:09:00]: First, Meta is not in the business of selling hardware. Meta is not in the business of cloud compute. The way Meta thinks about funding PyTorch is that we're funding it because it's net good for Meta to fund PyTorch, because PyTorch has become a standard and a big open source project. And generally it gives us a timeline edge. It gives us leverage and all that within our own work. So why is PyTorch more of a Switzerland rather than being opinionated? I think the way we think about it is not in terms of Switzerland or not. The way we actually articulate it to all the hardware vendors and software vendors who come to us saying, we want to build a backend in core for PyTorch and ship it by default, is that we only look at the user side of things. If users are using a particular piece of hardware, then we want to support it.
We very much don't want to kingmake the hardware side of things. So as the MacBooks got GPUs and that stuff started getting increasingly interesting, we pushed Apple to put some engineers on the MPS support, and we spent significant time from Meta-funded engineers on that as well, because a lot of people are using the Apple GPUs and there's demand. So we mostly look at it from the demand side. We never look at it from, oh, which hardware should we start taking opinions on.

Swyx [00:10:27]: Is there a future in which, because Mojo or Modular's Mojo is kind of a superset of Python, is there a future in which PyTorch might use Mojo features optionally?

Soumith [00:10:36]: I think it depends on how well integrated it is into the Python ecosystem. So if Mojo is a pip install and it's readily available and users feel like they can use Mojo so smoothly within their workflows in a way that has low friction, we would definitely look into that. In the same way PyTorch now depends on Triton, OpenAI's Triton, and we never had a conversation like, huh, that's a dependency, should we just build a Triton of our own or should we use Triton? Those conversations don't really come up for us. The conversations are more, well, does Triton have 10,000 dependencies and is it hard to install? We almost don't look at these things from a strategic leverage point of view. We look at them from a user experience point of view: is it easy to install, is it smoothly integrated, and does it give enough benefits for us to start depending on it? If so, yeah, we should consider it. That's how we think about it.

Swyx [00:11:37]: You're inclusive by default, as long as it meets the minimum bar. Yeah, but maybe I phrased it wrongly.
Maybe it's more like, what problems would you look to solve that you have right now?

Soumith [00:11:48]: I think it depends on what problems Mojo will be useful at.

Swyx [00:11:52]: Mainly a performance pitch, some amount of cross compiling pitch.

Soumith [00:11:56]: Yeah, I think the performance pitch for Mojo was, we're going to be performant even if you have a lot of custom stuff. You're going to write arbitrary custom things and we will be performant. And that value proposition is not clear to us from the PyTorch side to consider it for PyTorch. PyTorch actually exposes not 250 but about a thousand operators, and people write their ideas in those thousand operators of PyTorch. Mojo is like, well, maybe it's okay to completely sidestep those thousand operators of PyTorch and just write it in a more natural form. Just write raw Python, write for loops or whatever, right? So from the consideration of how we intersect PyTorch with Mojo, I can see one use case where you have custom stuff for some parts of your program, but mostly it's PyTorch. And so we can probably figure out how to make it easier for, say, torch.compile to smoothly also consume Mojo subgraphs, with the interoperability being actually usable. That, I think, is valuable. But Mojo as a fundamental front end would be replacing PyTorch, not augmenting PyTorch. So in that sense, I don't see a synergy in more deeply integrating Mojo.

PyTorch vs MLX

Swyx [00:13:21]: So call out to Mojo whenever they have written something in Mojo and there's some performance related thing going on. And then since you mentioned Apple, what should people think of PyTorch versus MLX?

Soumith [00:13:32]: I mean, MLX is early and I know the folks well. Awni used to work at FAIR and I used to chat with him all the time. He used to be based out of New York as well. The way I think about MLX is that MLX is specialized for Apple right now.
It has a happy path because it's defined its product in a narrow way. At some point MLX either says, we will only be supporting Apple and we will just focus on being the framework you use on your MacBook, but once you go server side or whatever, that's not my problem and I don't care. Or MLX enters the server side set of things as well. One of these two things will happen, right? If the first thing happens, MLX's overall addressable market will be small, but it will probably do well within that addressable market. If it enters the second phase, they're going to run into all the same complexities that we have to deal with. They will not have any magic wand and they will have more complex work to do. They probably wouldn't be able to move as fast.

Swyx [00:14:44]: Like having to deal with distributed compute?

Soumith [00:14:48]: Distributed, NVIDIA and AMD GPUs, having a generalization of the concept of a backend, how they treat compilation and its overheads. Right now the whole MPS graph thing is deeply assumed. So they need to think about all these additional things if they end up expanding onto the server side, and they'll probably build something like PyTorch as well, right? Eventually that's where it will land. And I think there they will kind of fail on the lack of differentiation. It wouldn't be obvious to people why they would want to use it.

Swyx [00:15:24]: I mean, there are some cloud companies offering M1 and M2 chips on servers. I feel like it might be interesting for Apple to pursue that market, but it's not their core strength.

Soumith [00:15:33]: Yeah. If Apple can figure out their interconnect story, maybe, then it can become a thing.

Swyx [00:15:40]: Honestly, that's more interesting than the cars. Yes.

Soumith [00:15:43]: I think the moat that NVIDIA has right now, I feel, is that they have the interconnect that no one else has. AMD GPUs are pretty good.
I'm sure there's various silicon that is not bad at all, but the interconnect, like NVLink, is uniquely awesome. I'm sure the other hardware providers are working on it, but...

Swyx [00:16:04]: I feel like when you say it's uniquely awesome, you have some appreciation of it that the rest of us don't. The rest of us just hear marketing lines, but what do you mean when you say NVIDIA is very good at networking? Obviously they made the Mellanox acquisition a few years ago.

Soumith [00:16:15]: Just the bandwidth it offers and the latency it offers. I mean, TPUs also have a good interconnect, but you can't buy them. So you have to go to Google to use it.

PyTorch Mafia

Alessio [00:16:27]: Who are some of the other FAIR PyTorch alumni that are building cool companies? I know you have Fireworks AI, Lightning AI, Lepton, and Yangqing, whom you've known since college when he was building Caffe.

Soumith [00:16:40]: Yeah, Yangqing and I used to be framework rivals, Caffe and Torch. I mean, we were all a very small close-knit community back then: Caffe, Torch, Theano, Chainer, Keras, various frameworks. It used to be more like 20 frameworks, I can't remember all the names. CCV by Liu Liu, who is also based out of SF. And one of the ways it was interesting is you went into the framework guts and saw if someone wrote their own convolution kernel or they were just copying someone else's. There were four or five convolution kernels that were unique and interesting. There was one from this guy out of Russia, I forgot the name, but I remembered who was awesome enough to have written their own kernel. And at some point there, I built out these benchmarks called convnet-benchmarks. They were just benchmarking all the convolution kernels that were available at that time.
It hilariously became big enough that at that time AI was getting important, but not important enough that industrial-strength players came in to do these kinds of benchmarking and standardization, like we have MLPerf today. So a lot of the startups were using convnet-benchmarks in their pitch decks, like, oh, you know, on convnet-benchmarks this is how we fare, so you should fund us. I remember Nervana actually was at the top of the pack because Scott Gray wrote amazingly fast convolution kernels at that time. Very interesting, but separate times. But to answer your question, Alessio, I think mainly Lepton and Fireworks are the two most obvious ones, but I'm sure the fingerprints are a lot wider. They're just people who worked within the PyTorch and Caffe2 cohort of things and now end up at various other places.

Swyx [00:18:50]: I think, both as an investor and as people looking to build on top of their services, it's an uncomfortable, I-don't-know-what-I-don't-know pitch. Because I've met Yangqing and I've met Lin Qiao. I've met these folks and they're like, you know, we are deep in the PyTorch ecosystem and we serve billions of inferences a day or whatever at Facebook and now we can do it for you. And I'm like, okay, that's great. What should I be wary of or cautious of when these things happen? Because obviously this experience is extremely powerful and valuable. I just don't know what I don't know. What should people know about these sort of new inference-as-a-service companies?

Soumith [00:19:32]: I think at that point you would be investing in them for their expertise of one kind. So if they've been at a large company, but they've been doing amazing work, you would be thinking about it as: what these people bring to the table is that they're really good at GPU programming or understanding the complexity of serving models once it hits a certain scale.
You know, various expertise from the infra and AI and GPUs point of view. What you would obviously want to figure out is whether their understanding of the external markets is clear, whether they know and understand how to think about running a business, understanding how to be disciplined about making money, and various things like that.

Swyx [00:20:23]: Maybe I'll put it like, actually I will de-emphasize the investing bit and just more as a potential customer. Oh, okay. It's more like, okay, you know, you have PyTorch gods, of course. What else should I know?

Soumith [00:20:37]: I mean, I would not care about who's building something. If I'm trying to be a customer, I would care about whether...

Swyx [00:20:44]: Benchmarks.

Soumith [00:20:44]: Yeah, I'd use it, and its usability and reliability and speed, right?

Swyx [00:20:51]: Quality as well.

Soumith [00:20:51]: Yeah, if someone from some random unknown place came to me and said, our stuff is great, and I have the bandwidth, I probably will give it a shot. And if it turns out to be great, I'll just use it.

Benchmark drama

Swyx [00:21:07]: Okay, great. And then maybe one more thing about benchmarks, since we already brought it up and you brought up convnet-benchmarks. There was some recent drama around Anyscale. Anyscale released their own benchmarks and obviously they look great on their own benchmarks, but maybe didn't give the other... I feel there are two lines of criticism. One is that they didn't test apples-for-apples on the kind of endpoints that the other providers, that they are competitors with, offer on their benchmarks, and that is a due-diligence baseline. And then the second would be more about just optimizing for the right thing. You had some commentary on it. I'll just kind of let you riff.

Soumith [00:21:41]: Yeah, in summary, basically my criticism of that was: Anyscale built these benchmarks for end users to just understand what they should pick, right?
And that's a very good thing to do. I think what they didn't do a good job of is give that end user a full understanding of what they should pick. They just gave them a very narrow slice of understanding. I think they just gave them latency numbers, and that's not sufficient, right? You need to understand your total cost of ownership at some reasonable scale. Not, oh, one API call is one cent, but a thousand API calls are 10 cents. People can misprice to cheat on those benchmarks. So you want to understand, okay, how much is it going to cost me if I actually subscribe to you and do a million API calls a month or something? And then you want to understand the latency and reliability, not just from one call you made, but an aggregate of calls you've made over various times of the day and times of the week. And the nature of the workloads: is it just some generic single paragraph that you're sending that is cacheable, or is it a test of a real-world workload? That kind of rigor in presenting the benchmark wasn't there. It was a much more narrow sliver of what should have been a good benchmark. That was my main criticism. And I'm pretty sure if, before they released it, they had shown it to the other stakeholders who would care about this benchmark because they are present in it, those stakeholders would have easily pointed out these gaps. I think they didn't do that and they just released it. So those were the two main criticisms. I think they were fair and Robert took it well.

Swyx [00:23:40]: And he took it very well. And we'll have him on at some point and we'll discuss it. But I think it's important, with the market maturing enough that people start caring and competing on these kinds of things, that we establish what best practice is, because otherwise everyone's going to play dirty.

Soumith [00:23:55]: Yeah, absolutely.
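As an aside, the aggregate reporting Soumith is asking for, latency percentiles over many calls plus projected cost at a realistic volume rather than a single latency number, can be sketched in a few lines. The function names and figures here are made up for illustration:

```python
# Hedged sketch: summarize a benchmark run as percentiles over many
# calls plus total cost of ownership at a stated monthly volume.

def percentile(samples, p):
    # Nearest-rank percentile over a non-empty list of samples.
    s = sorted(samples)
    idx = min(len(s) - 1, int(round(p / 100 * (len(s) - 1))))
    return s[idx]

def summarize(latencies_ms, price_per_call, calls_per_month):
    # A single fast call (the p50) can hide tail latency (the p99),
    # and a per-call price can hide the cost at real subscription volume.
    return {
        "p50_ms": percentile(latencies_ms, 50),
        "p99_ms": percentile(latencies_ms, 99),
        "monthly_cost": price_per_call * calls_per_month,
    }
```

Reporting p50 alongside p99 over calls spread across the day, plus cost at a million calls a month, is the kind of fuller picture the criticism is about.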
My view of the LLM inference market in general is that it's the laundromat model. The margins are going to be driven down towards the bare minimum. It's going to be all kinds of arbitrage between how much you can get the hardware for, how much you sell the API for, and how much latency your customers are willing to tolerate. You need to figure out how to squeeze your margins. What is your unique thing here? I think Together and Fireworks and all these people are trying to build some faster CUDA kernels and faster hardware kernels in general. But those moats only last for a month or two. These ideas quickly propagate.

Swyx [00:24:38]: Even if they're not published?

Soumith [00:24:39]: Even if they're not published, the idea space is small. So even if they're not published, the discovery rate is going to be pretty high. It's not like we're talking about a combinatorial space that is really large. You're talking about Llama-style LLM models, and we're going to beat those to death on a few different hardware SKUs, right? It's not like we have a huge diversity of hardware you're going to aim to run it on. When you have such a narrow problem and you have a lot of people working on it, the rate at which these ideas are going to get figured out is going to be pretty rapid.

Swyx [00:25:15]: Is it a standard bag of tricks? Like the standard one that I know of is, you know, fusing operators and...

Soumith [00:25:22]: Yeah, it's the standard bag of tricks of figuring out how to improve your memory bandwidth and all that, yeah.

Alessio [00:25:28]: Any ideas for things that are not being beaten to death that people should be paying more attention to?

Novel PyTorch Applications

Swyx [00:25:34]: One thing I was like, you know, you have a thousand operators, right?
Like, what's the most interesting usage of PyTorch that you're seeing, maybe outside of this little bubble?

Soumith [00:25:41]: So PyTorch is very interesting and scary at the same time, because basically it's used in a lot of exotic ways. From the ML angle, what kind of models are being built? You get everything from state-space models all the way to nth-order differentiable models, like neural ODEs and stuff like that. So there's one interestingness factor from the ML side of things. And then there's the other interestingness factor from the applications point of view. It's used in Mars rover simulations, in drug discovery, in Tesla cars. There's a huge diversity of applications in which it is used. In terms of the application side, I'm scared by how many interesting things it is used in that are also very critical and really important. I think the scariest was when I went to visit CERN at some point and they said they were using PyTorch and they were using GANs at the same time for particle physics research. And I was scared more about the fact that they were using GANs than that they were using PyTorch, because at that time I was a researcher focusing on GANs. But the diversity is probably the most interesting: how many different things it is being used in. That's the most interesting to me from the applications perspective. From the models perspective, I've seen a lot of them. The really interesting ones to me are where we're starting to combine search and symbolic stuff with differentiable models. The whole AlphaGo-style family of models is one example. And I think we're attempting to do it for LLMs as well, with various reward models and search. I mean, I don't think PyTorch is being used in this, but the whole AlphaGeometry thing was interesting because, again, it's an example of combining symbolic models with gradient-based ones.
But there is stuff like AlphaGeometry that PyTorch is used in, especially when you intersect biology and chemistry with ML. In those areas, you want stronger guarantees on the output. So yeah, from the ML side, those things are very interesting to me right now.

Swyx [00:28:03]: Yeah. People are very excited about the AlphaGeometry thing. And for me it's theoretical. It's great, you can solve some Olympiad questions. I'm not sure how to make that bridge over into the real-world applications, but I'm sure people smarter than me will figure it out.

Synthetic Data vs Symbolic Models

Soumith [00:28:18]: Let me give you an example of it. You know how there's this whole thing about synthetic data being the next rage in LLMs?

Swyx [00:28:27]: Already is a rage.

Soumith [00:28:28]: Which I think is fairly misplaced in how people perceive it. People think synthetic data is some kind of magic wand that you wave and it's going to be amazing. Synthetic data is useful in neural networks right now because we as humans have figured out a bunch of symbolic models of the world, or made up certain symbolic models because of human innate biases. So we've figured out how to ground particle physics in a 30-parameter model. It's just very hard to compute, as in it takes a lot of flops to compute, but it only has 30 parameters or so. I'm not a physics expert, but it's a very low-rank model. We built mathematics as a field that basically is very low-rank. Language, a deep understanding of language, like the whole syntactic parse trees, just understanding how language can be broken down into a formal symbolism, is something that we figured out. So we basically as humans have accumulated all this knowledge on these subjects. Either we created those subjects synthetically in our heads, or we grounded some real-world phenomenon into a set of symbols. But we haven't figured out how to teach neural networks symbolic world models directly.
The only way we have to teach them is by generating a bunch of inputs and outputs and gradient descending over them. So in areas where we have the symbolic models and we need to teach all the knowledge we have that is better encoded in the symbolic models, what we're doing is generating a bunch of synthetic data, a bunch of input-output pairs, and then giving that to the neural network and asking it to learn, via gradient descent in a much more over-parameterized way, the same thing that we already have a better low-rank model of. Outside of this, where we don't have good symbolic models, synthetic data obviously doesn't make any sense. So synthetic data is not a magic wand that will work in every case. It works just where we as humans already have good symbolic models. We need to impart that knowledge to neural networks, and we figured out that synthetic data is a vehicle to impart this knowledge. But maybe because people don't know enough about synthetic data as a notion, when they hear the next wave of the data revolution is synthetic data, they think it's some kind of magic where we just create a bunch of random data somehow. They don't think about how, and they think that's just a revolution. And I think that's maybe a gap in understanding most people have in this hype cycle.

Swyx [00:31:23]: Yeah, well, it's a relatively new concept, so. Oh, there's two more that I'll put in front of you and then you can see what you respond to. One is, I have this joke that it's only synthetic data if it's from the Mistral region of France, otherwise it's just sparkling distillation, which is what Nous Research is doing. They're distilling GPT-4 by creating synthetic data from GPT-4, creating mock textbooks inspired by Phi-2, and then fine-tuning open source models like Llama. And so I don't know, should we call that synthetic data? Should we call it something else?
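To make the argument above concrete, here is a toy sketch of generating synthetic input-output pairs from a symbolic model. The "physics" used, projectile range as a function of speed and launch angle, and all names are illustrative choices, not from the episode; the point is only that the symbolic model does the labeling:

```python
import math
import random

# A low-parameter symbolic model: projectile range on flat ground,
# R = v^2 * sin(2*theta) / g. Cheap to state, and it can label
# unlimited training examples.
def symbolic_range(v, theta_rad, g=9.81):
    return v * v * math.sin(2 * theta_rad) / g

def make_synthetic_dataset(n, seed=0):
    # Sample random inputs, run the symbolic model, and emit
    # (input, label) pairs a neural network could be trained on.
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        v = rng.uniform(1.0, 50.0)
        theta = rng.uniform(0.0, math.pi / 2)
        data.append(((v, theta), symbolic_range(v, theta)))
    return data
```

Where no such symbolic model exists, there is nothing to label with, which is the "synthetic data doesn't make sense there" half of the argument.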
I don't know.

Soumith [00:31:57]: Yeah, I mean, the outputs of LLMs, are they synthetic data? They probably are, but it depends on the goal you have. If your goal is creating synthetic data with the aim of distilling GPT-4's superiority into another model, I guess you can call it synthetic data, but it also feels disingenuous, because your goal is, I need to copy the behavior of GPT-4 and...

Swyx [00:32:25]: It's also not just behavior, but the dataset. So I've often thought of this as dataset washing. You need one model at the top of the chain, you know, an unnamed French company that makes a model that has all the data in it that we don't know where it's from, but it's open source, hey, and then we distill from that and it's great. To be fair, they also use larger models as judges for preference ranking, right? So that is, I think, a very accepted use of synthetic data.

Soumith [00:32:53]: Correct. I think it's a very interesting time where we don't really have good social models of what is acceptable depending on how many bits of information you use from someone else, right? It's like, okay, you use one bit. Is that okay? Yeah, let's accept it to be okay. Okay, what about if you use 20 bits? Is that okay? I don't know. What if you use 200 bits? I don't think we as a society have ever been in this conundrum where we have to decide where the boundary of copyright is, or where the boundary of the socially accepted understanding of copying someone else is. We haven't been tested on this mathematically before, in my opinion.

Swyx [00:33:38]: Whether it's transformative use. Yes. So yeah, I think this New York Times versus OpenAI case is going to go to the Supreme Court, and we'll have to decide it because I think we never had to deal with it before.
And then finally, for synthetic data, the thing that I'm personally exploring is bridging this stark paradigm difference between RAG and fine-tuning, where you can create synthetic data off of your retrieved documents and then fine-tune on that. That's kind of synthetic. All you need is variation or diversity of samples for you to fine-tune on, and then you can fine-tune new knowledge into your model. I don't know if you've seen that as a direction for synthetic data.

Soumith [00:34:13]: I think what you're doing is saying, well, language, I know how to parametrize language to an extent, and I need to teach my model variations of this input data so that it's resilient or invariant to the language uses of that data.

Swyx [00:34:32]: Yeah, so it doesn't overfit on the wrong source documents.

Soumith [00:34:33]: So I think that's 100% synthetic. The key is you create variations of your documents, and you know how to do that because you have a symbolic model, or some implicit symbolic model, of language.

Swyx [00:34:48]: Okay.

Alessio [00:34:49]: Do you think the issue with symbolic models is just the architecture of the language models that we're building? I think maybe the thing that people grasp at is the inability of transformers to deal with numbers because of the tokenizer. Is there a fundamental issue there too? And do you see alternative architectures that will be better at symbolic understanding?

Soumith [00:35:09]: I am not sure if it's a fundamental issue or not. I think we just don't understand transformers enough. I don't even mean transformers as an architecture. I mean the use of transformers today, combining the tokenizer and transformers and the dynamics of training, when you show math-heavy questions versus not. I don't have a good calibration of whether I know the answer or not. There are common criticisms that transformers will just fail at X.
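As an aside on the create-variations-of-your-documents idea discussed above, here is a deliberately tiny sketch. A real pipeline would use an LLM to paraphrase retrieved documents; the templates and names below are hypothetical stand-ins for that step:

```python
# Toy sketch: generate surface variations of one retrieved fact, so a
# model fine-tuned on them learns the fact rather than one phrasing.
# Hypothetical templates stand in for LLM-generated paraphrases.
TEMPLATES = [
    "{subject} was released in {year}.",
    "In {year}, {subject} came out.",
    "The release year of {subject} is {year}.",
]

def variations(subject, year):
    # Same underlying fact, several phrasings: diversity of samples
    # for fine-tuning, as described above.
    return [t.format(subject=subject, year=year) for t in TEMPLATES]
```

The implicit "symbolic model of language" here is just the template set; richer paraphrasing gives richer invariance.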
But then when you scale them up to sufficient scale, they actually don't fail at that X. There's this entire subfield trying to figure out these answers, called the science of deep learning or something. So we'll get to know more. I don't know the answer.

Meta AI and Llama 2/3

Swyx [00:35:57]: Got it. Let's touch a little bit on Meta AI and, you know, the stuff that's going on there. I don't know how deeply you're personally involved in it, but you're our first guest from Meta AI, which is really fantastic. And Llama 1, you are such a believer in open source, and Llama 1 was more or less the real breakthrough in open source AI. The most interesting thing for us to cover on this podcast was the death of Chinchilla, as people say. Any interesting insights there around the scaling decisions for open source models or smaller models, or whatever that design decision was when you guys were doing it?

Soumith [00:36:31]: So Llama 1 was Guillaume Lample and team. There was OPT before, which I think I'm also very proud of, because we bridged the gap in the world's understanding of how complex it is to train these models. Until then, no one had really published in gory detail.

Swyx [00:36:50]: The logs.

Soumith [00:36:51]: Yeah. Like, why is it complex? Everyone says, oh, it's complex, but no one really talked about why it's complex. I think OPT was cool.

Swyx [00:37:02]: I met Susan and she's very, very outspoken. Yeah.

Soumith [00:37:05]: We probably, I think, didn't train it for long enough, right? That's kind of obvious in retrospect.

Swyx [00:37:12]: For a 175B model. Yeah. Did you train it according to Chinchilla at the time, or?

Soumith [00:37:17]: I can't remember the details, but I think it's a commonly held belief at this point that if we had trained OPT longer, it would actually end up being better. Llama 1, I think, was Guillaume Lample and team. Guillaume is fantastic and went on to build Mistral.
I wasn't too involved in that side of things, so I don't know the answer to what you're asking, which is how they thought about scaling laws and all of that. Llama 2, I was more closely involved in. I helped them a reasonable amount with their infrastructure needs and stuff. And Llama 2, I think, was more like, let's get to the evolution. At that point, we kind of understood what we were missing from the industry's understanding of LLMs. We needed more data and we needed to train the models for longer. We made, I think, a few tweaks to the architecture and we scaled up more. And that was Llama 2. You can think of it as, after Guillaume left, the team kind of rebuilt their muscle around Llama 2. And Hugo, who's the first author, is fantastic. He played a reasonably big role in Llama 1 as well.

Soumith [00:38:35]: And he overlaps between Llama 1 and 2. So Llama 3, obviously, hopefully, will be awesome.

Alessio [00:38:42]: Just one question on Llama 2, and then we'll try and fish Llama 3 spoilers out of you. In the Llama 2 paper, the loss curves of the 34B and 70B parameter models still seem kind of steep, like they could go lower. From an infrastructure level, how do you allocate resources? Could they have just gone longer, or were you just, hey, this is all the GPUs that we can burn, let's just move on to Llama 3 and make that one better?

Soumith [00:39:07]: Instead of answering specifically about that Llama 2 situation or whatever, I'll tell you how we think about things. Generally, I mean, Mark really did say some numbers, right?

Swyx [00:39:20]: So let's cite those things again. All I remember is like 600K GPUs.

Soumith [00:39:24]: That is by the end of this year: 600K H100 equivalents. With 350K H100s, including all of our other GPU or accelerator stuff, it would be 600-and-something-K aggregate capacity.

Swyx [00:39:38]: That's a lot of GPUs.

Soumith [00:39:39]: We'll talk about that separately.
But the way we think about it is, we have a train of models, right? Llama 1, 2, 3, 4. And we have a bunch of GPUs. I don't think we're short of GPUs.

Swyx [00:39:54]: Yeah, no, I wouldn't say so. Yeah, so it's all a matter of time.

Soumith [00:39:56]: I think time is the biggest bottleneck. It's like, when do you stop training the previous one and when do you start training the next one? And how do you make those decisions? The data: do you have net new data, better clean data, for the next one, in a way that it's not worth really focusing on the previous one? It's just a standard iterative product. You're like, when is iPhone 1 done? When do you start working on iPhone 2? And so on, right? So mostly the considerations are time and generation, rather than GPUs, in my opinion.

Alessio [00:40:31]: So one of the things with the scaling laws is that Chinchilla is optimal to balance training and inference costs. I think at Meta's scale, you would rather pay a lot more at training and then save on inference. How do you think about that from an infrastructure perspective? I think in your tweet, you said people can try and guess how you're using these GPUs. Can you just give people a bit of understanding? Because I've already seen a lot of VCs say, Llama 3 has been trained on 600,000 GPUs, and that's obviously not true, I'm sure. How do you allocate between the research, FAIR, the Llama training, the inference on Instagram suggestions that get me to scroll, the AI-generated stickers on WhatsApp, and all of that?

Soumith [00:41:11]: Yeah, we haven't talked about any of this publicly, but as a broad stroke, it's like how we would allocate resources of any other kind at any company. You run a VC portfolio; how do you allocate your investments between different companies or whatever? You make various trade-offs and you decide, should I invest in this project or that other project, or how much should I invest in this project?
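For reference, the Chinchilla trade-off Alessio raises can be written down as back-of-envelope math. The C ≈ 6·N·D FLOPs approximation and the roughly 20-tokens-per-parameter compute-optimal ratio come from the Chinchilla line of work; this sketch (names mine) only illustrates the arithmetic, not anyone's actual allocation:

```python
# Back-of-envelope Chinchilla arithmetic: training compute is roughly
# C ~= 6 * N * D FLOPs for N parameters and D training tokens, and the
# compute-optimal data size is roughly D ~= 20 tokens per parameter.
def chinchilla_optimal_tokens(params):
    return 20 * params

def training_flops(params, tokens):
    return 6 * params * tokens
```

Training a smaller model far past its Chinchilla-optimal token count costs extra training FLOPs, but every later inference call runs on fewer parameters, which is the "overpay at training, save at inference" intuition.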
It's very much a zero-sum set of trade-offs. And it also comes into play, how are your clusters configured, like overall, what you can fit of what size in what cluster and so on. So broadly, there's no magic sauce here. I mean, I think the details would add more spice, but also wouldn't add more understanding. It's just gonna be like, oh, okay, I mean, this looks like they just think about this as I would normally do.Alessio [00:42:05]: So even the GPU rich run through the same struggles of having to decide where to allocate things.Soumith [00:42:11]: Yeah, I mean, at some point I forgot who said it, but you kind of fit your models to the amount of compute you have. If you don't have enough compute, you figure out how to make do with smaller models. But no one as of today, I think, would feel like they have enough compute. I don't think I've heard any company within the AI space be like, oh yeah, we feel like we have sufficient compute and we couldn't have done better. So that conversation, I don't think I've heard from any of my friends at other companies.
Eleuther
Swyx [00:42:47]: Stella from Eleuther sometimes says that because she has a lot of donated compute. She's trying to put it to interesting uses, but for some reason she's decided to stop making large models.Soumith [00:42:57]: I mean, that's a cool, high-conviction opinion that might pay off.Swyx [00:43:01]: Why?Soumith [00:43:02]: I mean, she's taking a path that most people don't care to take in this climate and she probably will have very differentiated ideas. I mean, think about the correlation of ideas in AI right now. It's so bad, right? So everyone's fighting for the same pie. In some weird sense, that's partly why I don't really directly work on LLMs. I used to do image models and stuff and I actually stopped doing GANs because GANs were getting so hot that I didn't have any calibration of whether my work would be useful or not because, oh yeah, someone else did the same thing you did.
It's like, there's so much to do, I don't understand why I need to fight for the same pie. So I think Stella's decision is very smart.
Making Bets
Alessio [00:43:53]: And how do you reconcile that with how we started the discussion, about intrinsic versus extrinsic accomplishment or success? How should people think about that, especially when they're doing a PhD or early in their career? I think at NeurIPS, I walked through a lot of the posters and whatnot, and there seems to be mode collapse, in a way, in the research, with a lot of people working on the same things. Is it worth it for a PhD student to not take a bet on something that is maybe not as interesting, just because of funding and visibility and whatnot? Or yeah, what suggestions would you give?Soumith [00:44:28]: I think there's a baseline level of compatibility you need to have with the field. Basically, you need to figure out if you will get paid enough to eat, right? Like whatever reasonable normal lifestyle you want to have as a baseline. So you at least have to pick a problem within the neighborhood of fundable. Like you wouldn't wanna be doing something so obscure that people are like, I don't know, you can work on it.Swyx [00:44:59]: Would the limit on fundability be, I'm just observing, something like three months of compute, right? That's the top line, that's like the max that you can spend on any one project.Soumith [00:45:09]: But I think that's very ill-specified, like how much compute, right? I think the notion of fundability is broader. It's more like, hey, are these families of models within the acceptable set of, you're not crazy or something, right? Even something like neural ODEs, which is a very boundary-pushing thing, or state-space models or whatever. All of these things I think are still in fundable territory. When you're talking about, I'm gonna do one of the neuromorphic models and then apply image classification to them or something, then it becomes a bit questionable.
Again, it depends on your motivation. Maybe if you're a neuroscientist, it actually is feasible. But if you're an AI engineer, like the audience of this podcast, then it's more questionable. The way I think about it is, you need to figure out how you can be at the baseline level of fundability just so that you can live. And then after that, really focus on intrinsic motivation and, depending on your strengths, how you can play to your strengths and your interests at the same time. Like I try to look at a bunch of ideas that are interesting to me, but also try to play to my strengths. I'm not gonna go work on theoretical ML. I'm interested in it, but when I want to work on something like that, I try to partner with someone who is actually a good theoretical ML person and see if I actually have any value to provide. And if they think I do, then I come in. So I think you'd want to find that intersection of ideas you like that also play to your strengths. And I'd go from there. Everything else, like actually finding extrinsic success and all of that, the way I think about it, is somewhat immaterial. When you're talking about building ecosystems and stuff, slightly different considerations come into play, but that's a different conversation.Swyx [00:47:06]: We're gonna pivot a little bit to just talking about open source AI. But one more thing I wanted to establish for Meta is this 600K number, just kind of rounding out the discussion, that's for all of Meta. So including your own inference needs, right? It's not just about training.Soumith [00:47:19]: It's gonna be the number in our data centers for all of Meta, yeah.Swyx [00:47:23]: Yeah, so there's a decent amount of workload serving Facebook and Instagram and whatever. And then is there interest in like your own hardware?
MTIA
Soumith [00:47:31]: We already talked about our own hardware. It's called MTIA.
Our own silicon, I think we've even shown the standard photograph of you holding the chip that doesn't work. Like as in the chip that you basically just get like-Swyx [00:47:51]: As a test, right?Soumith [00:47:52]: Yeah, a test chip or whatever. So we are working on our silicon and we'll probably talk more about it when the time is right, but-Swyx [00:48:00]: Like what gaps do you have that the market doesn't offer?Soumith [00:48:04]: Okay, I mean, this is easy to answer. So basically, remember how I told you about there's this memory hierarchy and sweet spots and all of that? Fundamentally, when you build hardware, you make it general enough that a wide set of customers and a wide set of workloads can use it effectively while trying to get the maximum level of performance they can. The more specialized you make the chip, the more hardware-efficient it's going to be, the more power-efficient it's gonna be, and the easier it's going to be to write the software, like the kernels, to map that one or two workloads to that hardware and so on. So it's pretty well understood across the industry that if you have a sufficiently large volume, enough workload, you can specialize it and get some efficiency gains, like power gains and so on. So the way you can think about every large company building silicon, and I think a bunch of the other large companies are building their own silicon as well, is that each large company has a sufficiently large set of verticalized workloads that can be specialized, that have a pattern to them that, say, a more generic accelerator like an NVIDIA or an AMD GPU does not exploit. So there is some level of power efficiency that you're leaving on the table by not exploiting that. And you have sufficient scale and you have sufficient forecasted stability that those workloads will exist in the same form, that it's worth spending the time to build out a chip to exploit that sweet spot.
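The specialization argument is often summarized with the roofline model: a workload's achievable throughput is capped either by the chip's compute roof or by memory bandwidth times the workload's arithmetic intensity. A minimal sketch with illustrative numbers (not Meta-specific, not a real accelerator spec):

```python
def attainable_flops(peak_flops, mem_bw_bytes_per_s, flops_per_byte):
    """Roofline model: achievable throughput is the lesser of the
    compute roof and the memory roof (bandwidth * arithmetic intensity)."""
    return min(peak_flops, mem_bw_bytes_per_s * flops_per_byte)

# Hypothetical accelerator: 300 TFLOP/s peak, 2 TB/s memory bandwidth.
# Below ~150 FLOPs/byte the workload is bandwidth-bound; above it,
# compute-bound. A chip specialized to one workload's intensity can
# drop the unused compute or bandwidth (and the power it costs) --
# the efficiency gain Soumith describes.
low = attainable_flops(300e12, 2e12, 10)    # bandwidth-bound: 2e13 FLOP/s
high = attainable_flops(300e12, 2e12, 500)  # compute-bound: 3e14 FLOP/s
```

A general-purpose GPU must provision for both regimes; a verticalized workload with a known, stable intensity only needs one.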
Obviously, something like this is only useful if you hit a certain scale and your forecast that those kinds of workloads will stay specializable and exploitable in the same way holds true. So yeah, that's why we're building our own chips.Swyx [00:50:08]: Awesome.
Open Source AI
Alessio [00:50:09]: Yeah, I know we've been talking a lot on a lot of different topics, and going back to open source, you had a very good tweet. You said that a single company's closed source effort rate limits against people's imaginations and needs. How do you think about all the impact that some of the Meta AI work in open source has been having, and maybe directions of the whole open source AI space?Soumith [00:50:32]: Yeah, in general, I think first it's worth talking about this in terms of open and not just open source, because with the whole notion of model weights, no one even knows what source means for these things. But just for the discussion, when I say open source, you can assume I'm just talking about open. And then there's the whole notion of licensing and all that, commercial, non-commercial, commercial with clauses and all that. I think at a fundamental level, the biggest value of open source is that you make the distribution very wide. It's just available with no friction and people can do transformative things in a way that's very accessible. Maybe it's open source, but it has a commercial license and I'm a student in India. I don't care about the license. I just don't even understand the license. But the fact that I can use it and do something with it is very transformative to me. Like I got this thing in a very accessible way. And then it's various degrees, right? And then if it's open source, but it's actually a commercial license, then a lot of companies are gonna benefit from gaining value that they didn't previously have, that they maybe had to pay a closed source company for.
So open source is just a very interesting tool that you can use in various ways. So there's, again, two kinds of open source. One is some large company doing a lot of work and then open sourcing it. And that kind of effort is not really feasible for, say, a band of volunteers doing it the same way. So there's both a capital and operational expenditure that the large company just decided to ignore and give it away to the world for some benefits of some kind. They're not as tangible as direct revenue. So in that part, Meta has been doing incredibly good things. They fund a huge amount of the PyTorch development. They've open sourced Llama and that family of models and several other fairly transformative projects. FAISS is one, Segment Anything, Detectron, Detectron 2, DensePose. I mean, it's-Swyx [00:52:52]: Seamless. Yeah, seamless.Soumith [00:52:53]: Like the list is just so long that we're not gonna cover it all. So I think Meta comes into that category where we spend a lot of CapEx and OpEx and we have a high talent density of great AI people and we open source our stuff. And the thesis for that, I remember when FAIR was started, the common thing was like, wait, why would Meta wanna start an open AI lab? What exactly is the benefit from a commercial perspective? And the thesis then was very simple. It was: AI is currently rate-limiting Meta's ability to do things. Our ability to build various product integrations, moderation, various other factors. AI was the limiting factor and we just wanted AI to advance more, and we didn't care if the IP of the AI was uniquely in our possession or not. However the field advances, that accelerates Meta's ability to build a better product. So we just built an open AI lab and we said, if this helps accelerate the progress of AI, that's strictly great for us. Very easy, rational, right? It's still the same to a large extent with the Llama stuff. And it's the same values, but the argument is a bit more nuanced.
And then there's a second kind of open source, which is, oh, we built this project nights and weekends, and we're very smart people, and we open sourced it and then we built a community around it. This is the Linux kernel and various software projects like that. So I think about both of these kinds of open source as beneficial, and as different. They're different and beneficial in their own ways. The second one is really useful when there's an active arbitrage to be done. If someone's not really looking at a particular space because it's not commercially viable or whatever, a band of volunteers can just coordinate online and do something and then make that happen. And that's great.
Open Source LLMs
I wanna cover a little bit about open source LLMs maybe. So open source LLMs have been very interesting because I think we were trending towards an increase in open source in AI from 2010 all the way to 2017 or something, where more and more pressure within the community was to open source their stuff so that their methods and stuff get adopted. And then the LLM revolution kind of took the opposite turn. OpenAI stopped open sourcing their stuff, and DeepMind kind of didn't either; all the cloud and all these other providers, they didn't open source their stuff. And it was not good, in the sense that, first, science done in isolation probably will just form its own bubble where people believe their own b******t or whatever. So there's that problem. And then there was the other problem, which was the accessibility part. Like, okay, I again always go back to: I'm a student in India with no money. What is my accessibility to any of these closed models? At some scale I have to pay money. That makes it a non-starter and stuff. And there's also the control thing. I strongly believe if you want human-aligned stuff, you want all humans to give feedback. And you want all humans to have access to that technology in the first place.
And I actually have seen, living in New York, whenever I come to Silicon Valley, I see a different cultural bubble. Like all the friends I hang out with talk about some random thing like Dyson Spheres or whatever, that's a thing. And most of the world doesn't know or care about any of this stuff. It's definitely a bubble and bubbles can form very easily. And when you make a lot of decisions because you're in a bubble, they're probably not globally optimal decisions. So I think the distribution of open source powers a certain kind of non-falsifiability that I think is very important. I think on the open source models, it's going great in the sense that LoRA, I think, came out of the necessity of open source models needing to be fine-tunable in some way. Yeah, and I think DPO also came out of the academic, open source side of things. So do any of the closed source labs, did any of them already have LoRA or DPO internally? Maybe, but that does not advance humanity in any way. It advances some company's probability of doing the winner-takes-all that I talked about earlier in the podcast.
Open Source and Trust
I don't know, it just feels fundamentally good. When people try to, you know, people are like, well, what are the ways in which it is not okay? I find most of these arguments, and this might be a little controversial, but I find a lot of arguments based on whether closed source models are safer or open source models are safer very much related to what kind of culture they grew up in, what kind of society they grew up in. If they grew up in a society that they trusted, then I think they take the closed source argument. And if they grew up in a society that they couldn't trust, where the norm was that you didn't trust your government, because obviously it's corrupt or whatever, then I think the open source argument is what they take.
I think there's a deep connection to people's innate biases from their childhood, and their trust in society and governmental aspects, that pushes them towards one opinion or the other. And I'm definitely in the camp that open source is going to have better outcomes for society. Closed source to me just means the centralization of power, which, you know, is really hard to trust. So I think it's going well.
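The LoRA technique Soumith credits to the open-source ecosystem can be sketched in a few lines: instead of updating a frozen d×k weight matrix W, you learn a low-rank update B·A and add it back with a scale. A toy pure-Python illustration of the math (real implementations operate on framework tensors; the function names here are ours, not from any library):

```python
def matmul(X, Y):
    # naive matrix multiply, enough for a toy example
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_effective_weight(W, A, B, alpha):
    """LoRA: keep W (d x k) frozen, learn B (d x r) and A (r x k) with
    r << min(d, k), and serve W_eff = W + (alpha / r) * B @ A."""
    r = len(A)                      # rank = number of rows of A
    scale = alpha / r
    delta = matmul(B, A)            # d x k low-rank update
    return [[w + scale * dv for w, dv in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Toy example: d = k = 2, rank r = 1. For d = k = 4096 and r = 8,
# LoRA trains 2 * 8 * 4096 numbers instead of 4096 * 4096.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [0.0]]                  # d x r
A = [[0.0, 1.0]]                    # r x k
print(lora_effective_weight(W, A, B, alpha=1.0))  # [[1.0, 1.0], [0.0, 1.0]]
```

The small number of trainable parameters is exactly what made fine-tuning feasible on the consumer hardware the open-source community actually has.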