Podcasts about Pareto

  • 1,406 podcasts
  • 2,026 episodes
  • 29m average duration
  • 5 weekly new episodes
  • Apr 18, 2025 latest episode

Popularity trend (2017-2024)

Latest podcast episodes about Pareto

Everyone Gets a Trophy
Pareto's Portal

Apr 18, 2025 · 67:36


Where else can you get a discussion about the football spring portal, animal fights, college baseball, Pareto's principle, how the cost of customer acquisition relates to NIL, and a ranking of the best college football programs in Texas? The time is now for your new mortgage or refi with Gabe Winslow at 832-557-1095 or MortgagesbyGabe. Then get your financial life in order with advisor David McClellan 312-933-8823 with a free consult: dmcclellan@forumfinancial.com. Read his retirement tax bomb series at Kiplinger! https://www.kiplinger.com/retirement/retirement-planning/605109/is-your-retirement-portfolio-a-tax-bomb Need a great CenTex realtor? Contact Laura Baker at 512-784-0505 or laura@andyallenteam.com.

Albuquerque Business Podcast
Unlock Juran's 1951 Secret: 10X Your Leadership with Timeless Quality Lessons for 2025 Excellence

Apr 9, 2025 · 17:58


In 1951, Joseph Moses Juran published the Quality Control Handbook, a groundbreaking work that redefined how organizations approach quality. As a Romanian-American engineer and management consultant, Juran brought a fresh perspective to a world recovering from war and industrial upheaval. His handbook wasn't just a technical manual—it was a call to action for leaders to prioritize quality as a strategic cornerstone. Over 70 years later, its principles remain a goldmine for today's leaders striving for operational excellence, customer loyalty, and sustainable growth. This blog dives into the key takeaways from Juran's 1951 masterpiece, offering actionable lessons for modern leadership, with a nod to its historical impact.

The Heart of Juran's Vision: Quality as a Leadership Priority

At its core, the Quality Control Handbook challenged the notion that quality was solely the domain of inspectors or technicians. Juran argued that quality starts at the top—with leaders who set the tone, define the vision, and rally their teams around it. In 1951, this was a radical shift. Industries were focused on mass production, often at the expense of consistency or customer satisfaction. Juran flipped the script, insisting that quality isn't just about catching defects—it's about designing systems that prevent them.

For today's leaders, this is a wake-up call. Whether you're running a tech startup, a manufacturing plant, or a service-based business, quality can't be an afterthought. It's a competitive edge. Juran's handbook teaches us that leadership isn't just about charisma or strategy—it's about embedding a quality mindset into every layer of your organization. Let's unpack the key principles and how they apply to you.

Key Principles from the Quality Control Handbook

While the original 1951 text isn't widely available online, its foundational ideas have been well-documented through Juran's legacy and subsequent editions. Here's what leaders need to know:

Quality Means Fitness for Use
Juran defined quality as "fitness for use"—a product or service that meets customer needs and performs as expected. This wasn't about perfection for its own sake; it was about delivering value to the end user. In 1951, this customer-centric focus was ahead of its time, pushing leaders to look beyond factory floors and into the lives of their customers.
Leadership Lesson: Put your customers first. Ask: Does this solve their problem? Does it delight them? Whether it's a software update or a new product line, align your definition of quality with what your audience values most.

The Pareto Principle: Focus on the Vital Few
Juran popularized the 80/20 rule, suggesting that 80% of your quality issues come from just 20% of causes. He called these the "vital few" versus the "trivial many." This principle gave leaders a practical tool to zero in on what matters most, cutting through the noise of endless problem-solving.
Leadership Lesson: Don't spread yourself thin. Use data to pinpoint the handful of issues—like bottlenecks or customer complaints—that drive the biggest headaches. Fixing these delivers outsized results, freeing you to innovate elsewhere.

Top Management Must Lead the Charge
Juran was crystal clear: quality isn't a middle-management task—it's a leadership imperative. He urged executives to own the quality agenda, setting goals, allocating resources, and holding teams accountable. Without this commitment, quality efforts fizzle out.
Leadership Lesson: Step up. Make quality a personal mission. Show your team it's a priority by investing time and budget in it—whether that's training, new tools, or process redesigns. Your involvement signals that quality isn't optional.

Train Everyone, Everywhere
The handbook pushed for widespread quality training, not just for specialists but for every employee. Juran believed that a shared understanding of quality principles builds a cohesive, capable workforce. This was a bold stance in an era when training was often siloed.
Leadership Lesson: Empower your people. Equip them with the skills to spot and solve quality issues. A frontline worker who understands the "why" behind their role is your secret weapon for consistency and innovation.

Improve Quality Project by Project
Juran advocated a structured, project-based approach to quality improvement. Rather than vague goals, he recommended specific initiatives with clear objectives, timelines, and metrics. This methodical mindset turned quality into a tangible, achievable outcome.
Leadership Lesson: Break it down. Tackle quality challenges one project at a time—say, reducing delivery delays or streamlining onboarding. Small wins build momentum and prove the value of your efforts.

The Quality Trilogy: Plan, Control, Improve
Juran's Quality Trilogy is a three-step framework that's pure gold for leaders:
  • Quality Planning: Identify customers, understand their needs, and design processes to meet them.
  • Quality Control: Monitor performance and catch deviations early.
  • Quality Improvement: Continuously raise the bar by addressing weaknesses.
This holistic approach ties quality to every stage of your operation.
Leadership Lesson: Think systematically. Map out how you'll plan for quality, maintain it, and push it further. It's a cycle that keeps your organization sharp and adaptable.

Mind the Cost of Quality
Juran introduced the idea that quality has a price tag—costs of prevention (training, design), appraisal (testing, audits), and poor quality (returns, lost trust). He showed that investing upfront saves money down the line, a lesson rooted in economic pragmatism.
Leadership Lesson: Play the long game. Don't skimp on quality to cut corners—it'll cost you more in rework or reputation damage. Budget for prevention and watch your bottom line improve.

Why These Principles Matter Today

In 2025, the stakes for quality are higher than ever. Customers have endless options and zero patience for mediocrity. A single glitch—a buggy app, a late shipment, a rude interaction—can tank your brand. Juran's handbook, though written in a different era, feels tailor-made for this reality. His focus on customers, data, and leadership aligns perfectly with modern demands like agile workflows, user experience (UX), and data analytics.

Take the tech world: a SaaS company lives or dies by its uptime and user satisfaction—Juran's "fitness for use" in action. Or consider manufacturing: lean principles owe a debt to his project-by-project improvements. Even in service industries, training staff to deliver consistent excellence echoes Juran's vision. His ideas aren't relics; they're blueprints for staying relevant.

Practical Steps for Leaders

Ready to channel Juran in your leadership? Here's how to start:
  • Set a Quality Vision: Define what "fitness for use" means for your customers and rally your team around it. Make it specific—e.g., "Zero defects in our next release" or "95% on-time delivery."
  • Dig into Data: Use tools like surveys, analytics, or Pareto charts to find your "vital few" problems. Focus your energy there.
  • Lead by Example: Get hands-on with a quality project. If you're in the trenches, your team will follow.
  • Train Relentlessly: Host workshops or bring in experts to upskill your staff. Make quality everyone's job.
  • Launch a Pilot: Pick one process—say, customer support response times—and improve it step by step. Measure the impact and scale it up.
  • Track Costs: Calculate what poor quality costs you (e.g., refunds, churn) versus prevention (e.g., better onboarding). Use the numbers to justify your investments.

Historical Impact: A Legacy That Shaped the World

Juran's handbook didn't just influence theory—it changed history. In 1954, the Japanese Union of Scientists and Engineers (JUSE) invited him to Japan, where his ideas fueled the country's post-war quality revolution. Companies like Toyota and Sony embraced his teachings, blending them with local practices to create Total Quality Control (TQC). By the 1970s, Japan's reputation for precision and reliability had flipped global markets on their head, a feat Juran called "the greatest quality achievement in the history of mankind." The 1951 handbook laid the groundwork for this transformation, proving that quality isn't just a tactic—it's a game-changer.

Bringing Juran into 2025

As a leader, you're not just managing a team—you're shaping an organization's future. Juran's Quality Control Handbook offers a roadmap to do it right. It's about more than avoiding mistakes; it's about building something exceptional. Imagine your business humming with efficiency, delighting customers, and outpacing competitors—all because you took quality seriously. That's Juran's promise, and it's yours to claim.

So, pick one principle—say, the Quality Trilogy—and test it this quarter. Plan for your customers, control your processes, and improve relentlessly. You'll see why Juran's work has endured for over seven decades. Quality isn't a buzzword; it's leadership in action. Let's make it happen.
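The "vital few" analysis Juran popularized can be sketched in a few lines of Python. The cause names and defect counts below are invented purely for illustration; the logic is the cumulative ranking that a Pareto chart visualizes:

```python
# Hypothetical defect counts by cause -- illustrative numbers only.
defects = {
    "late delivery": 120,
    "wrong item": 45,
    "damaged packaging": 20,
    "billing error": 10,
    "other": 5,
}

def vital_few(counts, threshold=0.8):
    """Return the smallest set of causes that, ranked by frequency,
    covers `threshold` of all defects (Juran's 'vital few')."""
    total = sum(counts.values())
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    selected, running = [], 0
    for cause, n in ranked:
        selected.append(cause)
        running += n
        if running / total >= threshold:
            break
    return selected

print(vital_few(defects))  # -> ['late delivery', 'wrong item']
```

Here two of five causes account for over 80% of defects: fixing just those two is where Juran would have a leader focus first.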

Paretopodden
Crayon & SoftwareOne on the challenges and synergies of the merger

Apr 3, 2025 · 29:34


How do you build a global technology company – together? Late last year it was announced that Norway's Crayon and Switzerland's SoftwareOne intend to merge. In this episode of Paretopodden we are joined by Rune Syversen, chairman of the board of Crayon, and Till Spillmann, board member of SoftwareOne.

The conversation offers a unique look into the process behind the merger – from strategic considerations to cultural differences and expected synergies. What are the biggest challenges in such a process? And why do the two board members believe the companies are a good fit?

Disclaimer: This material has been produced by Pareto Securities for general guidance and information purposes only and shall be seen as marketing material. The information provided should not be considered professional advice and is under no circumstances intended to be used or considered as financial or investment advice, a recommendation or an offer to sell, or a solicitation of any offer to buy any securities or other form of financial asset. The information should not be considered investment research and is consequently not prepared in accordance with the regulation regarding investment analysis. Furthermore, the information is not intended to be regarded as legal, financial, commercial, tax or accounting advice. The information provided in the material is obtained from public sources which Pareto Securities considers reliable. However, the information has not been independently verified, and Pareto makes no guarantee as to its accuracy or completeness. We have taken reasonable care to ensure that, to the best of our knowledge, material information contained herein is in accordance with the facts and contains no omissions likely to affect its understanding. Please note that we make no assurance that the underlying forward-looking statements are free from errors. The material reflects Pareto Securities' assessment at the time of production and may change without further notice. Pareto does not intend, and does not assume any obligation, to update or correct the information included in the material. Pareto Securities AS is subject to supervision by the Financial Supervisory Authority of Norway, and is a member of the Norwegian Securities Dealers Association. Pareto Securities AB is subject to supervision by the Financial Supervisory Authority of Sweden. Please see our website https://paretosec.com/our-firm/compliance/ for more information and the full disclaimer. Hosted on Acast. See acast.com/privacy for more information.

Developer Tea
Meta Models - Logarithmic Returns

Apr 2, 2025 · 12:08


This episode introduces a valuable meta-tool for understanding the generic shapes of models, focusing specifically on the concept of logarithmic relationships and how they manifest as diminishing returns in various aspects of our lives and work. Understanding these patterns can help us make more informed decisions about where to invest our time and resources.

  • Uncover a meta-tool for understanding generic model shapes, specifically focusing on the concept of logarithmic relationships, which operates at a layer above specific mental models.
  • Learn about logarithmic complexity as a concept often encountered in algorithmic analysis and graphing math, characterised by a curve where the slope continuously decreases.
  • Discover how diminishing returns serve as a colloquial way to understand logarithmic relationships, where each unit of input effort yields progressively smaller returns in value or output.
  • Explore examples of where diminishing returns are evident, such as increasing the reliability of a system through quality improvements, estimation efforts, and the value gained from time spent in meetings.
  • Understand how learning processes often follow a logarithmic curve, with rapid initial gains that gradually diminish with experience.
  • Grasp the connection between logarithmic returns and the Pareto principle (80/20 rule), where a small percentage of effort often produces a large percentage of the value.
  • Recognise the importance of identifying the threshold on a logarithmic curve where the returns on further investment become minimal, aiding in more effective resource allocation.
  • Consider how our natural perception might not align with logarithmic realities, potentially leading us to overvalue continued effort beyond the point of significant return.
  • Learn how understanding these fundamental input-output relationships can empower you to make better decisions about where to focus your time, effort, and resources.
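The diminishing-returns shape the episode describes can be made concrete with a tiny sketch. The value curve and its scale factor below are arbitrary, chosen only to show the pattern, not taken from the episode:

```python
import math

def value(effort, scale=10.0):
    # A stylized logarithmic value curve: fast early gains, flattening later.
    return scale * math.log(1 + effort)

# The marginal return of each additional unit of effort keeps shrinking:
gains = [value(n + 1) - value(n) for n in range(5)]
print([round(g, 2) for g in gains])  # -> [6.93, 4.05, 2.88, 2.23, 1.82]
```

Every extra unit of effort buys strictly less than the one before it, which is exactly why the first slice of work captures most of the value (the 80/20 connection) and why there is a threshold past which further investment barely pays.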

BYNN with Christopher Vonheim & William Frantzen
#190 Eirik Haavaldsen (Highlight) - How To Invest In Shipping, And Succeed In Finance?

Apr 2, 2025 · 15:00


Eirik Haavaldsen is Head of Research at Pareto, and one of the leading experts on shipping stocks. In this episode, we analyse all shipping segments for 2025, and learn how Eirik finds value in different types of maritime companies. Let us know what you think of the episode, and please comment and share the interview with friends and network. It helps more than you can imagine!

Christopher Vonheim is a Norwegian host focused on business, ocean industries, investing, and start-ups. I hope you enjoy this tailor-made content, and help us make this channel the best way to consume ideas, models, and stories that can help fuel the next entrepreneurs, leaders and top performers. Hosted on Acast. See acast.com/privacy for more information.

Connected Communication
Mew Mew I'm Feeling Sigma

Apr 1, 2025 · 54:33


A chat about Adolescence, the Manosphere; red, blue, purple and black pills; the 80/20 principle (not Pareto); Alpha, Beta and Sigma males; mewing - and a Hiberno-English story between a young boy and his grandfather.

Support the Podcast | Connect on LinkedIn | Connect on Instagram | ALL IN Magazine

Hosted on Acast. See acast.com/privacy for more information.

Quando Menos é Mais
USE THE 80/20 RULE TO TRANSFORM YOUR LIFE | Ep.0421

Mar 29, 2025 · 11:51


Important links:
✦ Vida Leve Community: https://comunidade.quandomenosemais.com
✦ Courses to pick up a new hobby and earn extra income: https://cursos.kirizawa.com
✦ If you'd like to support our work, consider buying us a coffee: https://kirizawa.com/cafe
✦ "Mentalidade para ser Rico" ebook: https://quandomenosemais.com/ebookctmsr-yt
✦ Book Club: https://quandomenosemais.com.br/clubedolivro

Discover how the 80/20 Rule can transform your life. In this podcast, we explore the Pareto Principle and how it applies to personal development, health, wealth, and relationships. Learn to identify the actions that generate 80% of your results and maximize your potential. Declutter your home, improve your health, and reach your goals with practical strategies. If you want to live with more lightness and meaning, this video is for you! Subscribe and start your transformation journey today!

E-mail: contato@quandomensemais.com
Blog: https://quandomenosemais.com
YouTube: https://youtube.com/c/quandomenosemais
Instagram: https://instagram.com/quandomenosemais
Facebook: https://fb.me/quandomenosemais
Podcast / Spotify: https://open.spotify.com/show/1wpYTs7ZF2JvPhQ8ftonED

Our little shop: https://quandomenosemais.com/loja
Here you'll find many of the products I mention in the videos, such as: home tidying, organization and cleaning; decor; plants; contemporary Feng Shui; books. MINIMALIST NOTE: Remember to buy only what you truly need and will actually use in your life.

#Minimalismo #QuandoMenoseMais #RobertoKirizawa

Six Figure Flower Farming
42: Are you focused on the wrong 80%?

Mar 24, 2025 · 28:31


In this episode, host Jenny Marks explains how the 80/20 rule (Pareto's Law) can help flower farmers streamline their business, increase profits, and reduce burnout. She shares real examples from her own farm, including:

  • How analyzing sales data helped identify the most profitable crops
  • Why cutting underperforming flower varieties improved efficiency and revenue
  • How focusing on key sales outlets maximized profits with less effort
  • Why applying the 80/20 rule to time management allows for smarter work, not harder work

Join Jenny at the Lean Flower Farming Workshop on July 28-29, 2025, in Clifton Springs, NY (with an optional third day focused on high-earning high tunnels). Register now: www.trademarkfarmer.com/lean

Did you enjoy this episode? Please leave a review on Apple or Spotify. Follow Jenny on Instagram: @trademarkfarmer. Find free flower business resources: www.trademarkfarmer.com
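As a rough sketch of the sales-data analysis described here, the 80/20 cut can be computed from per-crop revenue. The crop names and figures below are hypothetical, not actual farm data from the episode:

```python
# Hypothetical revenue by crop -- illustrative numbers only.
revenue_by_crop = {
    "dahlias": 42000, "ranunculus": 18000, "tulips": 9000,
    "zinnias": 4000, "snapdragons": 2500, "cosmos": 1500,
    "sunflowers": 1200, "celosia": 900, "statice": 500, "asters": 400,
}

total = sum(revenue_by_crop.values())
ranked = sorted(revenue_by_crop.items(), key=lambda kv: kv[1], reverse=True)

# Walk down the ranking until the top crops cover 80% of revenue.
running, top = 0, []
for crop, rev in ranked:
    top.append(crop)
    running += rev
    if running >= 0.8 * total:
        break

share = len(top) / len(revenue_by_crop)
print(f"{len(top)} of {len(revenue_by_crop)} crops ({share:.0%}) earn 80% of revenue: {top}")
```

In this made-up dataset, three of ten crops clear the 80% mark, which is the kind of finding that justifies cutting the underperforming varieties.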

Emprende con Propósito
Interview with Paula Pareto

Mar 24, 2025 · 7:47


This episode was recorded during last year's Fundación Inspirar event. We had a very special guest: Olympic champion Paula Pareto. These were the questions I asked her:

00:16 - What qualities would you look for in a high-performance entrepreneur?
01:45 - How did you manage to study and compete in a high-performance sport at the same time?
02:49 - What tools do you need to succeed at whatever you do?
04:01 - What did you learn from your athletic career?
04:56 - How do you handle pressure as an entrepreneur?
06:50 - What advice would you leave for someone just starting out as an entrepreneur?

Embrace a purpose. Challenge the world and inspire others! Remember, if you'd like to send us your questions or suggestions, you can write to podcast@emprendeconproposito.com.ar. You can also follow us on our other channels: Web: emprendeconproposito.com.ar IG: @sebasosaemprende YT: Emprende con propósito

Here's a summary of the podcast in case you want to keep a concept:

What qualities would you look for in a high-performance entrepreneur? Effort and discipline are key to reaching any goal. They are what brought me the most results to get where I am today, and in sport they are the central factor, because physically everyone starts from a different place.

How did you manage to study and compete in a high-performance sport at the same time? Passion is what counts. When I started studying medicine, everyone recommended I study physical education because I was an athlete, but I didn't like it, and I had to listen to my inner voice telling me I loved medicine and that was my path.

What tools do you need to succeed at whatever you do? I think you have to ask whether you do what you do because you want to or because everyone else does it. Whether you have a purpose, a passion, a focus, and a vision.

What did you learn from your athletic career? It left me with many values from sport that can all be applied to life: respect, honor, camaraderie, friendship, the greetings at the start and the end, respect for your rival, and seeing competition as something good that always pushes you to be better. I don't want to be better than someone else; I want to be better than myself in every area of my life.

How do you handle pressure as an entrepreneur? I run a café and a gluten-free meal line with my family, and I can delegate to the people I trust.

What advice would you leave for someone just starting out? If you have a goal, do everything to achieve it, however difficult it seems. I realized the key is the day-to-day: the daily routine, doing things every day to be better than yesterday.

#emprendedoresargentinos #emprender #emprendedores #emprendedoreslatinos #franquicias #negociosconvalores #servicioalcliente #servicio #cliente #pequeparetto #altorendimiento #deporte #objetivosclaros #competencia #meta #millaextra #reclutamiento #proposito #equipo

Stressismus - Stress ist keine Naturgewalt
#103 Planning Systems: The Seven Golden Principles

Mar 24, 2025 · 46:11


Today I'm talking with you about planning systems. That is: how do I build a good planning system, or a task-management system? How do I build a system I can plan with? The idea for this episode came about because, given my job, I naturally hear this question fairly often. And honestly, the first two times my answer was: "I don't know, I just do it." You surely know the feeling: there are things we do that we can't really explain, how we do them or why they somehow work, because so much of what we know about them has slipped into intuition that we can no longer articulate the principles behind our own behavior. But then I thought: those rules I follow, those individual steps I take, do exist. I just have to make them visible. And of course my usual ambition kicked in right away: to hand you a kind of manual you could use to build your own planning system step by step. That's not what it became. Instead, I decided to make two episodes out of it. Today I'll tell you what to keep in mind when you build such a task-management system for yourself, or for your whole team, and what you should generally consider when designing processes. In the next episode, I'll talk about which questions you should ask yourself to sketch out the process, first in your head or with pen and paper.

Episodes I refer to:
Pareto: https://www.teatime.berlin/post/neue-podcastfolge-paretro
Kanban: https://www.teatime.berlin/post/95-kanban
TickTick: https://www.teatime.berlin/post/100-ticktick
Daily-plan template: https://www.teatime.berlin/post/der-fast-perfekte-plan

***********

Become the queen of your time zone! TeaTime.Berlin is a podcast about time and self-management, order and structure, with a pinch of mindfulness, for more time, energy, and self-determination for you! Send me your questions and topic requests via the contact form on my website: https://www.teatime.berlin/kontakt I'm always happy about stars and reviews on iTunes, Spotify, or Panoptikum, comments on my website or YouTube, and of course your many emails.

The Gary Gunn Show Podcast
#413 - The Pareto Effect

Mar 23, 2025 · 25:34


In this episode, we go behind the scenes of my recent transformation. How my outlook on life has changed and what it means for my coaching practice moving forward. Book a consultation call - Founders and CEOs only: https://calendly.com/garygunn/consultation Work on your dating success for only £10.79/$13.50 per month with my Advanced Dating System and 100-Day Dating Challenge: https://social-attraction-courses.teachable.com/p/small-bundle Access all of my digital products for only £24.96/$33.20 per month: https://social-attraction-courses.teachable.com/p/gary-gunn-big-bundle View all of my digital courses here: https://social-attraction-courses.teachable.com/ My TikTok account: https://www.tiktok.com/@garygunnshow Follow me on Instagram: https://www.instagram.com/garygunnshow/ Access my podcast with over 200 episodes: https://soundcloud.com/garygunn Subscribe to my YouTube channel: https://www.youtube.com/c/GaryGunn?sub_confirmation=1 This video is for entertainment purposes only. Any action you take upon the information provided is strictly at your own risk, and we will not be responsible for any losses or damages in connection with the use of this video. The selection of techniques, opinions, programs, products, services, tools, templates, or manuals is not a guarantee of income or success. You are fully responsible for the effort you put in and the results you achieve by following the information in this video. All content, materials, and techniques delivered are proprietary and cannot be used, disclosed, or duplicated without permission. This video is for informational purposes only and does not form a professional relationship. Visit our website if you wish to hire us on a professional basis. #datingadviceformen #datingtipsformen #datingcoachformen

Un libro tira l'altro
The Italian school of economics, between Sraffa and Einaudi

Mar 23, 2025


Italian economic thought has its roots in the Middle Ages. It gains prominence in the Renaissance and the Enlightenment. It comes into its own in the first fifty years of unified Italy with Ferrara, Pantaleoni, Pareto, Barone, De Viti de Marco, Einaudi, and Ricci. Through the two world wars and the fascist interlude, that tradition fades. The dialectical opening ushered in by democratic, republican Italy after 1946, and the re-established cultural ties with other countries, favored a revival of economic studies. We discuss this with Giangiacomo Nardozzi, author with Pierluigi Ciocca of the book Il pensiero economico nell'Italia repubblicana, Treccani.

In the second part, space for politics and music with the following reviews:
- Stefano Passigli, Crisi istituzionale o crisi democratica?, Passigli Editori
- Angelo Panebianco, Principati e repubbliche. Azioni individuali e forme di governo, Il Mulino
- Peter Williams, Le variazioni Goldberg di Johann Sebastian Bach, Astrolabio
- Alberto Bologni, Vivaldi: Le quattro stagioni, Carocci
- Federico Maria Sardelli, Vivaldi secondo Vivaldi. Dentro i suoi manoscritti, Il Saggiatore

For young readers, this week's little treat is:
- Eduard Altarriba, Cosa sai dell'economia?, Erikson

Our birth control stories
My Unexpected Lessons Since Quitting My 9-5 Job

Mar 22, 2025 · 14:13


I scheduled a meeting with my boss one cold mid-March morning in New York City. Since saving up almost $40,000, I had started to taste freedom in my morning coffee. My courage came out of nowhere. I was about to do something crazy. I was on the edge, flirting with the real world. That morning, I did the deed: I quit my full-time job.

Three years have passed since that fateful morning, and this week I hosted a party to celebrate. As I sipped white wine with my friends, I realized that despite what the crunch of capitalism would have you believe, I'm still here. I've survived for three years without a full-time job; I also moved to Mexico City and published a teen romance novel in the process. And in some ways, I'm thriving.

This article is for anyone in the corporate world who is curious about what I've learned in the chaos of building my new career out of writing, freelancing, and fun, which I'm calling my "post-employment" era. Here, I've distilled for you the five most important professional lessons that I've never shared anywhere else, as well as the most impactful things in other categories of my life.

Top Five Lessons for Post-Employment Professional Thriving

Beyond The Lens
85. Productivity Hacks for Creators: Chronotypes and Energy Matching, Pareto's and Parkinson's Principles, Automation and Delegation

Mar 21, 2025 · 30:05


Richard Bernabe on productivity hacks and techniques for creators - photographers, artists, writers, musicians, etc. - so they can spend more of their time on the act of creating and less time bogged down in the mundane tasks of running a business.

These are battle-tested productivity techniques that transformed my creative practice. Discover how I wrote an entire photography book in just 7 months (instead of 2 years), how to identify which 20% of your work produces 80% of your results, and why multitasking is destroying your creative output. Does the time of day you perform certain tasks matter? These aren't theoretical concepts - they're practical strategies I've implemented with dramatic results.

Notable Links:
The 4-Hour Workweek by Tim Ferriss
The Myth of Multitasking: How Doing It All Gets Nothing Done by Dave Crenshaw
Never Play It Safe: A Practical Guide to Freedom, Creativity, and a Life You Love by Chase Jarvis
When: The Scientific Secrets of Perfect Timing by Daniel Pink
Muench Workshops
KelbyOne

*****

This episode is brought to you by Kase Filters. I travel the world with my camera, and I can use any photography filters I like, and I've tried all of them, but in recent years I've landed on Kase Filters. Kase filters are made with premium materials: HD optical glass, shockproof, with zero color cast, round and square filter designs, magnetic systems, filter holders, adapters, step-up rings, and everything I need so I never miss a moment.

And now, my listeners can get 10% off the Kase Filters Amazon page when they visit beyondthelens.fm/kase and use coupon code BERNABE10. Kase Filters: Capture with Confidence.

The Peel
How the World's Most Active Angel Investor Operates | Ed Lando, Founder of Pareto Holdings

Mar 20, 2025 · 104:42


Ed Lando is the Co-founder of Pareto, where he's been an early investor in over 25 unicorns, started and incubated over 10 companies, and was recently named the most active angel investor in the world according to Crunchbase.

We get into how Ed first got started angel investing, how he built up deal flow, why he's historically kept a low profile, and why he hasn't raised outside capital. We also talk concentration vs. diversification, why there are many ways to build successful companies, advice on hiring your first employees, and his playbook for incubating companies at Pareto, which is where he focuses most of his time.

Timestamps:
(0:00) Intro
(2:51) Getting into angel investing
(3:58) Debating high vs low PR strategies
(8:27) How to start building deal flow when angel investing
(10:00) Pareto: first investor in people leaving school or their job
(12:05) Evolving from angel to fund
(14:57) Why Ed didn't raise outside capital
(20:33) Concentration vs diversification
(28:29) Investing in non-sexy categories
(32:50) There's no one right way to build a company
(36:03) When to go against traditional wisdom
(39:36) Lessons from his anti-portfolio
(45:59) Ed's close relationship with his parents
(49:04) How we're using AI
(54:04) Incubating companies
(58:38) Investing beyond spreadsheets and DCF models
(1:05:49) How to trust your intuition investing
(1:09:47) How to move fast
(1:14:24) What most people get wrong when incubating companies
(1:18:40) How to hire your first employees
(1:26:27) Navigating hype when building and investing
(1:29:59) Venture math and the Power Law
(1:35:33) How Ed and Pareto's strategy might break
(1:38:45) Differences between the US and Europe

Referenced:
Pareto: https://pareto20.com/
Misfit Market: https://www.misfitsmarket.com/
Catalina Crunch: https://catalinacrunch.com/
Zamp: https://zamp.com/
Magnus Carlsen on Joe Rogan: https://www.youtube.com/watch?v=ybuJ_nIXwGE

Follow Ed:
Twitter: https://x.com/edwardlando
LinkedIn: https://www.linkedin.com/in/edwardlando/
Substack: https://edwardlando.substack.com/

Follow Turner:
Twitter: https://twitter.com/TurnerNovak
LinkedIn: https://www.linkedin.com/in/turnernovak

Subscribe to my newsletter to get every episode + the transcript in your inbox every week: https://www.thespl.it/

BYNN with Christopher Vonheim & William Frantzen
#190 Eirik Haavaldsen - Shipping Stocks 2025, Containers, Ro-Ro, LNG, LPG, Dry Bulk, Tankers, Valuations

BYNN with Christopher Vonheim & William Frantzen

Play Episode Listen Later Mar 17, 2025 68:58


00:00 - Learnings So Far In 2025
07:28 - Container Markets
12:25 - Car Carrier And Ro-Ro Markets
19:23 - Gas Markets (LNG and LPG)
22:27 - Golar LNG
26:20 - LPG Ahead
28:25 - John Fredriksen Golden Ocean Exit
33:10 - Dry Bulk Markets Ahead
40:05 - Tankers Markets And Crude Oil
46:30 - How To Value A Shipping Company
51:44 - Best Question To Ask Shipping CEOs?
54:33 - How To Make A Career In Shipping And Finance?
01:00:44 - Most Impressive Shipping CEOs And Companies?
01:03:50 - List A Shipping Company In Oslo or New York?
01:05:50 - Favorite Books From Eirik Haavaldsen (Fiction)

Eirik Haavaldsen is Head of Research at Pareto, and one of the leading experts on shipping stocks. In this episode, we analyse all shipping segments for 2025 and learn how Eirik finds value in different types of maritime companies. Let us know what you think of the episode, and please comment and share the interview with friends and network. It helps more than you can imagine!

Christopher Vonheim is a Norwegian host focused on business, ocean industries, investing, and start-ups. I hope you enjoy this tailor-made content, and help us make this channel the best way to consume ideas, models, and stories that can help fuel the next entrepreneurs, leaders, and top performers.

Hosted on Acast. See acast.com/privacy for more information.

Psicologia con Luca Mazzucchelli
Legge di Pareto e Fattore 1%: due principi per svoltare la tua comunicazione

Psicologia con Luca Mazzucchelli

Play Episode Listen Later Mar 13, 2025 14:59


To get your copy of my book Fattore 1% and strengthen your ability to build new habits, click here: https://amzn.to/3tfRFNw

In this episode, taken from the February BeMore Program, I explain two game-changing principles you can apply to improve your communication and your life. Enjoy ;-)

(00:00:00) Intro
(00:00:25) Pareto's research
(00:01:23) The Pareto principle
(00:02:03) How to make your results explode
(00:03:16) When you communicate, pick your battles
(00:07:16) What is the secret to speaking in public?
(00:09:17) The 1% Factor in practice
(00:11:22) You too can improve by 1%

#comunicazioneefficace #psicologia #fattore1% #crescitapersonale #publicspeaking

Work Less, Earn More
Ep 268: How to Use the 80/20 Principle to Work Less & Earn More

Work Less, Earn More

Play Episode Listen Later Mar 11, 2025 37:39


In this episode, I introduce the powerful 80/20 rule, also known as Pareto's Principle, to help you work less and achieve more. I share my personal journey of transforming my business by focusing on tasks that drive the most results and explain how this principle can revolutionize your productivity. I dive into identifying key drivers in your business, prioritizing high-impact tasks, and strategies to avoid getting trapped in busy work. I also provide actionable questions and exercises to help you pinpoint the most effective activities for business growth. Tune in to discover practical tips for leveraging your strengths and optimizing your workflow to maximize your success.

Listen to the full episode to hear:
How the 80/20 rule can help you work less while achieving more
The key to identifying high-impact tasks in your business
Strategies to avoid getting stuck in busywork
Practical exercises to boost productivity and maximize success

FREE Resources to Grow Your Online Business:
The 100K Method Podcast Series: https://www.gillianperkins.com/the-100k-method
Use code "WORKLESS" for 30% off at The Startup Shop: https://startupsociety.shop/

Work with Gillian Perkins:
Apply for $100K Mastermind: https://gillianperkins.com/100k-mastermind
Get your online biz started with Startup Society: https://startupsociety.com
Learn more about Gillian: https://gillianperkins.com
Instagram: @GillianZPerkins

The Colin McEnroe Show
Turns out common sense isn't all that common

The Colin McEnroe Show

Play Episode Listen Later Mar 6, 2025 48:59


President Donald Trump has been using the phrase "common sense" a lot. But it turns out that this is nothing new for politicians. This hour, we look at how common sense is used in politics. Plus, is there really such a thing as common sense? We dig into what it means and if it's possible to teach it to artificial intelligence.

GUESTS:
Sophia Rosenfeld: Walter H. Annenberg Professor of History and Chair of the Department of History at the University of Pennsylvania; she is the author of multiple books, including Common Sense: A Political History and her new book, The Age of Choice: A History of Freedom in Modern Life
Mark Whiting: Research fellow at the Computational Social Science lab at the University of Pennsylvania and chief technology officer of the startup Pareto.AI; you can find the common sense survey here
Mayank Kejriwal: Research professor and principal scientist at the University of Southern California

The Colin McEnroe Show is available as a podcast on Apple Podcasts, Spotify, Amazon Music, TuneIn, Listen Notes, or wherever you get your podcasts. Subscribe and never miss an episode! Subscribe to The Noseletter, an email compendium of merriment, secrets, and ancient wisdom brought to you by The Colin McEnroe Show. Join the conversation on Facebook and Twitter. Colin McEnroe, Angelica Gajewski, and Dylan Reyes contributed to this show.

Support the show: http://www.wnpr.org/donate

See omnystudio.com/listener for privacy information.

Profit + Prosper
169: How to Identify Where Your Money Is Coming From (So You Can Make More of It)

Profit + Prosper

Play Episode Listen Later Mar 6, 2025 15:10


Where does your revenue actually come from… and how can you make more? In this episode, we're diving into the key data analytics I use to track revenue streams, the crucial questions I ask to interpret the numbers, and real-life examples of how I've applied these strategies at Young and Co. I'll also break down how tools like ChatGPT and the Pareto principle can make this process easier and more effective. If you're ready to focus on the true revenue drivers for your business, this episode is for you!
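The kind of Pareto-style revenue breakdown described above can be sketched in a few lines of Python. This is a minimal illustration, not the host's actual method: the stream names and dollar amounts are hypothetical, and the 80% threshold is just the classic rule-of-thumb cutoff.

```python
# Pareto-style revenue analysis: find the few streams that drive most revenue.
# Stream names and amounts are hypothetical illustration data.
revenue = {
    "1:1 coaching": 84000,
    "group program": 41000,
    "digital course": 18000,
    "affiliates": 6000,
    "templates shop": 3000,
}

total = sum(revenue.values())
running = 0.0
top_drivers = []
# Walk streams from largest to smallest, stopping once ~80% of revenue is covered.
for stream, amount in sorted(revenue.items(), key=lambda kv: kv[1], reverse=True):
    top_drivers.append(stream)
    running += amount
    if running / total >= 0.8:
        break

print(f"{len(top_drivers)}/{len(revenue)} streams produce "
      f"{running / total:.0%} of revenue: {top_drivers}")
# → 2/5 streams produce 82% of revenue: ['1:1 coaching', 'group program']
```

Swapping in your own bookkeeping numbers (or exporting them from your accounting tool) makes it easy to see which one or two streams deserve most of your attention.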

Paretopodden
Capsol Technologies: Riding the strong carbon capture trend

Paretopodden

Play Episode Listen Later Mar 6, 2025 25:49


Oslo Stock Exchange listed Capsol Technologies is a carbon capture technology provider with a goal of accelerating the world's transition to a net zero future. The company just released their Q4 report, which saw 2024 end on a positive note as they are entering a very exciting 2025.

Together with host Sebastian Baartvedt, CEO Wendy Lam shared her insight on the global CCS market and how Capsol is positioned to stand out within the fast-growing CCS market.

Clients can access all our renewables and Capsol research through our research and trading platform: https://online.paretosec.com/instrument/NO0010923121/c/NOK/overview

Learn more about our Nordic trading services: https://www.paretosec.no/aksjehandel-paa-nett/verdipapirhandel/aksjehandel-paa-nett

Disclaimer:
This material has been produced by Pareto Securities for general guidance and information purposes only and shall be seen as marketing material. The information provided should not be considered professional advice and is under no circumstances intended to be used or considered as financial or investment advice, a recommendation, or an offer to sell, or a solicitation of any offer to buy any securities or other form of financial asset.

The information should not be considered investment research and is consequently not prepared in accordance with the regulation regarding investment analysis. Furthermore, the information is not intended to be regarded as legal, financial, commercial, tax, or accounting advice.

The information provided in the material is obtained from public sources which Pareto Securities considers reliable. However, the information has not been independently verified, and Pareto makes no guarantee as to its accuracy or completeness. We have taken reasonable care to ensure that, to the best of our knowledge, material information contained herein is in accordance with the facts and contains no omissions likely to affect its understanding.

Please note that we make no assurance that the underlying forward-looking statements are free from errors. The material reflects Pareto Securities' assessment at the time of production and may change without further notice. Pareto does not intend, and does not assume any obligation, to update or correct the information included in the material.

Pareto Securities AS is subject to supervision by the Financial Supervisory Authority of Norway, and is a member of the Norwegian Securities Dealers Association. Pareto Securities AB is subject to supervision by the Financial Supervisory Authority of Sweden.

Please see our website https://paretosec.com/our-firm/compliance/ for more information and full disclaimer. Hosted on Acast. See acast.com/privacy for more information.

CEO Podcasts: CEO Chat Podcast + I AM CEO Podcast Powered by Blue 16 Media & CBNation.co
IAM2394 - Mastering Time and Effort with Pareto & Parkinson

CEO Podcasts: CEO Chat Podcast + I AM CEO Podcast Powered by Blue 16 Media & CBNation.co

Play Episode Listen Later Mar 3, 2025 7:14


Gresham Harkless shares his journey as a franchise broker, focusing on time management, entrepreneurship, and starting a business. He emphasizes that many people delay taking action because they feel they need to have everything perfect and all their time available. Gresham highlights the importance of focusing on the key activities that produce the most significant results rather than trying to do everything.

Gresham also discusses Parkinson's Law, which suggests that work expands to fill the time allocated for it. He uses an example from his friend Dave, who transitioned out of his full-time job using "power hours": intentional, focused time to make progress toward his business goals. Gresham recommends that even dedicating a small amount of focused time, like an hour or 10-20 minutes, can significantly move things forward.

Blue Star Franchise: http://bluestarfranchise.com
Browse the Franchise Inventory: https://bluestarfranchise.com/franchise
Is franchising right for you? Check this out to see: http://bluestarfranchise.com/assessment
Franchise CEO (A CBNation Site - coming soon) - http://franchiseceo.co

Check out our CEO Hack Buzz Newsletter, our premium newsletter with hacks and nuggets to level up your organization. Sign up HERE.

I AM CEO Handbook Volume 3 is HERE and it's FREE. Get your copy here: http://cbnation.co/iamceo3. Get the 100+ things that you can learn from 1600 business podcasts we recorded. Hear Gresh's story, learn the 16 business pillars from the podcast, find out about CBNation Architects and why you might be one, and so much more. Did we mention it was FREE? Download it today!

Paretopodden
Sjømat: Q4-høydepunktene, NASF-konferansen og viktigste i sektoren nå

Paretopodden

Play Episode Listen Later Mar 3, 2025 28:00


This week we are helping to organize the North Atlantic Seafood Forum (NASF) seafood conference in Bergen. The conference is the world's largest seafood conference and an important meeting place for the seafood industry, with over 1,000 participants and 400 companies represented from across the value chain.

Ahead of the conference, and in the wake of the seafood companies' Q4 reports, analysts Sander Lie and Oda Djupvik sum up the current market picture, biological improvements, the Q4 numbers, and what will matter most at this year's NASF, together with stockbroker Sebastian Baartvedt.

Contact your Pareto broker to register for this year's conference, or sign up on NASF's website: https://nor-seafood.com/

Our research team covers roughly 20 seafood stocks and publishes weekly research and salmon price updates for equity trading and research clients. Read more about our leading research and equity trading offering: https://www.paretosec.no/aksjehandel-paa-nett/verdipapirhandel/aksjehandel-paa-nett

Disclaimer:
Pareto Securities' podcasts do not contain professional advice and should not be regarded as investment advice. Trading in securities always involves risk, and historical returns are no guarantee of future returns. Pareto Securities is neither legally nor financially liable for any direct or indirect loss, or other costs that may be incurred, from the use of information in this podcast.

See our website https://paretosec.com/our-firm/compliance/ for more information and full disclaimer. Hosted on Acast. See acast.com/privacy for more information.

Epicenter - Learn about Blockchain, Ethereum, Bitcoin and Distributed Technologies
Puja Ohlhaver: Why Community Currencies Are Crucial for Governance in DeSoc

Epicenter - Learn about Blockchain, Ethereum, Bitcoin and Distributed Technologies

Play Episode Listen Later Mar 1, 2025 64:42


In the digital networked age, people's attention often overlooks local problems in favour of global ones, which don't necessarily impact them in their daily lives, or over which they don't have a say due to the skewed Pareto distribution of power in modern day societies. Puja Ohlhaver, in her recent research paper 'Community Currencies', proposes a dual-currency model that prices attention and influence in each community, with the ultimate goal of creating a Gaussian distribution of power, either locally, or globally through the dynamic interaction of multiple local communities. This model allows community members to stake their currency to earn non-transferable governance rights, creating a substrate for decentralised societal coordination that favours social innovation.

Topics covered in this episode:
Puja's background
Web3 research
'Community currencies'
Pareto vs. Gaussian distributions
Global vs. local power distributions
The community currencies model
Meritocracy vs. influence
Quadratic funding
Governance, bribery and the crisis of legitimacy
Experimenting with community currencies

Episode links:
Puja Ohlhaver on X
'Community Currencies' Research Paper
'Decentralized Society' Research Paper

Sponsors:
Gnosis: Gnosis builds decentralized infrastructure for the Ethereum ecosystem, since 2015. This year marks the launch of Gnosis Pay, the world's first Decentralized Payment Network. Get started today at gnosis.io
Chorus One: one of the largest node operators worldwide, trusted by 175,000+ accounts across more than 60 networks, Chorus One combines institutional-grade security with the highest yields at chorus.one

This episode is hosted by Friederike Ernst.

Matias Laca: Hombres Altamente Atractivos
213. ¿Qué es ser un hombre saludable? Entrenamiento, nutrición, testosterona y hábitos - Daniel Demicheri

Matias Laca: Hombres Altamente Atractivos

Play Episode Listen Later Feb 28, 2025 88:43


Today we talk with Daniel Demicheri, who holds a Master of Science in training and nutrition and works as an online coach, personal trainer, and habits educator. Learn why having a coach is essential for improving habits, balancing your rest, and adapting to different healthy lifestyles.

Daniel Demicheri's social media:
Instagram: https://www.instagram.com/demicherifitness/
TikTok: https://www.tiktok.com/@demicherifitness
Web: demicherifitness.com

Hyper Conscious Podcast
A New Habit That We're Loving (So Far) (1985)

Hyper Conscious Podcast

Play Episode Listen Later Feb 21, 2025 38:34 Transcription Available


Are you making the most of your time? In this Freestyle Friday episode, Kevin and Alan reveal how tracking time can boost focus, productivity, and success. Kevin shares his initial resistance and surprising benefits, while Alan explains how minor adjustments in time, money, and effort lead to significant results. Whether chasing a dream or just wanting to use your time more wisely, this episode will shift your perspective.

Learn more about:
Next Level Dreamliner: https://a.co/d/9fPpxEt
Next Level Live 2025 - Saturday, April 5th, 2025 (10:00 am to 5:00 pm) - https://bit.ly/4aTwC7Q

NLU is not just a podcast; it's a gateway to a wealth of resources designed to help you achieve your goals and dreams. From our Next Level Dreamliner to our Group Coaching, we offer a variety of tools and communities to support your personal development journey. For more information, please check out our website at the link below.

Self-Funded With Spencer
Selling In A Collaborative Environment | with Sean Wood

Self-Funded With Spencer

Play Episode Listen Later Feb 18, 2025 72:17


"Be kind, be focused on others, be a steward of your craft and take the opportunity to take action." - Sean Wood

Sean Wood, SVP of National Partnerships at ParetoHealth, joined me for a podcast that is far less about captives than it is about being a sales ninja. We talk about getting started in sales, the buyer as a skeptic, collaborative selling, and sales mentorship.

Sean was actually a pro basketball player, and he shares how his point guard experience has translated to his career. He shares what led him into sales and why he embraced it so quickly, and why Pareto felt like a natural fit when he joined. Whether you're just getting started in sales or you're a sales veteran, I promise you'll learn something from this week's episode of Self-Funded with Spencer.

Chapters:
00:00:00 Meet Sean Wood
00:04:16 Elevating Your Team As A Leader
00:11:04 Transitioning From Pro Basketball To Sales
00:20:04 Trust-Based Team Selling
00:24:11 Long-term Relationship Building In Sales
00:31:30 The Importance Of Sales Mentorship
00:53:09 Healthcare Solutions Driven By The Market
00:59:57 The Future Of Healthcare Consulting

Key Links for Social:
@SelfFunded on YouTube for video versions of the podcast and much more - https://www.youtube.com/@SelfFunded
Listen on Spotify - https://open.spotify.com/show/1TjmrMrkIj0qSmlwAIevKA?si=068a389925474f02
Listen on Apple Podcasts - https://podcasts.apple.com/us/podcast/self-funded-with-spencer/id1566182286
Follow Spencer on LinkedIn - https://www.linkedin.com/in/spencer-smith-self-funded/
Follow Spencer on Instagram - https://www.instagram.com/selffundedwithspencer/

Key Words: Transition To Sales, Collaborative Selling, Team Sales, Mentorship, Relationship Building, Sales Process, Healthcare Solutions, Buyer Mentality, Sales Success, podcast, healthcare, health insurance, self funded, self funding, self funded health insurance, self funded insurance

#TransitionToSales #CollaborativeSelling #TeamSales #Mentorship #RelationshipBuilding #SalesProcess #HealthcareSolutions #BuyerMentality #SalesSuccess #podcast #healthcare #healthinsurance #selffunded #selffunding #selffundedhealthinsurance #selffundedinsurance

OPOSICIONES DE EDUCACIÓN
Cómo ser productivo y aprovechar tu tiempo según la ciencia

OPOSICIONES DE EDUCACIÓN

Play Episode Listen Later Feb 18, 2025 9:03


If you want to boost your confidence, reduce anxiety, and multiply your chances of success, you need a structured study plan. In this video, I share 7 productivity laws designed specifically for exam candidates, based on evidence and on strategies proven by thousands of people who have already secured their position. Less effort, more results. Apply them starting today!

➡️ Sign up for free for the daily Educational Tip and receive it every day at 3 pm to become a better teacher: https://preparadoredufis.com/consejo-educativo-diario/

Sections of our channel by category ➜ Find them here: https://www.youtube.com/c/OposicionesdeEducaci%C3%B3n/playlists

⚡️ Is YouTube not enough and you want to go further? Follow us on other social networks!
Instagram: https://www.instagram.com/diegofuentes.oposiciones
TikTok: https://www.tiktok.com/@diegofuentes.oposiciones
My website: https://preparadoredufis.com/

VIDEO INDEX
0:00 Introduction
0:52 Work smart, not hard
2:05 Use pressure to your advantage (Yerkes-Dodson Law)
3:12 Flow state and progressive study sessions
4:30 Leave tasks unfinished to boost your memory (Zeigarnik Effect)
5:35 Use deadlines to avoid procrastinating (Parkinson's Law)
6:20 Focus on what really matters (Pareto principle)
7:10 Tackle the hardest thing first (Laborit's Law)
8:00 How to apply these laws in your study routine

Subscribe to the channel and like the video for more strategies to bring you closer to your dream position!


The Tim Ferriss Show
#795: The 4-Hour Workweek Revisited — The End of Time Management

The Tim Ferriss Show

Play Episode Listen Later Feb 13, 2025 51:46


This time around, we have a bit of a different format, featuring the book that started it all for me, The 4-Hour Workweek. Readers and listeners often ask me what I would change or update, but an equally interesting question is: what wouldn't I change? What stands the test of time and hasn't lost any potency?

This episode features one of the most important chapters from the audiobook of The 4-Hour Workweek. It includes tools and frameworks that I use to this day, including Pareto's Law and Parkinson's Law. The chapter is narrated by the great voice actor Ray Porter. If you are interested in checking out the rest of the audiobook, which is produced and copyrighted by Blackstone Publishing, you can find it on Audible, Apple, Google, Spotify, Downpour.com, or wherever you find your favorite audiobooks.

Sponsors:
ExpressVPN high-speed, secure, and anonymous VPN service: https://www.expressvpn.com/tim (get 3 or 4 months free on their annual plans)
Momentous high-quality supplements: https://livemomentous.com/tim (code TIM for 20% off)
Helix Sleep premium mattresses: https://HelixSleep.com/Tim (between 20% and 27% off all mattress orders and two free pillows)

For show notes and past guests on The Tim Ferriss Show, please visit tim.blog/podcast. For deals from sponsors of The Tim Ferriss Show, please visit tim.blog/podcast-sponsors. Sign up for Tim's email newsletter (5-Bullet Friday) at tim.blog/friday. For transcripts of episodes, go to tim.blog/transcripts. Discover Tim's books: tim.blog/books.

Follow Tim:
Twitter: twitter.com/tferriss
Instagram: instagram.com/timferriss
YouTube: youtube.com/timferriss
Facebook: facebook.com/timferriss
LinkedIn: linkedin.com/in/timferriss

Past guests on The Tim Ferriss Show include Jerry Seinfeld, Hugh Jackman, Dr. Jane Goodall, LeBron James, Kevin Hart, Doris Kearns Goodwin, Jamie Foxx, Matthew McConaughey, Esther Perel, Elizabeth Gilbert, Terry Crews, Sia, Yuval Noah Harari, Malcolm Gladwell, Madeleine Albright, Cheryl Strayed, Jim Collins, Mary Karr, Maria Popova, Sam Harris, Michael Phelps, Bob Iger, Edward Norton, Arnold Schwarzenegger, Neil Strauss, Ken Burns, Maria Sharapova, Marc Andreessen, Neil Gaiman, Neil deGrasse Tyson, Jocko Willink, Daniel Ek, Kelly Slater, Dr. Peter Attia, Seth Godin, Howard Marks, Dr. Brené Brown, Eric Schmidt, Michael Lewis, Joe Gebbia, Michael Pollan, Dr. Jordan Peterson, Vince Vaughn, Brian Koppelman, Ramit Sethi, Dax Shepard, Tony Robbins, Jim Dethmer, Dan Harris, Ray Dalio, Naval Ravikant, Vitalik Buterin, Elizabeth Lesser, Amanda Palmer, Katie Haun, Sir Richard Branson, Chuck Palahniuk, Arianna Huffington, Reid Hoffman, Bill Burr, Whitney Cummings, Rick Rubin, Dr. Vivek Murthy, Darren Aronofsky, Margaret Atwood, Mark Zuckerberg, Peter Thiel, Dr. Gabor Maté, Anne Lamott, Sarah Silverman, Dr. Andrew Huberman, and many more.

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

rvprepper's podcast
Keep On Prepping

rvprepper's podcast

Play Episode Listen Later Feb 10, 2025 22:46


The election is done and the new administration has been installed. Is it time to stop prepping yet? That answer is HELL NO! It doesn't matter if your choice of candidate got into office or the opposing candidate got into office. You must always stay alert and be aware of what is going on politically, locally, and financially. Now is not the time to let up with your preparations.

So, are we on doom and gloom watch? In my opinion, we are not at this time. However, the political atmosphere across the globe still has our adversaries spooling up to do damage to the United States of America. Will that affect you and your family? My answer is, I sure hope it doesn't. But, being a realist, it probably will. So let's jump into some scenarios and see where we go with our preparations.

If you're new to the prepping lifestyle, and it truly is a lifestyle, then there is much to learn and much to absorb. We have been involved in this lifestyle for more than four decades. Do I feel like Chicken Little running around saying the sky is falling? No, not really. But I do keep looking up. There has been an ample number of alarmists who have made their podcasts and YouTube channels feed that belief. One could say that the RV Prepper Podcast is just like that. I will not deny that I have alarmed people along the way. But if a warning is justified in my mind, then I am going to get the news out any way I can.

Do I feel more comfortable with the current administration in power? To that, I'll answer that I am cautiously optimistic. Do I feel safer than when we had the previous administration in power? Time and actions will be the judge of that. It can all change in an instant. So we just keep learning and building our preparations.

One thing that we have not focused on a lot in the RV Prepper Podcast has been the subject of firearms. Just like underwear, it's a personal choice.
We're going to take some time in a few episodes this year to talk about firearms and their place in the prepping environment. We have talked about firearms in previous episodes and the need for true hands-on training from competent trainers. This is an important step, and many people shortchange themselves by not getting competent training. There is a plethora of trainers across the country. You need to do your due diligence and check references before you sign up with one of them. It can be a fabulous environment for you to learn safety, responsibility, and accuracy, and to realize that you don't have all the answers.

When you do spend your hard-earned money and go for training, go there to learn from that instructor, not to teach that instructor that you are better than him or her. This happens a great deal. You must have an open mind and be willing to learn a new skill. If you want a recommendation, drop me an email at inforvprepper@gmail.com and we will be more than glad to provide you with some.

When you ask somebody what a prepper is, they will more than likely say that preppers are nut jobs waiting for the world to end. I can confidently and honestly say there are many people just like that who have declared themselves preppers. They live that lifestyle, to be ready for that conflict should it come. Our goal is to avoid any type of conflict as much as possible. But we must be prudent in our training to be ready to protect ourselves, our family, and friends should the need arise.

In several of our episodes, both Rhonda and I have talked about us working on the 80/20 rule. The 80/20 rule, also known as the Pareto principle, suggests that roughly 80% of outcomes come from 20% of causes. So with that thought in mind, will 20% of our preps be able to cover 80% of the hardships should they arise? Who knows. This sounds good in principle, but it's hard to believe in practice.
We always work on a Continuous Improvement Process (CIP) in an effort to become more sufficient with our preparations. We often review where we stand as far as water, food, fire, and other supplies. Adjustments have happened a great deal in the past. Now? We appear to have hit a level of readiness that is stable. Is it complete? It depends.

We have often stated that you can't be prepared for everything, even if you're a mega billionaire who can build a big bunker on an island in Hawaii. You can't buy your way into success, though having that kind of money certainly helps. You could have all the money in the world, but if you don't know how to take care of yourself and to support others through learned skills, then the money is just tinder to use to build a fire.

Many people think preppers are lonely people who go off and hide in the woods and drop out of society. The truth is there are many like that. It is where they feel safe from what they perceive as a threat. Perhaps it is also that they just want to be left alone. Many are just everyday average people stocking a little extra food at their house along with some water, extra clothes, and sometimes guns and ammunition. The joking mantra of preppers is "Beans, Bullets and Band-Aids". Sounds ridiculous, doesn't it? While those items will help you to survive, there are skills that far outweigh their value and that are truly needed to thrive and survive.

I want to ask a favor of you. Please send us an e-mail at inforvprepper@gmail.com and tell us what you think prepping is or should be. I value your input, and we respond to every e-mail we get.

Skills

Skills diminish when they are not used. Are we the best at keeping our skills sharp? Unfortunately, no. We are working on that challenge, and it is starting to get caught up to date. We often use our downtime to test products and evaluate their effectiveness. There are some good products out there. However, there is a plethora of crap too.
If I were asked to give a percentage of our skill level right now, I would have to judge it at 85%. Room for improvement is a continuous struggle. I want perfection and I want it now. Maybe I should call JG Wentworth!

Being a social neighbor led me to let the neighbors on either side of our house know that we are there to help them should the need arise. Perhaps I was foolish to do that. Politically we're 180° apart. However, I trust our neighbors. They are good people. It has been said by many prepping authors and trainers that you can't go it alone. That statement is true for the most part, though there are always exceptions. We help our neighbors when we are asked, and sometimes we help when we are not asked. Like a surprise snowfall: Rhonda and I will get out and shovel the walks. One of our neighbors usually gets up at least two hours before we do, and many times he has beaten us to the punch of shoveling the walks. We appreciate that and always thank him for his efforts and deeds. And we depend equally on the neighbors on the other side of us, who are much younger than we are; many times, if we are late getting up, they will also shovel our walks. We extend the same gratitude and appreciation to them as well.

Out of the three houses, we're the only ones that have a snow thrower, and when the snow is deep enough, it gets cranked up and goes to work. We had something funny happen this year when our younger neighbors asked us if we were going to crank up Big Orange and plow. I answered in the affirmative and laughed like hell. We all had a great laugh. Neighbors look out for each other. We are a community, and we'll do anything to help our neighbors out. We also know that they would do the same for us if asked.

I have become known as the neighborhood snoop. Not that I creep up on people and look through their blinds; I am home a great deal of the time, depending on our travel schedule.
When I am here, I pay attention to people, vehicles, and animals that are moving through the area. Even in my aging process, I still keep memories of cars, faces, and animals. I note when I see them and look for any oddities. People are creatures of habit, usually walking the same direction each and every day. I pay attention to our cameras in order to evaluate traffic around the house. Am I paranoid? I don't think so; I just call it awareness. I pay attention as often as I can. This is known as situational awareness.

Just today I had a solicitor come to the door as I was writing today's show. I scared the hell out of him when I talked to him through the camera system. I felt it strange that they were out trying to inspect siding and roofs for possible sales; I have about 18 inches of snow on the roof right now. I declined, of course. His image is on the hard drive in perpetuity should something go wrong in the neighborhood.

Pay attention to your area of operation. Look for oddities, look for trouble. Verify, verify, and verify. And stay prepared to Thrive and Survive.

Healthy Perspectives w/ Jeremiah

Tip of the day: Be careful with your 20%

Today we take a look at the Pareto principle and apply it to our recent history in the United States. We talk about propaganda, which is political marketing. We talk about what you can do to make a difference as well.

Email us – healthyperspectives@protonmail.com
Podcast home page - www.healthy-perspectives.com/podcast
Sponsor/Support – https://healthy-perspectives.com/sponsor
Rumble - https://rumble.com/c/c-2235930
YouTube - https://www.youtube.com/channel/UCEXZdWuBoM6KXof4YcP9nkQ
LinkedIn page - www.linkedin.com/in/jeremiah-guidos-915b3426
Twitter aka X - https://twitter.com/hphonestviews
Locals - https://locals.com/member/jeremiahguidos

#healthyperspectives #podcast #jeremiah #mentalhealth #counseling #counselor #mindset #culture #socialresponsibility #psychology #clinical #education #walkingwithGod #Jesus #JesusisLord #LoveGod #Loveothers #paretoprinciple

The W. Edwards Deming Institute® Podcast
Quality as an Organizational Strategy with Cliff Norman and Dave Williams

Feb 3, 2025 77:02


Join host Andrew Stotz for a lively conversation with Cliff Norman and Dave Williams, two of the authors of "Quality as an Organizational Strategy." They share stories of Dr. Deming, insights from working with businesses over the years, and the five activities the book is based on. TRANSCRIPT 0:00:02.2 Andrew Stotz: My name is Andrew Stotz, and I'll be your host as we dive deeper into the teachings of Dr. W. Edwards Deming. Today, we have a fantastic opportunity to learn more about a recent book that's been published called "Quality as an Organizational Strategy". And I'd like to welcome Cliff Norman and Dave Williams on the show, two of the three authors. Welcome, guys.   0:00:27.1 Cliff Norman: Thank you. Glad to be here.   0:00:29.4 Dave Williams: Yeah, thanks for having us.   0:00:31.9 Andrew Stotz: Yeah, I've been looking forward to this for a while. I was on LinkedIn originally, and somebody posted it. I don't remember who, the book came out. And I immediately ordered it because I thought to myself, wait, wait, wait a minute. This plugs a gap. And I just wanna start off by going back to Dr. Deming's first Point, which was create constancy of purpose towards improvement of product and service with the aim to become competitive and stay in business and to provide jobs. And all along, as anybody that learned the 14 Points, they knew that this was the concept of the strategy is to continue to improve the product and service in the eyes of the client and in your business. But there was a lot missing. And I felt like your book has started really to fill that gap. So maybe I'll ask Cliff, if you could just explain kind of where does this book come from and why are you bringing it out now?   0:01:34.5 Cliff Norman: That's a really good question, Andrew. The book was originally for the use of our clients only. So it came into being, the ideas came out of the Deming four day seminar where Dr.
Tom Nolan, Ron Moen and Lloyd Provost, Jerry Langley would be working with Dr. Deming. And then at the end of four days, the people, some of whom were our clients, would come up to us and say, he gave us the theory, but we don't have any methods. And so they took it very seriously and took Dr. Deming's idea of production viewed as a system. And from that, they developed the methods that we're going to discuss called the five activities. And all of our work with this was completely behind the wall of our clients. We didn't advertise. So the only people who became clients were people who would seek us out. So this has been behind the stage since about 1990. And the reason to bring it out now is to make it available beyond our client base. And Dave, I want you to go ahead and add to that because you're the ones that insisted that this get done. So add to that if you would.   [laughter]   0:02:53.0 Dave Williams: Well, thanks, Cliff. Actually, I often joke at Cliff. So one thing to know, Cliff and Lloyd and I all had a home base of Austin, Texas. And I met them about 15 years ago when I was in my own journey of, I had been a chief quality officer of an ambulance system and was interested in much of the work that API, Associates in Process Improvement, had been doing with folks in the healthcare sector. And I reached out to Cliff and Lloyd because they were in Austin and they were kind enough, as they have been over many years, to welcome me to have coffee and talk about what I was trying to learn and where my interests were and to learn from their work. And over the last 15 years, I've had a great benefit of learning from the experience and methods that API has been using with organizations around the world, built on the shoulders of the theories from Dr. Deming.
And one of those that was in the Improvement Guide, one of the foundational texts that we use a lot in improvement project work that API wrote was, if you go into the back, there is a chapter, and Cliff, correct me if I'm wrong, I think it's chapter 13 in this current edition on creating value.   0:04:34.3 Dave Williams: In there, there was some description of kind of a structure or a system of activities that would be used to pursue quality as an organizational strategy. I later learned that this was built on a guide that was used that had been sort of semi self-published to be able to use with clients. And the more that I dove into it, the more that I really valued the way in which it had been framed, but also how, as you mentioned at the start, it provided methods in a place where I felt like there was a gap in what I saw in organizations that I was working with or that I had been involved in. And so back in 2020, when things were shut down initially during the beginning of the pandemic, I approached Lloyd and Cliff and I said, I'd love to help in any way that I can to try to bring this work forward and modernize it. And I say modernize it, not necessarily in terms of changing it, but updating the material from its last update into today's context and examples and make it available for folks through traditional bookstores and other venues.   0:05:58.9 Andrew Stotz: And I have that The Improvement Guide, which is also a very impressive book that helps us to think about how are we improving. And as you said, the, that chapter that you were talking about, 13, I believe it was, yeah, making the improvement of value a business strategy and talking about that. So, Cliff, could you just go back in time for those people that don't know you in the Deming world, I'm sure most people do, but for those people that don't know, maybe you could just talk about your first interactions with Dr.
Deming and the teachings of that and what sparked your interest and also what made you think, okay, I wanna keep expanding on this?   0:06:40.0 Cliff Norman: Yeah. So I was raised in Southern California and of course, like many others, I'm rather horrified by what's going on out there right now with fires. That's an area I was raised in. And so I moved to Texas in '79, went to work for Halliburton. And they had an NBC White Paper called, "If Japan Can, Why Can't We?", and our CEO, Mr. Purvis Thrash, he saw that. And I was working in the quality area at that time. And he asked me to go to one of Deming's seminars that was held in Crystal City, actually February of 1982. And I got down there early and got a place up front. And they sent along with me an R&D manager to keep an eye on me, 'cause I was newly from California into Texas. And so anyway, we're both sitting there. And so I forgot something. So I ran upstairs in the Sheraton Crystal City Hotel there. And I was coming down and lo and behold, next floor down, Dr. Deming gets on and two ladies are holding him up. And they get in the elevator there and he sees this George Washington University badge and he kind of comes over, even while the elevator was going down and picks it up and looks it up real close to his face. And then he just backs up and leans, holds onto the railing and he says, Mr. Norman, what I'm getting ready to tell you today will haunt you for the rest of your life.   0:08:11.8 Cliff Norman: And that came true. And of course, I was 29 at the time and was a certified quality engineer and knew all things about the science of quality. And I couldn't imagine what he would tell me that would haunt me for the rest of my life, but it did. And then the next thing he told me, he said, as young as you are, if you're not learning from somebody that you're working for, you ought to think about getting a new boss. And that's some of the best advice I've ever gotten.
I mean, the hanging around smart people is a great thing to do. And I've been gifted with that with API. And so that's how I met him. And then, of course, when I joined API, I ended up going to several seminars to support Lloyd Provost and Tom Nolan and Ron Moen and Jerry as the various seminars were given. And Ron Moen, who unfortunately passed away about three years ago, he did 88 of those four day seminars, and he was just like a walking encyclopedia for me. So anytime I had questions on Deming, I could just, he's a phone call away, and I truly miss that right now.   0:09:20.5 Cliff Norman: So when Dave has questions or where this reference come from or whatever, and I got to go do a lot of work, where Ron, he could just recall that for me. So I miss that desperately, but we were busy at that time, by the time I joined API was in '88. And right away, I was introduced to what they had drafted out in terms of the five activities, which is the foundation of the book, along with understanding the science of improvement and the chain reaction that Dr. Deming introduced us to. So the science of improvement is what Dr. Deming called the System of Profound Knowledge. So I was already introduced to all that and was applying that within Halliburton. But QBS, as we called it then, Quality as a Business Strategy, was brand new. I mean, it was hot off the press. And right away, I took it and started working with my clients with it. And we were literally walking on the bridge as we were building it. And the lady I'm married to right now, Jane Norman, she was working at Conagra, which is like a $15 billion poultry company that's part of Conagra overall, which is most of the food in your grocery store, about 75% of it. And she did one of the first system linkages that we ever did.   0:10:44.5 Cliff Norman: And since then, she's worked at like four other companies as a VP or COO, and has always applied these ideas.
And so a lot of this in the book examples and so forth, comes from her actual application work. And when we'd worked together, she had often introduced me, this is my husband, Cliff, he and his partners, they write books, but some of us actually have to go to work. And then eventually she wrote a book with me with Dr. Maccoby, who is also very closely associated with Dr. Deming. So now she's a co-author. So I was hoping that would stop that, but again, we depend on her for a lot of the examples and contributions and the rest of it that show up in the book. So I hope that answers your question.   0:11:28.2 Andrew Stotz: Yeah, and for people like myself and some of our listeners who have heard Dr. Deming speak and really gotten into his teachings, it makes sense, this is going to haunt you because I always say that, what I read originally... I was 24 when I went to my first Deming seminar. And I went to two two-day seminars and it... My brain was open, I was ready, I didn't have anything really in it about, any fixed methods or anything. So, for me, it just blew my mind, some of the things that he was talking about, like thinking about things in a system I didn't think about that I thought that the way we got to do is narrow things down and get this really tight focus and many other things that I heard. And also as a young, young guy, I was in this room with, I don't know, 500 older gentlemen and ladies, and I sat in the front row and so I would see him kind of call them on the carpet and I would be looking back like, oh, wow, I never saw anybody talk to senior management like that and I was kind of surprised. But for those people that really haven't had any of that experience they're new to Deming, what is it that haunts you? What is... Can you describe what he meant when he was saying that?   0:12:42.9 Cliff Norman: I gotta just add to what you just said because it's such a profound experience.
And when you're 29, if most of us, we think we're pretty good shape by that time, the brain's fully developed by age 25, judgment being the last function that develops. And so you're pretty well on your way and then to walk in and have somebody who's 81 years old, start introducing you to things you've never even thought about. The idea of the Chain Reaction that what I was taught as a certified quality engineer through ASQ is I need to do enough inspection, but I didn't need to do too much 'cause I didn't want to raise costs too much. And Dr. Deming brought me up on stage and he said, well, show me that card again. So I had a 105D card, it's up to G now or something. And he said, "well, how does this work?" And I said, "well, it tells me how many samples I got to get." And he says, "you know who invented that." And I said, "no, sir, I thought God did." He said, "no, I know the people that did it. They did it to put people like you out of business. Sit down, young man, you've got a lot to learn." And I thought, wow, and here you are in front of 500 people and this is a public flogging by any stretch.   0:13:56.1 Cliff Norman: And it just went on from there. And so a few years later, I'm up in Valley Forge and I'm working at a class with Lloyd and Tom Nolan and a guy named, I never met before named Jim Imboden. And he's just knock-down brilliant, but they're all working at General Motors at that time. And a lot of the book "Planned Experimentation" came out of their work at Ford and GM and Pontiac and the rest of it. And I mean, it's just an amazing contribution, but I go to dinner with Jim that night. And Jim looks at me across the table and he says, Cliff, how did you feel the day you found out you didn't know anything about business economics or anything else? I said, "you mean the first day of the Deming seminar?" He said, "that's what I'm talking about." And that just... That's how profound that experience is. 
Because all of a sudden you find out you can improve quality and lower costs at the same time. I'm sorry, most people weren't taught that. They certainly weren't taught that in business school. And so it was a whole transformation in thinking and just the idea of a system. Most of what's going on in the system is related to the system and the way it's constructed. And unfortunately, for most organizations, it's hidden.   0:15:04.2 Cliff Norman: They don't even see it. So when things happen, the first thing that happens is the blame flame. I had a VP I worked for and he'd pulled out his org chart when something went bad and he'd circle. He said, this is old Earl's bailiwick right here. So Cliff, go over and see Earl and I want you to straighten him out. Well, that's how most of it runs. And so the blame flame just takes off. And if you pull the systems map out there and if he had to circle where it showed up, he'd see there were a lot of friends around that that were contributing. And we start to understand the complexity of the issue. But without that view, and Deming insisted on, then you're back to the blame flame.   0:15:45.1 Andrew Stotz: Yeah. And Dave, I see a lot of books on the back on your shelf there about quality and productivity and team and many different things. But maybe you could give us a little background on kind of how how you, besides how you got onto this project and all that. But just where did you come from originally and how did you stumble into the Deming world?   0:16:08.9 Dave Williams: Sure. Well, sadly, I didn't have the pleasure of getting to sit in on a four-day workshop. Deming died in 1993. And at that time, I was working on an ambulance as a street paramedic and going to college to study ambulance system design and how to manage ambulance systems, which was a part of public safety that had sort of grown, especially in the United States in the '60s. 
And by the time I was joining, it was about 30 years into becoming more of a formalized profession. And I found my way to Austin, Texas, trying to find one of the more professionalized systems to work in and was, worked here as a paramedic for a few years. And then decided I wanted to learn more and started a graduate program. And one of the courses that was taught in the graduate program, this is a graduate program on ambulance management, was on quality. And it was taught by a gentleman who had written a, a guide for ambulance leaders in the United States that was based on the principles and methods of quality that was happening at this time. And it pieced together a number of different common tools and methods like Pareto charts and cause-and-effect diagrams and things like that.   0:17:33.1 Dave Williams: And it mentioned the different leaders like Deming and Juran and Crosby and others. And so that was my first exposure to many of these ideas. And because I was studying a particular type of healthcare delivery system and I was a person who was practicing within it and I was learning about these ideas that the way that you improve a system or make improvement is by changing the system. I was really intrigued and it just worked out at the time. One of the first roles, leadership roles that emerged in my organization was to be the Chief Quality Officer for the organization. And at the time, there were 20 applicants within my organization, but I was the only one that knew anything about any of the foundations of quality improvements. Everybody else applied and showed their understanding of quality from a lived experience perspective or what their own personal definitions of quality were, which was mostly around inspection and quality assurance. I had, and this won't surprise Cliff, but I had a nerdy response that was loaded with references and came from all these different things that I had been exposed to. 
And they took a chance on me because I was the only one that seemed to have some sense of the background. And I started working and doing...   0:19:10.1 Dave Williams: Improvement within this ambulance system as the kind of the dedicated leader who was supposed to make these changes. And I think one of the things that I learned really quickly is that frequently how improvement efforts were brought to my attention was because there was a problem that I, had been identified, a failure or an error usually attributed to an individual as Cliff pointed out, somebody did something and they were the unfortunate person who happened to kind of raise this issue to others. And if I investigated it all, I often found that there were 20 other people that made the same error, but he was, he or she was the only one that got caught. And so therefore they were called to my office to confess. And when I started to study and look at these different issues, every time I looked at something even though I might be able to attribute the, first instance to a person, I found 20 or more instances where the system would've allowed or did allow somebody else to make a similar error.   0:20:12.6 Dave Williams: We just didn't find it. And it got... And it became somewhat fascinating to me because my colleagues were very much from a, if you work hard and just do your job and just follow the policy then good quality will occur. And nobody seemed to spend any time trying to figure out how to create systems that produce good results or figure out how to look at a system and change it and get better results. And so most of my experience was coming from these, when something bubbled up, I would then get it, and then I'd use some systems thinking and some methods and all of a sudden unpack that there was a lot of variation going on and a lot of errors that could happen, and that the system was built to get results worse than we even knew.   
0:21:00.7 Dave Williams: And it was through that journey that I ended up actually becoming involved with the Institute for Healthcare Improvement and learning about what was being done in the healthcare sector, which API at the time were the key advisors to Dr. Don Berwick and the leadership at IHI. And so much of the methodology was there. And actually, that's how I found my way to Cliff. I happened to be at a conference for the Institute for Healthcare Improvement, and there was an advertisement for a program called the Improvement Advisor Professional Development Program, which was an improvement like practitioner project level program that had been developed by API that had been adapted to IHI, and I noticed that Cliff and Lloyd were the faculty, and that they were in my hometown. And that's how I reached out to them and said, hey can we have coffee? And Cliff said, yes. And so...   0:21:53.1 Andrew Stotz: And what was that, what year was that roughly?   0:22:00.3 Dave Williams: That would've been back in 2002 or 2003, somewhere in that vicinity.   0:22:02.0 Andrew Stotz: Hmm. Okay.   0:22:06.8 Dave Williams: Maybe a little bit later.   0:22:06.9 Andrew Stotz: I just for those people that are new to the topic and listening in I always give an example. When I worked at Pepsi... I graduated in 1989 from university with a degree in finance. And I went to work at Pepsi in manufacturing and warehouse in Los Angeles at the Torrance Factory originally, and then in Buena Park. But I remember that my boss told me, he saw that I could work computers at that time, and so I was making charts and graphs just for fun to look at stuff. And he said, yeah, you should go to a one of these Deming seminars. And so he sent me to the one in... At George Washington University back in 1990, I think it was. And but what was happening is we had about a hundred trucks we wanted to get out through a particular gate that we had every single morning. 
And the longer it took to get those trucks out the longer they're gonna be on LA traffic and on LA roads, so if we can get 'em out at 5:00 AM, fantastic. If we get 'em out at 7:00, we're in trouble. And so they asked me to look at this and I did a lot of studying of it and I was coming for like 4:00 in the morning I'd go up to the roof of the building and I'd look down and watch what was happening. And then finally I'd interview everybody. And then finally the truck drivers just said, look, the loaders mess it up so I gotta open my truck every morning and count everything on it. And I thought, oh, okay.   0:23:23.7 Andrew Stotz: So I'll go to the loaders. And I go, why are you guys messing this up? And then the loaders was like, I didn't mess it up. We didn't have the production run because the production people changed the schedule, and so we didn't have what the guy needed. And so, and oh, yeah, there was a mistake because the production people put the product in the wrong spot, and therefore, I got confused and I put the wrong stuff on by accident. And then I went to the production people and they said, well, no, it's not us. It's the salespeople. They keep putting all this pressure on us to put this through right now, and it's messing up our whole system. And that was the first time in my life where I realized, okay, it's a system. There's interconnected parts here that are interacting, and I had to go back into the system to fix, but the end result was I was able to get a hundred trucks through this gate in about 45 minutes instead of two hours, what we had done before.   0:24:18.8 Andrew Stotz: But it required a huge amount of work of going back and looking at the whole system. So the idea of looking at the science of improvement, as you mentioned, and the System of Profound Knowledge, it's... There's a whole process. Now, I wanna ask the question for the person who gets this book and they dig into it, it's not a small book. 
I've written some books, but all of 'em are small because I'm just, maybe I just can't get to this point. But this book is a big book, and it's got about 300... More than 300 pages. What's the promise? What are they gonna get from digging into this book? What are they gonna take away? What are they gonna be able to bring to their life and their business that they couldn't have done without really going deeper into this material?   0:24:57.7 Cliff Norman: Dave, go ahead.   0:25:01.4 Dave Williams: Well, I was gonna joke by saying they're gonna get hard work and only half because this is just the theory in the book and many of the... And sort of examples of the method. But we're in the process of preparing a field guide which is a much deeper companion guide loaded with exercises and examples of and more of the methods. So the original guide that that API had developed was actually about an eight... Well, I don't know how many pages it was, but it was a thick three inch binder. This, what you have there is us refining the content part that explains the theory and kind of gets you going. And then we moved all of the exercises and things to the field guide for people that really wanna get serious about it.   0:26:00.3 Dave Williams: And the reason I say hard work is that the one thing that you won't get, and you should probably pass it if this book if you're on Amazon, is you're not gonna get an easy answer. This is, as a matter of fact, one of the things that emerged in our early conversations about was this project worth it? Is to say that this is hard work. It's work that a very few number of leaders who or leadership teams that really want to learn and work hard and get results are gonna embark on. But for those, and many of our clients, I think are representative of that, of those people that say, gosh, I've been working really hard, and I feel like we could do better. I feel like I could make a bigger impact, or I could serve more customers or clients.   
0:26:44.0 Dave Williams: And but I am... And I'm intrigued or inspired or gotten to a certain point with improvement science on my own, but I want to figure out how to be more systematic and more global and holistic at that approach. Then that's what QOS is about. It builds on the shoulders of the other books that you mentioned, like The Improvement Guide which we talked about as being a great book about improvement, and improvement specifically in the context of a project. And other books like The Healthcare Data Guide and the Planned Experimentation, which are also about methods, The Healthcare Data Guide being about Shewhart charts, and Planned Experimentation being about factorial design. This book is about taking what Cliff described earlier as that... I always say it's that that diagram that people put on a slide and never talk about from Deming of production viewed as a system and saying, well, how would we do this if this is the model for adopting quality as strategy, what are the methods that help us to do this?   0:28:01.3 Dave Williams: And this book breaks that down into five activities that are built on the shoulders of profound knowledge, built on the shoulders of the science of improvement and provide a structure to be able to initially develop a system, a systems view of your organization, and then build on that by using that system to continually operate and improve that organization over time. So the book describes the activities. The book describes some of the things that go into getting started, including becoming good at doing results-driven improvement, building a learning system, focusing in on the things that matter to your organization. And then working towards building the structure that you can improve upon. The book creates that foundation. It provides examples from clients and from people that we've worked with so that you can see what the theory looks like in practice get, kind of get a flavor for that.
And we hope it builds on the shoulders of other work that I mentioned in the other books that complement it and provides a starting point for teams that are interested in taking that journey.   0:29:26.5 Andrew Stotz: And Cliff, from your perspective, if somebody had no, I mean, I think, I think the Deming community's gonna really dive in and they're gonna know a lot of this stuff, but is gonna help them take it to the next level. But for someone who never had any real experience with Deming or anything like that, and they stumble upon this interview, this discussion, they hear about this book, can they get started right away with what's in this book? Or do they have to go back to foundations?   0:29:49.6 Cliff Norman: No, I think they can definitely get started. There's a lot of learning as you know, Andrew, from going through the four-day to understand things. And I think we've done a pretty good job of integrating what Dr. Deming taught us, as well as going with the methods. And one of the things people would tell him in his four-day seminars is, Dr. Deming, you've given us the theory, but we have no method here. And he said, well, if I have to give you the method, then you'll have to send me your check too. So he expected us to be smart enough to develop the methods. And the API folks did a really good job of translating that into what we call the five activities. So those five activities are to understand the purpose of the organization.   0:30:35.6 Cliff Norman: And a lot of people when they write a purpose, they'll put something up there but it's usually we love all our people. We love our customers even more. If only they didn't spend so much, and we'll come out with something like that and there'll be some pablum that they'll throw up on the wall. Well, this actually has some structure to it to get to Deming's ideas. And the first thing is let's try to understand what business we're in and what need we're serving in society that drives customers to us.
So that word need is used not as something coming from customers, but as what drives them to us, so we can understand that. And then the second part of that purpose needs to define the mainstay, the core processes, the delivery systems that relate directly to customers. And just those two ideas alone, just in the first activity of purpose, most people haven't thought about those ideas.   0:31:27.8 Cliff Norman: And can somebody pick up this book and do that? Yes. And that will answer a big challenge from Dr. Deming. Most people don't even know what business they're in, haven't even thought about it. And so that question gets answered here, I think, very thoroughly. Then the second activity, which is viewing the organization as a system, contains two components. And that's difficult to do, and a lot of people really don't see the need for it. Jane Norman reminded Dave and me on a call we did last week that when you talk about a systems map with people, just ask 'em: how do they know what's going on inside other organizations, other departments within their organization? How do they know that? And most of us are so siloed.   0:32:11.2 Cliff Norman: Somebody over here is doing the best job they can in department X, and meanwhile, department Y doesn't know anything about it. And then three months later the improvement shows up, and all of a sudden there's problems now in department Y. Well, somebody who's focused on the organization as a system and sees how those processes are related comes to a management meeting and says, well, we've just made a change here, and this is gonna show up over here in about three months, and you need to be prepared for that. Andrew, that conversation never takes place. So the idea of having the systems map, and this book, can help you get started on that. The second book that Dave was just talking about, there are more replete examples in there.
I mean, we've got six case studies from clients in there for the practitioners and people who actually are gonna be doing this work.   0:33:01.7 Cliff Norman: That's gonna be absolutely... They're gonna need that field guide. And I think that's where Dave was coming from. The third activity is the information activity: how are we learning from outside the organization, and how do we get feedback and research into the development of new products and services and the rest of it? And so we provided a system there. In fact, Dave took a lead on that chapter, and we've got several inputs there that have to be defined. And people just thinking through that and understanding that is huge. When Dr. Deming went to Japan in 1950, he was there to do the census, to see how many Japanese were left after World War II. And then he got an invitation to come and talk to the top 50 industrialists. And he started asking questions, and there were people from the Bank of Tokyo over there and all the rest of it.   0:33:52.4 Cliff Norman: And Dr. Deming says, well, do you have any problems? And they said, what do you mean? He says, well, do customers call up and complain? And he said, yes. And he says, well, do you have any data? And he said, no. He says, but if they complain, we give them a Geisha calendar. And then Dr. Deming says, well, how many Geisha calendars have you given out? So it's like, in 1991, I'm sitting here talking to a food company and I asked him, I said, well, do you get customer complaints? Oh yeah. Do you have any data on it? No, but we give 'em a cookbook. I said, well, how many cookbooks are you giving out? So I was right back to where Deming was in 1950. So having the information activity, that third activity, is critical, so that we're being proactive with it and not just reactive.
Then the fourth activity is absolutely critical. This is where you know that you've arrived, because now you're going to integrate not only the plan to operate, but a plan to improve. That becomes the business plan. For most people, in a business plan they do a strategy, and then they have a bunch of sub-strategies, and they vote on what's important, and they do some other things, and then a year later they come back and revisit it. Well, what happens here is there's some strategic objectives that are laid out, and then immediately it comes down to, okay, what's gonna be designed and redesigned in this system? Which processes, products and services are gonna be designed? 'Cause we can all see it now, Andrew.   0:35:31.6 Andrew Stotz: Mm.   0:35:31.6 Cliff Norman: We can, it's right in front of us. So it's really easy to see at this point, and now we can start to prioritize and make that happen on purpose. As an example, when Jane was a vice president at Conagra, they came up with five strategic objectives. Then they made a bunch of promises to corporate about what they were gonna do and when they were going to achieve it. When she laid out the systems map for them, they were horrified that over 30% of the processes that they needed for precooked meat didn't even exist. They were gonna have to be designed. And so Jane and I sat there looking at 'em and said, well, if you'd had this map before you made the promises, would you have made those promises? No, no, we're in trouble right now. I gotta go back to the CEO of the holding company and tell 'em we're not gonna make it.   0:36:22.4 Cliff Norman: But there's a whole bunch of people that sit around in goal-setting sessions saying we're gonna do this by when, and have no idea what they're talking about. So that's a little bit dangerous here. And then the fifth activity, it's probably the most important.
And where I want people to start, I actually want 'em to start on the fifth activity, which is managing individual improvement activities, team activities. And what I mean by that is, nothing can hold you up from starting today on making an improvement and using the model for improvement. The three basic questions: you can write them on an envelope and apply them to a project and start right away, because you're learning the habit of improvement. And this is typical in the planning process, again a chapter that Dave took a lead on, the planning chapter.   0:37:03.8 Cliff Norman: When you lay that out, you're gonna come up with three to five strategic objectives, but that's gonna produce anywhere between 15 and 20 improvement efforts. And when people start three improvement efforts, they see how difficult that is to traffic through an organization. Having a systems map makes it a lot easier; if you don't have that, then there's all sorts of things that happen to you.   0:37:21.3 Andrew Stotz: Hmm.   0:37:22.8 Cliff Norman: But the idea of that all coming together is critical. And where that really shows up for the reader here is in chapter one. So Lloyd Provost took a lead on chapter one. If you read chapter one, you've got a pretty good idea of what's gonna happen in the rest of the book. But more importantly, in that book, in chapter one, there's a survey at the end. And every time we give this out to people, they feel real bad.   0:37:48.1 Cliff Norman: Well, Cliff, on a scale of one to 10, we only came up with a four. Well, what I would tell 'em is, if you can come up with a four, you're pretty good. And those fundamentals have to be in place. In other words, the management needs to trust each other. There are certain things that have to be in place before you can even think about skating backwards here. And quality as an organizational strategy is all about skating backwards.
The people who don't have the fundamentals can't even start to think about that.   0:38:15.0 Cliff Norman: So that survey and the gap between where they are at a four and where they're going to be at a 10, we've integrated throughout the whole book. So as you're reading through the whole book, you're seeing that gap, and then you have a good plan forward as to what do I need to do to get to be a six, an eight, and what do I need to do to finally arrive at a 10? Dave, why don't you add to what I just said there, and I gotta turn on a light here, I think.   0:38:39.2 Dave Williams: Well, I think one of the things that, and Cliff has probably been the one that has helped me appreciate this to the biggest degree is the role in which improvement plays in quality as an organizational strategy. So, I mean, I think in general, in our world, improvement is seen as kind of like a given, but in our case, what we've found is that many times people are not working on the things right in front of them or the problems in which they have, that they are on the hook... I like to say, are on the hook to get accomplished right now. And like Cliff mentioned, many of my clients when I engage with them, I say, well, what have you promised this year? And they'll give me a list and I'll say, well, okay, what are you working on to improve? And they'll be working on projects that are not related to that list of things that they've got to affect. And so usually that's a first pivot is to say, well, let's think about what are the things that you're working on or should be working on that are either designing or redesigning your system to achieve these strategic objectives.   
0:39:48.8 Dave Williams: And the reason to put the attention on that fifth activity and get people working on improvement, there's a good chance that the improvement capability within the organization currently isn't to the level that you need it, where you can get results-driven projects happening at a clip that will enable you to chip away at 20 projects versus four in a year. And that it's not well integrated into the leadership, into the support structures that you have. In addition, if you're trying to use improvement on things that you're on the hook for, and Cliff noted, especially if you've got a system map while you're on that journey, you're gonna start to pick up on where the disconnects are. Similar to your example, Andrew, where you were describing your experience working backwards in the process, you're going to start to recognize, oh, I'm working on this, but it's linked to these other things. Or in order for me to do this, I need that. Or... And so that amplifies the project to be kind of just a vehicle to appreciate other things that are interconnected, that are important in improving our work together.   0:41:05.1 Dave Williams: And so I think that that's a critical piece. I mean, I sometimes describe it as the disappointment that people have when they open QOS because they want to have a new method or a new thing to work on. I said, well, there's a lot new in here. And at the same time, we want to build on the shoulders of the fundamentals. We want to build it because it's the fundamentals that are going to be able for you to activate the things that are necessary in order for you to skate backwards, like Cliff was describing earlier.   0:41:36.2 Cliff Norman: I got to add to what Dave was saying because this actually happened to me with a... I'm not going to mention the name of the company, but it's a high-tech companies worldwide. 
And we got up, a good friend of mine, Bruce Bowles, and we were introducing the idea of quality as an organizational strategy. And one of the guys in the front row, he says, Cliff, this just sounds like common sense, why aren't we all doing this? I said, that's a real good question. Let me put that in the parking lot here. So I put it up on a flip chart. And so we went through the idea of... We were working on Shewhart control charts. And so we showed him one of those. And at the end of all that, he raised his hand and I said, yeah, he says, Cliff, this is hard. I said, well, let me put that up here. This is hard. Then we went through the systems map and he says, look, this is hard. By the end of the two days, it was, this is hard, this is hard, this is hard, this is hard. This goes back to what Dave was saying earlier about once you open this page, there's some work that takes off, but more importantly, there's something new to learn here.   0:42:40.3 Cliff Norman: And that's frustrating to people, especially when they've got to quit doing what they've done in the past. It's what Deming says, you got to give up on the guilt and you got to move forward and transform your own thinking. So there's something here for the management to do. And if they're not willing to do that work, then this is probably not a good thing for them. Just go back to the blame flame and circling org charts and that kind of stuff and then wonder why we're losing money.   0:43:11.8 Andrew Stotz: Yeah, and I think that that's one of the things that we see in the Deming community is that, why are people doing it the way they are, dividing things up and doing KPIs and saying, you take care of that. And we're gonna optimize by focusing on each... We see how that all kind of falls apart.   0:43:27.9 Cliff Norman: It all falls through reductionism.   0:43:29.8 Andrew Stotz: [laughter] Yeah.   0:43:32.5 Cliff Norman: It doesn't understand the system, yeah.   
0:43:32.5 Andrew Stotz: Yeah, so what I want to do now is I was just thinking about a book on my shelf called "Competitive Strategy" by Michael Porter. And there's a whole field of study in the area of strategy for businesses. Now you guys use, and you explain a little bit about the way you come up with... Why you come up with organization rather than let's say company as an example. But let's just talk about strategy for a moment. Generally we're taught in business school that there's two main strategies. One is a differentiation strategy. I like to teach my students like Starbucks. It's very differentiated from the old model. And you can have a low cost strategy, which is like McDonald's, where it's all about operational efficiency.   0:44:18.4 Andrew Stotz: And those are two different strategies that can get to the same goal, which is to build a strong and sustainable business that's making a good profit for the employees to get paid well and for shareholders. And so for somebody that understands some of the foundations of typical strategy, it's hard for them to think, wait, wait, wait, what? You're just talking about just better quality is the strategy? How should they frame this concept of quality as a strategy in relation to what we've been taught about low cost and differentiation and other types of strategy? How do we think about this book in relation to that?   0:45:03.2 Cliff Norman: When Deming wrote his book, his very first one of the four "Out of the Crisis", which was the whole idea about quality and competitive position. But he was kind of answering that. And at that time, what we had is we had three companies in the United States that were going at each other, Ford, GM, and Chrysler. And they'd call each other up, well, what are you doing this year? Oh, we're making cars that don't work. Sometimes they break down. That's why we have Mr. Goodwrench to repair them. That's an extra revenue source for us. 
One of the executives actually challenged a colleague of mine; he said, you don't realize how much money we're gonna lose here taking the repair business out, because we make a lot of money out of repair. So making cars that don't work has been a good revenue stream for us. Well, all that works out great, until somebody shows up like Toyota that has a car that works and doesn't need to be repaired by Mr. Goodwrench all the time.   0:45:58.8 Cliff Norman: So the mind shift there, and what Dr. Deming was saying, is that if you focus on quality, the competition's already licked. And I don't think Porter's thought about that very much, not to be overly critical, because I'm an admirer of his, but the idea of focusing on the need and why that customer is coming to us, so that we make a journey, and the Japanese call that being in the Gemba, being in the presence with the customers as they use the product or service, and doing the research and the rest of it. And then coming back and redesigning that product or service so that it not only grabs the current customer, but we start thinking about customers that are not even our customers, and innovate and actually come up with a design that brings new customers to us through products and services that we haven't thought about yet. So if I show you three products, just to make a picture of it, we often show an abacus, which was a hand-calculating machine from around BC. Then there's the slide rule, which came out about the same year that Columbus discovered America. And that was good till about 1968.   0:47:06.0 Cliff Norman: And then the calculator, the handheld calculator, came out. Well, the need for all three of those products is to do handheld calculations. So we've had that need since BC. Now in 1967, K&E was making that slide rule, which I used in junior high school. If you'd have come up to me and said, Cliff, what do you need in the way of a better slide rule? I'd have said, well, can you get me a holster for it?
'Cause I don't like having it stick me in the face. I put it in my pocket and it sticks me in the face. And if you can give me a holster for that, that would be my view of it. I wasn't about to come up with the TI calculator. That wasn't gonna happen. Not from Cliff. It's gonna come from an engineer at TI. Now, K&E, if they'd been doing research in the marketplace and asking, is there something going on here that can totally disrupt us? Rather than just figuring out a way to make the K&E slide rule better, they might've discovered that.   0:48:07.0 Cliff Norman: Most people don't do that. They just go back. They just lose their business. And it was interesting: in '67, their annual report put out, what's the world gonna look like 100 years from now? So they had dome cities, they had cars flying, they had all sorts of things going on that were great innovations, but they didn't have the TI calculator in there, along with the HP calculator. And that wiped out their business. And so if people understand the need, and that's what Dr. Deming is getting at, he says, they really haven't thought about what business they're in. So why are the customers coming to us? He says, no customer ever asked for a pneumatic tire. No customer ever asked for a microwave oven. That came from people with knowledge who were looking at how the customers are using the current products and services and saying, now, is there technology innovation going on so that we can actually do a better job of providing a better match in the future?
That might be something to do with some features what I'm offering and so forth. I'd like to have this, I'd like to have that. But the need and the way we're using that is it doesn't come from customers. It's what drives customers to us. And it's always been there. It's always been there. Need for transportation, for example. Whether you're walking or driving a bicycle or a car or a plane.   0:49:53.6 Andrew Stotz: And Dave, how would you answer the same question when you think about a person running a business and they've had many strategy meetings in their business, they've set their corporate strategy of what we're doing, where we're going and that type of thing. And maybe they've picked, we're gonna be a low cost producer. Thailand's an interesting one because Thailand had a ability to be low cost producers in the past. And then China came along and became the ultimate low cost producer. And all of a sudden, Thai companies had a harder time getting the economies of scale and the like. And now the Chinese manufacturers are just really coming into Thailand, into the Thai market. And now it's like, for a Thai company to become a low cost leader is almost impossible given the scale that China and the skills that they have in that. And so therefore, they're looking at things like I've got to figure out how to get a better brand. I've got to figure out how to differentiate and that type of thing. How does this... How could this help a place like that and a management team that is struggling and stuck and is looking for answers?   0:51:07.0 Dave Williams: Well, I go back to what Cliff said about that many organizations don't pause to ask, why do they exist? What is the need of which they are trying to fulfill? Much of my background involved working in the service industry, initially with public safety and ambulance systems and fire systems, and then later in healthcare and in education. 
And in many of those environments, especially in places where in public systems where they've been built and they may have existed for a long time, when you ask them about what are they trying to accomplish as an organization or what is it that they... The need that they're trying to fulfill? Typically, they're gonna come back to you with requests or desires or wants or sort of characteristics or outcomes that people say they expect, but they don't pause to ask, like, well, what is the actual thing of which I'm trying to tackle? And Cliff mentioned like, and we actually, I should mention in the book, we have a list of different strategies, different types of strategies, all the different ones that you mentioned, like price and raw material or distribution style or platform or technology.   0:52:30.9 Dave Williams: There's different types of strategies, and the one that we are focusing in on is quality. But I think it's important for people to ask the question. Cliff mentioned transportation. There's a number of different great examples, actually, I think in transportation, where you could look at that as being an ongoing need as Cliff mentioned from the days when there was no technology and we were all on foot to our current day. Transportation has been a need that existed and many different things over time have been created from bicycles, probably one of the most efficient technologies to transport somebody, wheels and carts. And now, and you were referencing, we've made reference to the car industry. It's a fascinating experience going on of the car world and gas versus electric, high technology versus not, autonomous vehicles. There's, and all of them are trying to ask the question of, are there different ways in which I might be able to leverage technology to achieve this need of getting from point A to point B and be more useful and potentially disrupt in the marketplace? 
And so I think the critical thing initially is to go back and ask and learn and appreciate what is that need?   0:53:58.6 Dave Williams: And then think about your own products and services in relation to that. And I think we include four questions in the book to be able to kind of think about the need. And one of those questions is also, what are other ways in which you could fulfill that need? What are other ways that somebody could get transportation or do learning or to help sort of break you away from just thinking about your own product as well? And that's useful because it's super tied to the system question, right? Of, well, this is the need that we're trying to fulfill and these are the products and services that are matching that need. Then the system that we have is about, we need to build that and design that in order to produce, not only produce the products and services that match that need, but also continually improve that system to either improve those products and services or add or subtract products and services to keep matching the need and keep being competitive or keep being relevant. And maybe if it's not in a competitive environment where you're gonna go out of business, at least be relevant in terms of the city service or community service, government service that continues to be there to match the need of the constituents. So I think it's a really important piece.   0:55:17.0 Dave Williams: It's that North star of saying, providing a direction for everything else. And going back to your original comment or question about strategy, and many times people jump to a strategy or strategies or, and those might be more around particular objectives or outcomes that they're trying to get to. It may not actually be about the method or the approach like cost or technology that they may not even think that way. They may be more thinking about a plan. 
And I really encourage people to be clear about what they're trying to accomplish and then start to ask, well, how's the system built for that? And later we can bring a process that'll help us learn about our system and learn about closing that gap.   0:56:05.1 Cliff Norman: Yeah. Just what I'd add to that, Andrew, because you mentioned China, a few other countries, but I think the days are coming to an end fairly quickly where somebody can say, oh, we can go to this country. They have low wages, we'll put our plant there and all that. There's a lot of pushback on that, particularly in the United States. And if that's your strategy, that hadn't required a lot of thinking to say the least. But in 1966, over 50% of the countries in the world were, let me rephrase that, over 50% of the population of the world lived in extreme poverty. So there were a lot of targets to pick out where you want to put your manufacturing. And in 2017, and you and Dave were probably like myself, I didn't see this hit the news, but that figure had been reduced from over 50% down to 9%. And all you have to do is just, and I worked in China a lot, they're becoming very affluent. And as they become very affluent, that means wages are going up and all the things that we want to see throughout the world. And I think that's happening on a grand scale right now, but you're also getting a lot of pushback from people when they see the middle class in their own country, like here in the United States, destroyed, and say, I think we've had enough of this. And I think you're gonna see that after January. You're gonna see that take off on steroids.   0:57:31.7 Cliff Norman: And that's gonna happen, and I think throughout the world, people are demanding more, there's gonna have to be more energy, every time a baby is born, the footprints gets bigger for more energy and all the rest of it. So it's gonna be interesting, and I think we are going into an age for the planet where people as Dr. 
Deming promised, that they'd be able to live materially better, and the whole essence of this book is to focus on the quality of the organization and the design and redesign of a system to do a better job of matching the need and cause that chain reaction to go off. When Jane and I went over to work in Sweden, Sven Oloff, who ran three hospitals and 62 dental clinics there and also managed the cultural activities in Jönköping, said, Cliff, I report to 81 politicians. I don't wanna have to go to them to put a bond on an election to get more money for my healthcare system. I wanna use Dr. Deming's chain reaction here to improve care to the patients in my county and also reduce our costs. There are a whole bunch of people that don't even believe that's possible in healthcare.   0:58:39.9 Cliff Norman: But that's what Sven Oloff said: that's what you're here for. And that's what we proceeded to do. They launched about 350 projects to do just that, and one of their doctors, Dr. Motz [?], he's amazing. We taught him a systems map, I came back two months later, and he had them in his hospital on display. And I said, Motz, how did you do this? He said, well, Cliff, I'm an endocrinologist by education as a doctor, of course, that's a person who understands internal systems in the body. So he said the systems approach was a natural for me. But I wish I could say it was that easy for everybody else, that systems map idea. And as you know, being in the Deming seminar, that's quite a challenge, to move from viewing the organization as an org chart, which has been around since Moses' father-in-law told him, you need to break up the work here a little bit, and the tens of tens reporting to each other. And then of course, the Romans took that to a grander scale, and so a centurion soldier had 100 other soldiers reporting to him. So we've had org charts a long time, and our federal government took that to a whole new level.
0:59:46.1 Cliff Norman: But the idea is switching off the org chart from biblical times to actually getting it up to Burt [?] about 1935 and understanding a system that's kind of a nose bleed in terms of how much we're traveling there to get us into the 21st century here.   1:00:04.0 Andrew Stotz: And I left Ohio, I grew up outside of Cleveland, and I left Ohio in about 1985, roughly. And it was still a working class, Cleveland had a huge number of jobs and there was factories and all that, and then I went to California, and then I moved to Thailand in 1992. So when I go back to Ohio now, many years later, decades later, it's like a hollowed out place, and I think about what you're saying is... And what's going on in the world right now is that I think there's a desire in America to bring back manufacturing to bring back production and all of that, and that's a very, very hard challenge, particularly if it's gone for a while and the skill sets aren't there, maybe the education system isn't there, I talk a lot with John Dues here on the show about the what's happening in education and it's terrifying.   1:01:05.9 Andrew Stotz: So how could this be... Book be a guide for helping people that are saying, we've got to revitalize American production and manufacturing and some of these foundational businesses and not just services, which are great. How can this book be a guide?   1:01:25.8 Dave Williams: One thing I would say that I think is interesting about our times, many times when I reflect on some of the examples that you just provided, I think about how changes were made in systems without thinking about the whole system together. And there may have been changes at various times that we're pursuing particular strategies or particular approaches, so it may have been the low-cost strategy, it may have been to disrupt a marketplace. And oftentimes, they don't think about... 
When somebody's pursuing one particular view, they may miss other views that are important to having a holistic perspective. One of the things that I appreciate about QOS, in the methods and overall as a holistic view of looking at organizations, is that it's asking us to really think initially about that North Star: what we're trying to do, our purpose, and what are the tenets. What are the things that are important to us, the values...   1:02:38.7 Dave Williams: That are important to us in pursuing that particular purpose? And in doing that, really thinking about how does the system work as it is today, and if we make changes, how does it move in alignment with the values that we have and in the direction that we wanna go? And appreciating, I would say, part of the value of the scientific thinking that is in the Science of Improvement is that it encourages you to try to see what happens and appreciate not only what happens in relation to the direction you're trying to go, but also to have a balanced view of looking at the collateral effects of things that you do, and I think the systems view is really important there. So I think from that perspective, quality as an organizational strategy brings a holistic picture into these organizations, or at least...   1:03:45.1 Dave Williams: To be paying attention to the system that you have, maybe the direction you wanna go, and what happens as you... What are your predictions, and what do you see when you study the results of making changes in the direction of the vision that you have? And I think that, at a high level, is one of the ways that I think about it. Cliff, how would you add on there?
But what happened today, you should have been here and Jane said, what... She said, Remember that 10 year thing we buried in the ground that we're gonna open up in 10 years, and she said, yeah, said, well, we opened it up today, and the new plant manager was here, and those Shewhart charts came out, and he looked at the costs on them. He said, you were operating at this level? She said, yeah, routinely. And he said what happened? She said, well, they had new management come in and they got rid of the charts, that's the first thing they did, and then gradually they tried to manage things like they normally did, and then they forgot everything that we had learned. And that's kind of where we are right now.   1:05:11.0 Cliff Norman: So just think of that, a decade goes by, and it's just as Dr. Deming said, there's nothing worse than the mobility of management, it's like getting AIDS in the system. And they basically destroyed their ability to run a low-cost operation in an industry that runs on 1 or 2%. And when you watch that happen and understand that we still have food companies in this country, and we have to start there and start looking at the system anew and start thinking about how it can actually cause that chain reaction to take off, and that comes from focusing on quality of the system. And then as Dr. Deming says, anybody that's ever worked for a living knows why costs go down with two words: less rework. But instead, people will put in extra departments to handle the rework. Next thing they start building departments to handle...   1:06:01.8 Cliff Norman: The stuff that's not working because the system they don't understand. So that was a... What do they call those things, Dave, where they put them in the ground and pull him out?   1:06:11.0 Dave Williams: Time capsule.   1:06:13.4 Andrew Stotz: Time capsule yeah.   1:06:13.5 Cliff Norman: Yeah. Time capsule. A 10-year time capsule.   1:06:19.2 Andrew Stotz: It's a great, great story. And a great idea.
We had a company in Thailand, a very large company, whose CEO came upon the teachings of Dr. Deming, and over time, as he implemented it in his company, the Union of Japanese Scientists and Engineers awarded their prize and his company won it, and then he had about 10 subsidiary companies that also were doing it and they also won over time. And so Thailand is actually the second largest recipient of the Japanese Deming Award outside of India. But he left and he retired and another guy took over, a very bright guy and all that, but he threw most of that out and focused on newer methods like KPIs and things like that. And just at the end of last year, maybe six months ago, they reported a pretty significant loss, and it made me think how we can spend all this time getting the Deming teachings into our business, and then one little change in management and it's done.   1:07:26.9 Andrew Stotz: And that made me think, oh, well, that's the value of the book, in the sense that it's about building the concept of quality as a core part of strategy as opposed to just a tool or a way of thinking that could go out of the company as soon as someone else comes in. Go ahead, Dave.   1:07:41.9 Dave Williams: I was gonna say, Andrew, you raise a point, I think it's really, really important and Cliff mentioned this in terms of the problem of mobility of management. One thing that I don't know that we outline probably in dark enough ink in the book is the critically important piece of leadership, building the structures and the capability. I know we talk a little bit about it, but doing it in a way that both builds up the people that you have... So Cliff emphasiz

The Health Formula Show
334: The Synergised App, Mushroom Coffee, and the 80/20 Rule


Play Episode Listen Later Feb 3, 2025 17:03


What if your morning coffee could fuel your health instead of working against it? In today's episode, we reveal a healthy coffee alternative that ditches the nasties but keeps the ritual you love. It's Paula's latest morning routine swap and it is a total game-changer! We also share a brand-new offering from Synergised designed to help you thrive and how the 80/20 rule can make healthy living easier (and way more enjoyable). Ready to upgrade your habits and feel amazing? Let's dive in! Tune in to hear: Our labour of love - behind the scenes (2:54) What to expect from the app (5:10) The best coffee alternative to try (8:22) Top 2 adaptogens your body needs (10:33) The Pareto principle (13:11) Head to www.paulabenedi.com/episode334 for the show notes Join our newsletter: www.synergised.info/newsletter Follow Synergised on Instagram: @synergiseduk Follow Paula on Instagram: @paulabenedi . P.S. This podcast and website represent the opinions of Paula Benedi. The content here should not be taken as medical advice and is for informational purposes only, and is not intended to diagnose, treat, cure, or prevent any disease. Please consult your healthcare professional for any medical questions.

Agent Power Huddle
Keeping it REAL Estate with Sara: Time Management 2025 | Sara Delansig | S18 E21


Play Episode Listen Later Feb 2, 2025 27:14


Sara hosted her real estate show, "Keeping it Real Estate," discussing the importance of time management and productivity in the industry. She introduced various time management techniques, including the Pareto principle, the Eisenhower matrix, and the Pomodoro technique, and shared her personal experiences with these methods. Sara also emphasized the importance of balancing work and personal life, and suggested strategies like task batching, time boxing, and identifying inefficiencies to optimize productivity.
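As a rough illustration of one of the techniques Sara names (this sketch is not from the episode, and the task names are invented), the Eisenhower matrix reduces to sorting tasks by two flags, urgent and important:

```python
# Illustrative sketch: sorting a task list into Eisenhower-matrix
# quadrants by urgency and importance. Task names are made up.

def eisenhower_quadrant(urgent: bool, important: bool) -> str:
    """Map a task's (urgent, important) flags to the four classic quadrants."""
    if urgent and important:
        return "Do first"
    if important:          # important but not urgent
        return "Schedule"
    if urgent:             # urgent but not important
        return "Delegate"
    return "Eliminate"     # neither urgent nor important

tasks = [
    ("Call back buyer with offer deadline", True, True),
    ("Plan next quarter's marketing", False, True),
    ("Confirm routine appointment", True, False),
    ("Reply to vendor newsletter", False, False),
]

for name, urgent, important in tasks:
    print(f"{eisenhower_quadrant(urgent, important):9s} | {name}")
```

Task batching and time boxing then operate on the "Schedule" quadrant, which is where most deep real estate work (prospecting, follow-ups) tends to live.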

Paretopodden
The future of renewables: Highlights from our power and renewables conference


Play Episode Listen Later Jan 30, 2025 16:39


On Thursday, January 30, for the 27th year in a row, we host our renewables conference, the Power & Renewable Energy Conference, in Oslo, together with over 900 participants and 75 listed and unlisted renewable energy companies. This year's conference focuses on the latest trends in the power and renewables market, including AI and data center demand, energy security, China's rise on the supplier side, and our outlook for the renewables market. The conference opens with Minister of Climate and Energy Terje Aasland, Harald von Heyden (Oljefondet/NBIM), Birgitte Vartdal (Statkraft), and our Head of Power & Renewable Energy, Lars Ove Skorpen. In Paretopodden with Sebastian Baartvedt, Skorpen walks through the key takeaways from his presentation and the trends he sees in today's power and renewables market. Contact your Pareto broker for presentations from the conference. As a Pareto client, you can find all our research coverage in the client trading and research platform: https://online.paretosec.com/research Not a client yet? See what we offer Norwegian retail clients: https://www.paretosec.no/aksjehandel-paa-nett/verdipapirhandel/aksjehandel-paa-nett Disclaimer: Pareto Securities' podcasts do not contain professional advice and should not be regarded as investment advice. Trading in securities always involves risk, and historical returns are no guarantee of future returns. Pareto Securities is neither legally nor financially liable for direct or indirect losses, or other costs that may arise from the use of information in this podcast. See our website https://paretosec.com/our-firm/compliance/ for more information and a full disclaimer. Hosted on Acast. See acast.com/privacy for more information.

Unleashed - How to Thrive as an Independent Professional
597. Jim Ettamarna, A Framework for Commercial Excellence


Play Episode Listen Later Jan 27, 2025 30:58


Show Notes: Jim Ettamarna, a renowned expert in commercial excellence, defines it as incorporating commercial efficacy and efficiency. He believes there are two key branches to drive down in this area, and that it holds tremendous potential for clients and organizations. Jim's framework for commercial excellence is value creation, which involves understanding market demand, go-to-market models, market growth, and demand trends with a focus on each specific industry.

A Six Sigma Lean Framework
Jim uses a lean framework, starting with Six Sigma, to standardize the right work and ensure associates and employees are conducting the right activities and behaviors. He also emphasizes the importance of systems and psychology in commercial results, as they help design standardized systems for onboarding talent, enhancing team engagement, and engaging with customers. In sales, motivation is crucial, and the human element of having a team is essential. However, dealing with complex buying processes can be challenging, so it is essential to tune processes and approaches to the specific needs of the customers.

A Go-to-Market Model
The go-to-market model is the linkage between strategy, execution, and commercial excellence. It should be tuned to the company's strategy and strategic context. Consider, for example, a $300 million middle-market private equity-backed company serving the durable medical equipment market that sold to 5,000 independent organizations and specialty retailers. The company had to strategically think through market growth, accounts to capture, and the buying cycle for customers. To drive efficiency and effectiveness, the company had a set of building blocks, including an online component, independent sales reps, an inside sales team, and specialty salespeople. The strategy piece involved determining what would drive value, growth, renewals, base volumes, and pricing.
The go-to-market model was designed around these building blocks, and commercial excellence was driven by optimizing these aspects.

Components of Commercial Excellence
Jim discusses the importance of breaking down commercial excellence into various components, including channels, sales operations, content, and management systems. He emphasizes the need for segmentation at the top level to understand what will drive value and optimize the go-to-market model for the business. Within this model, he suggests ways to optimize each element, such as sales enablement, which includes training, scripts, and engagement strategies. He also emphasizes the importance of benchmarking and understanding the nuances of sales teams. He shares an example of a furniture retailer with 2,500 full-time employees and 1,000 part-time employees. The company's performance was analyzed using Pareto curves, and some outliers were far more successful than average. To replicate these outliers, he spent time in the field with the best sellers and identified their backgrounds and profiles. He also highlights the importance of identifying B-plus and A-minus players and setting them as standards; the A-plus players are often unique individuals who are difficult to replicate, but teams can still learn from them. Segmentation is crucial in understanding customer nuances.

Value Mapping and Needs-Based Segmentation
In the past, value mapping and needs-based segmentation were crucial for designing sales teams and engaging with customers. This was particularly important when selling software into hospital systems, where hospitals may make localized decisions or have a system or GPO that drives these decisions. The CIO or a clinical or nursing professional may specify the solution, and the CIO and finance will negotiate it. Jim cites a big-client engagement that involved segmenting the market and designing selling approaches based on how customers operated and how they bought.
This involved investing in customer success research, conducting field interviews, and running surveys to understand product usage. The consultant rolled out five archetypes and profiles for four segments, which were then rolled into product development and product teams. Different teams focused on different segments, such as geographic, size, SMB, or enterprise, with an emphasis on needs-based and purchasing-behavior-based segmentation. The go-to-market model was designed around these archetypes, with territory design considering geographic, size, SMB, or enterprise boundaries. There is no right or wrong answer to this, but it is essential to consider these factors when designing the go-to-market model. This approach helps to understand the value in use and what drives value for customers.

Diagnostics and Metrics
The conversation turns to commercial excellence in organizations, particularly in B2B industrial or SaaS sectors. Jim emphasizes the need for a diagnostic assessment to understand opportunities and challenges. A diagnostic should focus on input and output metrics, such as sales reps' success, territories, and numbers. He suggests that data from sales operations and rev ops can be used to conduct quick diagnostics. Additionally, examining spreads and distributions helps identify bright spots and dark spots, which are indicators of opportunities and challenges. For example, working with a labeling client, he identified bright spots where individuals were selling into unique markets and promoting innovative products; these best practices could then be disseminated among the team. A diagnostic should involve analytics, costs, interviews with salespeople, and customer visits to gather customer feedback. The goal is to identify three to five things that can be done to achieve commercial excellence. Jim also offers tips on how to work with the sales department.
The Role of a Sales Playbook in Commercial Excellence
Jim talks about the importance of rolling out a sales playbook and its role in commercial excellence. He shares an example of a software company that he helped develop a sales playbook for, which focused on making work standard and minimizing waste. The company had three different sales processes, and they trained employees on territory management, account management, and prospecting. They created a set of 10 difference makers based on actual activities performed by the best people, which were rolled out in a fun, gamified way to encourage adoption and recognition. Some of the key difference makers included prospecting, owning territory, and using Salesforce to drive compliance.

Metrics to Monitor in Sales
Jim mentions the importance of having the right input and output metrics, such as the number of meaningful meetings and demonstrations per week, to ensure the right outbound results. By tracking these metrics, the sales team can make necessary adjustments to improve their performance and drive more profitable deals. To drive results in sales, Jim highlights metrics such as deal size, velocity, win rates, attachment, cross-sell, and upsell. He also emphasizes the importance of driving customer success and retention. He mentions that, in one case, key initiatives were displayed at the office, allowing for a competitive dynamic. The metrics were then distilled down to the board, with some metrics for frontline commercial team members and others for the board pack. The goal was to turn the dial on sales enablement, resulting in better win rates and accelerated funnel velocity. Jim also highlights the importance of gamification, making it fun, and rewards to encourage employees to work harder and drive competitive juices.
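The Pareto-curve analysis of seller performance described above can be sketched in a few lines. The revenue figures below are invented for illustration; the point is that sorting sellers by revenue and accumulating their share makes the outliers obvious:

```python
# Illustrative sketch (revenue numbers are made up): a Pareto analysis
# of seller revenue — sort descending, accumulate share, and see how
# few reps account for most of the revenue.

revenue_by_rep = {
    "rep_a": 900, "rep_b": 620, "rep_c": 300, "rep_d": 90,
    "rep_e": 50, "rep_f": 25, "rep_g": 10, "rep_h": 5,
}

total = sum(revenue_by_rep.values())
cumulative = 0
for rank, (rep, rev) in enumerate(
        sorted(revenue_by_rep.items(), key=lambda kv: kv[1], reverse=True), 1):
    cumulative += rev
    # The first reps to push cumulative share past ~80% are the
    # outliers worth shadowing in the field.
    print(f"{rank}. {rep}: {rev:4d} (cumulative {cumulative / total:.0%})")
```

In this toy data set, three of eight reps carry roughly 90% of revenue, which is exactly the kind of skew that makes averages misleading and field time with the best sellers worthwhile.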
Timestamps:
01:32: Value Creation Framework
04:18: Go-to-Market Model
07:24: Tangible Elements of Commercial Excellence
11:10: Segmentation and Customer Nuances
14:18: Practical Segmentation Approach
18:18: Diagnostic Approach to Commercial Excellence
24:04: Sales Playbook and Metrics
29:50: Customer Success and Competitive Dynamics

Links:
Company website: https://www.suttongrowth.com/
LinkedIn: https://linkedin.com/in/jimettamarna

Unleashed is produced by Umbrex, which has a mission of connecting independent management consultants with one another, creating opportunities for members to meet, build relationships, and share lessons learned. Learn more at www.umbrex.com.

Relentless Health Value
INBW42: A Philosophical Rabbit Hole of Considerations for Plan Sponsors and Others


Play Episode Listen Later Jan 23, 2025 27:39


There have been two episodes lately that have sent me down a rabbit hole that I wanted to bring to your attention. Now, disclaimer: I know you people; you're busy. You listen on average to, like, 26 minutes of any given episode. So, yeah … look at me being self-aware. I say all this to say welcome to this inbetweenisode, otherwise known as The Rabbit Hole. But it's like a 20-something-minute rabbit hole, not a day-and-a-half retreat; so just be kind if you email me and tell me I forgot something or failed to dredge into a nuance or a background point. It might be that I just could not manage to pack it in. For a full transcript of this episode, click here. If you enjoy this podcast, be sure to subscribe to the free weekly newsletter to be a member of the Relentless Tribe. This rabbit hole really, really matters for anybody creating benefit design. It really matters for anybody trying to optimize the health that can be derived from said benefit design. It also probably matters for a whole lot of operational decisions involving patients or members, nothing for nothing. But it really matters for anybody trying not to, by accident, as an unintended consequence, hammer plan members or patients with some really blunt-force cost containment measures that do a lot of harm in the process of containing costs or, flip side, accidentally cost a whole lot but don't actually improve member health. Nina Lathia, RPh, MSc, PhD, kind of summed up this whole point or gave an adjacent thought really eloquently in episode 426. She said there's better or worse ways to do things and doing the worst kinds of cost containment may not actually contain costs. You squeeze a balloon, and that works great for some, like pharmacy vendors who don't really have any skin in the game. (See me using the “skin in the game” term for other people besides plan members? That's some really good foreshadowing right there, by the way.) 
So, squeezing the balloon works for some when they don't have skin in the game, in the place where the air goes when you squeeze the balloon—like a pharmacy vendor who makes it super unaffordable for patients to get meds so the patient doesn't take their meds and winds up in the ICU, or the patient's formerly controlled with meds condition that is now newly uncontrolled and requires all kinds of medical interventions to get said condition back under control. Like, these are the reasons and the why behind why some cost containment efforts don't actually contain costs at the plan level. But not at the vendor level. You see what I mean? Most pharmacy vendors don't get penalized if medical costs wind up going up. And I'm picking on pharmacy vendors a little bit here, but it's true for a lot of siloed entities. But, you know, balloon squeezing can also work, actually, at the plan level if where the air goes, it's to a place where the member or the patient has to pay themselves. Like, if there's a huge, I don't know, max out of pocket or deductible, does it really matter to a very mercenary plan that's running on a very short time horizon? Do they really care, that plan, if the patient's formerly controlled condition gets uncontrolled? Maybe not, I guess, as long as it doesn't cost more than the max out of pocket that the patient is on the hook for, for any given plan year. So, yeah … again, there are better or worse ways to do things; and a lot of questions kind of add up to, What kind of plan do we want to be? What are our values, and does the plan align with them? But that's not the rabbit hole I wanted to go down today—the aligning with our values rabbit hole—so let us move on. The Relentless Health Value episode that kicked off the rabbit hole for me on multiple levels was the show with Bill Sarraille (EP459) about co-pay maximizers and accumulators. 
And don't get mevwrong, that is a complicated topic with lots of pros, lots of cons; and I am not weighing in on the inherent lawfulness or value of any of this. I am also not weighing in on the fact that there are forthright and well-run maximizers and really not good ones, which cause patients financial, for sure, and possibly clinical harm. But not talking about that right now at all. Go back and listen to the show with Bill Sarraille if you are interested. Where my “down the rabbit hole” spiral started was when I started noticing the very, very common main plan pushback that was given right out of the gate so often when talking about the problems that any given plan sponsor has with these pharma co-pay programs—that if these pharma co-pay card dollars count toward the plan deductibles, then the patient's deductible gets met and the plan member will then often overuse healthcare and cost the plan excessive dollars from that point forward. So again, if you ask any given plan sponsor what I was gonna say their main issue but a main issue that they have with these pharma co-pay programs, that's gonna be it—that if these pharma dollars count toward the plan deductible, then the patient's deductible is met and from that point henceforth, the patient goes nuts and overuses healthcare services and it costs the plan a lot of money. The second episode causing this rabbit hole to open up is the one coming up actually with Scott Conard, MD. So, check back in a couple of weeks for that one. But in the show with Dr. Conard, we get into the impact of high-deductible health plans or just big out of pockets, however they transpire in the benefit design. Both of these scenarios, by the way, the maximizer meets the deductible scenario and the very, very high-deductible plan scenario are to blame, in other words, for this rabbit hole of an inbetweenisode. So, let's do this thing. Let's talk about the moral hazard of insurance to start us off.
In the context of health insurance, if you haven't heard that term moral hazard before, it's an economics term; and it is used to capture the idea that insurance coverage, by lowering the cost of care to the individual, because their plan is paying for part of said care, by lowering the cost of care to the individual, it increases healthcare use. So, you could see why this may be related to having a deductible fully paid or not. Pre-deductible, the plan is not paying for a part of said care or paying a much smaller part. And after the deductible is paid for, then the plan is paying for a much larger percentage of care. So, moral hazard kicks in bigger after the deductible is fully paid, when the plan is paying for a bigger percentage or a bigger part of the care. So, before I proceed, let me just offer again a disclaimer to the many economists who listen to this show that this is a short inbetweenisode; so I am 100% glossing over some of the points that, for sure, have a lot of nuance. For anyone who wants a thick pack of pages for background reading, I have included some links below. Because you see, a few weeks ago, my Sunday did not go as planned. And instead of running errands, I wound up reading eight papers on moral hazard. So, my lack of groceries is your gain. You're welcome. I am happy to send you these links if you really want to dig in hard on this. Okay … so, moral hazard is the concept that individuals have incentives to alter their behavior when their risk or cost is borne by others. That's the why with deductibles, actually. We gotta give patients skin in the game because once a member has their deductible paid, it's like member gone wild and they will get all manner of excessive care.
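The deductible cliff behind this argument is easy to see in numbers. Here is a minimal sketch (the plan parameters are invented for illustration, not taken from the episode) of what a member pays for the next $100 of care before and after meeting a deductible:

```python
# Illustrative sketch (invented plan parameters): a member's cost for
# one claim under a simple deductible + coinsurance + out-of-pocket-
# maximum design. spent_so_far is the member's out-of-pocket spend
# accumulated earlier in the plan year.

def member_pays(claim: float, spent_so_far: float,
                deductible: float = 3000.0,
                coinsurance: float = 0.20,
                oop_max: float = 6000.0) -> float:
    """Member's out-of-pocket cost for a single claim."""
    # Portion of the claim that still falls below the deductible.
    below_deductible = max(0.0, min(claim, deductible - spent_so_far))
    # Above the deductible, the member pays only the coinsurance share.
    coinsured = (claim - below_deductible) * coinsurance
    cost = below_deductible + coinsured
    # The out-of-pocket maximum caps total member spending for the year.
    return min(cost, max(0.0, oop_max - spent_so_far))

print(member_pays(100, spent_so_far=0))     # pre-deductible: member bears the full claim
print(member_pays(100, spent_so_far=3000))  # post-deductible: member pays only coinsurance
```

With these made-up numbers, the member's marginal cost for the same $100 of care drops from $100 to $20 the moment the deductible is met, and to $0 once the out-of-pocket maximum is reached, which is exactly the price drop that the moral hazard literature says changes behavior.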
Again, I hear that a lot from plan sponsors—a lot, in all kinds of contexts but almost always, again, whenever the conversation has anything to do with manufacturer co-pay card programs and a lot when it has to do with just, you know, high-deductible plans and what happens when the patient meets their deductible. Once a patient or family has a fully paid deductible, their medical trend is like a spike, I hear over and over again. And again, this is the reason why many insist—and again, no judgment here, maybe they're right, I'm just rehashing the conversation—but this is why many insist the moral hazard of letting people have their deductible paid for them by Pharma or whatever is the reason why some believe it is imperative to have maximizers or accumulators where pharma dollars can absolutely not apply to patient deductibles. Because then we have sick patients who now have their deductibles reached, who have very few financial disincentives to go seek whatever care they want. Right. Moral hazard has entered the building. I've beaten this point to death, so let's move on. One time, I asked a plan sponsor, What exactly is it that these plan members are going wild spending plan money on once their deductible gets paid off? And he said, well, you know, they go get their suspicious-looking moles checked. Did you hear that silence just now? Yeah, that was my reaction. I don't know. I would consider getting suspicious moles checked kind of high-value care. There are posters all over the place saying if you have a suspicious-looking mole, it might be melanoma. Cancer. So, you should get ahead of that before you have a metastasized cancer. I'm no doctor, but yeah, this feels like high-value care. So, let's just, in arguendo, say it is high-value care and follow this thread for a sec. 
Once members reach their deductible, let's say they run around and get high-value care, care they actually need but haven't gotten before because they couldn't afford it earlier or were putting it off until they saved up enough, right? Like, this is the other side of the moral hazard coin. If patients delay or abandon care—and, by the way, there was a survey (it's in the Wayne Jenkins, MD, show from a while ago [EP358])—but 46% of patients with commercial insurance these days have delayed or abandoned care due to cost. But if they delay or abandon care that is high value and medically actually necessary and they put it off or abandon that high-value care because they cannot afford said care, then yeah, we have, again, the opposite of the moral hazard problem. We have members paying a whole lot for insurance that they cannot afford to use, they're functionally uninsured, and it's not gonna end healthfully if they need high-value care and they're not getting it. It's not. Functionally uninsured patients who have chronic conditions that really should be managed will, as per evidence, wind up with health problems if those chronic conditions are not managed. I read another study about this just recently. This is why members with chronic diseases on high-deductible health plans tend to have worse health, by the way. Now, I need to say, same rules do not always apply for healthy patients who, at least at this point, don't need regular healthcare. But do keep in mind, as it comes up in the Dr. Scott Conard show, 30% of patients who think they're healthy, they feel fine—actually they are not fine and will become sick and costly in the coming years. So, yeah … tune back in for that discussion if you are interested, but you get the gist of this whole thing, right? So, that's scenario 1 as to what patients may choose to buy once they're in the moral hazard zone and have met their deductible. They go get high-value care. 
So, let's move on from the high-value care case study where patients reach their deductible and get high-value care or they haven't met their deductible and fail to get care they actually need. I want to circle over to the other moral hazard potential situation: patients who meet their deductible. And in this scenario, they again embark on a health system jamboree; but they don't get a whole lot of high-value care in this scenario. They run around getting all manner of all kinds of stuff that is well outside of any evidence-based pathway. Like, weird example, I went to a doctor recently asking a question about something that everyone ultimately agreed was nothing. At which point, the doctor asked if I wanted an MRI. I was like, “What?” We and everyone else just agreed this was a big nothing burger. Why would I want an MRI? Is there something else that we didn't discuss to indicate that I need imaging? Like, why are we going there? And the doc said, “Oh, well, everyone in New York City has an anxiety problem. So, I thought you might just want to get an MRI.” Yeah, low-value stuff like that is now not financially prohibitive. So, someone who had met their deductible, in a similar situation to my example, might have shrugged and said, “Sure, I do have some anxiety. Let's go get that MRI.” Or if they hadn't met their deductible, then the whole skin-in-the-game, market-driven approach may work, I guess, to prevent them from getting low-value care that was clearly excessive and pretty wasteful. So, summing up these two scenarios, the implications of the moral hazard issue are, if it's expensive, people don't do it. If it's free or cheap, they will overutilize. And the issue with both of these patient choices is, patients are not good at discerning low-value care from high-value care. 
And because patients are not good at discerning high-value from low-value care, moral hazard is not mitigated with any sort of binary kind of vote for moral hazard or against moral hazard types of brute-force, broad-stroke tactics. Like, say I'm a moral hazard full-on believer. I assume all or most of the care a patient will go for is low value, right? Because if I try to prevent moral hazard from happening, then by default, what I'm effectively saying is, whatever they choose to buy on the basis of moral hazard is low value. So, I make basically everything I can pretty unaffordable so as not to invoke any moral hazard. But right, the problem with that is that some of the care is actually high value. And it's also expensive for the patient, so they don't get it. And patients are harmed, and balloons might get squeezed. Or the opposite, against moral hazard, right? Like, I'm against the concept of moral hazard. I don't believe in it, so I don't set up absolutely anything to combat it. Maybe because I assume all care that a patient might want to get is actually high value and totally worth it. That's gonna be a problem for the opposite reason. Plans can waste a lot of money this way. Random example, in 2014, the Commonwealth of Virginia reported spending $586 million on unnecessary costs from low-value care. I mean, they say something like a third of all care is waste and unnecessary, so … yeah. Plan sponsors can waste a lot of money on low-value care, and a bunch of that may happen when patients have less skin in the game because they reach their deductible, as one example, and the care is not financially prohibitive and moral hazard is realized. So, yeah … as I said, a couple of weeks ago, I did not spend my Sunday as planned. I spent my Sunday reading papers about moral hazard in insurance and how financial incentives impact patient decision making. And I'm gonna repeat the grand takeaway because this is a podcast and you might be multitasking. 
So, once again, here's the sum of it all: If it's expensive, people tend not to do it. If it's free or cheap, they will overutilize. And the issue with both of these patient choices is, patients are simply quite bad at distinguishing high-value care from low-value care. Once their deductibles are met, most patients will—due to moral hazard—they will, in fact, go on a spending spree; and part of what they will get done will be really, really important and necessary stuff, like getting their unusual moles looked at or their heart pain checked out or going for that follow-up visit or lab work that their doctor told them they need to come in for. And the other part of what they will do will be things that are outside the best-practice, evidence-based pathway guidelines by the length of the Appalachian Trail—you know, doing what appears to be a tour of specialty medicine physicians for unclear reasons but which lead to a cascade of testing and who knows what else. Why do they do this, these members? Do they do this on purpose? No. There is study after study that shows, again, members/patients do not, most of the time, have the chops to figure out if some medical service is high-value or low-value care. And no kidding. Most members and patients have no clinical training. They're not doctors. They're not nurses. They're not physician assistants. They're humans whose uncle died of cancer, and now they have a pain in their foot and they're convinced it's a tumor. Right? Like, do we blame them when they finally go see a doctor because they crushed their budget that particular year paying thousands and thousands of dollars out of pocket for whatever earlier in the year, and now they've made it to their deductible—do we blame them for taking the very rational step of getting the most out of those thousands of dollars of sunk costs? At that point, it's a “let me get my money's worth” situation because they can't afford to do this again next year. 
I mean, we hire employees because they're smart and rational, and this is really actually a pretty smart and rational thing to do. It's not somebody trying to commit fraud. Okay, sure … some people are. There's always bad apples. But the vast majority are just trying to live their life and not spend all of their vacation money next year on medical services like they did this year. I'm saying all this because it's actionable, by the way. And I'm getting to that, but indulge me for like 60 more seconds because I want to acknowledge you, listeners of this show, are probably nodding along to this whole thing this whole time and thinking all of this is pretty obvious. Well, yeah … maybe. Except here's the reason I decided to do an inbetweenisode about this rabbit hole instead of doing my normal thing, which is just ranting about it over dinner for three days straight—and God bless my husband for sitting through it—is the bottom line. But the reason we are here together today is the number of emails and posts and et cetera that cross my desk where it doesn't seem like these dots have been connected on all of this or at least connected in magic marker. Like fat, indelible magic marker, which is what I think is necessary for these dots to be connected with the ones between moral hazard and patients not being able to discern high- and low-value care. There are so many ways and places these dots will show up. Like, here's another moral hazard issue with those maximizers or accumulators, which apparently are on my mind right now—the not good ones I'm talking about now, where patients find themselves on the hook for hundreds or thousands of dollars midyear if they want to pick up the meds that they've been prescribed. If you need more details on how that might happen to understand what I'm saying fully, listen to the show again a couple of weeks ago with Bill Sarraille (EP459). 
But even if you're a little confused, it doesn't matter because the question is this: Do we justify having programs that make drugs really expensive for patients? Do we put in place one of these pretty darn punitive types of accumulators or maximizers, right? Like, there's different kinds, and I'm talking about the punitive ones of accumulators or maximizers. Do we justify putting one of those into place and figure that if a patient really wants the med, they'll pay a whole lot of money for it? Because if they're willing to pay a whole lot of money for it, then, right? It must be high-value care, so they'll figure out how to pay for it. Keep in mind, as I said earlier, if it's expensive, people don't do it. If it's free or cheap, they will overutilize. And the issue with both of these patient choices is, patients are not good at discerning low-value care or meds from high-value care or meds. So, look, Pharma can be up to all kinds of crap, and list prices are really expensive. No arguments here. That isn't the point. The point is, What is the actual problem that we're trying to solve for, for our plan and our patients and our members? And if that problem is making sure that the right patients get the right high-value meds or care, then not letting members get co-pay assistance such that all drugs—the good ones and the too-expensive ones and the ones that we don't really want our members to take for whatever reason—if we make all of them way too expensive with a maximizer or accumulator designed to make all the drugs really expensive … dots connected. We wind up with the all-in to prevent moral hazard issue we just talked about, where patients could easily be harmed and the plan can easily get into a balloon squeezing situation. 
All I'm saying is that there's a big-picture view of moral hazard here that we need to be looking at, rather than over-indexing into binary, black-and-white moral hazard thinking, where we attribute malice to members, some of whom, some of the time, may actually be trying to get high-value care, or, on the flip side, where the plan's paying too much for low-value care and causing financial difficulties without understanding the root cause. Going black and white or over-indexing to prevent outlier kind of stuff is probably not gonna end well. Not seeking a middle way can easily result in a solution that is possibly worse than the problem. So, look, moral hazard is actually a thing. There are lots of implications to patients not being able to distinguish high-value and low-value care. But if we know this, then, philosophically at least, how do we conceptualize a solve? What should we be doing? If we're not doing black and white, what does the gray in the middle look like? Alright, we don't want to be a solution looking around for a problem. So, let's think about the problems that we want to solve for. I would start with, What's the goal? The goal of plan sponsors providing insurance most of the time is to attract and retain talent. Also, I was at the HBCH (Houston Business Coalition on Health) Conference at the beginning of December 2024. And there was a poll question. There were a bunch of employers in the audience, and the poll question asked the audience, “What's your biggest plan goal this year?” Main answer by a mile: Cut costs. Okay … so, we want to attract and retain, and we want to control costs. Obviously, you can go about achieving these three things a bunch of different ways, and they will all be tradeoffs. As Luke Prettol reminded me the other day, there are no solutions, only tradeoffs. And so, with that, right now, I want to introduce the second concept that I have been ruminating over in my rabbit hole lately, the one I've kind of been hinting at for this whole time.
But here's a word we've been waiting for to solve all of our problems in a good kind of way, not the bad black-and-white ways that are so often either financially a problem or deploying brute force and harming patients in the name of solving something else: Pareto optimality. Pareto optimality is the state where resources are allocated as efficiently as possible so that improving one criterion will not worsen other criteria. It's essential to consider this, that Pareto optimality is the ideal we should at least be striving for when attempting to overcome any challenge but, in particular, the moral hazard issue, when we know that patients do not know what care is high value and what care is low value. Because if we don't try to at least Pareto optimize (if that's a word), if we try to fix the moral hazard problem and wind up with a new problem or new problems that might be worse than the old problem, that's not optimal. We have improved one criterion and worsened another. So, fixing the members going wild after they meet their deductible by slamming the lid on the fingers of members trying to get high-value care as well as low-value care, well … not sure about this, but I'd assume if not the attract but at least the retain criterion might be compromised by member dissatisfaction. But also, as I've said nine times, we might not actually cut costs. We might be doing a squeeze of the balloon. Especially that could be true when, as we all probably know or suspect, what's driving costs at the plan level is rising hospital prices. There's a show coming up on rising hospital prices as a primary driver of rising plan costs, and it's pretty hard to argue with. So, it's financially pretty advantageous to keep patients from needing to go to the hospital. So, yeah … I'd strongly suggest not squeezing balloons when hospitalizations are where the air goes. I'm not gonna belabor this. My only suggestion is, do the Pareto optimality math. 
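For anyone who wants to actually "do the Pareto optimality math," here is a minimal sketch of the underlying idea (my own toy illustration, not anything from the episode; the plan-design options and their scores are entirely made up): an option is Pareto optimal if no other option is at least as good on every criterion and strictly better on at least one.

```python
# Screen hypothetical plan-design options for Pareto optimality.
# Each option is scored on two criteria (higher is better for both):
# member access to high-value care, and cost control. Scores are invented.
options = {
    "high deductible, no support": (2, 8),
    "low deductible, no support":  (8, 2),
    "advanced primary care":       (7, 7),
    "punitive accumulator":        (3, 6),
}

def dominates(a, b):
    """True if option a is at least as good as b on every criterion
    and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Keep an option only if no other option dominates it.
pareto = {
    name: scores
    for name, scores in options.items()
    if not any(dominates(other, scores) for other in options.values() if other != scores)
}
print(sorted(pareto))
```

With these made-up numbers, the "punitive accumulator" drops out because "advanced primary care" beats it on both criteria; the remaining options are genuine tradeoffs against each other, which is exactly the frontier a plan sponsor has to choose along.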
A lot of you already are, I'm sure, and do a great job. But just for any given policy, plan change, or decision, keep in mind moral hazard and then really go through the whole cascade of likely impact on other factors based on likely member/patient behavior. It's so easy to get sucked into kind of these philosophical, “those are my enemies” kinds of conversations that are actually philosophically sort of interesting, but they aren't the goal. I mean, there's always unintended consequences; but not all unintended consequences should come as some kind of, like, wild-ass surprise. They were pretty predictable, actually. Let me also mention that when considering Pareto optimal solutions, advanced primary care starts to get really compelling. It's because having a PCP team with data and a relationship to the patient helps patients stay on the high-value care bus. And that can minimize the bad that comes from lowering the barrier to care and inviting in a little bit of moral hazard. Just saying. Okay, so this has been going on a little bit longer than I had originally intended, but I do want to remind you of the so-called theory of second best. It's probably really appropriate here, and one of the reasons why I'm mentioning this and not finishing the show right now is that, in a very synchronistic moment, I was writing up my outline for this inbetweenisode and—how random is this?—Steve Schutzer, MD, wrote an email that included something about the theory of second best. Great minds and all of that. Anyway, the theory of second best is really aligned with Pareto optimality. It's just that sometimes you gotta be really practical. You gotta be a little scrappy. If you cannot achieve the best option, either because you just can't or because the best option for one thing results in too many negative consequences elsewhere, then don't do the best option. Forget it. Do the second best (ie, the theory of second best). There is nothing wrong with that. Don't be a hero.
Okay, so in summary, moral hazard is actually a thing and so is the opposite; and it's even more of an impactful thing because most people cannot distinguish high-value from low-value care. And if they meet their deductible that they have paid a lot of money to reach, of course, they are going to want to try to get through their checklist of medical appointments that they have been putting off. This is not a surprise. And it's not all bad, as long as the care that they are trying to go get is high value; and that matters if we're trying to cut costs. Because to cut costs for real and not in a squeezing of the balloon way, we need to direct or limit somehow what gets done to high-value care. And we got to do that without accidentally causing other problems, meaning think through Pareto optimality and possibly consider the theory of second best. I hope this has been helpful at some level. It's helped me. I feel better having vented.

Also mentioned in this episode are Nina Lathia, RPh, MSc, PhD; Bill Sarraille; Scott Conard, MD; Wayne Jenkins, MD; Houston Business Coalition on Health (HBCH); Luke Prettol; and Steve Schutzer, MD.

Additional studies mentioned:
Moral Hazard in Health Insurance: What We Know and How We Know It
Do People Choose Wisely After Satisfying Health Plan Deductibles? Evidence From the Use of Low-Value Health Care Services
Healthcare and the Moral Hazard Problem
Distinguishing Moral Hazard From Access for High-Cost Healthcare Under Insurance

For more information, go to aventriahealth.com.

Each week on Relentless Health Value, Stacey uses her voice and thought leadership to provide insights for healthcare industry decision makers trying to do the right thing. Each show features expert guests who break down the twists and tricks in the medical field to help improve outcomes and lower costs across the care continuum. 
Relentless Health Value is a top 100 podcast on iTunes in the medicine category and reaches tens of thousands of engaged listeners across the healthcare industry. In addition to hosting Relentless Health Value, Stacey is co-president of QC-Health, a benefit corporation finding cost-effective ways to improve the health of Americans. She is also co-president of Aventria Health Group, a consultancy working with clients who endeavor to form collaborations with payers, providers, Pharma, employer organizations, or patient advocacy groups.

04:05 Where did Stacey's rabbit hole spiral start?
05:40 What is the moral hazard of insurance?
09:31 EP358 with Wayne Jenkins, MD.
12:49 Why isn't moral hazard mitigated in insurance?
18:16 EP459 with Bill Sarraille.
20:51 “How do we conceptualize a solve?”
22:24 Why should we be striving for Pareto optimality?
25:20 What is the theory of second best?

For more information, go to aventriahealth.com.

Our host, Stacey Richter, discusses considerations for #plansponsors and others. #healthcare #podcast #changemanagement #healthcareleadership #healthcaretransformation #healthcareinnovation

Recent past interviews: Click a guest's name for their latest RHV episode!
Chris Crawford, Dr Rushika Fernandopulle, Bill Sarraille, Stacey Richter (INBW41), Andreas Mang (Encore! EP419), Dr Komal Bajaj, Cynthia Fisher, Stacey Richter (INBW40), Mark Cuban and Ferrin Williams (Encore! EP418), Rob Andrews (Encore! EP415)  

Crafting for Profit Live
How to Set and Crush Your Craft Business Goals

Crafting for Profit Live

Play Episode Listen Later Jan 20, 2025 49:51


Let's kick off season 2 of the Crafting for Profit Live podcast chatting about how to set and crush your craft business goals. Learn how to use the Pareto principle to determine your goals and our method for making sure our goals are SMART! Then break them down into actionable steps that you can use throughout the year to make your business more productive and more profitable. Grab our Sublimation Craft Kit before January 31, 2025 for tons of sublimation resources all packed within an extremely low priced bundle. Crafters of all types will love this bundle of resources! Get it here: https://sublimationcraftkit.com/ Check out Cori's Etsy shop here: https://www.etsy.com/shop/ChapterCraftStudio  Don't forget to shop our merch store to support the podcast! https://link.craftingcamps.com/merch   Let us help you craft your future by turning your passion into a paycheck. Angie Holden and Cori George are teaming up for a series of live events dedicated to helping you start and grow your craft business. Be sure to subscribe so you don't miss any of the future episodes! Sign up for our email newsletter here: https://crafting-camps.ck.page/4715c59751 Ask us questions here: https://forms.gle/ShKt64gKjeuneMLeA Want more from Cori and Angie? Be sure to subscribe to our YouTube channels and follow on Instagram using the links below. https://www.instagram.com/craftingcamps https://www.instagram.com/heyletsmakestuff https://www.instagram.com/angieholdenmakes

Hyper Conscious Podcast
Everything Matters But Not The Same Amount (1951)

Hyper Conscious Podcast

Play Episode Listen Later Jan 18, 2025 41:11


In today's episode, Kevin and Alan explore how self-awareness and focused effort can transform your results. Discover why identifying your top priorities can simplify your journey to success and lead to exponential growth. From fitness and brain health to relationships and business, they share personal lessons, practical tips, and the science behind prioritization to inspire your breakthroughs.

Links mentioned:
Free 30-minute Coaching Call with Alan - https://bit.ly/4f3MSUz
Next Level Dreamliner: https://a.co/d/9fPpxEt
Subscribe & follow - https://www.buzzsprout.com/742955/share
Next Level Nation - https://www.facebook.com/groups/459320958216700

NLU is not just a podcast; it's a gateway to a wealth of resources designed to help you achieve your goals and dreams. From our Next Level Dreamliner to our Group Coaching, we offer a variety of tools and communities to support your personal development journey. For more information, please check out our website at the link below.

Point of Pivot
Episode 57 | How to Create Sustainability with Health & Fitness in Order to Maintain Fat Loss

Point of Pivot

Play Episode Listen Later Jan 16, 2025 11:41


What are the high impact habits that will help you to lose weight and maintain your results?   The Pareto principle teaches us that the majority of our results come from the minority of our effort. The boring basics are foundational, and they work when executed consistently over time.    If you enjoyed the episode, please leave me a review or share it with someone who could benefit from it!    Next Steps: Are you ready for coaching or want to find out more? Schedule a coffee chat!   Or, interested in some accountability and community? Join my free Macros & Midlife Facebook group to chat with other likeminded ladies who are working on improving their health and fitness too!   Follow me on Instagram: Emily Iboa Coaching on Instagram   Do you have a question or a comment you want me to feature on the show? Leave me a voicemail! https://www.speakpipe.com/Macros_Midlife

What If It Did Work?
Mastering Recurring Sales: Transforming Buyers into Loyal Customers with Guitze Messina

What If It Did Work?

Play Episode Listen Later Jan 15, 2025 54:39 Transcription Available


Unlock the secrets of recurring sales success with our special guest, Guitze Messina, executive director at HARDI, the HVACR distributors trade association. Discover how businesses can thrive by transforming one-time buyers into loyal customers through a deep understanding of customer needs and behaviors. Guitze shares pivotal insights on the dynamics of recurring sales compared to long-cycle sales like real estate and life insurance, emphasizing the art of maintaining relationships without the pressure to purchase. Explore proactive sales strategies that keep your business ahead, taking cues from Guitze's transformative journey from skepticism to advocacy in the sales domain.

If you've ever wondered how some businesses keep clients coming back without slashing prices, you'll find valuable techniques in this episode. We delve into the essence of effective sales techniques, focusing on listening and understanding customer needs to craft tailored solutions. Avoid the common pitfall of competing solely on price, and learn from real-world examples, like the restaurant owner who paid dearly for a cheaper ventilation system. Guitze guides us through the importance of asking the right questions, making client feedback a cornerstone for strengthening business relationships and growing sales.

Close the episode with actionable insights for both budding and seasoned sales professionals, as we explore strategic marketing solutions tailored for self-published authors and businesses alike. From conducting a Pareto analysis to identify key clients, to the power of storytelling in sales literature, we offer strategies that can reshape your marketing efforts. Embrace the transformative power of implementing ideas from nonfiction books with Messina's proven techniques, designed to unlock potential and leave a lasting impact on your personal and professional legacy.

Join the What if it Did Work movement on Facebook
Get the Book!
www.omarmedrano.com
www.calendly.com/omarmedrano/15min
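As a toy illustration of the Pareto analysis idea mentioned above (the client names and revenue figures here are invented, not from the episode), you can sort clients by revenue and find the smallest group that accounts for roughly 80% of the total:

```python
# Pareto (80/20) analysis on hypothetical client revenue:
# find the smallest set of clients producing ~80% of total revenue.
revenue = {"A": 50_000, "B": 12_000, "C": 9_000, "D": 4_000,
           "E": 3_000, "F": 1_500, "G": 500}

total = sum(revenue.values())
running, key_clients = 0, []
# Walk clients from largest to smallest until the 80% threshold is crossed.
for client, amount in sorted(revenue.items(), key=lambda kv: kv[1], reverse=True):
    running += amount
    key_clients.append(client)
    if running >= 0.8 * total:
        break

print(key_clients, round(100 * running / total))
```

The clients left in `key_clients` are the ones whose relationships and feedback deserve the bulk of your attention; the same few lines work on any metric (orders, support tickets, referrals) where you suspect an 80/20 skew.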

Simon Ward, The Triathlon Coach Podcast Channel
Unlocking High Performance in triathlon and the workplace with Paul Wheat

Simon Ward, The Triathlon Coach Podcast Channel

Play Episode Listen Later Jan 15, 2025 62:46


This week, I'm thrilled to welcome Paul Wheat, a dedicated 55-year-old triathlete who exemplifies what it means to be a High Performance Human. For Paul, high performance transcends athletic excellence; it encompasses all aspects of life, including sleep, nutrition, exercise, relationships, and mental health. The beauty of this journey is that you don't need to be a top-tier athlete to excel in these areas. Paul's transformative journey began in 2020 when he joined my SWAT Inner Circle after experimenting with a complimentary 12-week plan amid the COVID pandemic. The structured approach not only challenged him but also revealed the vital connections between fitness, nutrition, sleep, and work habits. Like many, Paul was once caught in the cycle of stress and fatigue from constant travel as a manager. However, he took proactive steps by applying training principles from SWAT, incorporating the 80:20 Pareto principle, and catalyzing a substantial transformation in both his personal life and professional environment. One of Paul's standout achievements has been the establishment of a workplace culture that encourages his team to maintain a healthy work-life balance, limiting work to eight hours a day. This impactful change has led to enhanced productivity and notably reduced staff turnover. Over the past five years, Paul has made impressive strides in his athletic performance, significantly improving his 70.3 Ironman time by 75 minutes—down from 6:45 to 5:30. Additionally, he has shed 19 kg, proudly stating that he achieved this by “cleaning up what I eat, understanding that processed foods aren't real food, and being more active.” If you're inspired to embark on a lifestyle transformation in 2025, Paul's journey is sure to motivate you! In this episode, we'll explore: The lifestyle changes Paul implemented and their impact on his health. How these changes elevated his triathlon performance. The process of integrating healthy habits within his work team. 
The positive effects of these workplace changes on overall performance. Paul's top three pieces of advice for anyone looking to start a similar journey. Join us for an enlightening conversation that could kickstart your own path to high performance! Paul doesn't do social media, so you can't follow him, but if you'd like to get a feel for how he has upped his game, he recommends the following books: Ultra-Processed People: The Definitive #1 Bestseller You Need to Understand Ultra-Processed Food by Chris van Tulleken. Never Split the Difference: Negotiating as if Your Life Depended on It by Chris Voss.   He also loves to listen to THIS podcast Things People Do with Joe Marler ***Join our SWAT/High Performance Human tribe using this link, with a happiness guarantee! You can watch a brief video about the group by going to our website here, and join our SWAT High Performance Human tribe here. You can find all of my social media links HERE: You'll also find some really great content on my Instagram and YouTube! Instagram  YouTube   **To get a free copy of my personal daily mobility routine, please click HERE** **To download your FREE infographic ‘7 steps to swimming faster', please click HERE  Sign up for Simon's weekly newsletter Sign up for Beth's weekly newsletter To contact Beth regarding Life Coaching, please visit her website at BethanyWardLifeCoaching.uk. If you would like to help offset the cost of our podcast production, we would be so grateful. Please click here to support the HPH podcast. Thank you! Visit Simon's website for more information about his coaching programmes.  For any questions please email Beth@TheTriathlonCoach.com.

The Simplicity Sessions
The #1 Productivity Hack Every Business Owner Needs

The Simplicity Sessions

Play Episode Listen Later Jan 2, 2025 40:16


In this episode, I dive deep into the widely-recognized 80/20 principle, also known as Pareto's Principle, and its application in business, productivity, and personal growth. I provide valuable insights on how to streamline workflows, increase productivity, and drive revenue by focusing on the most impactful 20% of efforts.

Episode Highlights:
My New Year Plans and Business Masterminds
Understanding Pareto's Principle
Identifying Your 20%
Implementing The 80/20 Rule in Business
Audit and Optimize Your Operations

Let's dive in! Thank you for joining us today. If you could rate, review & subscribe, it would mean the world to me! While you're at it, take a screenshot and tag me @jennpike to share on Instagram – I'll re-share that baby out to the community & once a month I'll be doing a draw from those re-shares and send the winner something special!

Click here to listen:
Apple Podcasts – CLICK HERE
Spotify – CLICK HERE

This episode is sponsored by:
withinUs | Use the code JENNPIKE20 at withinus.ca for a limited time to save 20% off your order
St. Francis Herb Farm | Go to stfrancisherbfarm.com and save 15% off every order with code JENNPIKE15
Skin Essence Organics | Go to skinessence.ca and save 15% off your first order with code JENNPIKE15 /// Save 10% off every order with code JENNPIKE10
Eversio Wellness | Go to eversiowellness.com/discount/jennpike15 and save 15% off every order with code JENNPIKE15 /// not available for “subscribe & save” option

Free Resources:
Free Perimenopause Support Guide | jennpike.com/perimenopausesupport
Free Blood Work Guide | jennpike.com/bloodworkguide
The Simplicity Sessions Podcast | jennpike.com/podcast

Programs:
The Perimenopause Project | jennpike.com/theperimenopauseproject
The Hormone Project Academy | jennpike.com/thehormoneproject
Synced Virtual Fitness Studio | jennpike.com/synced
The Simplicity Women's Wellness Clinic | jennpike.com/wellnessclinic
The Audacious Woman Mentorship | jennpike.com/theaudaciouswoman

Connect with Jenn:
Instagram | @jennpike
Facebook | @thesimplicityproject
YouTube | Simplicity TV
Website | The Simplicity Project Inc.

Have a question? Send it over to hello@jennpike.com and I'll do my best to share helpful insights, thoughts and advice.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
2024 in Post-Transformers Architectures (State Space Models, RWKV) [LS Live @ NeurIPS]

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Dec 24, 2024 43:02


Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all all our LS supporters who helped fund the gorgeous venue and A/V production!For NeurIPS last year we did our standard conference podcast coverage interviewing selected papers (that we have now also done for ICLR and ICML), however we felt that we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in person miniconference, at NeurIPS 2024 in Vancouver.Of perennial interest, particularly at academic conferences, is scaled-up architecture research as people hunt for the next Attention Is All You Need. We have many names for them: “efficient models”, “retentive networks”, “subquadratic attention” or “linear attention” but some of them don't even have any lineage with attention - one of the best papers of this NeurIPS was Sepp Hochreiter's xLSTM, which has a particularly poetic significance as one of the creators of the LSTM returning to update and challenge the OG language model architecture:So, for lack of a better term, we decided to call this segment “the State of Post-Transformers” and fortunately everyone rolled with it.We are fortunate to have two powerful friends of the pod to give us an update here:* Together AI: with CEO Vipul Ved Prakash and CTO Ce Zhang joining us to talk about how they are building Together together as a quote unquote full stack AI startup, from the lowest level kernel and systems programming to the highest level mathematical abstractions driving new model architectures and inference algorithms, with notable industry contributions from RedPajama v2, Flash Attention 3, Mamba 2, Mixture of Agents, BASED, Sequoia, Evo, Dragonfly, Dan Fu's 
ThunderKittens and many more research projects this year
* Recursal AI: with CEO Eugene Cheah, who has helped lead the independent RWKV project while also running Featherless AI. This year, the team has shipped RWKV v5, codenamed Eagle, to 1.5 billion Windows 10 and Windows 11 machines worldwide, to support Microsoft's on-device, energy-usage-sensitive Windows Copilot use cases, and has launched the first updates on RWKV v6, codenamed Finch and GoldFinch. On the morning of Latent Space Live, they also announced QRWKV6, a Qwen 32B model modified with RWKV linear attention layers. We were looking to host a debate between our speakers, but given that both of them were working on post-transformers alternatives
Full Talk on Youtube
Please like and subscribe!
Links
All the models and papers they picked:
* Earlier Cited Work
* Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention
* Hungry hungry hippos: Towards language modeling with state space models
* Hyena hierarchy: Towards larger convolutional language models
* Mamba: Linear-Time Sequence Modeling with Selective State Spaces
* S4: Efficiently Modeling Long Sequences with Structured State Spaces
* Just Read Twice (Arora et al)
* Recurrent large language models that compete with Transformers in language modeling perplexity are emerging at a rapid rate (e.g., Mamba, RWKV). Excitingly, these architectures use a constant amount of memory during inference. However, due to the limited memory, recurrent LMs cannot recall and use all the information in long contexts leading to brittle in-context learning (ICL) quality. A key challenge for efficient LMs is selecting what information to store versus discard. In this work, we observe the order in which information is shown to the LM impacts the selection difficulty. 
* To formalize this, we show that the hardness of information recall reduces to the hardness of a problem called set disjointness (SD), a quintessential problem in communication complexity that requires a streaming algorithm (e.g., recurrent model) to decide whether inputted sets are disjoint. We empirically and theoretically show that the recurrent memory required to solve SD changes with set order, i.e., whether the smaller set appears first in-context. * Our analysis suggests, to mitigate the reliance on data order, we can put information in the right order in-context or process prompts non-causally. Towards that end, we propose: (1) JRT-Prompt, where context gets repeated multiple times in the prompt, effectively showing the model all data orders. This gives 11.0±1.3 points of improvement, averaged across 16 recurrent LMs and the 6 ICL tasks, with 11.9× higher throughput than FlashAttention-2 for generation prefill (length 32k, batch size 16, NVidia H100). We then propose (2) JRT-RNN, which uses non-causal prefix-linear-attention to process prompts and provides 99% of Transformer quality at 360M params., 30B tokens and 96% at 1.3B params., 50B tokens on average across the tasks, with 19.2× higher throughput for prefill than FA2.* Jamba: A 52B Hybrid Transformer-Mamba Language Model* We present Jamba, a new base large language model based on a novel hybrid Transformer-Mamba mixture-of-experts (MoE) architecture. * Specifically, Jamba interleaves blocks of Transformer and Mamba layers, enjoying the benefits of both model families. MoE is added in some of these layers to increase model capacity while keeping active parameter usage manageable. * This flexible architecture allows resource- and objective-specific configurations. 
In the particular configuration we have implemented, we end up with a powerful model that fits in a single 80GB GPU.* Built at large scale, Jamba provides high throughput and small memory footprint compared to vanilla Transformers, and at the same time state-of-the-art performance on standard language model benchmarks and long-context evaluations. Remarkably, the model presents strong results for up to 256K tokens context length. * We study various architectural decisions, such as how to combine Transformer and Mamba layers, and how to mix experts, and show that some of them are crucial in large scale modeling. We also describe several interesting properties of these architectures which the training and evaluation of Jamba have revealed, and plan to release checkpoints from various ablation runs, to encourage further exploration of this novel architecture. We make the weights of our implementation of Jamba publicly available under a permissive license.* SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers* We introduce Sana, a text-to-image framework that can efficiently generate images up to 4096×4096 resolution. Sana can synthesize high-resolution, high-quality images with strong text-image alignment at a remarkably fast speed, deployable on laptop GPU. Core designs include: * (1) Deep compression autoencoder: unlike traditional AEs, which compress images only 8×, we trained an AE that can compress images 32×, effectively reducing the number of latent tokens. * (2) Linear DiT: we replace all vanilla attention in DiT with linear attention, which is more efficient at high resolutions without sacrificing quality. * (3) Decoder-only text encoder: we replaced T5 with modern decoder-only small LLM as the text encoder and designed complex human instruction with in-context learning to enhance the image-text alignment. 
* (4) Efficient training and sampling: we propose Flow-DPM-Solver to reduce sampling steps, with efficient caption labeling and selection to accelerate convergence.
* As a result, Sana-0.6B is very competitive with modern giant diffusion models (e.g. Flux-12B), being 20 times smaller and 100+ times faster in measured throughput. Moreover, Sana-0.6B can be deployed on a 16GB laptop GPU, taking less than 1 second to generate a 1024×1024 resolution image. Sana enables content creation at low cost.

**RWKV: Reinventing RNNs for the Transformer Era**

* Transformers have revolutionized almost all natural language processing (NLP) tasks but suffer from memory and computational complexity that scales quadratically with sequence length. In contrast, recurrent neural networks (RNNs) exhibit linear scaling in memory and computational requirements but struggle to match the same performance as Transformers due to limitations in parallelization and scalability.
* We propose a novel model architecture, Receptance Weighted Key Value (RWKV), that combines the efficient parallelizable training of Transformers with the efficient inference of RNNs.
* Our approach leverages a linear attention mechanism and allows us to formulate the model as either a Transformer or an RNN, thus parallelizing computations during training while maintaining constant computational and memory complexity during inference.
* We scale our models as large as 14 billion parameters, by far the largest dense RNN ever trained, and find RWKV performs on par with similarly sized Transformers, suggesting future work can leverage this architecture to create more efficient models. This work presents a significant step towards reconciling trade-offs between computational efficiency and model performance in sequence processing tasks.

**LoLCATs: On Low-Rank Linearizing of Large Language Models**

* Recent works show we can linearize large language models (LLMs) -- swapping the quadratic attentions of popular Transformer-based LLMs with subquadratic analogs, such as linear attention -- avoiding the expensive pretraining costs. However, linearizing LLMs often significantly degrades model quality, still requires training over billions of tokens, and remains limited to smaller 1.3B to 7B LLMs.
* We thus propose Low-rank Linear Conversion via Attention Transfer (LoLCATs), a simple two-step method that improves LLM linearizing quality with orders of magnitude less memory and compute.
* We base these steps on two findings.
* First, we can replace an LLM's softmax attentions with closely-approximating linear attentions, simply by training the linear attentions to match their softmax counterparts with an output MSE loss ("attention transfer").
* Then, this enables adjusting for approximation errors and recovering LLM quality simply with low-rank adaptation (LoRA).
* LoLCATs significantly improves linearizing quality, training efficiency, and scalability. We significantly reduce the linearizing quality gap and produce state-of-the-art subquadratic LLMs from Llama 3 8B and Mistral 7B v0.1, leading to 20+ points of improvement on 5-shot MMLU.
* Furthermore, LoLCATs does so with only 0.2% of past methods' model parameters and 0.4% of their training tokens.
* Finally, we apply LoLCATs to create the first linearized 70B and 405B LLMs (50x larger than prior work).
* When compared with prior approaches under the same compute budgets, LoLCATs significantly improves linearizing quality, closing the gap between linearized and original Llama 3.1 70B and 405B LLMs by 77.8% and 78.1% on 5-shot MMLU.

Timestamps

* [00:02:27] Intros
* [00:03:16] Why Scale Context Lengths? or work on Efficient Models
* [00:06:07] The Story of SSMs
* [00:09:33] Idea 1: Approximation -> Principled Modeling
* [00:12:14] Idea 3: Selection
* [00:15:07] Just Read Twice
* [00:16:51] Idea 4: Test Time Compute
* [00:17:32] Idea 2: Hardware & Kernel Support
* [00:19:49] RWKV vs SSMs
* [00:24:24] RWKV Arch
* [00:26:15] QRWKV6 launch
* [00:30:00] What's next
* [00:33:21] Hot Takes - does anyone really need long context?

Transcript

[00:00:00] AI Charlie: We're back at Latent Space Live, our first mini conference held at NeurIPS 2024 in Vancouver. This is Charlie, your AI co-host. As a special treat this week, we're recapping the best of 2024 going domain by domain. We sent out a survey to the over 900 of you who told us what you wanted, and then invited the best speakers in the Latent Space Network to cover each field.

[00:00:24] AI Charlie: 200 of you joined us in person throughout the day, with over 2200 watching live online. Thanks! Our next keynote covers the state of Transformer-alternative architectures, with a special joint presentation with Dan Fu of Together AI and Eugene Cheah of Recursal AI and Featherless AI. We've featured both Together and Recursal on the pod before, with CEO Vipul Ved Prakash introducing them.

[00:00:49] AI Charlie: And CTO Ce Zhang joining us to talk about how they are building Together together as a quote-unquote full-stack AI startup, from the lowest-level kernel and systems [00:01:00] programming to the highest-level mathematical abstractions driving new model architectures and inference algorithms, with notable industry contributions from RedPajama v2, FlashAttention-3, Mamba-2, Mixture of Agents,

[00:01:15] AI Charlie: BASED, Sequoia, Evo, Dragonfly, Dan Fu's ThunderKittens, and many more research projects this year. As for Recursal and Featherless, we were the first podcast to feature RWKV last year, and this year the team has shipped RWKV v5, codenamed Eagle, to 1.5 billion Windows 10 and Windows 11 machines worldwide to support Microsoft's on-device, energy-usage-sensitive Windows Copilot use cases, and has launched the first updates on RWKV v6, codenamed Finch and Goldfinch.

[00:01:53] AI Charlie: On the morning of Latent Space Live, they also announced QRWKV6, a Qwen 32B model [00:02:00] modified with RWKV linear attention layers. Eugene has also written the single most popular guest post on the Latent Space blog this year (yes, we do take guest posts) on what he has discovered about the H100 GPU inference NeoCloud market since the successful launch of Featherless AI this year.

[00:02:20] AI Charlie: As always, don't forget to check the show notes for the YouTube link to their talk as well as their slides. Watch out and take care.

[00:02:27] Intros

[00:02:27] Dan Fu: Yeah, so thanks so much for having us. So this is going to be a little bit of a two-part presentation. My name is Dan. I'm at Together AI, and I'll be joining UCSD as faculty in about a year. And Eugene, you want to introduce yourself?

[00:02:46] Eugene Cheah: Eugene, I lead the RWKV team, and I'm CEO of Featherless, and we both work on this new post-transformer architecture space.

[00:02:55] Dan Fu: Yeah, so today we're really excited to talk to you a little bit [00:03:00] about that. So first I'm going to give a broad overview of kind of the last few years of progress in non-post-transformer architectures. And then afterwards Eugene will tell us a little bit about the latest and the greatest and the latest frontier models in this space.

[00:03:16] Why Scale Context Lengths? or work on Efficient Models

[00:03:16] Dan Fu: So, the story starts with scaling. So this is probably a figure or something like this that you've seen very recently.
Over the last five to six years, we've seen models really scale up in parameter size, and that's brought with it a bunch of new capabilities, like the ability to talk to you and tell you sometimes how to use your Colab screens.

[00:03:35] Dan Fu: But another place where we've seen scaling especially recently is scaling in context length. So this can mean having more text inputs for your models, but it can also mean things like taking a lot of visual token inputs, image inputs, to your models, or generating lots of outputs. And one thing that's been really exciting over the last few months or so is that we're seeing scaling not only during training time, but also [00:04:00] during test time.

[00:04:00] Dan Fu: So this is the iconic image from the OpenAI o1 release. Not only are we starting to scale train time compute, but we're also starting to scale test time compute. Now if you're familiar with our attention and our transformer architectures today, this graph on the right might look a little bit scary.

[00:04:19] Dan Fu: And one of the reasons is that the implications are a little bit interesting. So what does it mean if we want to continue having smarter and smarter models? Do we just need to start building bigger and bigger data centers, spending more flops? Is this little DALL·E 3 "we need more flops, guys" image going to be the future of all of AI?

[00:04:39] Dan Fu: Or is there a better way, another path forward? Maybe we can get the same capabilities that we've gotten used to, but for a lot less compute, a lot less flops. And one of the things that we're going to talk about today is specifically looking at that core attention operator in some of these models.

[00:04:57] Dan Fu: And the reason is that, so this is just some [00:05:00] basic, you know, scaling curves, but attention has compute that scales quadratically in the context length. So that means that if you're doing something like test time compute and you want to spend a bunch of tokens thinking about what comes next, the longer that goes, the more tokens you spend on that, that compute grows quadratically in that.

[00:05:19] Dan Fu: One of the questions that we're interested in is, can we take that basic sequence model, that basic sequence primitive at the bottom, and get it to scale better? Can we scale in, let's say, n to the 3 halves or n log n? So in the first part of the talk, we just went over the introduction. What I'm gonna do over the next few slides is just talk about some of the key advances and ideas that have shown up over the past few years, since maybe early 2020 to now, that have shown promise that this might actually be possible. That you can actually get potentially the same quality that we want while scaling better. So to do that, basically the story we're gonna look at is we're gonna start to see [00:06:00] how. So this is a basic graph of just the past couple years of progress of perplexity, where that dotted blue line is attention.

[00:06:07] The Story of SSMs

[00:06:07] Dan Fu: It's your basic transformer, full dense attention. And then the dots coming down are some of the methods that you'll see in this presentation today. We're going to turn the clock back all the way to 2020. So this question of, can we make attention subquadratic? Basically, as soon as we said Attention is All You Need, people started asking this question.

[00:06:28] Dan Fu: So we have this quadratic attention operator. Can we do better? I'll briefly talk about why attention is quadratic. And the basic thing that happens, if you're not familiar, is that you have these inputs, these keys and queries.
And what you do in this attention matrix, this S matrix over here, is that you're comparing every token in your input to every other token.

[00:06:49] Dan Fu: So when I try to do something like upload a whole book to Gemini, well, maybe not Gemini, because we don't necessarily know what architecture it is, but let's say we upload it to Llama, what happens behind the scenes is that it's going to take every single word in that book and compare it to every other word.

[00:07:05] Dan Fu: And this has been a really, it's led to some pretty impressive things. But it's kind of a brute forcing of the way that you would try to interpret something. And what attention does in particular, sorry, no laser pointer. What attention does afterwards is that instead of always operating in this quadratic thing, it takes a row-wise softmax over this matrix, and then multiplies it by this values matrix.

[00:07:32] Dan Fu: So, one of the key points to notice is that the output size is always going to be the same as the inputs, at least in standard self-attention. So one of the first things that folks tried to do around 2020 is this thing called linear attention, which is just noticing that if we take out this softmax from here, if we take out this nonlinearity in the middle of the attention operation, and then if you compute the keys and the values operation first, you actually never hit this quadratic bottleneck.

[00:07:57] Dan Fu: So that's potentially a way [00:08:00] to get a lot more computationally efficient. And there are various ways to do this, by basically using feature maps or trying to approximate this overall attention computation. But some of this work sort of started to hit a wall in 2020. And the basic challenges were two.

[00:08:16] Dan Fu: So one was quality.
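The reordering Dan describes can be made concrete with a minimal NumPy sketch. This is a generic illustration (no feature map, not any specific paper's formulation): dropping the softmax makes attention a pure matrix product, so associativity lets you form the small K^T V state first and never materialize the n-by-n matrix.

```python
import numpy as np

def softmax_attention(Q, K, V):
    # Standard attention: materializes the full (n x n) score matrix S,
    # so compute and memory scale quadratically in sequence length n.
    S = Q @ K.T
    P = np.exp(S - S.max(axis=-1, keepdims=True))
    P /= P.sum(axis=-1, keepdims=True)      # row-wise softmax
    return P @ V

def linear_attention(Q, K, V):
    # Take out the softmax and attention is a pure matrix product, so
    # associativity lets us compute the small (d x d) state K^T V first:
    # (Q K^T) V == Q (K^T V), and the n x n matrix never materializes.
    return Q @ (K.T @ V)                    # O(n d^2) instead of O(n^2 d)

rng = np.random.default_rng(0)
n, d = 512, 16
Q, K, V = rng.normal(size=(3, n, d))
# Same output shape as the inputs, as in standard self-attention...
assert softmax_attention(Q, K, V).shape == linear_attention(Q, K, V).shape
# ...and without the nonlinearity, both multiplication orders agree.
assert np.allclose((Q @ K.T) @ V, linear_attention(Q, K, V))
```

Practical linear attention methods replace the softmax with a feature map applied to Q and K rather than removing it outright, but the associativity trick is the same.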
It was back then kind of hard to get good quality with these linear attention operators. The other one was actually hardware efficiency. So this feature map that was just shown, simplified here, actually ends up being quite computationally expensive if you just implement it naively. So you started having these operators where not only are you not really sure if they have the same quality, but they're also actually just wall-clock slower. So you kind of end up getting the worst of both worlds. So that was the stage; that kind of sets the stage four years ago.

[00:08:49] Dan Fu: Keep this in mind, because linear attention is actually going to come back in a few years once we have a better understanding. But one of the works that started kicking off this [00:09:00] mini revolution in post-transformer architectures was this idea called state space models. So here the seminal work is our work on S4, in 2022.

[00:09:09] Dan Fu: And this piece of work really brought together a few ideas from some long-running research lines of work. The first one was, and this is really one of the keys to closing the gap in quality, just using things that, if you talk to an electrical engineer off the street, they might know off the back of their hand.

[00:09:33] Idea 1: Approximation -> Principled Modeling

[00:09:33] Dan Fu: But taking some of those properties of how we model dynamical systems in signal processing, and then using those ideas to model the inputs, the text tokens, in, for example, a Transformer-like next-token-prediction architecture. So some of those early state space model papers were looking at this relatively simple recurrent update model that comes from maybe chapter one of a signal processing class, but then using [00:10:00] some principled theory about how you should do that recurrent update in order to really get the most that you can out of your hidden state, out of your sequence. So that was one key idea for quality. And when this was eventually realized, you started to see progress on a bunch of benchmarks that were pretty sticky for a few years. Things like Long Range Arena, some long-sequence evaluation benchmarks, there was stuff in time series analysis. You started to see the quality tick up in meaningful ways. But the other key thing that's so influential about these state space models is that they also had a key idea about how you can compute these things efficiently. So if you go back to your machine learning 101 class where you learned about RNNs, one thing that you may have learned is that they don't parallelize as well as attention, because if you just run them naively, you have to do this kind of sequential update to process new tokens, [00:11:00] whereas in attention, you can process all the tokens in parallel at one time. One of the key insights behind the S4 paper was that these recurrent models, you could take them and you could also formulate them as a convolution. And in particular, with a convolution, instead of using a PyTorch conv1d operation, you can compute that with the FFT. And that would give you n log n compute in the sequence length n, with an operator that was relatively well optimized for modern hardware.

[00:11:28] Dan Fu: So those are really, I'd say, the two key ideas in 2022 that started allowing these breakthroughs to happen in these non-transformer architectures.
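The recurrence-to-convolution duality can be illustrated with a toy scalar SSM. Real S4-style models use larger, structured state matrices; this is just the chapter-one version, showing that the sequential scan and an FFT-based convolution compute the same outputs.

```python
import numpy as np

def ssm_recurrent(a, b, c, u):
    # RNN-style sequential scan: h_t = a*h_{t-1} + b*u_t, y_t = c*h_t.
    h, ys = 0.0, []
    for u_t in u:
        h = a * h + b * u_t
        ys.append(c * h)
    return np.array(ys)

def ssm_convolution(a, b, c, u):
    # Unrolling the recurrence gives y_t = sum_j (c * a^j * b) * u_{t-j},
    # i.e. a causal convolution with kernel k. Computing it via the FFT
    # costs O(n log n) instead of a chain of n dependent steps.
    n = len(u)
    k = c * (a ** np.arange(n)) * b
    m = 2 * n  # zero-pad so circular convolution equals linear convolution
    return np.fft.irfft(np.fft.rfft(k, m) * np.fft.rfft(u, m), m)[:n]

# Toy scalar SSM (hypothetical parameters) applied to a test signal.
u = np.sin(0.3 * np.arange(64))
y_scan = ssm_recurrent(0.9, 0.5, 1.2, u)
y_conv = ssm_convolution(0.9, 0.5, 1.2, u)
assert np.allclose(y_scan, y_conv)
```

The convolutional form is what makes training parallelizable; at inference time you can switch back to the recurrent form and keep only the fixed-size state.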
So, these ideas about how to model the recurrent updates of a sequence in a principled way, and also these key ideas in how you can compute it efficiently, by turning it into a convolution and then scaling it up with the FFT.

[00:11:53] Dan Fu: Along those same lines, afterwards we started putting out some work on specialized kernels. So just [00:12:00] like we have FlashAttention for Transformers, we also have works like FlashFFTConv. And if you look at these lines of work, oftentimes whenever you see a new architecture, a new primitive, one of the table stakes now is: do you have an efficient kernel, so that you can actually get wall clock speedup?

[00:12:14] Idea 3: Selection

[00:12:14] Dan Fu: So by 2022, we were starting to have these models that had promising quality primitives and also promising wall clocks. So you could actually see regimes where they were better than Transformers in meaningful ways. That being said, there was still sometimes a quality gap, particularly for language modeling.

[00:12:33] Dan Fu: And because language is so core to what we do in sequence modeling these days, the next key idea that I'm going to talk about is this idea of selection mechanisms. And this is basically an idea of: you have this recurrent state that you're keeping around that just summarizes everything that came before.

[00:12:50] Dan Fu: And to get a good sequence model, one of the things that you really need to be able to do is have the model learn what's the best way to pick out pieces from that recurrent [00:13:00] state. So one of the major ideas here, in a line of work called H3, Hungry Hungry Hippos, and also these Hyena models, was that one way you can do this is by just adding some simple element-wise gates.

[00:13:13] Dan Fu: So versions of these ideas have been around for decades. If you squint at the LSTM paper you can probably find this gating mechanism. But it turns out you can take those old ideas, add them into these new state space models, and then you can see quality start to pick up. If you've heard of the Mamba model, this also takes the selection to the next level by actually making some changes in that fundamental recurrent state space.

[00:13:40] Dan Fu: So it's not only just this gating that happens around the SSM layer, but also the A, B, C, D matrices of your state space model, you can make them data dependent, which will allow you to even better select out different pieces from your hidden state depending on what you're seeing. I'll also point out, if you look at the [00:14:00] bottom right of this figure, there's this little triangle with GPU SRAM, GPU HBM, and this is just continuing that trend: when you have a new architecture, you also release it with a kernel to show that it can be hardware efficient on modern hardware.

[00:14:17] Dan Fu: One of the next cool things that happened is, once we had this understanding of these are the basic pieces, these are the basic principles behind some of the sequence models, linear attention actually started to come back.
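The element-wise gating idea just described can be sketched in a few lines. This is a toy illustration, not H3's or Hyena's actual layer: the causal cumulative mean below is a stand-in for a real SSM mixer, and all names are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_mixer(x, W_gate, seq_mix):
    # Element-wise gating around a sequence mixer: a per-token,
    # data-dependent gate in (0, 1) modulates what the recurrent /
    # convolutional mixer lets through. No n x n attention matrix needed.
    g = sigmoid(x @ W_gate)
    return g * seq_mix(x)          # Hadamard product

rng = np.random.default_rng(0)
n, d = 8, 4
x = rng.normal(size=(n, d))
W_gate = rng.normal(size=(d, d))
# A causal cumulative mean stands in for the actual SSM layer here.
causal_mean = lambda x: np.cumsum(x, axis=0) / np.arange(1, len(x) + 1)[:, None]
y = gated_mixer(x, W_gate, causal_mean)
assert y.shape == (n, d)
```

Because the gate depends on the input at each position, the layer can learn to suppress or pass through pieces of the mixed state, which is the "selection" behavior that closed much of the quality gap.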
So earlier this year, there was a model called BASED, from Simran Arora and some other folks, that combined a more principled version of linear attention. The two-second summary is that it used a Taylor approximation of the softmax attention, combined that with a simple sliding window attention, and was starting to be able to expand the Pareto frontier of how much data you can recall from your sequence versus how small your recurrent state size is.

[00:14:58] Dan Fu: So those orange dots at the top there are just showing smaller states that can recall more memory.

[00:15:07] Just Read Twice

[00:15:07] Dan Fu: And the last major idea that I think has been influential in this line of work, and is relatively late breaking, just a few months ago, is just the basic idea that when you have these models that are fundamentally more efficient in the sequence length, you maybe don't want to prompt them or use them in exactly the same way.

[00:15:26] Dan Fu: So this was a really cool paper called Just Read Twice, also from Simran. It basically said, hey, all these efficient models can process tokens so much more efficiently than Transformers that they can sometimes have unfair advantages compared to a simple Transformer model.

[00:15:44] Dan Fu: So take, for example, the standard use case of: you have some long document, you're going to pass it in as input, and then you're going to ask some question about it. One problem you might imagine for a recurrent model where you have a fixed state size is, let's say that [00:16:00] your article is very long, and you're trying to ask about some really niche thing. You can imagine it might be hard for the model to know ahead of time what information to put into the hidden state. But these models are so much more efficient that you can do something really stupid, like, you can just write down the document, write down the question, write down the document again, and then write down the question again. And then this time, the second time that you go over that document, you know exactly what to look for.

[00:16:25] Dan Fu: And the cool thing about this is that this results in better quality, especially on these recall-intensive tasks. But the other interesting thing is it really takes advantage of the more efficient architectures that we're having here. So one of the other, I think, influential ideas in this line of work is: if you change the fundamental compute capabilities of your model and the way that it scales, you can actually start to query it at test time differently.

[00:16:51] Idea 4: Test Time Compute

[00:16:51] Dan Fu: And this actually, of course, goes back to those slides on test time compute. So while everybody's looking at, say, test time compute for big Transformer models, [00:17:00] I think a potentially really interesting research question is: how can you take those ideas, and how do they change with this new next generation of models?

[00:17:09] Dan Fu: So I'll just briefly summarize what some of those key ideas were, and then show you briefly kind of what the state of the art is today. So the four key ideas are: instead of just doing a simple linear attention approximation, take ideas that we know from other fields, like signal processing, and do a more principled approach to your modeling of the sequence.

[00:17:32] Idea 2: Hardware & Kernel Support

[00:17:32] Dan Fu: Another key idea throughout all these lines of work is you really want hardware and kernel support from day one.
So even if your model is theoretically more efficient, if somebody goes and runs it and it's two times slower, one of the things that we've learned is that if you're in that situation, it's just gonna be dead on arrival.

[00:17:49] Dan Fu: So you want to be designing your architectures with that in mind. One of the key machine learning ideas that has been important for the quality is just making sure that you encode different ways that you can [00:18:00] select from your hidden state, and really focus on that as a key decider of quality. And finally, I think one of the emerging new things for this line of work, and something that's quite interesting, is: what are the right test time paradigms for these models?

[00:18:15] Dan Fu: How do they change relative to what you might do for a standard Transformer? I'll briefly end this section. So I've labeled this slide "where we are yesterday," because Eugene is going to talk about some new models that he released literally this morning. But as of yesterday, some of the really cool results out of these efficient alternative models were: AI21 trained this hybrid MoE called Jamba.

[00:18:40] Dan Fu: That is currently the state of the art for these non-Transformer architectures. NVIDIA and MIT put out this new diffusion model called SANA recently; one of their key observations is that you can take a standard diffusion transformer model, replace the layers with linear [00:19:00] attention, and then that lets you scale to much larger images, much larger sequences, more efficiently.

[00:19:07] Dan Fu: And one thing that I don't think anybody would have called a few years ago is that one of those gated SSMs, gated state space models, ended up on the cover of Science, because a great group of folks went and trained some DNA models.
So that's Michael Poli and Eric Nguyen from Stanford and the Arc Institute.

[00:19:26] Dan Fu: So we're really at an exciting time in 2024, where these non-Transformer, post-Transformer architectures are showing promise across a wide range of modalities, of applications, and of tasks. And with that, I'll pass it on to Eugene, who can tell you a little bit about the latest and greatest with RWKV.

[00:19:49] RWKV vs SSMs

[00:19:49] Eugene Cheah: So, is that useful? Yeah. You're talking to here. Oh, I'm talking to here. Okay. So, yeah, two streams. So, I think one common question that we tend to get asked, right, is what's the difference between [00:20:00] RWKV and state space? So I think one of the key things to really understand, right, the difference between the two groups, is that we are actually more like an open source, random-internet-meets-academia kind of situation.

[00:20:11] Eugene Cheah: Like, most of us never wrote any paper, but we basically looked at RNNs and linear attention when Attention is All You Need came out, and then we decided, like, hey, there is a quadratic scaling problem. Why don't we try fixing that instead? So we ended up developing our own branch, but we end up sharing ideas back and forth.

[00:20:30] Eugene Cheah: And we do all this actively in Discord, GitHub, etc. This was so bad for a few years, right, that basically the average group's h-index was so close to zero that EleutherAI actually came in and helped us write our first paper. Great, now our h-index is three, apparently.
So, but the thing is, like, a lot of these experiments led to results, and essentially we took the same ideas from linear attention [00:21:00] and we built on it.

[00:21:01] Eugene Cheah: So, to take a step back into, like, how does RWKV handle its own attention mechanic and achieve the same goals of, like, O(N) compute, respectively, and in focus of our overall goal to make AI accessible to everyone, regardless of language, nation, or compute. That's our goal. We actually train our models primarily on over a hundred languages, which is another topic altogether.

[00:21:23] Eugene Cheah: And our goal is to train to even 200 languages, to cover all languages in the world. But at the same time, we work on this architecture to lower the compute cost, so that people can run it on Raspberry Pis and on anything. So, how did RWKV break the dependency of LSTM token flow? Because I think to understand the architecture, it's probably easier to understand it from the RNN lens.

[00:21:46] Eugene Cheah: Because that's where we built on, whereas state space kind of, like, tried to start anew and took lessons from that, so there's a little bit of divergence there. And, AKA, this is our version of linear attention. So to take a step back: [00:22:00] all foundation models, be it Transformers or non-Transformers, at a very high level, right, pump in the tokens, I mean text, turn things into embeddings, and go through a lot of layers, generate a lot of states, whether it's the QKV cache or RNN states or RWKV states, and output an embedding. They are not the same thing. And we just take more layers and more embeddings, and somehow that magically works.

[00:22:23] Eugene Cheah: So, if you remember your ancient RNN lessons, the general idea is that you have the embedding information flowing all the way up, and you take that information and you flow it back down, and then you process it as part of your LSTM layers.

[00:22:41] Eugene Cheah: So, this is how it generally works. Karpathy is quoted saying that RNNs are actually unreasonably effective. The problem is, this is not scalable. To start doing work on the second token, you need to wait for the first token. And likewise for the third token and fourth token, yada yada. That is CPU land, not GPU land. So you [00:23:00] can have an H100 and you can't even use 1 percent of it. So that's kind of why RNNs didn't really take off in the direction that we wanted, like, billions of parameters, when it comes to training. So, what did RWKV version 0 do? Boom. We just did the dumbest, lamest thing.

[00:23:13] Eugene Cheah: Sorry, this is the bottleneck for RNNs. We did the dumb thing of removing that line. And it kind of worked. It trained. It sucked, but it kind of worked. Then we were like, hey, no one cared because the loss was crap, but how do we improve that? And that's essentially where we moved forward, because if you see this kind of flow, right, you can actually get your GPU saturated quickly, where it essentially cascades respectively.

[00:23:41] Eugene Cheah: So I'm just waiting for this to loop again. So it's like, once you get your first layer, your token, to be computed finish, you start to cascade your compute all the way until you are, hey, I'm using 100 percent of the GPU.
So we worked on it, and we started going along the principle that, as long as we keep this general architecture [00:24:00] where we can cascade and be highly efficient with our architecture, nothing is sacred in our architecture. And we have done some crazy ideas. In fact, if you ask me to explain some things in the paper, right, officially in the paper, I'll say we had this idea and we wrote it this way. The reality is someone came with the code, we tested it, it worked, and then we rationalized later.

[00:24:24] RWKV Arch

[00:24:24] Eugene Cheah: So the general idea behind RWKV is that we have two major blocks that we do. We call them time mix and channel mix. And time mix generally handles long-term memory states, where essentially we apply the matrix multiplication and SiLU activation functions to process an input embedding into an output embedding. I'm oversimplifying it, because this calculation changed every version, and we have, like, version 7 right now.

[00:24:50] Eugene Cheah: Channel mix is similar to BASED in the sense that it does shorter-term attention, where it just looks at the sister token, or the token before it, because [00:25:00] there's a shift in the token shift matrix. I don't really want to go too much into the papers themselves, because, like, we do have three papers on this.

[00:25:09] Eugene Cheah: Basically, "RWKV: Reinventing RNNs for the Transformer Era"; "Eagle and Finch", the RWKV matrix-valued states paper, which is the updated version 5 and version 6; and "Goldfinch", which is our hybrid model, respectively. We are already writing the paper for version 7, named Goose; our architectures are named after birds.

[00:25:30] Eugene Cheah: And I'm going to cover as well QRWKV, and mama100k, and RWKV, and where did that lead to? Great!
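Eugene's channel-mix description can be made concrete with a heavily simplified sketch: token shift, then project up, squared ReLU, project back down. The exact formulas differ per RWKV version, the receptance gate is omitted, and all shapes here are illustrative.

```python
import numpy as np

def token_shift(x):
    # Shift the sequence down by one position (first token sees zeros),
    # giving each token a view of its immediate predecessor.
    return np.vstack([np.zeros_like(x[:1]), x[:-1]])

def channel_mix(x, W_k, W_v, mu=0.5):
    # Simplified channel-mix shape: blend each token with its shifted
    # neighbor, project up, squared ReLU, project back down. The real
    # RWKV block also applies a receptance (sigmoid) gate, omitted here.
    xk = mu * x + (1.0 - mu) * token_shift(x)
    k = np.maximum(xk @ W_k, 0.0) ** 2
    return k @ W_v

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))                      # (tokens, channels)
y = channel_mix(x, rng.normal(size=(4, 16)), rng.normal(size=(16, 4)))
assert y.shape == (5, 4)
```

The key point is that this block only ever looks one token back, which is why Eugene describes it as the short-term half of the architecture; long-range behavior lives in the time-mix block.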
Because we are all GPU poor, and to be clear, most of this research is done on only a handful of H100s, which one Google researcher told me was, like, his experiment budget for a single researcher.[00:25:48] Eugene Cheah: So, our entire organization has less compute than a single researcher in Google. So one of the things that we explored was how do we convert transformer models instead? Because [00:26:00] someone already paid that billion dollars, a million dollars, on training, so why don't we take advantage of those weights?[00:26:05] Eugene Cheah: And I believe Together AI worked on LoLCATs for the Llama side of things, and we took some ideas from there as well, and we essentially did that for RWKV.[00:26:15] QRWKV6 launch[00:26:15] Eugene Cheah: And that led to QRWKV6, which we just dropped today, a 32B instruct preview model, where we took the Qwen 32B instruct model, froze the feedforward layer, removed the QKV attention layer, and replaced it with RWKV linear layers.[00:26:32] Eugene Cheah: So to be clear, this means we do not have the RWKV channel mix layer, we only have the time mix layer. But once we do that, we train the RWKV layer. Important is that the feedforward layer needs to be frozen, so the new attention can be learned. And then we unfreeze the feedforward layer, and train all the layers together with a custom learning rate schedule, so that they can learn how to work together.[00:26:54] Eugene Cheah: The end result, surprisingly, and, to be honest, to the frustration of the RWKV [00:27:00] MoE team, which ended up releasing their model on the same day, was that, with just a few hours of training on two nodes, we managed to get it to be on par, kind of, with the original Qwen 32B model.
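The staged recipe here (freeze the feedforward layers, swap the attention, train the new layers, then unfreeze and train jointly) can be sketched schematically. Everything below is hypothetical structure standing in for the real Qwen 32B modules:

```python
# Schematic two-stage conversion: plain dicts stand in for real modules.
def convert(model):
    # Stage 1: freeze feedforward layers so only the new linear-attention
    # layers learn, and replace each QKV attention block.
    for ffn in model["ffn"]:
        ffn["trainable"] = False
    model["attn"] = [{"kind": "rwkv_linear", "trainable": True}
                     for _ in model["attn"]]
    # ... train only the new attention layers at this point ...
    # Stage 2: unfreeze everything and train jointly (in the talk, with a
    # custom learning-rate schedule so old and new layers adapt).
    for ffn in model["ffn"]:
        ffn["trainable"] = True
    return model
```

The design point is the ordering: the frozen feedforward layers act as a fixed target the new attention must learn to feed correctly before anything else is allowed to move.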
So, in fact, the first run completely confused us. I was telling Daniel Goldstein, Smerky, who kind of leads most of our research coordination: when you pitched me this idea, you told me at best you'll get the same level of performance.[00:27:26] Eugene Cheah: You didn't tell me the Challenge score and Winogrande score will shoot up. I don't know what's happening there. But it did. MMLU score dropping, that was expected. Because if you think about it, when we were training all the layers, right, we essentially, like, Frankensteined this thing, and we did brain damage to the feedforward network, too, with the new RWKV layers.[00:27:47] Eugene Cheah: But, 76 percent, hey, somehow it's retained, and we can probably further train this. We didn't even spend more than 3 days training this, so there's a lot more that can be done, hence the preview. This brings up [00:28:00] a big question, because we are already now in the process of converting the 70B. This is actually extremely compute efficient to test our attention mechanism.[00:28:10] Eugene Cheah: It's like, it becomes a shortcut. We are already planning to do our version 7 and our hybrid architecture for it. Because we don't need to train from scratch. And we get a really good model out of it. And the other thing that is uncomfortable to say, because we are doing the 70B right now, is that if this scales correctly to 128k context length, and I'm not even talking about a million, the majority of enterprise workload today is just on 70B at under 32k context length.[00:28:41] Eugene Cheah: That means if this works and the benchmark matches it, it means we can replace the vast majority of current AI workload, unless you want super long context. And then, sorry, can someone give us more GPUs? Because we do need the VRAM for super long context, sadly.
So yeah, that's what we are working on, and essentially, [00:29:00] we are excited about this, to just push it further.[00:29:02] Eugene Cheah: And this conversion process, to be clear, I don't think it's going to be exclusive to RWKV. It probably will work for Mamba as well, I don't see why not. And we will probably see more ideas, or more experiments, or more hybrids. Yeah, like, one of the weirdest things that I wanted to say outright, and I confirmed this with the BlackMamba team and the Jamba team, because we did the GoldFinch hybrid model, is that none of us understand why a hard hybrid of a [00:29:28] Eugene Cheah: state space model and a transformer performs better than the baseline of both. It's like, when you train one and then you replace, you expect the same results. That's our pitch. That's our claim. But somehow when we jam both together, it outperforms both. And that's, like, one area of evaluation where we only have four experiments, across four teams; a lot more needs to be done.[00:29:51] Eugene Cheah: But these are things that excite me, essentially, because that is where we can potentially move ahead. Which brings us to what comes next.[00:30:00] What's next[00:30:00] Dan Fu: So, this part is kind of just some, where we'll talk a little bit about stuff that we're excited about. Maybe have some wild speculation on what's coming next.[00:30:12] Dan Fu: And, of course, this is also the part that will be more open to questions. So, a couple things that I'm excited about is continued hardware-model co-design for these models. So one of the things that we've put out recently is this library called ThunderKittens.
It's a CUDA library.[00:30:29] Dan Fu: And one of the things that we found frustrating is every time that we built one of these new architectures, and I'm sure you had the exact same experience, we'd have to go and spend two months in CUDA land, like, writing these new efficient things. And if we decided to change one thing in PyTorch, like, one line of PyTorch code is like a week of CUDA code at least.[00:30:47] Dan Fu: So one of our goals with a library like ThunderKittens: we just broke down what are the key principles, what are the key hardware things, what are the key compute pieces that you get from the hardware. So for example, on [00:31:00] H100 everything really revolves around a warp group matrix multiply operation.[00:31:06] Dan Fu: So you really want your operation to be able to split into relatively small matrix-matrix multiply operations. So, like, multiplying two 64 by 64 matrices, for example. And so if you know that ahead of time when you're designing your model, that probably gives you, you know, some information about how you set the state sizes, how you set the update function.[00:31:27] Dan Fu: So with ThunderKittens we basically built a whole library just around this basic idea that your basic compute primitive should not be a float, it should be a matrix, and everything should just be matrix compute. And we've been using that to try to both re-implement some existing architectures, and also start to design some new ones that are really designed with this core tensor core primitive in mind. Another thing that at least I'm excited about is, over the last four or five years, we've really been looking at language models as the next thing. But if you've been paying [00:32:00] attention to Twitter there's been a bunch of new next generation models that are coming out.[00:32:04] Dan Fu: So there are.
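The tile-first view Dan describes, where the basic compute primitive is a small matrix rather than a float, can be illustrated with a blocked matmul. The toy tile size here keeps the example small; on H100 tensor cores the natural tile is 64 by 64:

```python
import numpy as np

# Blocked matrix multiply: the inner unit of work is a tile-by-tile
# product, mirroring the warp-group matrix multiply primitive.
def tiled_matmul(A, B, tile=2):
    n = A.shape[0]  # assumes square matrices with n divisible by tile
    C = np.zeros((n, n))
    for i in range(0, n, tile):
        for j in range(0, n, tile):
            for k in range(0, n, tile):
                C[i:i+tile, j:j+tile] += (
                    A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile])
    return C
```

Designing a state update so that it decomposes into such tile products is exactly the co-design point being made: if you know the hardware's tile shape up front, you choose state sizes that map onto it.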
So, video generation models that can run in real time, that are controlled by your mouse and your keyboard, that I'm told, if you play with them, only have a few seconds of memory. Can we take that model, can we give it a very long context length, so that you could actually maybe generate an entire game state at a time?[00:32:25] Dan Fu: What does that look like for the model? You're certainly not going to do a giant quadratic attention computation to try to run that. Maybe use some of these new models, or some of these new video generation models that came out. So Sora came out, I don't know, two days ago now. But with super long queue times and super long generation times.[00:32:43] Dan Fu: So that's probably a quadratic attention operation at the bottom of it. What if we could remove that and get the same quality, but a lot faster generation time? Or some of the demos that we saw from Paige earlier today. You know, if I have a super long conversation with my [00:33:00] Gemini bot, what if I wanted it to remember everything that it's seen in the last week?[00:33:06] Dan Fu: I mean, maybe you don't for personal reasons, but what if I did, you know? What does that mean for the architecture? And I think, you know, that's certainly something I'm pretty excited about. I'm sure you're excited about it too. So, I think we were supposed to have some hot takes, but I honestly don't remember what our hot takes were.[00:33:21] Hot Takes - does anyone really need long context?[00:33:21] Eugene Cheah: Yeah, including the next slide. Hot takes, yes, these are our[00:33:25] Dan Fu: hot takes.[00:33:25] Eugene Cheah: I think the big one on Twitter that we saw, that we shared, was the question: is RAG relevant in the future of state based models?[00:33:38] Dan Fu: Let's see, I haven't played too much with RAG. But when I have.
I'll say I found it was a little bit challenging to do research on it, because we had this experience over and over again, where you could have an embedding model of any quality, so you could have a really, really bad embedding model, or you could have a really, really [00:34:00] good one, by any measure of good.[00:34:03] Dan Fu: And for the final RAG application, it kind of didn't matter. That's what I'll say about RAG while I'm being recorded. I know it doesn't actually answer the question, but[00:34:13] Eugene Cheah: Yeah, so I think a lot of folks are, like, extremely excited about the idea of RWKV or state space potentially having infinite context.[00:34:21] Eugene Cheah: But I think the reality is that when we say infinite context, we just mean a different kind of infinite context, or, as was previously covered, you need to test the model differently. So, think of it more along the lines of the human. Like, I don't remember what I ate for breakfast yesterday.[00:34:37] Eugene Cheah: Yeah, that's the statement that I'll say. And we humans are not quadratic transformers. If we were, if, let's say, we increased our brain size for every second we lived, we would have exploded by the time we were 5 years old or something like that. And I think basically, fundamentally for us, right, regardless of whether it's RWKV, state space, xLSTM, [00:35:00] etc., our general idea is that instead of that expanding state, that increase in computational cost, what if we have a fixed state size?[00:35:08] Eugene Cheah: And information theory dictates that that fixed state size will have a limit. Just how big of a limit is a question. Like, RWKV is running at 40 megabytes for its state. Its future version might run into 400 megabytes.
That is, like, millions of tokens, if you're talking about the mathematical maximum possibility.[00:35:29] Eugene Cheah: It's just that, I guess, we were all more inefficient about it, so maybe we hit 100,000. And that's kind of like the work we are doing, trying to push it and maximize it. And that's where the models will start differing, because it will choose to forget things, it will choose to remember things. And that's why I think that there might be some element of RAG being right, but it may not be the same RAG.[00:35:49] Eugene Cheah: It may be the model learns things, and it's like, hmm, I can't remember that article. Let me do a database search. Just like us humans, when we can't remember the article in the company, we do a search on Notion. [00:36:00][00:36:00] Dan Fu: I think something that would be really interesting is if you could have facts that are, so right now, the one intuition about language models is that all those parameters are around just to store random facts about the world.[00:36:14] Dan Fu: And this intuition comes from the observation that if you take a really small language model, it can do things like talk to you, or it kind of has, like, the style of conversation, it can learn that, but where it will usually fall over compared to a much larger one is it'll just be a lot less factual about things that it knows or that it can do.[00:36:32] Dan Fu: But that points to all those weights that we're spending, all that SGD that we're spending to train these models, just being used to store facts. And we have things like databases that are pretty good at storing facts.
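Eugene's 40-megabyte figure can be turned into a rough back-of-envelope ceiling. The bytes-per-token number below is a loose illustrative assumption, not a measured one:

```python
# Upper bound on how many tokens a fixed-size state could represent,
# assuming ~2 bytes of irreducible information retained per token
# (an illustrative figure, not a claim from the talk).
state_bytes = 40 * 1024 * 1024   # the 40 MB state mentioned above
bytes_per_token = 2
max_tokens = state_bytes // bytes_per_token
```

Under that assumption the ceiling is on the order of twenty million tokens, which matches the "millions of tokens, mathematically" framing; the practical figure is far lower because real states are nowhere near information-theoretically efficient.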
So I think one thing that would be really interesting is if we could actually have some sort of outside data store that a language model can look at, that maybe, you know, has some sort of gradient descent in it, but that would be quite interesting.[00:36:58] Dan Fu: And then maybe you could edit it, delete [00:37:00] facts, you know, change who's president so that it doesn't get lost.[00:37:04] Vibhu: Can we open up Q&A and hot takes for the audience? I have a hot take Q&A. Do these scale? When 405B state space models and RAG exist, no one does long context, who's throwing in 2 million token questions? Hot takes?[00:37:24] Dan Fu: The who's-throwing-in-2-million-token question, I think, is a really good question. So I actually, I was going to offer that as a hot take. I mean, my hot take was going to be that long context doesn't matter. I know I just gave a whole talk about it, but, you know, what's the point of doing research if you can't play both sides?[00:37:40] Dan Fu: But I think for both of us, the reason that we first got into this was just from the first-principled questions of: there's this quadratic thing. Clearly intelligence doesn't need to be quadratic. What is going on? Can we understand it better? You know, since then it's kind of turned into a race, which has [00:38:00] been exciting to watch, like, how much context you can take in.[00:38:03] Dan Fu: But I think it's right. Nobody is actually putting in a two million context prompt into these models. And, you know, if they are, maybe we can go, you know, design a better model to do that particular thing. Yeah, what do you think about that? So you've also been working on this. Do you think long context matters?[00:38:19] Eugene Cheah: So I'm going to burn a bit. How many of you remember the news of Google Gemini supporting 3 million context, right?
Raise your hand.[00:38:28] Vibhu: Yeah, 2 million.[00:38:29] Eugene Cheah: Oh, it's 2 million.[00:38:31] Eugene Cheah: Yeah, how many of you actually tried that? See?[00:38:34] Vibhu: I use it a lot. You? You work for MindsTV. I use it a lot.[00:38:41] Eugene Cheah: So, some people have used it, and I think that's where my opinion starts to differ, because I think the big labs may have a bigger role in this. Because, like, even for RWKV, even when we train long context, the reason why I say VRAM is a problem is that because we need to backprop [00:39:00] against the states, we actually need to maintain the state in between the tokens by the token length.[00:39:05] Eugene Cheah: So that means we need to actually roll out the whole 1 million context if we are actually training 1 million. Which is the same for transformers, actually, but it just means we don't magically reduce the VRAM consumption in the training time space. So that is one of the VRAM bottlenecks, and I'm neither OpenAI nor Google, so donate GPUs if you have too many of them.[00:39:27] Eugene Cheah: But then, putting it back to another paradigm, right, is that I think o1-style reasoning might be actually pushing that direction downwards. In my opinion, this is my partial hot take: let's say you have a super big model, and let's say you have a 70B model that may take double the tokens, but gets the same result.[00:39:51] Eugene Cheah: Strictly speaking, a 70B, and this is even for transformer or non-transformer, right, will take less resources than that 400B [00:40:00] model, even if it did double the amount of thinking.
And if that's the case, and we are still all trying to figure this out, maybe the direction for us is really getting the sub-200B to be as fast and efficient as possible.[00:40:11] Eugene Cheah: With a very efficient architecture that some folks happen to be working on, to just reason it out over larger and larger context.[00:40:20] Question: Yeah. One thing I'm super interested in is models that can watch forever. Obviously you cannot train something on infinite context length. How are y'all thinking about that, where you run on a much longer context length than is possible to train on?[00:40:38] Dan Fu: Yeah, it's a great question. So I think you guys probably had tweets along these lines, too. When we first started doing these things, because these are all recurrent models, in theory you could just run it forever. You could just run it forever. And at the very least it won't, like, error out or crash on you.[00:40:57] Dan Fu: There's another question of whether it can actually [00:41:00] use what it's seen in that infinite context. And I think there, one place where the research on architectures ran faster than another line of research is actually the benchmarks for long context. So you turn it on forever. You want to do everything or watch everything.[00:41:16] Dan Fu: What is it that you actually wanted to do? Can we actually build some benchmarks for that? Then measure what's happening. And then ask the question: can the models do it? Is there something else that they need? Yeah, I think that if I were to turn back the clock to 2022, that's probably one of the things I would have done differently, which would have been to actually get some long context benchmarks out at the same time as we started pushing context length on all these models.[00:41:41] Eugene Cheah: I will also say the use case. So, like, I think we both agree that there's no infinite memory and the model needs to be able to learn and decide.
I think what we have observed, and I think this also fits the state space models, is that one of the key advantages of this alternate attention mechanic that is not based on token position is that the model doesn't suddenly become crazy when you go past the [00:42:00] 8k training context length, or a million context length.[00:42:03] Eugene Cheah: It's actually still stable. It's still able to run, it's still able to rationalize. It just starts forgetting things. But some of these things are still there in latent memory. Some of these things are still somewhat there. That's the whole point of why reading twice works. Things like that. And one of the biggest pushes in this direction is that I think both state space and RWKV have separate papers by other researchers where they use this architecture for time series data.[00:42:26] Eugene Cheah: Weather modeling. So, you are not asking what was the weather five days ago. You're asking what's the weather tomorrow, based on the infinite length that, as long as this Earth and the computer keep running, keeps growing. And they found that it is better than existing transformer or existing architectures in modeling this weather data.[00:42:47] Eugene Cheah: Controlled for the param size and stuff. I'm quite sure there are people with larger models. So there are things that, in this case, right, have future applications, if your question is just what's next and not what was 10 years ago.[00:42:59] Dan Fu: Thanks so [00:43:00] much for having us. Get full access to Latent Space at www.latent.space/subscribe

Mi Mejor Versión
#166 Lo que se queda en el 2024 [borrar 500 posts]

Mi Mejor Versión

Play Episode Listen Later Dec 23, 2024 91:10


In this episode we talk about how to apply the Pareto Principle to maximize your productivity. We discuss the difference between doing a lot and doing the RIGHT things to see the greatest results. I walk you step by step through why we decided to delete more than 500 pieces of content from the Isa García Corp account and how to find the most effective strategies for your digital business. To unlock a $111 USD coupon for the Academia de Empresarias Digitales, click here (www.isagarcia.online/academia). To read the article Maker Schedule vs. Manager Schedule by Paul Graham, click here. To read the book 10x Is Easier Than 2x by Dan Sullivan and Dr. Benjamin Hardy, click here.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break, bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all, all our LS supporters who helped fund the gorgeous venue and A/V production! For NeurIPS last year we did our standard conference podcast coverage interviewing selected papers (which we have now also done for ICLR and ICML); however, we felt that we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap the 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in person miniconference, at NeurIPS 2024 in Vancouver. The single most requested domain was computer vision, and we could think of no one better to help us recap 2024 than our friends at Roboflow, who were one of our earliest guests in 2023 and had one of this year's top episodes in 2024 again. Roboflow has since raised a $40m Series B! Links: Their slides are here. All the trends and papers they picked:* Isaac Robinson* Sora (see our Video Diffusion pod) - extending diffusion from images to video* SAM 2: Segment Anything in Images and Videos (see our SAM2 pod) - extending prompted masks to full video object segmentation* DETR Dominance: DETRs show Pareto improvement over YOLOs* RT-DETR: DETRs Beat YOLOs on Real-time Object Detection* LW-DETR: A Transformer Replacement to YOLO for Real-Time Detection* D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement* Peter Robicheaux* MMVP (Eyes Wide Shut?
Exploring the Visual Shortcomings of Multimodal LLMs)* Florence 2 (Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks)* PaliGemma / PaliGemma 2* PaliGemma: A versatile 3B VLM for transfer* PaliGemma 2: A Family of Versatile VLMs for Transfer* AIMv2 (Multimodal Autoregressive Pre-training of Large Vision Encoders)* Vik Korrapati - Moondream. Full Talk on YouTube. Want more content like this? Like and subscribe to stay updated on our latest talks, interviews, and podcasts. Transcript/Timestamps[00:00:00] Intro[00:00:05] AI Charlie: Welcome to Latent Space Live, our first mini conference held at NeurIPS 2024 in Vancouver. This is Charlie, your AI co-host. When we were thinking of ways to add value to our academic conference coverage, we realized that there was a lack of good talks just recapping the best of 2024, going domain by domain.[00:00:36] AI Charlie: We sent out a survey to the over 900 of you who told us what you wanted, and then invited the best speakers in the Latent Space Network to cover each field. 200 of you joined us in person throughout the day, with over 2,200 watching live online. Our second featured keynote is The Best of Vision 2024, with Peter Robicheaux and Isaac [00:01:00] Robinson of Roboflow, with a special appearance from Vik Korrapati of Moondream.[00:01:05] AI Charlie: When we did a poll of our attendees, the highest interest domain of the year was vision. And so our first port of call was our friends at Roboflow. Joseph Nelson helped us kickstart our vision coverage in episode 7 last year, and this year came back as a guest host with Nikhila Ravi of Meta to cover Segment Anything 2.[00:01:25] AI Charlie: Roboflow have consistently been the leaders in open source vision models and tooling, with their supervision library recently eclipsing PyTorch's Vision library, and Roboflow Universe hosting hundreds of thousands of open source vision datasets and models.
They have since announced a 40 million Series B led by Google Ventures.[00:01:46] AI Charlie: Woohoo.[00:01:48] Isaac's picks[00:01:48] Isaac Robinson: Hi, we're Isaac and Peter from Roboflow, and we're going to talk about the best papers of 2024 in computer vision. So, for us, we defined best as what made [00:02:00] the biggest shifts in the space. And to determine that, we looked at what are some major trends that happened and what papers most contributed to those trends.[00:02:09] Isaac Robinson: So I'm going to talk about a couple trends, Peter's going to talk about a trend, and then we're going to hand it off to Moondream. So, the trends that I'm interested in talking about are a major transition from models that run on a per-image basis to models that run using the same basic ideas on video, and then also how DETRs are starting to take over the real time object detection scene from the YOLOs, which have been dominant for years.[00:02:37] Sora, OpenSora and Video Vision vs Generation[00:02:37] Isaac Robinson: So as a highlight we're going to talk about Sora, which from my perspective is the biggest paper of 2024, even though it came out in February. Is the what?[00:02:48] Isaac Robinson: Yeah. Yeah. So, Sora is just a blog post. So I'm going to fill it in with details from replication efforts, including OpenSora and related work, such as Stable [00:03:00] Video Diffusion. And then we're also going to talk about SAM2, which applies the SAM strategy to video. And then the improvements in 2024 to DETRs that are making them a Pareto improvement over YOLO-based models.[00:03:15] Isaac Robinson: So to start this off, we're going to talk about the state of the art of video generation at the end of 2023: MagViT. MagViT is a discrete-token video tokenizer akin to VQGAN, but applied to video sequences.
And it actually outperforms state of the art handcrafted video compression frameworks[00:03:38] Isaac Robinson: in terms of the bit rate versus human preference for quality, and videos generated by autoregressing on these discrete tokens generate some pretty nice stuff, but up to, like, five seconds length and, you know, not super detailed. And then suddenly a few months later we have this, which when I saw it, it was totally mind blowing to me.[00:03:59] Isaac Robinson: 1080p, [00:04:00] a whole minute long. We've got light reflecting in puddles. That's reflective. Reminds me of those RTX demonstrations for next generation video games, such as Cyberpunk, but with better graphics. You can see some issues in the background if you look closely, but as with a lot of these models, the issues tend to be things that people aren't going to pay attention to unless they're looking for them.[00:04:24] Isaac Robinson: In the same way that, like, six fingers on a hand, you're not going to notice the giveaway unless you're looking for it. So yeah, as we said, Sora does not have a paper. So we're going to be filling it in with context from the rest of the computer vision scene attempting to replicate these efforts. So the first step: you have an LLM caption a huge amount of videos.[00:04:48] Isaac Robinson: This is a trick that they introduced in DALL-E 3, where they train an image captioning model to just generate very high quality captions for a huge corpus and then train a diffusion model [00:05:00] on that. The Sora replication efforts also show a bunch of other steps that are necessary for good video generation,[00:05:09] Isaac Robinson: including filtering by aesthetic score and filtering by making sure the videos have enough motion, so the generator is not just learning to generate static frames. So then we encode our video into a series of space-time latents.
Once again, Sora is very sparse in details.[00:05:29] Isaac Robinson: So among the replication-related works, OpenSora actually uses a MagViT-v2 itself to do this, but swapping out the discretization step with a classic VAE autoencoder framework. They show that there's a lot of benefit from getting the temporal compression, which makes a lot of sense, as sequential frames in videos have mostly redundant information.[00:05:53] Isaac Robinson: So by compressing in the temporal space, you allow the latent to hold [00:06:00] a lot more semantic information while avoiding that duplication. So, we've got our space-time latents, possibly via some 3D VAE, presumably a MagViT-v2, and then you throw it into a diffusion transformer.[00:06:19] Isaac Robinson: So I think it's personally interesting to note that OpenSora is using a MagViT-v2, which originally used an autoregressive transformer decoder to model the latent space, but is now using a diffusion transformer. So it's still a transformer happening. Just the question is, is it[00:06:37] Isaac Robinson: parameterizing the stochastic differential equation, or parameterizing a conditional distribution via autoregression? It's also worth noting that most diffusion models today, the very high performance ones, are switching away from the classic DDPM (denoising diffusion probabilistic modeling) framework to rectified flows.[00:06:57] Isaac Robinson: Rectified flows have a very interesting property: as [00:07:00] they converge, they actually get closer to being able to be sampled with a single step, which means that in practice you can actually generate high quality samples much faster. A major problem of DDPM and related models for the past four years is just that they require many, many steps to generate high quality samples.[00:07:22] Isaac Robinson: So, naturally, the third step is throwing lots of compute at the problem.
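The single-step property of rectified flows can be seen in a toy one-dimensional version: if the learned velocity field is constant along each path (a perfectly straightened flow), one coarse Euler step from the starting point lands on the same sample as many small steps. Everything below is synthetic, not a real trained flow:

```python
# Euler integration of dx/dt = v(x) from t=0 to t=1.
def euler_sample(x0, velocity, steps):
    x, dt = x0, 1.0 / steps
    for _ in range(steps):
        x = x + dt * velocity(x)
    return x

def constant_v(x):
    return 2.0  # a perfectly straight (rectified) path: constant velocity
```

With curved paths, as in DDPM-style diffusion, coarse steps accumulate integration error, which is why those models need many denoising steps to reach high quality.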
So I never figured out how to get this video to loop, but we see very little compute, medium compute, lots of compute. This is so interesting because the original diffusion transformer paper from Facebook actually showed that, in fact, the specific hyperparameters of the transformer didn't really matter that much.[00:07:48] Isaac Robinson: What mattered was that you were just increasing the amount of compute that the model had. So, I love how in the, once again, little blog post, they don't even talk about [00:08:00] the specific hyperparameters. They say, we're using a diffusion transformer, and we're just throwing more compute at it, and this is what happens.[00:08:08] Isaac Robinson: OpenSora shows similar results. The primary issue I think here is that no one else has the 32x compute budget. So we end up in the middle of the domain in most of the related work, which is still super, super cool. It's just a little disappointing considering the context. So I think this is a beautiful extension of the framework that was introduced in '22 and '23 for very high quality per-image generation, and then extending that to videos.[00:08:39] Isaac Robinson: It's awesome. And it's GA as of Monday, except no one can seem to get access to it because they keep shutting down the login.[00:08:46] SAM and SAM2[00:08:46] Isaac Robinson: The next paper I wanted to talk about is SAM. So we at Roboflow allow users to label data and train models on that data. SAM, for us, has saved our users 75 years of [00:09:00] labeling time.[00:09:00] Isaac Robinson: We are, to the best of my knowledge, the largest SAM API that exists.
SAM also allows us to have our users train just pure bounding-box regression models and use those to generate high quality masks, which has the great side effect of requiring less training data to reach a meaningful convergence.[00:09:20] Isaac Robinson: Most people are data-limited in the real world, so anything that requires less data to get to a useful result is super useful. Many of our users run their per-frame object detectors on every frame in a video. And so SAM 2 falls into this category of taking something that really, really works and applying it to video, which has the wonderful benefit of being plug-and-play with many of our users' use cases.[00:09:53] Isaac Robinson: We're still building out a sufficiently mature pipeline to take advantage of that, but it's in the works. [00:10:00] So here we've got a great example: we can click on cells and then follow them. You even notice the cell goes away and comes back, and we can still keep track of it, which is very challenging for existing object trackers.[00:10:14] Isaac Robinson: High level overview of how SAM 2 works: there's a simple pipeline here where we can provide some type of prompt, and it fills out the rest of the likely masks for that object throughout the rest of the video. So here we're giving a bounding box in the first frame, a set of positive/negative points, or even just a simple mask.[00:10:36] Isaac Robinson: I'm going to assume people are somewhat familiar with SAM, so I'll just give a high level overview of how SAM works. You have an image encoder that runs on every frame.
SAM 2 can be used on a single image, in which case the only difference between SAM 2 and SAM is the image encoder: SAM used a standard ViT, [00:11:00] while SAM 2 replaced that with a Hiera hierarchical encoder, which gets approximately the same results but leads to six times faster inference, which is[00:11:11] Isaac Robinson: excellent, especially considering how a trend of '23 was replacing the ViT with more efficient backbones. In the case where you're doing video segmentation, the difference is that you actually create a memory bank, and you cross-attend the features from the image encoder against the memory bank.[00:11:31] Isaac Robinson: So the feature set that is created is essentially, well, I'll go more into it in a couple of slides, but we take the features from the past couple frames, plus a set of object pointers and the set of prompts, and use that to generate our new masks. We then fuse the new masks for this frame with the[00:11:57] Isaac Robinson: image features and add that to the memory bank. [00:12:00] I'll say more in a minute. Just like SAM, SAM 2 actually uses a data engine to create its dataset: they assembled a huge amount of reference data, used people to label some of it, trained the model, used the model to label more of it, and asked people to refine the predictions of the model.[00:12:20] Isaac Robinson: And then ultimately the dataset is just created from the engine's final output of the model on the reference data. This paradigm is so interesting to me because it unifies a model and a dataset in a way that is very unique. It seems unlikely that another model could come in and have such a tight fit.[00:12:37] Isaac Robinson: So, brief overview of how the memory bank works. The paper did not have a great visual, so I'm going to fill in a bit more. We take the last couple of frames from our video.
And we attend to the last couple of frames from our video, along with the set of prompts that we provided (they could come from the future, [00:13:00] they could come from anywhere in the video) as well as reference object pointers saying, by the way, here's what we've found so far. Attending to the last few frames has the interesting benefit of allowing it to model complex object motion, and[00:13:18] Isaac Robinson: by limiting the number of frames that you attend to, you manage to keep the model running in real time. This is such an interesting topic for me, because one would assume that attending to all of the frames, or having some type of summarization of all the frames, is super essential for high performance.[00:13:35] Isaac Robinson: But we see in their later ablation that that actually is not the case. So here, just to make sure that there is some benchmarking happening, we compared to some of the stuff that came out prior, and indeed the SAM 2 strategy does improve on the state of the art. This ablation deep in their appendices was super interesting to me.[00:13:59] Isaac Robinson: [00:14:00] We see in section C the number of memories. One would assume that increasing the count of memories would meaningfully increase performance. And we see that it has some impact, but not the type that you'd expect, and that it meaningfully decreases speed, which justifies, in my mind, just having this FIFO queue of memories.[00:14:20] Isaac Robinson: Although in the future, I'm super interested to see a more dedicated summarization of all of the prior video, not just a stacking of the last frames.
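The FIFO memory bank described above can be sketched in a few lines. This is an illustrative toy, not Meta's SAM 2 API: the class name, shapes, and `conditioning` helper are all assumptions, but it shows the core idea of keeping only the last few frames' features and concatenating them with the prompts for cross-attention.

```python
from collections import deque
import numpy as np

class MemoryBank:
    """FIFO bank of recent frame features, in the spirit of the SAM 2-style
    design described above (names and shapes here are illustrative)."""
    def __init__(self, max_frames=6):
        self.frames = deque(maxlen=max_frames)   # old frames fall off the front

    def add(self, frame_feats):
        self.frames.append(frame_feats)

    def conditioning(self, prompt_feats):
        # What the current frame cross-attends to: recent memories + prompts.
        return np.concatenate(list(self.frames) + [prompt_feats], axis=0)

bank = MemoryBank(max_frames=3)
for t in range(5):                               # 5 frames in, bank keeps last 3
    bank.add(np.full((2, 4), float(t)))          # toy (tokens, dim) features
ctx = bank.conditioning(np.zeros((1, 4)))
assert len(bank.frames) == 3                     # frames 0 and 1 were evicted
assert ctx.shape == (7, 4)                       # 3 frames * 2 tokens + 1 prompt
```

The `deque(maxlen=...)` gives exactly the bounded-memory behavior the ablation justifies: constant-time eviction and a fixed attention cost per frame.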
So that's another extension of beautiful per-frame work into the video domain.[00:14:42] Realtime detection: DETRs > YOLO[00:14:42] Isaac Robinson: The next trend I'm interested in talking about is: at Roboflow, we're super interested in training real-time object detectors.[00:14:50] Isaac Robinson: Those are our bread and butter, and so we're doing a lot to keep track of what is actually happening in that space. We are finally starting to see something change. So, [00:15:00] for years, YOLOs have been the dominant way of doing real-time object detection, and we can see here that they've essentially stagnated.[00:15:08] Isaac Robinson: The performance between YOLOv10 and v11 is not meaningfully different, at least in this type of high-level chart. And even across the last couple series, there's not a major change. So YOLOs have hit a plateau; DETRs have not. We can look here and see the YOLO series has this plateau, and then RT-DETR, LW-DETR, and D-FINE have meaningfully changed that plateau, so that in fact the best D-FINE models are +4.6 AP on COCO at the same latency.[00:15:43] Isaac Robinson: So, three major steps to accomplish this. The first, RT-DETR, is technically a 2023 preprint but was published officially in '24, so I'm going to include it; I hope that's okay. [00:16:00] RT-DETR showed that we could actually match or out-speed YOLOs.[00:16:04] Isaac Robinson: Then LW-DETR showed that pre-training is hugely effective on DETRs and much less so on YOLOs. And then D-FINE added the types of bells and whistles that we expect from this arena. The major improvement that RT-DETR showed was taking the multi-scale features that DETRs typically pass into their encoder and decoupling them into a much more efficient transformer encoder.[00:16:30] Isaac Robinson: The transformer is, of course, quadratic complexity.
So decreasing the amount of stuff that you pass in at once is super helpful for increasing your throughput. That change basically brought us up to YOLO speed, and then they do a hardcore analysis on benchmarking YOLOs, including the NMS step.[00:16:54] Isaac Robinson: Once you include NMS in the latency calculation, you see that in fact these DETRs [00:17:00] are outperforming, at least at this time, the YOLOs that existed. Then LW-DETR goes in and suggests that in fact the huge boost here is from pre-training. So, this is the D-FINE line, and this is the D-FINE line without pre-training.[00:17:19] Isaac Robinson: It's within range, it's still an improvement over the YOLOs, but the really huge boost comes from the benefit of pre-training. When YOLOX came out in 2021, they showed that they got much better results by having a much, much longer training time, but they found that when they did that, they actually did not benefit from pre-training.[00:17:40] Isaac Robinson: So, you see in this graph from LW-DETR that, in fact, YOLOs do have a real benefit from pre-training, but it goes away as we increase the training time. The DETRs, meanwhile, converge much faster: LW-DETR trains for only 50 epochs, RT-DETR for 60 epochs. So one could assume that, in fact, [00:18:00] the entire extra gain from pre-training is that you're not destroying your original weights[00:18:06] Isaac Robinson: by relying on this long training cycle. And then LW-DETR also shows superior performance on our favorite dataset, Roboflow 100, which means that they do better in the real world, not just on COCO. Then D-FINE throws all the bells and whistles at it. YOLO models tend to have a lot of very specific, complicated loss functions.[00:18:26] Isaac Robinson: D-FINE brings that into the DETR world and shows consistent improvement on a variety of DETR-based frameworks.
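The NMS step that the benchmarking above insists on counting is itself a real piece of compute. Here is a minimal greedy NMS in NumPy (an illustrative sketch, not any particular framework's implementation) to make concrete what YOLO-style pipelines run after the network and DETRs skip:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over [x1, y1, x2, y2] boxes.
    This post-processing belongs in any end-to-end latency measurement."""
    order = scores.argsort()[::-1]          # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection-over-union of box i with the remaining boxes.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]     # drop heavily overlapping boxes
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30.]])
scores = np.array([0.9, 0.8, 0.7])
assert nms(boxes, scores) == [0, 2]   # the overlapping lower-score box is suppressed
```

Timing the detector with and without a call like this is exactly the apples-to-apples comparison the RT-DETR analysis argues for.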
So bring these all together and we see that suddenly we have almost 60 AP on COCO while running in something like 10 milliseconds. Huge, huge stuff. We're spending a lot of time trying to build models that work better with less data, and DETRs are clearly becoming a promising step in that direction.[00:18:56] Isaac Robinson: What we're interested in seeing [00:19:00] from the DETRs in this trend next is: Co-DETR and the models that are currently sitting on top of the leaderboard for large-scale inference scale really well as you switch out the backbone. We're very interested in seeing, and having people publish a paper, potentially us, on what happens if you take these real-time ones and then throw a Swin at it.[00:19:23] Isaac Robinson: Like, do we have a Pareto curve that extends from the real-time domain all the way up to the super, super slow but high-performance domain? We also want to see people benchmarking on RF100 more, because that type of data is what's relevant for most users. And we want to see more pre-training, because pre-training works now.[00:19:43] Isaac Robinson: It's super cool.[00:19:48] Peter's Picks[00:19:48] Peter Robicheaux: Alright, so, yeah, in that theme, one of the big things that we're focusing on is how we get more out of our pre-trained models. And one of the lenses to look at this through is sort of [00:20:00] this new requirement for fine-grained visual details in the representations that are extracted from your foundation model.[00:20:08] Peter Robicheaux: So as a hook for this, oh yeah, this is just a list of all the papers that I'm going to mention; I just want to make sure I cite the actual paper so you can find it later.[00:20:18] MMVP (Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs)[00:20:18] Peter Robicheaux: Yeah, so the big hook here is that I make the claim that LLMs can't see. If you go to Claude or ChatGPT and you ask it to look at this watch and tell me what time it is, it fails, right?[00:20:34] Peter Robicheaux: And you could say, this is a very classic test of an LLM, but maybe this image is too zoomed out, and it'll do better if we increase the resolution, and it'll have an easier time finding these fine-grained features, like where the watch hands are pointing.[00:20:53] Peter Robicheaux: No dice. And you can say, okay, well, maybe the model just doesn't know how to tell time from knowing the position of the hands. But if you actually prompt [00:21:00] it textually, it's very easy for it to tell the time. So this to me is proof that these LLMs literally cannot see the position of the watch hands; they can't see those details.[00:21:08] Peter Robicheaux: So the question is sort of why. And for you Anthropic heads out there, Claude fails too. So my first pick for best paper of 2024 in vision is this MMVP paper, which tries to investigate why LLMs don't have the ability to see fine-grained details. For instance, it comes up with a lot of images like this, where you ask a question that seems very visually apparent to us, like, which way is the school bus facing?[00:21:32] Peter Robicheaux: And it gets it wrong, and then, of course, it makes up details to support its wrong claim. And the process by which it finds these images is sort of contained in its hypothesis for why it can't see these details.
So it hypothesizes that models that have been initialized with CLIP as their vision encoder don't have fine-grained details in the features extracted using CLIP, because CLIP sort of doesn't need to find these fine-grained [00:22:00] details to do its job correctly, which is just to match captions and images, right?[00:22:04] Peter Robicheaux: And at a high level, even if ChatGPT's vision encoder wasn't initialized with CLIP and wasn't trained contrastively at all, still, in order to do its job of captioning the image, it could do a pretty good job without actually finding the exact position of all the objects and visual features in the image, right?[00:22:21] Peter Robicheaux: So this paper finds a set of difficult images for these types of models. The way it does it is it looks for embeddings that are similar in CLIP space but far apart in DINOv2 space. DINOv2 is a foundation model that was trained self-supervised purely on image data. It uses a somewhat complex student-teacher framework, but essentially it patches out or crops certain areas of the image and tries to make sure that those have consistent representations, which is a way for it to learn very fine-grained visual features.[00:22:54] Peter Robicheaux: And so if you take things that are very close in CLIP space and very far in DINOv2 space, you get [00:23:00] pairs of images that are hard for ChatGPT and other big language models to distinguish. So, if you then ask it questions about this image, well, as you can see from this chart, it's going to answer the same way for both images, right?[00:23:14] Peter Robicheaux: Because from the perspective of the vision encoder, they're the same image. And so if you ask a question like, how many eyes does this animal have? It answers the same for both.
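The pair-mining rule just described, near in CLIP space but far in DINOv2 space, is easy to sketch. The thresholds and toy embeddings below are illustrative assumptions, not the paper's actual values:

```python
import numpy as np

def clip_blind_pairs(clip_emb, dino_emb, clip_min=0.95, dino_max=0.6):
    """Return index pairs whose embeddings are near-identical in CLIP space
    but far apart in DINOv2 space (the MMVP-style selection rule described
    above; thresholds here are illustrative)."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pairs = []
    n = len(clip_emb)
    for i in range(n):
        for j in range(i + 1, n):
            if cos(clip_emb[i], clip_emb[j]) > clip_min and \
               cos(dino_emb[i], dino_emb[j]) < dino_max:
                pairs.append((i, j))
    return pairs

# Toy embeddings: images 0 and 1 look identical to "CLIP" but different to "DINOv2".
clip_emb = np.array([[1.0, 0.0], [0.999, 0.01], [0.0, 1.0]])
dino_emb = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
assert clip_blind_pairs(clip_emb, dino_emb) == [(0, 1)]
```

Images flagged this way are exactly the ones a CLIP-initialized vision-language model will answer identically about, which is what the benchmark exploits.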
And all these other models, including LLaVA, do the same thing, right? And so this is the benchmark that they create: finding CLIP-blind pairs, which are pairs of images that are similar in CLIP space, and creating a dataset of multiple-choice questions based off of those.[00:23:39] Peter Robicheaux: And so how do these models do? Well, really bad. ChatGPT and Gemini do a little bit better than random guessing, but, like, half the performance of humans, who find these problems to be very easy. LLaVA is, interestingly, extremely negatively correlated with this dataset. It does much, much, much worse [00:24:00] than random guessing, which means that this process has done a very good job of identifying hard images for LLaVA specifically.[00:24:07] Peter Robicheaux: And that's because LLaVA is basically not trained for very long and is initialized from CLIP, so you would expect it to do poorly on this dataset. So, one of the proposed solutions that this paper attempts is basically saying: okay, well, if CLIP features aren't enough, what if we train the visual encoder of the language model also on DINO features?[00:24:27] Peter Robicheaux: It proposes two different ways of doing this. One, additively, which is basically interpolating between the two features; and one, interleaving, which is just training on the combination of both features. So there's this really interesting trend when you do the additive mixture of features.[00:24:45] Peter Robicheaux: So zero is all CLIP features and one is all DINOv2 features. I think it's helpful to look at the rightmost chart first, which shows that as you increase the fraction of DINOv2 features, your model does worse and worse and [00:25:00] worse on the actual language modeling task.
And that's because DINOv2 features were trained in a completely self-supervised manner, completely in image space.[00:25:08] Peter Robicheaux: It knows nothing about text. These features aren't really compatible with these text models, and so you can train an adapter all you want, but it seems that they're in such an alien language that it's a very hard optimization for these models to solve. And that kind of supports what's happening on the left, which is that, yeah, it gets better at answering these questions as you include more DINOv2 features, up to a point; but then, when you oversaturate, it completely loses its ability to[00:25:36] Peter Robicheaux: answer language and do language tasks. You can also see with the interleaving: they essentially double the number of tokens going into these models and just train on both, and it still doesn't really solve the MMVP task. It gets LLaVA-1.5 above random guessing by a little bit, but it's still not close to ChatGPT or, you know, any human performance, obviously.[00:25:59] Peter Robicheaux: [00:26:00] So clearly this proposed solution of just using DINOv2 features directly isn't going to work.
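The additive mixture experiment just discussed is a straightforward interpolation between the two encoders' features. A schematic sketch, not the paper's exact code:

```python
import numpy as np

def additive_mix(clip_feats, dino_feats, alpha):
    """Interpolate between two vision encoders' features, as in the additive
    mixture-of-features experiment described above: alpha = 0 is pure CLIP,
    alpha = 1 is pure DINOv2."""
    assert clip_feats.shape == dino_feats.shape and 0.0 <= alpha <= 1.0
    return (1.0 - alpha) * clip_feats + alpha * dino_feats

clip_f = np.ones((4, 8))           # toy (tokens, dim) features
dino_f = np.zeros((4, 8))
assert np.allclose(additive_mix(clip_f, dino_f, 0.0), clip_f)    # all CLIP
assert np.allclose(additive_mix(clip_f, dino_f, 1.0), dino_f)    # all DINOv2
assert np.allclose(additive_mix(clip_f, dino_f, 0.25), 0.75 * clip_f)
```

Sweeping `alpha` from 0 to 1 is exactly the x-axis of the chart described above: visual grounding improves for a while and then language ability collapses.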
And basically what that means is that, as a vision foundation model, DINOv2 is going to be insufficient for language tasks, right?[00:26:14] Florence 2 (Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks)[00:26:14] Peter Robicheaux: So my next pick for best paper of 2024 would be Florence-2, which tries to solve this problem by incorporating not only this dimension of spatial hierarchy, which is to say pixel-level understanding, but also making sure to include what they call semantic granularity. The goal is basically to have features that are sufficient for finding objects in the image, so they have enough pixel information, but that can also be talked about and reasoned about.[00:26:44] Peter Robicheaux: And that's on the semantic granularity axis. So here's an example of basically three different paradigms of labeling that they do. They create a big dataset. One is text, which is just captioning, and you would expect a model that's trained [00:27:00] only on captioning to have performance similar to ChatGPT: not to have spatial hierarchy, not to have features that are meaningful at the pixel level.[00:27:08] Peter Robicheaux: So they add another type, which is region-text pairs, which is essentially either classifying a region, or doing object detection or instance segmentation on that region, or captioning that region. And then they have text-phrase-region annotations, which are essentially triples. Not only do you have a region that you've described, you also find its place in a descriptive paragraph about the image, which is basically trying to introduce even more semantic understanding of these regions.[00:27:39] Peter Robicheaux: So, for instance, if you're saying "a woman riding on the road", you have to know what a woman is and what the road is and that she's on top of it.
And that's basically composing a bunch of objects in this visual space, but also thinking about it semantically, right? The way that they do this is they basically just dump features from a vision encoder [00:28:00] straight into an encoder-decoder transformer.[00:28:03] Peter Robicheaux: And then they train a bunch of different tasks, like object detection and so on, as language tasks. And I think that's one of the big things that we saw in 2024: these vision language models operating on pixel space linguistically. So they introduce a bunch of new tokens to point to locations.[00:28:22] Peter Robicheaux: So how does it actually do? We can see, if you look at the graph on the right, which is using the DINO detection framework, your pre-trained Florence-2 models transfer very, very well. They get 60 percent mAP on COCO, which is approaching state of the art, and they train[00:28:42] Vik Korrapati: with, and they[00:28:43] Peter Robicheaux: train much more efficiently.[00:28:47] Peter Robicheaux: So they converge a lot faster, and both of these things point to the fact that they're actually leveraging their pre-trained weights effectively. So where does it fall short? These models, I forgot to mention, come in 0.2 [00:29:00] billion and 0.7 billion parameter counts, so they're very, very small in terms of being a language model.[00:29:05] Peter Robicheaux: And I think that in this framework, you can see saturation.
So, what this graph is showing is that if you train a Florence-2 model purely on the image-level and region-level annotations, and not including the pixel-level annotations like segmentation, it actually performs better as an object detector.[00:29:25] Peter Robicheaux: And what that means is that it's not able to actually learn all the visual tasks that it's trying to learn, because it doesn't have enough capacity.[00:29:32] PaliGemma / PaliGemma 2[00:29:32] Peter Robicheaux: So I'd like to see this paper explore larger model sizes, which brings us to our next big paper of 2024, or two papers. So PaliGemma came out earlier this year.[00:29:42] Peter Robicheaux: PaliGemma 2 was released, I think, like a week or two ago. Oh, I forgot to mention, you can actually label text datasets on Roboflow and you can train a Florence-2 model, and you can actually train a PaliGemma 2 model on Roboflow, which we got into the platform within, like, 14 hours of release, which I was really excited about.[00:29:59] Peter Robicheaux: So, anyway, [00:30:00] PaliGemma 2. So PaliGemma is essentially doing the same thing, but instead of doing an encoder-decoder, it just dumps everything into a decoder-only transformer model. But it also introduced the concept of location tokens to point to objects in pixel space. PaliGemma uses Gemma as the language encoder, specifically Gemma 2B.[00:30:17] Peter Robicheaux: PaliGemma 2 introduces using multiple different sizes of language encoders. The way that they get around having to do encoder-decoder is they use the concept of prefix loss, which basically means that when it's generating tokens autoregressively, all those tokens in the prefix, which is the image that it's looking at and a description of the task that it's trying to do,[00:30:41] Peter Robicheaux: are attending to each other fully, with full attention. Which means that it's easier for the prefix to color the output of the suffix, and also to just find features easily. So this is [00:31:00] an example of one of the tasks it was trained on: you describe the task in English, you ask it to segment these two classes of objects, and then it finds their locations using these location tokens, and it finds their masks using some encoding of the masks into tokens.[00:31:24] Peter Robicheaux: And, yeah, one of my critiques, I guess, of PaliGemma 1, at least, is that you find that performance saturates as a pre-trained model after only 300 million examples seen. So, what this graph is representing is: each blue dot is performance on some downstream task. And you can see that after seeing 300 million examples, it does about as well on all of the downstream tasks that they tried it on, which was a lot, as it does at 1 billion examples, which to me also kind of suggests a lack of capacity for this model.[00:31:58] Peter Robicheaux: PaliGemma 2, [00:32:00] you can see the results on object detection. These were transferred to COCO. And you can see that this also points to an increase in capacity being helpful to the model: as both the resolution increases and the parameter count of the language model increases, performance increases.[00:32:16] Peter Robicheaux: Resolution makes sense, obviously; it helps to find small objects in the image. But it also makes sense for another reason, which is that it kind of gives the model a thinking register, and it gives it more tokens to process when making its predictions. But yeah, you could say, oh, 43.6, that's not that great; like, Florence-2 got 60. But this is not training a DINO or a DETR head on top of this image encoder. It's doing the raw language modeling task on COCO.
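The prefix-loss attention pattern described above boils down to one mask: full bidirectional attention inside the prefix (image tokens plus task text), causal attention over the generated suffix. A minimal sketch:

```python
import numpy as np

def prefix_lm_mask(prefix_len, total_len):
    """Attention mask for a prefix-LM objective like the one described above.
    1 = may attend, 0 = masked."""
    mask = np.tril(np.ones((total_len, total_len), dtype=int))  # causal base
    mask[:prefix_len, :prefix_len] = 1          # full attention within the prefix
    return mask

m = prefix_lm_mask(prefix_len=3, total_len=5)
assert m[0, 2] == 1        # prefix token 0 sees "later" prefix token 2
assert m[3, 4] == 0        # suffix token 3 cannot see future suffix token 4
assert m[4, 3] == 1        # later suffix tokens see earlier ones
```

Because every suffix position attends to the whole prefix, the image and the task description can "color" every generated token, which is the intuition given above.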
So it doesn't have any of the bells and whistles; it doesn't have any of the fancy losses. It doesn't even have bipartite graph matching or anything like that.[00:32:52] Peter Robicheaux: Okay, the big result, and one of the reasons that I was really excited about this paper, is that they blow everything else away [00:33:00] on MMVP. I mean, 47.3, sure, that's nowhere near human accuracy, which, again, is 94%, but for a 2 billion parameter language model to beat ChatGPT, that's quite the achievement.[00:33:12] Peter Robicheaux: And that sort of brings us to our final pick for paper of the year, which is AIMv2. So, AIMv2 sort of says: okay, maybe coming up with all these specific annotations to find features with high fidelity in pixel space isn't actually necessary, and we can come up with an even simpler, more beautiful idea for combining image tokens and pixel tokens in a way that's interfaceable for language tasks.[00:33:44] Peter Robicheaux: And this is nice because it can scale; you can come up with lots more data if you don't have to come up with all these annotations, right? So the way that it works is it does something very, very similar to PaliGemma, where you have a vision encoder that dumps image tokens into a decoder-only transformer.[00:33:59] Peter Robicheaux: But [00:34:00] the interesting thing is that it also autoregressively learns the image tokens with a mean squared error loss. So instead of having to come up with fancy object detection or segmentation labels, you can just try to reconstruct the image and have it learn fine-grained features that way.[00:34:16] Peter Robicheaux: And it does this in, I think, a beautiful way that's compatible with the PaliGemma line of thinking, which is randomly sampling a prefix length and using only that number of image tokens as the prefix.
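The random-prefix reconstruction objective just described can be sketched as follows. This is a toy interpretation, not Apple's AIMv2 code: `predict` stands in for the decoder, and the shapes are illustrative.

```python
import numpy as np

def prefix_reconstruction_loss(image_tokens, predict, rng):
    """Sketch of the objective described above: sample a random prefix length,
    condition on that many image tokens, and score the model's reconstruction
    of the remaining tokens with mean squared error."""
    n = len(image_tokens)
    k = rng.integers(1, n)              # random prefix length in [1, n-1]
    prefix, target = image_tokens[:k], image_tokens[k:]
    recon = predict(prefix, n - k)      # model predicts the suffix tokens
    return float(np.mean((recon - target) ** 2))

rng = np.random.default_rng(0)
tokens = np.arange(8, dtype=float).reshape(8, 1)   # toy 8 image tokens, dim 1
perfect = lambda prefix, m: tokens[len(prefix):len(prefix) + m]
assert prefix_reconstruction_loss(tokens, perfect, rng) == 0.0
```

Because any pixel label "comes for free" from the image itself, this objective scales with data in a way the annotation-heavy Florence-2 recipe cannot.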
And so it's doing a similar thing with the causal mask; the causal-with-prefix is the attention mask on the right.[00:34:35] Peter Robicheaux: So it's doing full block attention over some randomly sampled number of image tokens, to then reconstruct the rest of the image and the downstream caption for that image. And this is the dataset that they train on: internet-scale image data, very high quality data created by the Data Filtering Networks paper, essentially, which is maybe the best CLIP data that exists.[00:34:59] Peter Robicheaux: [00:35:00] And we can see that this is finally a model that doesn't saturate. Even at the highest parameter count, it appears to be improving in performance with more and more samples seen. And so you can sort of think that if we just keep bumping the parameter count and increasing the examples seen, which is the line of thinking for language models, then it'll keep getting better.[00:35:27] Peter Robicheaux: So how does it actually do at finding, oh, it also improves with resolution, which you would expect for such a model. This is the ImageNet classification accuracy: it does better if you increase the resolution, which means that it's actually leveraging and finding fine-grained visual features.[00:35:44] Peter Robicheaux: And so how does it actually do compared to CLIP on COCO? Well, you can see that if you slap a transformer detection head on it, it gets 60.2 mAP on COCO, which is also within spitting distance of SOTA, which means that it does a very good job of [00:36:00] finding visual features. But you could say, okay, well, wait a second.[00:36:03] Peter Robicheaux: CLIP got to 59.1, so,
Because doesn't that mean like clip, which is known to be clip blind and do badly on MMVP, it's able to achieve a very high performance on fine, on this fine grained visual features task of object detection, well, they train on like, Tons of data.[00:36:24] Peter Robicheaux: They train on like objects, 365, Cocoa, Flickr and everything else. And so I think that this benchmark doesn't do a great job of selling how good of a pre trained model MV2 is. And we would like to see the performance on fewer data as examples and not trained to convergence on object detection. So seeing it in the real world on like a dataset, like RoboFlow 100, I think would be quite interesting.[00:36:48] Peter Robicheaux: And our, our, I guess our final, final pick for paper of 2024 would be Moondream. So introducing Vic to talk about that.[00:36:54] swyx: But overall, that was exactly what I was looking for. Like best of 2024, an amazing job. Yeah, you can, [00:37:00] if there's any other questions while Vic gets set up, like vision stuff,[00:37:07] swyx: yeah,[00:37:11] swyx: Vic, go ahead. Hi,[00:37:13] Vik Korrapati / Moondream[00:37:13] question: well, while we're getting set up, hi, over here, thanks for the really awesome talk. One of the things that's been weird and surprising is that the foundation model companies Even these MLMs, they're just like worse than RT Tether at detection still. Like, if you wanted to pay a bunch of money to auto label your detection dataset, If you gave it to OpenAI or Cloud, that would be like a big waste.[00:37:37] question: So I'm curious, just like, even Pali Gemma 2, like is worse. So, so I'm curious to hear your thoughts on like, how come, Nobody's cracked the code on like a generalist that really you know, beats a specialist model in computer vision like they have in in LLM land.[00:38:00][00:38:01] Isaac Robinson: Okay. It's a very, very interesting question. I think it depends on the specific domain. 
For image classification, it's basically there. AIMv2 showed that a simple attentional probe on the pre-trained features gets like 90%, which is as well as anyone does. The bigger question is why it isn't transferring to object detection, especially real-time object detection.[00:38:25] Isaac Robinson: In my mind, there are two answers. One is: object detection architectures are really, really domain-specific. You know, we see all these super, super complicated things, and it's not easy to build something that just transfers naturally like that, whereas in image classification, CLIP pre-training transfers super, super quickly.[00:38:48] Isaac Robinson: And the other thing is, until recently, the real-time object detectors didn't even really benefit from pre-training. Like, you see the YOLOs that are essentially saturated, showing very little [00:39:00] difference with pre-training improvements, with using a pre-trained model at all. So it's not surprising, necessarily, that people aren't looking at the effects of better and better pre-training on real-time detection.[00:39:12] Isaac Robinson: Maybe that'll change in the next year. Does that answer your question?[00:39:17] Peter Robicheaux: Can you guys hear me? Yeah, one thing I want to add, just to summarize, is that until 2024 we hadn't really seen a combination of transformer-based object detectors and fancy losses, and PaliGemma suffers from the same problem. Which is basically to say that these ResNet, or like the convolutional models, have all these extreme optimizations for doing object detection, but essentially, I think it's kind of been shown now that convolutional models just don't benefit from pre-training and just don't have the level of intelligence of transformer models.[00:39:56] swyx: Awesome.
Hi,[00:39:59] Vik Korrapati: can [00:40:00] you hear me?[00:40:01] swyx: Cool. I hear you. See you. Are you sharing your screen?[00:40:04] Vik Korrapati: Hi. Might have forgotten to do that. Let me do[00:40:07] swyx: that. Sorry, should have done[00:40:08] Vik Korrapati: that.[00:40:17] swyx: Here's your screen. Oh, classic. You might have to quit Zoom and restart. What? It's fine. We have a capture of your screen.[00:40:34] swyx: So let's get to it.[00:40:35] Vik Korrapati: Okay, easy enough.[00:40:49] Vik Korrapati: All right. Hi, everyone. My name is Vik. I've been working on Moondream for almost a year now. Like Shawn mentioned, I just went and looked, and it turns out the first version I released was December [00:41:00] 29, 2023. It's been a fascinating journey. So Moondream started off as a tiny vision language model. Since then, we've expanded scope a little bit to also try and build some tooling, client libraries, et cetera, to help people really deploy it.[00:41:13] Vik Korrapati: Unlike traditional large models that are focused on assistant-type use cases, we're laser-focused on building capabilities that developers can use to build vision applications that can run anywhere. So, in a lot of cases for vision, more so than for text, you really care about being able to run on the edge, run in real time, et cetera.[00:41:40] Vik Korrapati: So that's really important. We have different output modalities that we support. There's query, where you can ask general English questions about an image and get back human-like answers. There's captioning, which a lot of our users use for generating synthetic datasets to then train diffusion models and whatnot.[00:41:57] Vik Korrapati: We've done a lot of work to minimize hallucinations there. [00:42:00] So that's used a lot.
We have open-vocabulary object detection built in, similar to a couple of more recent models like PaliGemma, et cetera, where rather than having to train a dedicated model, you can just say "show me soccer balls in this image" or "show me if there are any deer in this image" and it'll detect it.[00:42:14] Vik Korrapati: More recently, earlier this month, we released pointing capability, where if all you're interested in is the center of an object, you can just ask it to point out where that is. This is very useful when you're doing, you know, UI automation-type stuff. Let's see, we have two models out right now.[00:42:33] Vik Korrapati: There's a general-purpose 2B param model, which runs fair. Like, it's fine if you're running on server. It's good for our local Ollama desktop friends, and it can run on flagship mobile phones, using [00:43:00] less memory even with our not yet fully optimized inference client.[00:43:06] Vik Korrapati: So the way we built our 0.5B model was to start with the 2 billion parameter model and prune it while doing continual training to retain performance. Our objective during the pruning was to preserve accuracy across a broad set of benchmarks. So the way we went about it was to estimate the importance of different components of the model, like attention heads, channels, MLP rows and whatnot, using basically a technique based on the gradient.[00:43:37] Vik Korrapati: I'm not sure how much people want to know details. We'll be writing a paper about this, but feel free to grab me if you have more questions. Then we iteratively prune a small chunk that will minimize the loss in performance, retrain the model to recover performance, and bring it back. The 0.5B we released is more of a proof of concept that this is possible.[00:43:54] Vik Korrapati: I think the thing that's really exciting about this is it makes it possible for developers to build using the 2B param [00:44:00] model and just explore, build their application, and then once they're ready to deploy, figure out what exactly they need out of the model and prune those capabilities into a smaller form factor that makes sense for their deployment target.[00:44:12] Vik Korrapati: So yeah, very excited about that. Let me talk to you folks a little bit about another problem I've been working on recently, which is similar to the clocks example we've been talking about. We had a customer reach out who had a bunch of gauges out in the field. This is very common in manufacturing and oil and gas, where you have a bunch of analog devices that you need to monitor.[00:44:34] Vik Korrapati: It's expensive to have humans look at that and monitor stuff and make sure that the system gets shut down when the temperature goes over 80 or something. So I was like, yeah, this seems easy enough. Happy to help you distill that. Let's get it going. Turns out our model couldn't do it at all.[00:44:51] Vik Korrapati: I went and looked at other open source models to see if I could just generate a bunch of data and learn from that. Did not work either. So I was like, let's look at what the folks with [00:45:00] hundreds of billions of dollars in market cap have to offer. And yeah, that doesn't work either. My hypothesis is that the way these models are trained is using a large amount of image-text data scraped from the internet.[00:45:15] Vik Korrapati: And that can be biased. In the case of gauges, most gauge images aren't gauges in the wild; they're product images: detail images like these, where the needle is always set to zero.
It's paired with an alt text that says something like "GIVTO pressure sensor, PSI, zero to 30" or something. And so the models are fairly good at picking up those details.[00:45:35] Vik Korrapati: It'll tell you that it's a pressure gauge. It'll tell you what the brand is, but it doesn't really learn to pay attention to the needle over there. And so, yeah, that's a gap we need to address. So naturally my mind goes to, let's use synthetic data to solve this problem. That works, but it's problematic, because it turned out we needed millions of synthetic gauge images to get to reasonable performance.[00:45:57] Vik Korrapati: And thinking about it, reading a gauge is [00:46:00] not a one-shot, it's not a zero-shot process in our minds, right? Like if you had to tell me the reading in Celsius for this real-world gauge, there's two dials on there. So first you have to figure out which one you have to be paying attention to, like the inner one or the outer one.[00:46:14] Vik Korrapati: You look at the tip of the needle, you look at what labels it's between, and you count how many ticks and do some math to figure out what that probably is. So what happens if we just add that as a chain of thought, to allow the model to better learn the subtasks it needs to perform to accomplish this goal?[00:46:37] Vik Korrapati: So you can see in this example, this was actually generated by the latest version of our model. It's like, okay, Celsius is the inner scale. It's between 50 and 60. There's 10 ticks. So the second tick. It's a little debatable here, like there's a weird shadow situation going on, the dial is off, so I don't know what the ground truth is, but it works okay.[00:46:57] Vik Korrapati: There's points on there that are, the points [00:47:00] over there are actually grounded. I don't know if this is easy to see, but when I click on those, there's a little red dot that moves around on the image.
The model actually has to predict where these points are. I was already trying to do this with bounding boxes, but then Molmo came out with pointing capabilities.[00:47:15] Vik Korrapati: And pointing is a much better paradigm to represent this. We see pretty good results. This one's actually for clock reading; I couldn't find our chart for gauge reading at the last minute. So the light blue chart is with our grounded chain of thought. We built a clock-reading benchmark of about 500 images, and this measures accuracy on that. You can see it's a lot more sample-efficient when you're using the chain-of-thought model. Another big benefit from this approach is you can kind of understand how the model is doing it and how it's failing. So in this example, the actual correct reading is 54 Celsius, the model output [00:48:00] 56. Not too bad, but you can actually go and see where it messed up. Like, it got a lot of these right, except instead of saying it was on the 7th tick, it actually predicted that it was the 8th tick, and that's why it went with 56.
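The counting step Vik walks through bottoms out in simple linear interpolation along the scale. A toy sketch reproducing the 54-versus-56 example, with the tick spacing inferred from the numbers in the transcript:

```python
def gauge_reading(scale_start, tick_spacing, tick_index):
    """Final arithmetic step of the gauge-reading chain of thought:
    once the model has decided which tick the needle sits on,
    the reading is just scale_start + tick_index * tick_spacing."""
    return scale_start + tick_index * tick_spacing

# Counting from 40 with 2-degree ticks: the 7th tick gives the correct
# 54 C, while misidentifying it as the 8th tick gives the model's 56 C.
print(gauge_reading(40, 2, 7))  # 54
print(gauge_reading(40, 2, 8))  # 56
```

The failure is upstream of this arithmetic: the model interpolated correctly but predicted the wrong tick index, which is exactly what the grounded chain of thought makes visible.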
The real question is, is it going to generalize? Probably, like, there's some science [00:49:00] from text models that when you train on a broad number of tasks, it does generalize. And I'm seeing some science with our model as well.[00:49:05] Vik Korrapati: So, in addition to the image based chain of thought stuff, I also added some spelling based chain of thought to help it understand better understand OCR, I guess. I don't understand why everyone doesn't do this, by the way. Like, it's trivial benchmark question. It's Very, very easy to nail. But I also wanted to support it for stuff like license plate, partial matching, like, hey, does any license plate in this image start with WHA or whatever?[00:49:29] Vik Korrapati: So yeah, that sort of worked. All right, that, that ends my story about the gauges. If you think about what's going on over here it's interesting that like LLMs are showing enormous. Progress in reasoning, especially with the latest set of models that we've seen, but we're not really seeing, I have a feeling that VLMs are lagging behind, as we can see with these tasks that should be very simple for a human to do [00:50:00] that are very easy to find VLMs failing at.[00:50:04] Vik Korrapati: My hypothesis on why this is the case is because On the internet, there's a ton of data that talks about how to reason. There's books about how to solve problems. There's books critiquing the books about how to solve problems. But humans are just so good at perception that we never really talk about it.[00:50:20] Vik Korrapati: Like, maybe in art books where it's like, hey, to show that that mountain is further away, you need to desaturate it a bit or whatever. But the actual data on how to, like, look at images is, isn't really present. Also, the Data we have is kind of sketched. 
The best source of data we have is like image all text pairs on the internet and that's pretty low quality.[00:50:40] Vik Korrapati: So yeah, I, I think our solution here is really just we need to teach them how to operate on individual tasks and figure out how to scale that out. All right. Yep. So conclusion. At Moondream we're trying to build amazing PLMs that run everywhere. Very hard problem. Much work ahead, but we're making a ton of progress and I'm really excited [00:51:00] about If anyone wants to chat about more technical details about how we're doing this or interest in collaborating, please, please hit me up.[00:51:08] Isaac Robinson: Yeah,[00:51:09] swyx: like, I always, when people say, when people say multi modality, like, you know, I always think about vision as the first among equals in all the modalities. So, I really appreciate having the experts in the room. Get full access to Latent Space at www.latent.space/subscribe

Hyper Conscious Podcast
SIMPLIFY Your Focus For Maximum Productivity! (1908)


Play Episode Listen Later Dec 6, 2024 26:35


Consistency and simplification are your allies in a hectic world! In this freestyle Friday episode, hosts Kevin Palmieri and Alan Lazaros talk about what it takes to grow while juggling life's chaos. They explore how focusing on clear goals, staying consistent, and embracing struggle can lead to success. Discover why simplifying your routines and prioritizing what matters most can be game-changers for your journey. Whether you're stuck or need encouragement, this episode will help you reset your mindset.

Link mentioned: Subscribe now - https://www.buzzsprout.com/742955/share

NLU is not just a podcast; it's a gateway to a wealth of resources designed to help you achieve your goals and dreams. From our Next Level Dreamliner to our Group Coaching, we offer a variety of tools and communities to support your personal development journey. For more information, please check out our website at the link below.

Productivity Smarts
Productivity Smarts 079 - Productive Wealth Building with Rodger Friedman


Play Episode Listen Later Dec 3, 2024 37:45


Have you ever considered how much financial stress impacts your life? How often does money worry keep you up at night or cause tension in your relationships? Did you know that financial problems are one of the leading causes of divorce? It's a reality that money issues can strain everything, from your career to your personal life. But what if there was a way to ease that pressure? What if having a solid financial plan could not only relieve stress but also enhance your relationships and career? In this episode of the Productivity Smarts Podcast, host Gerald J. Leonard is joined by Rodger Friedman, a veteran retirement strategist and wealth advisor with over 40 years of experience. Together, they explore the powerful link between financial planning and productivity. Rodger shares invaluable insights on how a well-thought-out financial strategy can reduce stress, increase productivity, and even improve your personal life. He also discusses the importance of stepping out of your comfort zone, building positive habits, and staying focused on clear goals. Tune in now to learn how you can start managing your money more effectively, reduce financial stress, and take the next step toward a more productive, successful future.

What We Discuss
[02:01] Introduction to Rodger Friedman
[05:29] Success factors in financial services
[07:40] Avoiding common pitfalls
[10:07] The complexity of wealth management
[12:29] Importance of habits
[14:49] The use of index cards to reinforce daily goals and decisions
[19:27] Intentions vs. actions
[20:43] Clarity in goal setting
[23:43] The Pareto principle discussion
[26:07] Importance of reading
[30:55] The differences between philosophy, strategy, and tactics in wealth building
[35:54] Closing remarks and resources

Notable Quotes
[02:38] "When you have a good financial plan and a strategy for managing money, it really releases a lot of stress because money plays a big part in the stress that people take on, and it can hinder them in their careers and everything else." - Gerald J. Leonard
[05:29] "All growth happens outside of your comfort zone." - Rodger Friedman
[07:40] "Look at what everyone else is doing, and don't do that. If you want to be successful, avoid following the crowd." - Rodger Friedman
[13:02] "Success is determined by the quality of your decisions and the habits you build around them. Bad decisions lead to poor behavior, and vice versa." - Rodger Friedman
[17:35] "Writing down your goals and reviewing them daily puts them in your bones. It keeps you focused and makes sure you're always aware of the opportunities around you." - Rodger Friedman
[18:03] "When you program your brain with your goals, you start to see all the opportunities and resources aligned with those goals, and they start showing up in your life." - Gerald J. Leonard
[28:40] "Once your brain has been stretched by a new idea, a book, an insight, it can never go back to thinking the old way again." - Gerald J. Leonard

Our Guest
Rodger Friedman is a seasoned retirement strategist and wealth advisor with over 40 years of experience in the financial industry. With a distinguished career that includes serving as Senior Vice President of Wealth Management at Morgan Stanley and Senior Investment Management Consultant at Smith Barney, Rodger is a trusted voice in wealth building and retirement planning. A Chartered Retirement Planning Counselor, he has participated in thousands of discussions on financial planning and passive income. Beyond his financial expertise, Rodger is also an accomplished author of seven books, including the bestselling Erasing America: Broken Politics, Broken Country, among others. A proud member of the Sons of the Legion, he brings a unique perspective to both his professional and personal pursuits. With over 60 interviews and articles to his name, Rodger remains dedicated to helping clients achieve lasting financial security and success.

Resources
Rodger Friedman Website - https://eocritic.com/
LinkedIn - https://www.linkedin.com/in/rodger-friedman-crpc%C2%AE-1300903a/
Book - 18 Wealth Lessons That Will Transform Your Thinking
Productivity Smarts Podcast Website - productivitysmartspodcast.com
Gerald J. Leonard Website - geraldjleonard.com
Turnberry Premiere Website - turnberrypremiere.com
Scheduler - vcita.com/v/geraldjleonard
Mentioned: Jim Croce, "Photographs and Memories"; "Atlas Shrugged" by Ayn Rand
Kiva is a loan, not a donation, allowing you to cycle your money and create a personal impact worldwide. https://www.kiva.org/lender/topmindshelpingtopminds