Podcasts about Alternatives

  • 6,930 PODCASTS
  • 11,256 EPISODES
  • 37m AVG DURATION
  • 2 DAILY NEW EPISODES
  • Feb 25, 2026 LATEST

POPULARITY

[Popularity chart, 2019–2026]


Best podcasts about Alternatives


Latest podcast episodes about Alternatives

BAST Training podcast
Ep.248 Thyroid Tilt Under the Microscope: Perception vs Physiology with Mathias Aaen

Feb 25, 2026 · 65:11


What's really happening inside the larynx when we 'tilt'? In this episode, Alexa is joined by voice researcher Mathias Aaen to unpack the science behind thyroid tilt, exploring what his latest studies reveal about pitch, vocal fold lengthening, and healthy singing. The pair cut through common misconceptions, translate research into studio-ready language, and ask the big question: are our teaching prompts actually doing what we think they are? If you love practical pedagogy grounded in solid science, this one's for you.

WHAT'S IN THIS PODCAST?

  • 2:58 What is tilt? Anatomy & physiology
  • 6:35 CVT framework
  • 16:13 Study results
  • 22:45 Physiology vs the perceptual
  • 25:36 Teaching prompts
  • 43:10 Vocal fold length and pitch change
  • 48:14 Enemies of tilt
  • 52:37 Common misconceptions about tilt

RELEVANT MENTIONS & LINKS

  • Investigating Laryngeal "Tilt" on Same-pitch Phonation—Preliminary Findings of Vocal Mode, Metal and Density Parameters as Alternatives to Cricothyroid-Thyroarytenoid "Mix" by Mathias Aaen et al.
  • Correlating Degree of Thyroid Tilt Independent of fo Control as a Mechanism for Phonatory Density with EGG and Acoustic Measures across Loudness Conditions by Mathias Aaen et al.
  • Singing Teachers Talk - Ep.131 Mastering Research Papers: How to Read with Ease and Extract Knowledge
  • Complete Vocal Training
  • Ian Howell
  • Dr Mark Tempesta
  • Kerrie Obert
  • Dr Ingo Titze
  • Estill
  • CVT App
  • Folia Phoniatrica et Logopaedica
  • Manuel Garcia
  • Praat

ABOUT THE GUEST

Mathias Aaen, PhD, is a voice researcher, educator, and certified rehabilitation specialist. He serves as Honorary Researcher at Nottingham University Hospitals and VP of Research & Collaboration at CVI, and was previously a Fulbright Fellow at UC Berkeley. His work focuses on voice physiology, acoustics, auditory-perceptual analysis, and voice habilitation and rehabilitation, with groundbreaking research into the physiology and health of contemporary commercial music styles, including rock and heavy metal. He recently completed a postdoc investigating the CVT framework as a clinical treatment for dysphonia in MTD and ABI patients. An award-winning researcher and Authorised CVT Teacher, Mathias is also an active performer who has worked with leading opera houses and voice professionals worldwide.

The John Batchelor Show
S8 Ep508: Preview for later today: Liz Peek joins John Batchelor to discuss how AI developments are causing market sell-offs in software and logistics, prompting investors to seek alternatives to MAG 7 stocks.

Feb 24, 2026 · 2:12


Preview for later today: Liz Peek joins John Batchelor to discuss how AI developments are causing market sell-offs in software and logistics, prompting investors to seek alternatives to MAG 7 stocks.

Thoughts on the Market
Why Stocks Keep Rising Despite AI Anxiety

Feb 24, 2026 · 4:39


Our CIO and Chief U.S. Equity Strategist Mike Wilson explains why he still believes in a growth cycle for equity markets, even as investors show growing concerns around AI.

Mike Wilson: Welcome to Thoughts on the Market. I'm Mike Wilson, Morgan Stanley's CIO and Chief U.S. Equity Strategist. Today on the podcast, I'll be discussing recent concerns around AI disruption. It's Tuesday, February 24th at 1pm in New York. So, let's get after it.

Last week you could feel it: that anxious undercurrent in the market. The headlines were noisy, volatility ticked higher, and AI disruption once again dominated investor conversations. But beneath the surface-level unease, something important happened. The S&P 500 Equal Weight Index pushed to a new relative high, keeping our broadening thesis alive and well. On one hand, investors are worried about AI-driven disruption, CapEx intensity, and potential labor force reductions. On the other hand, capital is still flowing into formerly lagging areas of the market, just as the median stock is seeing its strongest earnings growth in four years.

Let's unpack this. First, there's concern AI will lead to job losses. But even if that's the case, there's typically a phase-in period. Companies don't just eliminate labor overnight. Importantly, before these productivity gains are fully realized, we need broad enterprise adoption. That means building out the agentic application layer, integrating AI into workflows, and retraining systems and processes. That takes time, and it is still early days in that regard.

Second, what we're seeing now is typical of a major investment cycle. Volatility increases as markets challenge the pace of unbridled spending. Dispersion increases as investors debate winners and losers. Leadership rotates, sometimes sharply. There's also something different this time compared to the internet bubble of the late 1990s. Today we're in an early cycle earnings backdrop. We've just emerged from what was effectively a rolling recession between 2022 and 2025. So, as capital rotates out of the perceived structural losers, it's not just chasing long-term AI beneficiaries; it's also finding classic cyclical winners. On the losing side are long-duration, services-oriented sectors, particularly software. These areas are more sensitive to uncertainty around longer-term cash flows. This area also has a large overhang of private capital deployed over the last 10 to 15 years.

There are other forces at play too. Small cap growth, arguably the longest-duration segment of the market, began breaking down in late January, around the time Kevin Warsh was nominated as Fed chair. While major indices barely reacted, more speculative areas may be responding to expectations of tighter liquidity given Warsh's reputation as a balance sheet hawk. Finally, equity markets are typically more volatile when new Fed chairs assume office.

Bottom line, our broader thesis of an early cycle rolling recovery remains intact. Market internals are supportive even if index-level action feels choppy. That said, near-term volatility is likely to persist as we enter a weaker seasonal window for retail demand, while liquidity remains ample but far from abundant. With this backdrop, a quality cyclical barbell with healthcare makes sense. In small caps, the higher-quality S&P 600 looks more attractive than the Russell 2000. And any short-term volatility could present opportunities to add exposure in preferred cyclical areas like Consumer Discretionary Goods, Industrials, and Financials.

Of course, risks remain. AI adoption could accelerate faster than expected, pressuring labor markets more abruptly. Pricing power could erode as efficiency spreads, and policymakers could react in ways that slow the CapEx cycle while crowded momentum positioning remains vulnerable. Nevertheless, the signal from the internals is clear. Beneath the volatility, this looks less like a market rolling over and more like one that is confirming an early cycle economic expansion.

Thanks for tuning in. I hope you found it informative and useful. Let us know what you think by leaving us a review. And if you find Thoughts on the Market worthwhile, tell a friend or colleague to try it out.

Limitless Mindset
The conspiracy behind vibrator addiction, Nootropics for depression, 5-HTP alternatives & more

Feb 24, 2026 · 55:13


We answer the following biohacking and lifehacking questions in this Q&A podcast...

  • 11:30 How to overcome vibrator addiction?
  • 19:28 Alternatives to 5-HTP for depression?
  • 33:43 What is the best combination of brain supplements?
  • 36:52 Alternatives to Resveratrol?
  • 39:35 Phenylalanine for bipolar depression?
  • 43:06 Is a large Choline dose the same as a smaller Alpha-GPC dose?
  • 46:07 Does N-Acetyl Cysteine treat Phenibut withdrawal?

Your Simple & Spacious Business
Six gentle alternatives to old school launching

Feb 24, 2026 · 17:43


Over and over again my clients have been telling me: I want launching to feel simpler, more easeful, more low-lift. Either big, intense launch plans have exhausted their body and nervous system in the past, or they've avoided selling altogether because what they think they have to do feels super out of alignment for them. And I get it, because I'm right there with them too.

As I shared in my last episode, I'm leaning into much softer sales cycles this year, and so far it's feeling really good. If old school launching advice, and being told that you have to go big, build hype, create more, more, more sales content, and take up more space than feels sustainable for you, is holding you back from creating sales cycles that support you to reach your financial goals, I recorded this week's podcast episode for you.

In it I share six gentle alternatives to go-big-or-go-home launch strategy that I hope can crack open some fresh ideas for how you can sell this year in a way that resonates and connects with your hell-yes people, and actually feels fun and sustainable for you to bring to life too.

Links:

  • Join me for next month's workshop: Marketing Without Performance & Persuasion
  • Join my free library filled with resources to support you to create a spacious workweek, a steady and thriving income, and honour your humanness every day.
  • Join me over on Substack.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit yoursimpleandspaciousbusiness.substack.com

Thoughts on the Market
Global Trade in Flux: What's Next After Tariff Ruling

Feb 23, 2026 · 7:16


The Supreme Court's latest ruling on tariffs has thrown existing trade agreements into uncertainty. Our Head of Public Policy Research Ariana Salvatore and Arunima Sinha, from the U.S. and Global Economics teams, break down the fallout.

Read more insights from Morgan Stanley.

----- Transcript -----

Ariana Salvatore: Welcome to Thoughts on the Market. I'm Ariana Salvatore, Head of Public Policy Research.

Arunima Sinha: And I am Arunima Sinha on the U.S. and Global Economics teams.

Ariana Salvatore: Today we'll be talking about the recent Supreme Court decision on tariffs, what it means for existing trade deals, and where trade policy is headed from here. It's Monday, February 23rd at 9am in New York.

On Friday, the Supreme Court ruled that the president could not use the International Emergency Economic Powers Act, or IEEPA, to impose broad-based tariffs. The ruling didn't give a clear signal on what it could mean for potential refunds, but the Trump administration said it plans to replace the existing tariffs, which is something that we'd long expected – first leveraging Section 122 to impose 15 percent tariffs for 150 days. The president is simultaneously going to launch a few new Section 301 investigations to eventually replace those Section 122 tariffs, since they're only allowed to be in place temporarily. So Arunima, let's start by breaking down some of this tariff math. What does this mean for the headline and effective rate given where we are now versus before?

Arunima Sinha: Before the decision, Ariana, we were at a headline tariff rate of about 13 percent. What this decision does, with the move to 15 percent for other countries, is take about a percentage point off of the headline tariff rate. So, we would go to about 12 percent, and then we have another percentage point coming off just because of the shifts in trade patterns. And so instead of a headline tariff rate of about 13 percent, we think that we're going to be at a headline tariff of just about 11 percent. But that's really just related to the Section 122s. And as you noted, this is only going to apply for the next 150 days. So how should we be thinking about trade policy going forward?

Ariana Salvatore: I think we should view the 15 percent as a likely ceiling for these rates in the medium term, in particular because this 150-day period expires some time around the summer, so even closer to the midterm elections. And as we've been saying, politically speaking, it's unpopular to impose high levels of tariffs. We've also been saying that the president will continue to lean on trade policy as his only real way to address the affordability issue for voters, which is something that we've actually seen on the policy side for the past few months with the imposition of exemptions, more trade framework agreements, et cetera. So really, I think this is just another way for him to continue leaning on this policy avenue. But in that vein, let's talk about specific pockets of relief. What are we thinking about the findings on a sector level?

Arunima Sinha: So, let's tie this into the affordability aspect that you mentioned, Ariana, specifically using the consumer goods sector. What we think is that in the near-term period, with the Section 122s applying, we could see tariff rate differentials for different consumer goods categories go down. So, they could be anywhere between 1 to 4 percentage points lower across different categories. But what we also think could happen is that once we get beyond the 150-day period, and no additional sector tariffs go on – so, the 232s or the 301s – this sector in particular could see some of the largest tariff relief that we're expecting to see. So, for example, apparel and accessories could see something like a 16 to 17 percentage point tariff drop. So that part I think is important: the upside risks to consumer goods. But that of course brings us to the question of bilateral trade deals and how they come into play. What do you think about that, Ariana?

Ariana Salvatore: Yeah. So, I think when it comes to the bilateral deals, as we mentioned, there are some opportunities for relief depending on the sectors and the type of tariff exposure by country. As you mentioned, consumer goods are a good example of this. In general, I think that trading partners will have little incentive to abandon the existing deals or framework agreements, just given that the president and the administration have messaged this idea of continuity – replacing the IEEPA tariffs with a more durable, legitimate legal authority. But what's notable is that many of our trading partners are actually now facing potentially even lower levels than they were before, even with the increase to 15 percent on the 122s from 10 percent over the weekend. In particular, many countries in Southeast Asia are actually now facing lower tariff levels, since they were somewhere in the range of 20 or maybe even 25 percent before. But as I mentioned, the export composition of these countries matters a lot. Vietnam, for example: most exports are subject to the 20 percent tariff because of the IEEPA exposure, so this ruling is more meaningful there than somewhere like South Korea, where the exports are more exposed to the Section 232 tariffs – and that's a level, remember, that's not changing as a result of this ruling. So that's how we're trying to disaggregate the impact here.

Now, my last question to you, Arunima: what does this all mean for the macro outlook? As we mentioned, refunds weren't addressed in this ruling. We've sketched out a few different scenarios, most of which leaned toward a long lead time to eventually paying back the money – if and when the administration is actually, in fact, mandated to do that. But safe to say in the near term that we aren't going to see much action on that front. That probably means status quo. But why don't you put a finer point on what this means for the macroeconomic outlook?

Arunima Sinha: That's absolutely right, Ariana. For the very near term and the second quarter, we don't think we're going to be very different from our baseline expectation. In the third quarter and the last part of this year, there could be some upside risks, especially once the timeline on the 122s runs out, they're not extended, and the different sector and country investigations take longer to implement. So, there could be some upside risks to demand – consumer goods, for example. If there were to be some sort of an incremental tailwind to corporate margins, that might lead to better labor demand from these companies. There could be additional goods disinflation; that would support purchasing power. So, both of those things could be some incremental uplift to demand, relative to our baseline outlook. But the last thing to emphasize from our perspective is that we do think there is some sort of a near-term ceiling on how high effective tariff rates can go. We don't think that we're going to be going back to Liberation Day tariff rates in the near term, or even in the latter half of this year, because if history is any guide, many of these investigations are going to take time, and full implementation may not actually occur before early 2027.

Ariana Salvatore: Makes sense. Arunima, thanks for joining.

Arunima Sinha: Thanks so much for having me.

Ariana Salvatore: And thank you for listening. As a reminder, if you enjoy Thoughts on the Market, please take a moment to rate and review us wherever you listen, and share Thoughts on the Market with a friend or colleague today.
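The headline tariff math Arunima Sinha walks through above can be sketched as a quick back-of-the-envelope check. This is purely illustrative: the variable names and the simple additive treatment of the two effects are my assumptions, not Morgan Stanley's actual model, and the percentage-point figures are the approximate ones quoted in the episode.

```python
# Back-of-the-envelope check of the headline tariff math from the episode.
# Assumption: the two effects are treated as simple additive moves in
# percentage points, which matches how they are described in the transcript.
headline_before = 13.0      # pre-ruling headline tariff rate, percent (approx.)
section_122_effect = -1.0   # moving to 15% Section 122 tariffs, percentage points
trade_shift_effect = -1.0   # shifts in trade patterns, percentage points

headline_after = headline_before + section_122_effect + trade_shift_effect
print(headline_after)  # 11.0, matching the "just about 11 percent" in the episode
```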

Lawyer Up! Podcast
123. Successful policing requires the right training and accountability

Feb 23, 2026 · 49:07


Today, we are joined by Jeff Wenninger, a retired LAPD Lieutenant, a nationally recognized law enforcement expert, and author of "On Thin Ice," an analysis of how poor leadership and entrenched mindsets have eroded public trust in police.

Good policing requires standardization and training. The lack of standardized training nationwide is evident. Police academies across the nation vary significantly in required training hours, with the national average being about 800 hours. For context, a cosmetology license requires 1,500 hours of training. In contrast, Nordic countries train their police for two to three years and continuously monitor candidates to ensure they possess the necessary characteristics for success.

Often a department's culture may not align with its standards. Law enforcement policies are only as effective as the culture that enforces them. Training must be assessed, and officers must be held accountable for their actions.

Proper police response requires self-awareness, both of the situation and of how an officer's actions can escalate or de-escalate an incident. Officers must ensure that any force used is proportional to the threat and the severity of the crime. Alternatives to force should always be considered, and training should instill this mindset rather than defaulting to force as the first solution. But there is often a disconnect between policy, practice, and culture – what Jeff refers to as the "policy-practice divide."

Many officers are not fully aware of the legal standards by which their use of force will be judged. Organizations should be responsible for ensuring their officers are not just trained, but competent and able to justify their decisions under stress. Despite clear guidelines, the culture within some departments may foster a mentality where disobedience is met with excessive force – a "contempt of cop" attitude. This underscores the need for good judgment and accountability, at both the individual and organizational levels. Agencies must hold officers to high standards and not simply defend their actions because they are found to be legally justified.

Post-incident debriefs, modeled after those used by the Blue Angels, are critical for learning and improvement. These debriefs should happen soon after incidents and involve honest self-assessment and peer feedback. Unfortunately, some leaders undermine trust by publicly defending officers before investigations are complete. True professionalism in law enforcement requires transparency, honest evaluation of incidents, and accountability at every level.

RV Inspection And Care
#209 - The Best 2026 Starlink Alternatives For Mobile RV Internet

Feb 23, 2026 · 9:38


Starlink works really well as an internet connection service for many RVers, but not for everyone. So what are the best alternatives for RVing travelers who need solid, fast, and reliable internet service pretty much wherever they go? Find out in this podcast!

Here is the link to the MVNO I recommended in this podcast: https://mobilemusthave.com/

UBC News World
HARO Alternatives For Business Press Coverage - Top Platforms Revealed

Feb 23, 2026 · 8:18


Frustrated with HARO's crowded inbox and endless pitching? Discover why traditional press platforms eat your time, and learn about smarter, AI-powered alternatives that turn one topic into eight content formats, publish to hundreds of sites automatically, and deliver massive organic traffic. For more, visit https://ampifire.com/blog/what-is-haro-alternative-platforms-to-get-press/

AmpiFire
City: London
Address: London Office, 15 Harwood Road, London, England, United Kingdom
Website: https://ampifire.com/

UBC News World
Best Ad Research Platforms For Performance Marketers: 2026 BigSpy Alternatives

Feb 23, 2026 · 7:26


BigSpy was once the go-to ad research tool, but in 2026, performance marketers are jumping ship. Discover why the pricing, features, and fragmented workflows are pushing pros toward smarter, faster alternatives. For more, visit https://www.gethookd.ai/

GetHookd LLC
City: Miami
Address: 40 SW 13th Street
Website: https://www.gethookd.ai/

Thoughts on the Market
AI at Work: The Transformation Is Already Underway

Feb 20, 2026 · 4:46


Our Head of European Sustainability Research Rachel Fletcher talks about how AI is quickly reshaping employment and productivity across key industries and regions.

Read more insights from Morgan Stanley.

----- Transcript -----

Rachel Fletcher: Welcome to Thoughts on the Market. I am Rachel Fletcher, Head of European Sustainability Research at Morgan Stanley. Today, how AI is shaking up the global job market. It's Friday, February 20th at 2pm in London.

You've probably asked yourself when all the excitement around AI is going to move beyond demos and headlines, and start showing up in ways that matter to your job, your investments, and even your day-to-day life. Our latest global AlphaWise AI survey suggests that the turning point may already be unfolding – especially in the labor market, where AI is beginning to influence hiring, productivity, and workplace skills. Our survey covered the U.S., UK, Germany, Japan, and Australia, across five sectors where we see a significant AI adoption benefit: consumer staples distribution and retail, real estate, transportation, healthcare equipment and services, and autos.

We found that AI contributed to 11 percent of jobs being eliminated over the past 12 months, with another 12 percent not backfilled. These job cuts were partially offset by 18 percent new hires, which results in a net 4 percent global job loss. It's important to note that the survey focused on companies that had already been adopting AI for at least a year. In fact, most of the companies in our survey had been adopting AI for more than two years. So, this is likely the most significant downside case in terms of the impact of AI on jobs, but it is still an early signal of potential job disruption.

In Europe, the picture is nuanced. The UK saw the highest net job loss at 8 percent. This was primarily driven by a lower level of new hires in the UK compared to other countries that we surveyed, as well as a high level of positions not backfilled. This compares to Germany, which posted a 4 percent net job loss, in line with the all-country average. There could be some other factors amplifying the impact in the UK – for example, broader labor market weakness driven by higher labor costs and higher levels of unemployment amongst younger workers. Ultimately, disentangling AI from macro forces remains challenging.

Moving to sector impacts in Europe, autos experienced the largest net job loss at 13 percent, and this compares to a 10 percent global average for the sector. It's possible these numbers reflect persistent sales weakness and AI-driven cost cutting. Transportation was least affected at 3 percent, whilst other sectors clustered around 6 to 7 percent. If we look at the top quintile of European companies reducing headcount, they've outperformed other companies that are more actively hiring. This suggests that investors are rewarding efficiency. On the downside, staffing firms face potential growth risks from AI displacement.

On productivity, European firms report 10 to 11 percent gains from AI, close to the 11.5 percent global average, and the U.S. at 10.8 percent. It's worth noting that whilst Europe lags the U.S. in exposure to AI enablers, adopters and adopter-enablers make up more than two-thirds of the MSCI Europe Index. However, European AI adopters have traded at a material discount versus their equivalent U.S. AI adoption peers. So, turning AI adoption into real ROI and defending pricing power is crucial for European companies.

If we shift our focus to the U.S., there's a contrast. Whilst the global net job change was a 4 percent loss, the U.S. actually saw a 2 percent net gain, driven by AI-related hiring. Our U.S. strategists have lifted expectations for S&P 500 margin expansion by 40 basis points in 2026 and 60 basis points in 2027. In our survey, the most frequently cited goals of AI deployment in the U.S. are boosting productivity, personalizing customer interactions, and accelerating data insights. Other common use cases include search, content generation, dashboards, and virtual agents.

What's becoming clear is that AI is no longer theoretical. Our survey data suggests that it is reshaping hiring, productivity, and margins. The investor question is not whether AI matters, but who captures the value.

Thanks for listening. If you enjoy the show, please leave us a review wherever you listen and share Thoughts on the Market with a friend or colleague today.

The No-Till Market Garden Podcast
Saving the Soil for the Future + Soil Blocking Alternatives

Feb 20, 2026 · 23:16


Welcome to episode 347 of Growers Daily! We cover: some soil blocking alternatives (with a fun AI question attached – you know how that goes with me), saving the soil for the future, and it's Feedback Friday! We are a non-profit!

UBC News World
Foods To Avoid In Kids' Snacks & Healthier Alternatives Kids Will Actually Eat

Feb 20, 2026 · 6:37


Most packaged snacks trigger blood sugar crashes that affect behavior, focus, and sleep in ways parents rarely recognize. Simple protein and fiber combinations solve this problem while delivering nutrients children actually need for development and long-term health. Learn more: https://bucketbuddies.xyz/

Smart Farms LLC
City: Colton
Address: 325 East 4th Street
Website: https://smartfarms.global/

The John Batchelor Show
S8 Ep482: File: P-STRADNER-2-19.mp3 Headline: Viktor Orban's Continued Reliance on Russian Energy Guest Name: Stradner 25 Word Summary: Hungarian leader Viktor Orban falsely claims a lack of alternatives to Russian gas, prioritizing his grip on power and

Feb 19, 2026 · 1:32


File: P-STRADNER-2-19.mp3
Headline: Viktor Orban's Continued Reliance on Russian Energy
Guest Name: Stradner
25 Word Summary: Hungarian leader Viktor Orban falsely claims a lack of alternatives to Russian gas, prioritizing his grip on power and ties to Moscow over Hungary's interests.

Thoughts on the Market
Could the U.S. Target a Weaker Dollar?

Feb 19, 2026 · 10:44


Our Global Head of FX and EM Strategy James Lord and Global Chief Economist Seth Carpenter discuss what's driving the U.S. policy for the dollar and the outlook for other global currencies.Read more insights from Morgan Stanley.----- Transcript -----James Lord: Welcome to Thoughts on the Market. I'm James Lord, Global Head of FX and EM Strategy at Morgan Stanley. Seth Carpenter:  And I'm Seth Carpenter, Morgan Stanley's Global Chief Economist and Head of Macro Research. James Lord: Today we're talking about U.S. currency policy and whether recent news on intervention and nominations to the Fed change anything for the outlook of the dollar. It's Thursday, February 19th at 3pm in London. So it's been an interesting few weeks in currency markets. Plenty of dollar selling going on But then, we got news that Kevin Warsh is going to be nominated to Chair of the Board of Governors. And that sent the dollar back higher, reminding everybody that monetary policy and central bank policy still matter. So, in the aftermath of the dollar-yen rate check, investors started to discuss whether or not the U.S. might be starting to target a weaker currency. Not just be comfortable with a weaker currency, but actually explicitly target a weaker currency, which would presumably be a shift away from the stronger strong dollar policy that Secretary Bessent referenced. So, what is your understanding? What do you think the strong dollar policy actually means? Seth Carpenter: Strong dollar policy, that's a phrase, that's a term; it's a concept that lots of Secretaries of the Treasury have used for a long time. And I specifically point to the Secretary of the Treasury because at least in the recent couple of decades, there has been in standard Washington D.C. approach to things, a strong dichotomy that currency policy is the policy of the Treasury Department, not of the central bank. And that's always been important. 
I remember when I was working at the Treasury Department, that was still part of the talking points that the secretary used. However, you also hear Secretaries of the Treasury say that exchange rates should be market determined; that that's a key part of it. And with the back and forth between the U.S. and China, for example, there was a lot of discussion: Was the Chinese government adjusting or manipulating the value of their currency? And there was a push that currencies should be market determined. And so, if you think about those two things, at the same time – pushing really hard that the dollar should be strong, pushing really hard that currencies should be market determined – you start to very quickly run into a bit of an intellectual tension. And I think all of that is pretty intentional. What does it mean? It means that there's no single clear definition of strong dollar policy. It's a little bit of the eye of the beholder. It's an acknowledgement that the dollar plays a clear key role in global markets, and it's good for the U.S. for that to happen. That's traditionally been what it means. But it has not meant a specific number relative to any other currency or any basket of currency. It has not meant a specific value based on some sort of long run theoretical fair value. It is always meant to be a very vague, deliberately so, very vague concept. James Lord: So, in that version of what the strong dollar policy means, presumably the sort of ambiguity still leaves space for the Treasury to conduct some kind of intervention in dollar-yen, if they wanted to. And that would still be very much consistent with that definition of the strong dollar policy. I also, in the back of my head, always wonder whether the strong dollar policy has anything to do with the dollar's global role. And the sort of foreign policy power that gives the Treasury in sanctions policy. And other areas where, you know, they can control dollar flows and so on. And that gives the U.S. 
government some leverage. And that allows them to project strength in foreign policy. Has that anything to do with the traditional versions of the strong dollar policy? Seth Carpenter: Absolutely. I think all of that is part and parcel to it. But it also helps to explain a little bit of why there's never going to be a very crisp, specific numerical definition of what a strong dollar policy is. So, first and foremost, on the discussion of intervention: I think it is, in lots of ways, consistent, especially if you have that more expansive definition of strong dollar, i.e. the currency that's very important, or most important, in global financial markets and in global trade. So, I think in that regard, you could have both the intervention and the strong dollar at the same time. I will add though that the administration has not had a clear, consistent view in this regard, in the following very specific sense. When now-Governor Miran was chair of the Council of Economic Advisers, he penned a piece on the Council's website that said that the reserve currency status of the dollar had brought with it some adverse effects on the U.S., in terms of what happened to trade flows and that sort of thing. So again, this administration has also tried to find ways to increase the nuance about what the currency policy is, putting forward the idea that too strong of a dollar in the FX sense – in the sense that you and your colleagues in FX markets would think about it, a high valuation of the dollar relative to other currencies – could have contributed to these trade deficits that they're trying to push back against. So, I would say we went from the previous broad, perhaps vague definition of strong dollar. And now we're in an even murkier regime where there could be other motivations for changing the value of the dollar. 
Seth Carpenter: So, James, that's been our view in terms of the Fed, but let me come back to you because there are lots of different forces going on at the same time. The central bank is clearly an important one, but it's only one factor among many. So, if you think about where the dollar is likely to go over the next three months, over the next six months, maybe over the next year, what is it that you and your team are looking for? Where are the questions that you're getting from clients? James Lord: Yeah, so when we came into the start of this year, we did have a bearish view on the dollar. I would say that the drivers of it, we'd split up into two components. The first component was a lot more of the conventional stuff about growth expectations and what we see the Fed doing. And then there was another component to it – what we defined as risk premia, I suppose. The more unconventional catalysts that can push the dollar around, as we saw come very much to market attention during the second quarter of last year, when the Liberation Day tariffs were announced and the dollar weakened far in excess of what rate differentials would imply. And so, I would say so far this year, the majority of the dollar move that we've seen, the weakening in the dollar that we've seen, has been driven by that second component. What we've kind of called risk premia. And the conversations that, you know, investors have been having about U.S. policy towards Greenland, and then more recently, the conversations that people have been having around FX intervention following the dollar-yen rate check. These sorts of things have been really driving the currency, up until the Kevin Warsh nomination was announced. When we look at the extent of the risk premia that we see in the dollar now, it is pretty close to the levels that we saw in the second quarter of last year, which is to say it's pretty big. 
Euro-dollar would probably be closer to 1.10 if we were just thinking about the impact of rate differentials and none of this risk premia stuff over the past year had materialized. That's obviously a very big gap. And I think for now that gap probably isn't going to widen much further, particularly now that market attention is much more focused on the impact that Kevin Warsh will have on markets and the dollar. We also have, you know, the ECB and the Bank of England; the house call for those two central banks is for them to be cutting rates. That could also put some downward pressure on those currencies relative to the dollar. So all of that is to say, for some of the major currencies within the G10 space, like sterling, like euro against the dollar, this probably isn't the time to be pushing a weaker dollar. But I think there are some other currencies which still have some opportunity in the short term, but also over the longer run as well. And that's really in emerging markets. So all of that is to say, I think there is a strong monetary policy anchor for emerging market currencies. This is an asset class that has been under-invested in for some time. And we do think that there are more gains there in the short term and over the medium term as well. Seth Carpenter: So on that topic, James, would you then agree? So if I think about some of the EM central banks – think about Banxico, think about the BCB – where the dollar falling in value, their currency gaining in value, that could actually have a couple things go on to allow the central bank maybe to ease more than they would've otherwise. One, in terms of imported inflation, their currency strengthening on a relative basis probably helps with a bit lower inflation. And secondly, a lot of EM central banks have to worry a bit about defending their currency, especially in a volatile geopolitical time. And you were pointing to sort of lower volatility more broadly. 
So is this a reinforcing trend perhaps, where if the dollar is coming down a little bit, especially against DM currencies, it allows more external stability for those central banks, allowing them to just focus on their domestic mandates, which could also lead to a further reduction in their domestic rates, which might be good for investors. James Lord: Yeah, I think there's something to that. Given the strength of emerging market currencies, there should be, over time, more space for them to ease if the domestic conditions warrant it. But so far we're not really seeing many EM central banks taking advantage of that opportunity. There is a sort of general pattern with a lot of EMs that they're staying pretty conservative and more hawkish than I think markets have generally been expecting, and that's been supporting their currencies. I think it's interesting to think about the flip side: what would happen if they did start to push monetary easing at a faster pace? I'm sure on the days where that happens, the currencies would weaken a little bit. However, if the market backdrop is generally constructive on risk, and investors want to have exposure to EM – then what could ultimately happen is that asset managers will simply buy more bonds as they price in a lower path for central bank policy over time. And that causes more capital inflows. And that sort of overwhelms the knee-jerk effect from the more dovish stance of monetary policy on the currency. You get more duration flows coming into the market and that helps their currency. So, yes, if EM central banks push back with significantly more dovish policy, it could pose some short-term volatility. But assuming we remain in a low-vol environment globally, I would use those as buying opportunities. Seth Carpenter: Thanks, James. It's been great being on the show with you. 
Thank you for inviting me, and I hope to be able to come back and join you at some point in the future if you'll have me. James Lord: Thank you, Seth, for making the time to talk. And to all you listening, thank you for lending us your ears. Let us know what you think of this podcast by leaving us a review. And if you enjoy Thoughts on the Market, tell a friend or colleague about us today.

The Darin Olien Show
PFAS: The Forever Chemical Crisis in Your Water, Clothes, Cookware & Blood

The Darin Olien Show

Play Episode Listen Later Feb 19, 2026 24:16


In this investigative solo deep dive, Darin exposes the ongoing PFAS contamination crisis, the "forever chemicals" found in drinking water, clothing, carpets, cookware, cosmetics, food packaging, and even firefighting foam. Sparked by a Frontline investigation into the carpet industry in Dalton, Georgia, this episode expands far beyond one region and reveals a global supply chain problem affecting nearly every American. This episode is urgent. With 99% of people showing measurable PFAS levels in their blood, this is not about fear. It's about sovereignty. It's about awareness. It's about eliminating silent accumulation and reclaiming control over your environment. This is not luxury health. This is foundational freedom.

In This Episode:
What PFAS are and why they're called "forever chemicals"
The Dalton, Georgia carpet industry case and wastewater contamination
Internal corporate knowledge from 3M and DuPont decades ago
Why PFAS contamination is global, not regional
Everyday exposure: waterproof clothing, yoga pants, school uniforms, outdoor gear
Nonstick cookware and safer alternatives
Microwave popcorn bags and grease-resistant packaging
Cosmetics, mascara, and fluorinated compounds
Firefighting foam contamination at airports and military bases
Health impacts: immune suppression, thyroid disruption, cancer risk
Why water filtration is your first line of defense
Emerging detox strategies: fiber, blood donation, microbiome support
The role of regulation rollbacks and corporate accountability
Algae-based PFAS alternatives already entering the market

Chapters:
00:00:00 – Welcome to SuperLife: sovereignty, health, and responsibility
00:00:33 – Sponsor: Truniagen NAD supplement
00:02:17 – Why this PFAS episode is urgent and investigative
00:03:07 – The Frontline documentary: Dalton, Georgia & carpet contamination
00:04:31 – What PFAS / PFOA actually do and why they were adopted
00:05:45 – "Miracle chemistry" without proper safety testing
00:06:07 – Persistence: PFAS do not break down in the environment
00:06:38 – Wastewater discharge & farmland contamination
00:07:50 – Dead livestock, contaminated groundwater & generational impact
00:08:23 – 3M, DuPont, internal documents & decades of corporate knowledge
00:08:52 – Long-chain vs short-chain PFAS replacements
00:09:20 – Clothing exposure: waterproof jackets, yoga pants, uniforms
00:10:24 – Cookware exposure & safer alternatives
00:10:57 – Cosmetics & Environmental Working Group resources
00:11:17 – Sponsor: Shakeology & seven layers of quality testing
00:13:03 – Lack of labeling transparency
00:13:20 – Firefighting foam & military base contamination
00:14:05 – Health risks: immune suppression, thyroid, cholesterol, cancer
00:14:35 – 99% of Americans have PFAS in their blood
00:15:01 – Erin Brockovich & environmental legal activism
00:15:33 – Personal action step #1: Reverse osmosis water filtration
00:16:04 – Testing well water & municipal pressure
00:16:28 – Personal action step #2: Eliminating household exposures
00:17:25 – Emerging research: oat beta glucan fiber
00:18:03 – Firefighter study: blood donation lowering PFAS levels
00:18:32 – Microbiome & mycelium detox research
00:18:56 – Moving beyond fear into empowered action
00:19:23 – Phasing out toxic clothing & upgrading environment gradually
00:20:15 – Stockholm Convention & global treaties
00:20:52 – EPA regulations & rollback frustrations
00:21:19 – Innovation outrunning safety
00:21:50 – Share this episode & create consumer pressure
00:22:28 – Clean water, clean soil, clean products as human rights
00:22:54 – Terem Labs & algae-based PFAS alternatives
00:23:27 – Building a safe home environment as first step
00:24:15 – Final call to action: demand transparency & push reform

Thank You to Our Sponsors
Shakeology: Get 15% off with code DARINO1BODI at Shakeology.com. 
Truniagen: Go to www.truniagen.com and use code DARIN20 at checkout for 20% off

Join the SuperLife Community
Get Darin's deeper wellness breakdowns, beyond social media restrictions:
Weekly voice notes
Ingredient deep dives
Wellness challenges
Energy + consciousness tools
Community accountability
Extended episodes
Join for $7.49/month → https://patreon.com/darinolien

Find More from Darin Olien:
Instagram: @darinolien
Podcast: SuperLife Podcast
Website: superlife.com
Book: Fatal Conveniences

Key Takeaway
PFAS shows us what happens when innovation outruns safety. This is not about panic. It's about power. Clean water, clean soil, clean products; these are not luxuries. They are the foundation of sovereignty, freedom, and long-term health. Awareness is rising. Alternatives are emerging. Industry shifts when consumers shift. Make one change today. Then another. That's how we win.

Bibliography/Sources
Australian Red Cross Lifeblood / University of New England. (2022). Effect of Plasma and Blood Donations on Levels of Perfluoroalkyl and Polyfluoroalkyl Substances in Firefighters in Australia: A Randomized Clinical Trial. https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2791196
Boston University / University of Massachusetts Lowell. (2024). An oat fiber intervention for reducing PFAS body burden: A pilot study. (Published in Toxicology and Applied Pharmacology). https://doi.org/10.1016/j.taap.2024.117163
National Academies of Sciences, Engineering, and Medicine. (2022). Guidance on PFAS Exposure, Testing, and Clinical Follow-Up. https://nap.nationalacademies.org/catalog/26156/guidance-on-pfas-exposure-testing-and-clinical-follow-up
Environmental Health Perspectives. (2021). Per- and Polyfluoroalkyl Substance Toxicity and Human Health Review: Current State of Knowledge and Strategies for Informing Future Research. https://pmc.ncbi.nlm.nih.gov/articles/PMC7906952/
New England Journal of Medicine (NEJM) / IARC. (2024). Carcinogenicity of Perfluorooctanoic Acid (PFOA) and Perfluorooctanesulfonic Acid (PFOS). https://www.nejm.org/doi/full/10.1056/NEJMc2401611
FRONTLINE. (2024). Contaminated: The Carpet Industry's Toxic Legacy. (Investigative Documentary). https://www.youtube.com/watch?v=J_j66vAunXk
United States Environmental Protection Agency. (2024). Final PFAS National Primary Drinking Water Regulation. https://www.epa.gov/sdwa/and-polyfluoroalkyl-substances-pfas

The Wall Street Skinny
What No One Tells You about Investing in Private Markets feat. Goldman Sachs' Head of Alts, Kristin Olson

The Wall Street Skinny

Play Episode Listen Later Feb 19, 2026 50:55


Kristin Olson, Goldman Sachs' Head of Alternatives for Wealth within Asset and Wealth Management, sits down with us for the most candid, no-fluff conversation about private equity and private credit we've ever had. She walks us through the very real benefits of investing in private capital while also answering the cynical questions: do "retail" investors in private equity products like evergreen funds and perpetual funds get the A-team investors? Are those structures getting the best deals? How do the fees compare to the fees on products for institutional investors? Plus, if more buyers flood the market, does that push prices up and compress returns? Kristin breaks down for us how this whole ecosystem actually works, discusses the biggest shift in private markets right now, and weighs the pros and cons of newer structures that aim to make private assets feel more like "normal investing." Finally, we go deep on what investors should actually ask before putting money into private equity and private credit. Kristin talks us through how fees can be misleading, when carry is taken, hurdle rates, gating/redemptions, and what "liquidity" really means when markets get stressed. This is an episode every investor should listen to before putting private capital into their portfolio.

For a 14 day FREE Trial of Macabacus, click HERE
Visit https://iconnections.io/ to learn more about iConnections!
Shop our Self Paced Courses:
Investment Banking & Private Equity Fundamentals HERE
Fixed Income Sales & Trading HERE
Wealthfront.com/wss. This is a paid endorsement for Wealthfront. May not reflect others' experiences. Similar outcomes not guaranteed. Wealthfront Brokerage is not a bank. Rate subject to change. Promo terms apply. 
If eligible for the boosted rate of 4.15% offered in connection with this promo, the boosted rate is also subject to change if base rate decreases during the 3 month promo period.The Cash Account, which is not a deposit account, is offered by Wealthfront Brokerage LLC ("Wealthfront Brokerage"), Member FINRA/SIPC. Wealthfront Brokerage is not a bank. The Annual Percentage Yield ("APY") on cash deposits as of 11/7/25, is representative, requires no minimum, and may change at any time. The APY reflects the weighted average of deposit balances at participating Program Banks, which are not allocated equally. Wealthfront Brokerage sweeps cash balances to Program Banks, where they earn the variable APY. Sources HERE.

Acoustic Alternatives
Acoustic Alternatives with Rochelle Clark & Jason Dennie and John Bommarito

Acoustic Alternatives

Play Episode Listen Later Feb 19, 2026 80:42


Ypsilanti, MI residents Rochelle Clark and Jason Dennie didn't have to go far to get to Grove Studios (7 minutes apparently), but their music will take you on a journey somewhere south of there. Both have been making music for quite a long time. They originally met when Rochelle was taking guitar lessons from Jason at Herb David Guitar Studio. Eventually they found each other to be not just musical partners, but partners in life. Rochelle's voice is one that, as I once said to her in a radio session, could sing "John Bommarito is a big jerk" and it would sound good. I have that audio. It does sound good and it doesn't even hurt when she sings it. Jason's guitar playing, which is what drew me to him in the first place, has him atop my list of best acoustic guitar players in the area (and he may be the best mandolin player in my music scene as well). Enjoy this conversation with two people I am proud to call friends. If you want to hear the interviews that I've done with both in the past, subscribe to my Patreon page at https://www.patreon.com/c/AcousticAlternativeswithJohnBommarito

Songs:
Blue (written by Rochelle P Clark)
If I Could Make You My Own (written by Dori Freeman)
Simply Perfect Day (written by Jason Dennie)
Honey Hangover (written by Jason Dennie)

More about all things Acoustic Alternatives: https://johnmbommarito.wixsite.com/johnbommarito/acoustic-alternatives
Find Rochelle on the web: https://www.rochellepclark.com/
Find Jason on the web: https://www.jasondennie.com/
Book a session at Grove Studios for your musical needs: https://grovestudios.space/

Futures Edge Podcast with Jim Iuorio and Bob Iaccino
The Queen of Alternatives Returns: Bitcoin, AI, Market Risk, Oil | Shana Orczyk Sissel

Futures Edge Podcast with Jim Iuorio and Bob Iaccino

Play Episode Listen Later Feb 19, 2026 38:20


Bitcoin is down hard… so is it a buyable asset class or just a trader's toy? Jim Iuorio and Bob Iaccino welcome back Shana Orczyk Sissel (Founder & CEO, Banrion Capital), aka the queen of alternatives, for a fast, wide-ranging conversation on crypto positioning, market breadth, why earnings feel "rigged" by whisper numbers, and whether crude oil is setting up for a real move. Shana breaks down how she thinks about Bitcoin as a high-risk alternative allocation, why ETFs may be the best on-ramp for most investors, and where crypto fits inside a broader portfolio. Then the conversation pivots into the bigger market picture: the AI trade broadening out, equal-weight vs. cap-weight as a "stock picking is back" signal, and why quarterly earnings have become a game of under-promise/over-deliver. Finally, Jim and Bob dig into crude oil – technical setup vs. fundamental reality – and touch on how headline-driven policy noise can skew sentiment.

What you'll learn in this episode:
- Where Bitcoin fits in a real portfolio (tradable vs investable framework)
- Best way to own crypto for most people: ETF vs cold storage vs exchange accounts
- A simple "top coins" idea: crypto index-style exposure (top basket discussion)
- Why the AI rotation and market breadth can be a healthy sign
- How equal-weight vs market-cap weight can hint that stock picking is working
- Why Shana thinks earnings reactions are overrated (and why companies "game" guidance)
- Crude oil: technical breakout talk vs fundamental supply/demand skepticism
- Energy stocks + deepwater drilling: what actually needs to happen for that theme to work

Timestamps:
00:00 – Shana's back + quick recap of her last "Corning" timing
01:23 – Bitcoin: where it fits (alternative asset, high risk)
04:23 – "How should people own it?" ETFs vs cold storage vs exchanges
06:27 – Crypto baskets / index-style exposure discussion
16:05 – AI broadening out + why breadth matters
19:59 – "Do earnings even matter anymore?" whisper numbers + delayed reaction idea
26:08 – Wall Street all-bullish sentiment: should that scare you?
28:14 – Crude oil setup: technical case vs fundamental pushback
33:00 – Tariffs + negotiation "anchoring" framework (how to think about the noise)
37:44 – Deepwater drilling + RIG/Valaris reaction
40:24 – Wrap + sponsors

Follow along on social media:
Twitter: https://x.com/bob_iaccino
Twitter: https://x.com/jimiuorio
LinkedIn: https://www.linkedin.com/in/bob-iaccino/
LinkedIn: https://www.linkedin.com/in/james-iuorio/
Newsletter: http://theunfilteredinvestor.com/

This episode is sponsored by:
Independence Ark: https://www.independenceark.com/ Code: F U
AmerGold: https://www.amergold.com/ Code: F U

The John Batchelor Show
S8 Ep476: Joseph Sternberg analyzes Prime Minister Keir Starmer's crash and burn scenario despite a large parliamentary majority, weakened by scandals and party infighting, with survival relying on the lack of compelling alternatives while constant policy reversals leave his government unable to foster growth

The John Batchelor Show

Play Episode Listen Later Feb 18, 2026 4:52


Joseph Sternberg analyzes Prime Minister Keir Starmer's crash and burn scenario despite a large parliamentary majority, weakened by scandals and party infighting, with survival relying on the lack of compelling alternatives while constant policy reversals leave his government unable to foster growth.

Thoughts on the Market
The Political Cost of the AI Buildout

Thoughts on the Market

Play Episode Listen Later Feb 18, 2026 4:19


More Americans are blaming the AI infrastructure expansion for rising electricity bills. Our Head of Public Policy Research Ariana Salvatore explains how the topic may influence policy announcements ahead of the midterm elections. Read more insights from Morgan Stanley.----- Transcript -----Ariana Salvatore: Welcome to Thoughts on the Market. I'm Ariana Salvatore, Head of Public Policy Research for Morgan Stanley. Today I'll be talking about the relationship between affordability, the data center buildout, and the midterm elections. It's Wednesday, February 18th at 10am in New York. Markets and voters continue to grapple with questions on AI, including its potential scope, impact, and disruption across industries. That's been a clear theme on the policy side as voters seem to be pushing back against AI development and the data center buildout in particular. In key states, voters are associating the rise in electricity bills with AI infrastructure – and we think that could be an important read-across for the midterm elections in November. Now to be sure, electricity inflation has stayed sticky at around four to five percent year-over-year, and our economists expect it to remain in that range through this year and next. Nationally, the impact of data centers on electricity prices has been relatively modest so far, but regionally, the pressure has been more visible. To that point, a recent survey in Pennsylvania found that nearly twice as many respondents believe AI will hurt the economy as believe it will help. More than half – 55 percent – think AI is likely to take away jobs in their own industry, and 71 percent said they're concerned about how much electricity data centers consume. But this isn't just a Pennsylvania story. In other battleground states like Arizona and Michigan, voters have actually rejected plans to build new data centers locally. So, what could that mean for the midterm elections? Think back to the off-cycle elections in November of last year. 
Candidates who ran on this theme of affordability and actually pushed back against data center construction tended to do pretty well in their respective races. Looking ahead to the midterm elections later this year, we see two clear takeaways from a policy perspective. First, it's important to note that more of the policy action here will actually continue to be at the local rather than federal level. Some states with heavy data center build out – so Georgia, Michigan, Ohio, and Texas among others – are now debating who should pay for grid upgrades. Federal proposals on this topic are still pretty nascent and fragmented. Meanwhile, public utility commissions in states like Georgia, Ohio, Michigan, and Indiana have adopted or proposed large load tariffs. These require data centers to shoulder more upfront grid costs; or can reflect conditional charges like long-term contracts, minimum demand charges, exit fees or collateral requirements – all of which are designed to prevent costs from spilling over to households. And secondly, because of that limited federal action, we expect the Trump administration to continue leaning on other levers of affordability policy, where the president actually does have some more unilateral control. We've been expecting the administration to continue focusing on broader affordability areas ranging from housing to trade policy, as we've said on this podcast in the past. That dynamic is especially relevant this week as the Supreme Court could rule as soon as Friday on whether or not the president has the authority under IEEPA to impose the broad-based reciprocal tariffs. The administration thus far has been projecting a message of continuity. But we've noted that a decision that constrains that authority could give the president an opportunity to pursue a lighter touch tariff policy in response to the public's concerns around affordability. 
That's why we think the AI infrastructure buildout debate will continue to be a flashpoint into November, especially in the context of rising data center demand. Next week, when the president delivers his State of the Union address, we expect to hear plenty about not just affordability, but also AI leadership and competitiveness. But an equally important message will be around the administration's potential policy options to address its associated costs. That tension between AI supremacy and rising everyday costs for voters will be critical in shaping the electoral landscape into November. Thanks for listening. As a reminder, if you enjoy Thoughts on the Market, please take a moment to rate and review us wherever you listen; and share Thoughts on the Market with a friend or colleague today.

allmomdoes Podcast with Julie Lyles Carr
A Sober Life with Christy Osborne

allmomdoes Podcast with Julie Lyles Carr

Play Episode Listen Later Feb 18, 2026 33:45


AllMomDoes host Julie Lyles Carr welcomes Christy Osborne back to the podcast! Christy was on the show a few years ago, early in her sobriety journey. Today, she returns to talk about the latest research on alcohol dependence, why women struggle to talk about their alcohol use in church settings, and much more!

Show Notes: https://bit.ly/4rIbTe5

Takeaways:
Christy Osborne shares her journey to sobriety and its impact on her life.
The sober curious movement is gaining traction, especially among younger generations.
Alcohol is classified as a class one carcinogen, similar to tobacco.
Women often feel unable to discuss their struggles with alcohol in church settings.
Socializing without alcohol can lead to deeper connections and authentic interactions.
Nootropics and other alternatives to alcohol raise questions about dependency and coping mechanisms.
Cortisol levels are affected by alcohol consumption, impacting mental health.
Non-alcoholic alternatives can be helpful for those transitioning away from alcohol.
Community support is crucial for women navigating sobriety.
The journey to sobriety is ultimately about drawing closer to Jesus.

Sound Bites:
"I had this actual come Jesus moment."
"Alcohol is a class one carcinogen."
"We are not meant to live life alone."

Chapters:
00:00 - Introduction and Welcome Back
02:12 - Christy's Journey to Sobriety
04:43 - The Sober Curious Movement
08:43 - Understanding Alcohol's Impact on Health
14:06 - Socializing Without Alcohol
17:10 - Nootropics and Alternatives to Alcohol
18:53 - Avoidance and Alcohol
20:53 - Cortisol and Alcohol's Effects
22:54 - Non-Alcoholic Alternatives
24:33 - The Power of Community
27:07 - Upcoming Events and Closing Thoughts

Keywords: sobriety, alcohol, health, community, women, coaching, sober curious, mental health, non-alcoholic, support

Her Faith At Work
How to Ditch Social Media and Still Grow Your Business

Her Faith At Work

Play Episode Listen Later Feb 18, 2026 20:51 Transcription Available


In this episode, Jan Touchberry discusses the challenges of using social media for business growth, emphasizing the need for a clear strategy that focuses on visibility and conversion rather than mere presence. She explores alternatives to social media, such as email marketing and search-based platforms, and introduces the 'nine grid strategy' as a balanced approach to maintaining an online presence without overwhelming oneself. The conversation encourages listeners to seek clarity in their marketing efforts and to prioritize effective strategies over the pressure to constantly engage on social media.

TAKEAWAYS
You are allowed to question the narrative around social media.
You do not need social media to grow your business.
The goal is predictable visibility that leads to consistent income.
Visibility without a conversion pathway is just performing.
An email list of engaged people is more valuable than a large social media following.
Intent over attention is crucial for growth.
Collaborations can significantly enhance your reach and trust.
The nine grid strategy offers a sustainable approach to social media.
Clarity in strategy is essential for effective marketing.
You are not called to be everywhere; focus on what works.

SOUND BITES
"You are allowed to question the narrative."
"Collaborations are huge."
"You need clarity, not performance."

CHAPTERS
00:00 The Social Media Dilemma
02:50 Visibility vs. Conversion
08:26 Alternatives to Social Media
12:38 The Nine Grid Strategy
17:29 Finding Clarity and Strategy

LINKS:
Schedule your FREE 20-minute funnel audit - JanTouchberry.com/funnel

CONNECT WITH JAN:
Here are all the best places and FREE stuff

According to John
(A) Hundreds Of Alternatives, They Only Attack The Christian

According to John

Play Episode Listen Later Feb 18, 2026 38:48


If we are willing to be honest we will see where the true hate comes from.

According to John
(V) Hundreds Of Alternatives, They Only Attack The Christian

According to John

Play Episode Listen Later Feb 18, 2026 38:48


If we are willing to be honest we will see where the true hate comes from.

Thoughts on the Market
A Novel Way to Shop Online

Thoughts on the Market

Play Episode Listen Later Feb 17, 2026 11:20


Our Head of U.S. Internet Research Brian Nowak joins U.S. Small and Mid-Cap Internet Analyst Nathan Feather to explain why the future of agentic commerce is closer than you think. Read more insights from Morgan Stanley.----- Transcript -----Brian Nowak: Welcome to Thoughts on the Market. I'm Brian Nowak, Morgan Stanley's Head of U.S. Internet Research. Nathan Feather: And I'm Nathan Feather, U.S. Small and Mid-Cap Internet Analyst. Brian Nowak: Today, how AI-powered shopping assistants are set to revolutionize the e-commerce experience. It's Tuesday, February 17th at 8am in New York. Nathan, let's talk a little bit about agentic commerce. When was the last time you reordered groceries? Or bought household packaged goods? Or compared prices for items you bought online and said, 'Boy, I wish there was an easier way to do this. I wish technology could solve this for me.' Nathan Feather: Yeah. Yesterday, about 24 hours ago. Brian Nowak: Well, our work on agentic commerce shows a lot of these capabilities could be coming sooner than a lot of people appreciate. We believe that agentic commerce could grow to be 10 to 20 percent of overall U.S. e-commerce by 2030, and potentially add 100 to 300 basis points of overall growth to e-commerce. There are certain categories of spend we think are going to be particularly large unlocks for agentic commerce. I mentioned grocery, I mentioned household essentials. We think these are some of the items that agentic commerce is really going to drive a further digitization of over the next five years. So maybe Nathan, let's start at the very top. The work we did together shows that 40 to 50 percent of consumers in the U.S. already use different AI tools for product research, but only a mid single digit percentage of them are actually starting their shopping journey or buying things today. 
What does that gap tell you about the agentic opportunity and some of the hurdles we have to overcome to close that gap from research to actual purchasing?Nathan Feather: Well, I think what it shows is that clearly there is demand from consumers for these products. We think agentic opens up both evolutionary and revolutionary ways to shop online for consumers. But at the moment, the tools aren't fully developed and the consumer behavior isn't yet there. And so, we think it'll take time for these tools to develop. But once they do, it's clear that the consumer use case is there and you'll start to see adoption.And building on that, Brian, on the large cap side, you've done a lot of work here on how the shopping funnel itself could evolve. Traditionally discovery has flowed through search, social or direct traffic. Now we're seeing agents begin to sit in the start of the funnel acting as the gatekeeper to the transaction. For the biggest platforms with massive reach, how meaningful is that shift?Brian Nowak: It is very meaningful. And I think that this agentic shift in how people research products, price compare products, purchase products, is going to lead to even more advertis[ing] and value creation opportunity for the big social media platforms, for the big video platforms. Because essentially these big platforms that have large corpuses of users, spending a lot of time on them are going to be more important than ever for companies that want to launch new products. Companies that want to introduce their products to new customers.People that want to start new businesses entirely, it's going to be harder to reach new potential customers in an agentic world. 
So, I think some of these leading social and reach based video platforms are going to go up in value and you'll see more spend on those for people to build awareness around new and existing products.On this point of the products, you know, our work shows that grocery and consumer packaged goods are probably going to be one of the largest category unlocks. You know, we already know that over 50 percent of incremental e-commerce growth in the U.S. is going to come from grocery and CPG. And we think agentic is going to be a similar dynamic where grocery and CPG is going to drive a lot of agentic spend.Why do you think that is? And sort of walk us through, what has to happen in your mind for people to really pivot and start using agents to shop for their weekly grocery basket?Nathan Feather: I think one of the key things about the grocery category is it's a very high friction category online. You have to go through and select each individual ingredient you want [in] the order, ensure that you have the right brand, the right number of units, and ensure that the substitutions – when somebody actually gets to the store – are correct.And so for a user, it just takes a substantial amount of time to build a basket for online grocery. We think agentic can change that by becoming your personal digital shopper. You can say something as simple as, ‘I want to make steak tacos for dinner.' And it can add all of the ingredients you want to your order. Go from the grocery store you like. And hey, it'll know your preferences. It'll know you already like a certain brand of tortillas, and it'll add those to the cart. And so it just dramatically reduces the friction.Now, that will take time to build the tools. The tools aren't there today, but we think that can come sooner than people expect. Even over the next one to two years that you start to get this revolutionary grocery experience.And so, it's coming. 
And from your perspective, Brian, once agentic grocery shopping does start to work, how does that impact the broader e-commerce adoption curve? Does it pull forward agentic behavior in other categories as well?Brian Nowak: I think it does. I think it does lead to more durable multi-year, overall e-commerce growth. And potentially in some of our more bull case scenarios, we've built out – even an acceleration in e-commerce growth, even though the numbers and the dollars added are getting larger. But there is some tension around profitability.We are in a world where a lot of e-commerce companies, they generate an outsized percentage of their profit from advertising and retail media that is attached to current transactions. Agentic commerce and agents wedging themself between the consumer and these platforms potentially put some of these high-margin retail media ad dollars at risk.So talk us through some of the math that we've run on that potential risk to any of the companies that are feeding into these agents for people to shop through.Nathan Feather: Well, in our work for most e-commerce companies, a majority – or sometimes even all – of their e-commerce profitability comes from the advertising side. And so this is the key profit pool for e-commerce. To the extent that goes away, there is one potential offset here, which is the lower fee that agentic offers for companies that currently have high marketing spend. To the extent that agentic offers a lower take rate, that could be an offset.But we think it's going to be very important for companies to monitor the retail media landscape and ensure they can try to keep direct traffic as best as possible. And things like onsite agents could be really important to making sure you're staying top of mind and owning that customer relationship.Now, on the platform side, search today captures an implied take rates that are 5-10 times higher than what we're seeing in the early agentic transaction fees. 
If this model does shift from CPC – or cost per click – towards a more commission based model, Brian, how do you think search platforms respond?Brian Nowak: I think the punchline is the percentage of traffic and transactions that retailers or brands or companies selling their items online that's paid is going to go up. You know, while search is a relatively more expensive channel on a per transaction basis, search works because there's a very large amount of unpaid and direct traffic that retailers benefit from post the first time they spend on search.Just some math on this. We're still at a situation where 80 percent of retailers' online traffic is free. Or direct. And so if we do get into a situation where there's a transition from a higher monetizing per transaction search to a lower monetizing per transaction agent, I would expect the search platforms to react by essentially making it more challenging to get free and direct and unpaid traffic. And we'll have that transition from more transactions at a lower rate; as opposed to fewer transactions at a higher rate, which is what we have now,Nathan, in our work, we also talked about a Five I's framework. We talked about inventory, infrastructure, innovation, incrementality and income statement, sort of a retailer framework to assess positioning within the agentic transition. Maybe walk us through what your big takeaways were from the Five I's framework and what it means that retailers need to be mindful of throughout this agentic transition.Nathan Feather: Well, for retailers, I think it's going to be very important that you're winning by differentiation. Having unique, competitively priced inventory with infrastructure that can fulfill that quickly to the consumer and critically staying on the leading edge of innovation.It's one thing to have the inventory. 
It's another thing to be able to be actively plugged into these agentic tools and make sure you're developing good experiences for your customers that actually are on this cutting edge. In addition, it's one thing to have all of that, but you want to make sure there's also incrementality opportunity.So [the] ability to go out, expand the TAM and gain market share. And of course what we just talked about with the margin risk, I think all of those are going to be very important. And so on balance for retailers, we do see a lot of opportunity. That's balanced with a lot of risk. But this is one of those key transition moments that we think companies that really execute and perform well should be able to perform nicely.Now finally, Brian, over the next five years, how do you think agent commerce reshapes competitive dynamics across the internet ecosystem?Brian Nowak: I think over the next few years, we're going to realize that agentic commerce is no longer a fringe experiment or a concept. It's a reality. And we may get to the point where we don't even talk about agentic commerce or agentic shopping. We just say, “‘This cool thing I did through my browser.' Or, ‘Look at what my search portal can do. Look at how my search portal found me this product. Look at how my groceries got delivered.' And it'll become part of recurring life. It'll become normal.So right now we say it's agentic, it's far off. It's going to take time to develop. But I would argue that every year that goes by, it's going to be becoming more part of normal life. And we'll just say, ‘This is how I shop online.'Nathan, thanks for taking the time todayNathan Feather: It was great speaking with you, Brian.Brian Nowak: And thanks for listening. If you enjoy Thoughts on the Market, please leave us a review wherever you listen. And share the podcast with a friend or colleague today.

Lord Abbett: The Investment Conversation
The Investment Conversation: Why Origination Matters in Private Credit Deals

Lord Abbett: The Investment Conversation

Play Episode Listen Later Feb 17, 2026 23:46


In this podcast, Lord Abbett Head of Origination Jonathan Pearl discusses his team's approach to sourcing and constructing direct lending transactions—including when to say “no.”

City Cast Las Vegas
Local Alternatives to Your Favorite Big Box Stores

City Cast Las Vegas

Play Episode Listen Later Feb 17, 2026 21:16


We are all guilty of heading to the big box stores – the Targets, the Walmarts – when we're in a pinch and need a one-stop shop. But there's a whole host of Vegas shops you can go to instead to put your money back into the local economy. Host Sonja Cho Swanson is joined by CoCo Jenkins, founder of There's Nothing to Do in Vegas, and Nicki Pucci, founder of Tambourine Home, to share their favorite big box chain store dupes in town. Learn more about the sponsors of this February 18th episode: The Neon Museum. Want to get in touch? Follow us @CityCastVegas on Instagram, or email us at lasvegas@citycast.fm. You can also call or text us at 702-514-0719. For more Las Vegas news, make sure to sign up for our morning newsletter, Hey Las Vegas. Learn more about becoming a City Cast Las Vegas Neighbor at membership.citycast.fm. Looking to advertise on City Cast Las Vegas? Check out our options for podcast and newsletter ads at citycast.fm/advertise.

Acoustic Alternatives
Acoustic Alternatives with Tony Lucca and John Bommarito

Acoustic Alternatives

Play Episode Listen Later Feb 17, 2026 79:12


This one was a long time in the making. I'd been checking in with Tony Lucca on the occasions he would come back to his home state of Michigan for a show, to see if he was available to join me for a chat. Two previous in-studio sessions in Ann Arbor in 2010 and 2016 established a mutual respect for one another.

When I first heard his music in 2010 on an album called Rendezvous with the Angels, I had no idea what an interesting backstory he had. To paraphrase something I say in our conversation: how is it that I can get access to someone with this much talent and history?

It's a long one, and I actually left a few questions out when we were recording due to time restraints. Enjoy this conversation with Tony Lucca.

Songs written by Tony Lucca:
In My Life Today
True Story
Back in '87
Top and the Bottom

Find Tony on the web at: https://www.tonylucca.com/
See or listen to other Acoustic Alternatives sessions: https://johnmbommarito.wixsite.com/johnbommarito/acoustic-alternatives
Book Grove Studios for your next musical needs: https://grovestudios.space/

Clare FM - Podcasts
Uisce Éireann Criticised For Failure To Consider Alternatives To River Shannon To Dublin Pipeline

Clare FM - Podcasts

Play Episode Listen Later Feb 17, 2026 7:07


Uisce Éireann is being criticised for failing to consider alternatives to a project which will see 330 million litres of water taken from the River Shannon. An event held by the River Shannon Protection Alliance has heard that the proposed 170-kilometre pipeline from the Parteen Basin to Dublin will result in "dangerous low water flow" in the Shannon. Clare County Council has this month agreed to lodge a submission with An Coimisiún Pleanála outlining local representatives' concerns around the plans. Environmentalist and Senior Project Manager with the River Shannon Protection Alliance Elaine Doyle believes there are better ways to address the drinking water supply shortage in the capital.

Thoughts on the Market
Introducing Hard Lessons

Thoughts on the Market

Play Episode Listen Later Feb 16, 2026 2:20


Iconic investors sit down with Morgan Stanley leaders to go behind the scenes on the critical moments – both successes and setbacks – that shaped who they are today.

Watch and listen to the series on your favorite platform.

95bFM
Biodynamic Alternatives to Fungicides w/ Doctoral Researcher Nikolai Siimes: 17 February 2026

95bFM

Play Episode Listen Later Feb 16, 2026


Despite their strong reputation for sustainability, New Zealand's vineyards and orchards still use large amounts of fungicide to fight plant diseases. These chemicals carry environmental risks, including the greenhouse gases emitted through their manufacture and transportation, and the toxic run-off which they can cause when applied. Newsteamer Alex spoke with Nikolai Siimes, a Doctoral Researcher at the University of Auckland who says we should be looking at alternatives — not just developing better pesticides, but rethinking our fruit farming practices from the ground up. 

The Dignity Lab
Dignity, Forgiveness, & the Alternatives to Forgiveness

The Dignity Lab

Play Episode Listen Later Feb 15, 2026 12:44 Transcription Available


Join the dialogue - text your questions, insights, and feedback to The Dignity Lab podcast.

In this episode of the Dignity Lab, Jennifer Griggs explores the concept of dignity in the context of forgiveness and its alternatives. She discusses how understanding dignity can aid in healing from past hurts, emphasizing the importance of validating one's own experiences and recognizing the elements of dignity that may have been violated. She also covers the ways in which taking accountability, where applicable, can further healing.

Takeaways
Dignity is your inherent worth or value.
Understanding dignity aids in healing even if forgiveness does not appeal.
Dignity is vulnerable to harm and trauma.
Naming dignity elements helps validate personal pain.
Validating experiences confirms their authenticity.
Accountability is a key element of dignity.
Recognizing personal agency can empower healing.
Accountability helps make sense of personal hurt.

Exploring what it means to live and lead with dignity at work, in our families, in our communities, and in the world. What is dignity? How can we honor the dignity of others? And how can we repair and reclaim our dignity after harm? Tune in to hear stories about violations of dignity and ways in which we heal, forgive, and make choices about how we show up in a chaotic and fractured world. Hosted by physician and coach Jennifer Griggs.

For more information on the podcast, please visit www.thedignitylab.com.
For more information on podcast host Dr. Jennifer Griggs, please visit https://jennifergriggs.com/.
For additional free resources, including the periodic table of dignity elements, please visit https://jennifergriggs.com/resources/.

The Dignity Lab is an affiliate of Bookshop.org and will receive 10% of the purchase price when you click through and make a purchase. This supports our production and hosting costs. Bookshop.org doesn't earn money off bookstore sales; all profits go to independent bookstores.
We encourage our listeners to purchase books through Bookshop.org for this reason.

Thoughts on the Market
Why a Tariff Ruling Could Mean Consumer Relief

Thoughts on the Market

Play Episode Listen Later Feb 13, 2026 4:57


Arunima Sinha, from the U.S. and Global Economics team, discusses how an upcoming Supreme Court decision could reshape consumer prices, retail margins and the inflation outlook in 2026.

Read more insights from Morgan Stanley.

----- Transcript -----

Arunima Sinha: Welcome to Thoughts on the Market. I'm Arunima Sinha from Morgan Stanley's U.S. and Global Economics teams.

Today: How a single Supreme Court ruling could change the tariff math for U.S. consumers.

It's Friday, February 13th at 10am in New York.

The U.S. Supreme Court is deciding whether the U.S. president has legal authority to impose sweeping tariffs under IEEPA. That decision could come as soon as next Friday. IEEPA, or the International Emergency Economic Powers Act, is the legal backbone for a significant share of today's consumer goods tariffs. If the Supreme Court limits how it can be used, tariffs on many everyday items could fall quickly – affecting prices on the shelf, margins for retailers, and the broader inflation outlook.

As of now, effective tariff rates on consumer goods are running about 15 percent, based on November 2025 data. And that's quite a bit higher than the roughly 10 percent average we're seeing on all goods. In a post-IEEPA scenario, we think that the effective tariff rate on consumer goods could fall to the mid-11 percent range. It's not zero, but it is meaningfully lower.

An important caveat is that this would not eliminate all tariffs. Other trade tools – like Section 232, the national security tariffs, and Section 301, the tariffs related to unfair trade practices – would remain in place. Autos and metals, for example, are largely outside the IEEPA discussion.

The main pressure point, we think, is consumer goods. IEEPA has been used for two major sets of tariffs: the fentanyl-related tariffs on Mexico, Canada, and China, and the so-called reciprocal tariffs applied broadly across trading partners. And these often stack on top of existing tariffs, such as the MFN, or Most Favored Nation, rates and the Section 301 duties on China that already existed before 2025.

The exposure is really concentrated in certain categories of consumer goods. For example, in apparel and footwear, about 60 percent of the applied tariffs are IEEPA-related. For furniture and home improvement, it's over 70 percent. For toys, games, and sporting equipment, it's more than 90 percent. So, if IEEPA authority is curtailed, the category-level effects would be meaningful.

There are caveats, of course. The court's decision may not be all or nothing. And policymakers could turn to alternative authorities. One example is Section 122, which allows across-the-board tariffs of up to 15 percent for 150 days. So, tariffs could just reappear under different tools. But in the near term, fully replacing IEEPA-based tariffs on consumer goods may not be straightforward, especially given ongoing affordability concerns.

So, how does that matter for the real economy? There are two key channels: prices and margins. On prices, we estimate that about 60 percent of tariff costs are typically passed on to consumers over two to three quarters, but it's not instant. Margins, though, could respond faster. If companies get cost relief before they adjust prices downwards, that creates a temporary margin tailwind. That could influence hiring, investment and earnings across retail and consumer supply chains.

Over time, lower tariffs could also reinforce that broader return to core goods disinflation starting in the second quarter of this year. And because tariff-driven inflation has weighed more heavily on middle- and lower-income households, any eventual price relief could disproportionately benefit those groups.

At the end of the day, this isn't just a legal story. It is a timing story. If IEEPA authority is curtailed, the arithmetic shifts pretty quickly. Margins move first, prices follow later, and the path back to goods disinflation could accelerate. That's why this is one ruling worth watching before the gavel drops.

Thanks for listening. If you enjoy the show, please leave us a review wherever you listen and share Thoughts on the Market with a friend or colleague today.

Capital Hacking
E432: AI Is Coming for Your Industry — Is Your Portfolio Safe? with Patrick Grimes

Capital Hacking

Play Episode Listen Later Feb 13, 2026 41:17


In this engaging conversation, Patrick Grimes shares his journey from a career in automation robotics and machine design to becoming a private investor. He details the lessons learned from experiencing foreclosure during the 2008/2009 market downturn, which led him to develop his "Three Rings of Investment" philosophy: seeking recession resilience, non-correlation, and insulation from AI disruption. Grimes critiques publicly traded Real Estate Investment Trusts (REITs) in what he calls "The Ruse of REITs," arguing they are "publicly traded paper" that lack the core tax and inflation-hedging benefits of direct real estate. He also emphasizes the power of partnership to build a stable, hyper-diversified portfolio and discusses high-return alternative asset classes like commercial debt, legal funding, and medical receivables.

Ultimate Show Notes:
01:48 - Patrick Grimes's Background and Career
03:57 - The 'Aha' Moment: Advice to Invest in Alternatives, Not Stocks
05:56 - Early Setback: Foreclosure and Learning Recession Resilience
10:38 - Overview of Passive Investing Mastery (PIM)
12:53 - The Three Rings of Investment: Recession Resilience, Non-Correlation, and AI Insulation
14:55 - The Risk of AI Disruption in Investments
23:49 - "The Ruse of REITs" and Stock Market Correlation
30:23 - The Power of Partnership and Hyper-Diversification
34:15 - Discussion of Returns in Private Credit and Debt Funds
39:00 - High-Return, Low-Risk Boutique Alternatives (Legal Funding, Medical Receivables)

Connect with Patrick:
www.passiveinvestingmastery.com/book
patrickgrimes@passiveinvestingmastery.com

Learn More About Accountable Equity:
Visit Us: http://www.accountableequity.com/
Access eBook: https://accountableequity.com/case-study/#register

Turn your unique talent into capital and achieve the life you were destined to live. Join our community! We believe that Capital is more than just Cash. In fact, Human Capital always comes first before the accumulation of Financial Capital.
We explore the best, most efficient, high-integrity ways of raising capital (Human & Financial). We want our listeners to use their personal human capital to empower the growth of their financial capital. Together we are stronger.

Linkedin | Facebook | Instagram | Apple Podcast | Spotify

Master My Garden Podcast
EP316 Peat Free Alternatives For Sowing Seed Rethinking Peat In Seed Starting

Master My Garden Podcast

Play Episode Listen Later Feb 13, 2026 44:26 Transcription Available


Peat built our seed-starting habits because it made life easy: even moisture, airy structure, predictable results. But when carbon-rich bogs and vanishing habitats enter the frame, “easy” stops feeling right. We take a clear-eyed look at what peat-free really means for gardeners in Ireland, the UK, and the US—beyond labels, beyond trends—and ask how to balance strong germination with true environmental sense.

We start by mapping the policy shifts and market realities: Ireland still sells mostly peat-based compost; the UK's retail ban has pushed rapid innovation; the US market offers a mature spread of growing media, from coir and wood fibre to biochar, vermicast, and tailored blends. Then we dig into performance. Peat-free mixes can be excellent but inconsistent, changing with feedstocks and age. Two bags from the same pallet may give different germination and salt levels. We explain why that happens, how peat-free holds water differently, and how to adjust watering and timing to avoid stalled seedlings or damping-off.

From there, we get practical. We're trialling three seed-starting paths this season: a local vermicast blend opened with perlite and a touch of biochar for moisture balance; a highly regarded coir-forward seed mix known for uniform germination; and a very small reserve of peat-based compost used only for sowing. We also share DIY routes: hot-composting followed by a long cure to stabilise the material, blending with sharp sand or perlite, and using inert media like grit plus vermiculite for germination before an early prick-out into a proven mix.
Along the way, we question coir's “green” halo by tracing its journey across oceans and factories—great performance can still carry a heavy footprint if it travels farther than your holidays.

If you want reliable seedlings without greenwash, this conversation gives you a framework: use imports sparingly where they truly shine, switch to local bulk mixes for planters and potting on, learn the moisture cues of peat-free, and record what works in your climate. We'd love to hear your winning recipes and failures too. Subscribe, share this with a gardening friend, and leave a review with your go-to seed-starting mix so we can test it next.

Join my free Grow Your Own Food Webinar: http://subscribepage.io/growyourownfoodwebinar
Last Few Places In Feb Workshop: https://subscribepage.io/growyourownfoodworkshop

Support the show

If there is any topic you would like covered in future episodes, please let me know. Email: info@mastermygarden.com

Check out Master My Garden on the following channels:
Facebook: https://www.facebook.com/mastermygarden/
Instagram: @Mastermygarden https://www.instagram.com/mastermygarden/

Until next week,
Happy gardening,
John

Thoughts on the Market
Signs That Global Growth May Be Ahead

Thoughts on the Market

Play Episode Listen Later Feb 12, 2026 4:11


Our Global Head of Fixed Income Research Andrew Sheets explains how key market indicators reflect a constructive view of the global cyclical outlook, despite a volatile start to 2026.

Read more insights from Morgan Stanley.

----- Transcript -----

Andrew Sheets: Welcome to Thoughts on the Market. I'm Andrew Sheets, Global Head of Fixed Income Research at Morgan Stanley. Today I'm going to talk about the unusual alignment of a number of key indicators.

It's Thursday, February 12th at 2pm in London.

A frustrating element of investing is that any indicator at any time can let you down. That makes sense. With so much on the line, the secret to markets probably isn't just one of hundreds of data series that thousands of us can access at the push of a button. But many indicators all suggesting the same thing? That's far more notable. And despite a volatile start to 2026, with big swings in everything from Japanese government bonds to software stocks, it is very much what we think is happening below the surface.

Specifically, a variety of indicators linked to optimism around the global cyclical outlook are all stronger, all moving up and to the right. Copper, which is closely followed as an economically sensitive commodity, is up strongly. Korean equities, which have above-average cyclicality and sensitivity to global trade, are the best performing of any major global equity market over the last year. Financials, which lie at the heart of credit creation, have been outperforming across the U.S., Europe, and Asia. And more recently, year-to-date, cyclicals and transports are outperforming, small caps are leading, breadth is improving, and the yield curve is bear steepening. All of these are the outcomes you'd expect, all else equal, if global growth is going to be stronger in the future than it is today.

Now individually, these data points can be explained away. Maybe copper is just part of an AI build-out story. Maybe Korea is just rebounding off extreme levels of valuation. Maybe financials are just about deregulation and a steeper yield curve. Maybe the steeper yield curve is just about policy uncertainty. And small-cap stocks have been long-term laggards – maybe every dog has its day.

But collectively, well, they're exactly what investors would be looking for to confirm that the global growth backdrop is getting stronger, and we believe they form a pretty powerful, overlapping signal worthy of respect.

But if things are getting better, how much is too much? In the face of easier fiscal, monetary, and regulatory policy, the market may focus on other signposts to determine whether we now have too much of a good thing. For example, are there signs of significant inflation on the horizon? Is volatility in the bond market increasing? Is the U.S. dollar deviating significantly from its fair value? Is the credit market showing weakness? And do stocks and credit now react badly when the data is good?

So far, not yet. As we discussed on this program last week, long-run inflation expectations in the U.S. and euro area remain pretty consistent with central bank targets. Expected volatility in U.S. interest rates has actually fallen year-to-date. The U.S. dollar's valuation is pretty close to what purchasing power parity would suggest. Credit has been very stable. And better-than-expected labor market data on Wednesday was treated well.

Any single indicator can and eventually will let investors down. But when a broad set of economically sensitive signals all point in the same direction, we listen. Taken together, we think this alignment is still telling a story of supportive fundamental tailwinds while key measures of stress hold. Until that evidence changes, we think those signals deserve respect.

Thank you, as always, for your time. If you find Thoughts on the Market useful, let us know by leaving a review wherever you listen. And also tell a friend or colleague about us today.

Here to Evolve
128. Q+A Day | Postpartum Motivation, Macro Tracking for Moms & Smarter Training Splits

Here to Evolve

Play Episode Listen Later Feb 12, 2026 42:44


In this Q+A episode of The Fitness League Podcast, we're tackling real-life fitness questions from busy moms and lifters trying to make progress without perfect conditions. We dive into practical macro tracking strategies for moms who don't have time to weigh every gram, how to find motivation postpartum when routines feel impossible, and simple meal ideas that make healthy eating sustainable (yes, yogurt bowls make the list). We also break down workout splits, why flexibility in your training matters more than perfection, and how small tweaks—like foot positioning on leg curls—can improve muscle activation and results.

This episode is about adapting your fitness approach to your current season of life. Whether you're navigating postpartum recovery, juggling kids and career, or just trying to stay consistent without burning out, we'll help you simplify the process and focus on what actually moves the needle. As always, progress isn't built in one big moment—it's built in the small decisions you repeat consistently.

APPLY FOR COACHING: https://www.lvltncoaching.com/1-1-coaching
The Fitness League app: https://www.fitnessleagueapp.com/
Macros Guide: https://www.lvltncoaching.com/free-resources/calculate-your-macros
Join the Facebook Community: https://www.facebook.com/groups/lvltncoaching
FREE TOOLS to start your health and fitness journey: https://www.lvltncoaching.com/resources/freebies
Alessandra's Instagram: http://instagram.com/alessandrascutnik
Joelle's Instagram: https://www.instagram.com/joellesamantha?igsh=ZnVhZjFjczN0OTdn
Josh's Instagram: http://instagram.com/joshscutnik

Chapters
00:00 Welcome to the Fitness League
03:57 Macro Tracking Tips for Busy Moms
11:54 Finding Motivation Postpartum
16:02 Quick and Healthy Meal Ideas
20:41 Protein Snacks and Alternatives
22:08 Core Exercises During Pregnancy
22:54 Movies and Mental Health
28:06 Training Splits: Full Body vs. Upper/Lower
32:19 Flexibility in Workout Scheduling
35:03 Leg Curl Techniques and Preferences

The Biblical Mind
Love, Justice, and the American Prison System: A Biblical Rethink (Abigail Pasiuk) Ep. #239

The Biblical Mind

Play Episode Listen Later Feb 12, 2026 36:34


**********
We recently uploaded the wrong audio file for this episode; sorry about that! The correct version is now live. If your podcast app already downloaded the original (incorrect) file, it may not automatically replace it. You'll need to delete the old download and re-download the episode. Here's how:

Step 1: Delete the downloaded episode. Open your podcast app, go to the episode, and remove/delete the downloaded file. (Look for a checkmark, download arrow, or "Downloaded" label, then choose "Remove Download" or "Delete Download.")

Step 2: Re-download the episode. Once the old download is removed, tap the Download button again. The correct, updated audio will download.

If it still plays the old version: close and reopen your podcast app, or refresh the show feed (some apps have a "Pull to Refresh" or "Refresh" option). As a last resort, try deleting and reinstalling the app (this may remove saved downloads).

App-specific notes:
Apple Podcasts: Tap the three dots → "Remove Download" → re-download.
Spotify: Tap the green download arrow to remove → tap again to re-download.
Overcast / Pocket Casts / others: Remove the download, then download again.
**********

In this eye-opening conversation, PhD researcher Abigail Pasiuk joins Dr. Dru Johnson to explore how the Hebrew Bible can inform modern conversations about mass incarceration. Drawing on her personal experience (her father's time in federal prison) and academic research at Oxford, Abby offers a theologically rich critique of the retributive justice models prevalent in the U.S. prison system. She explains how biblical justice prioritizes restoration and dignity rather than dehumanization, citing key themes such as the Shema and the imago Dei.
Abby shares firsthand accounts from interviews with incarcerated individuals, exposing everyday indignities, from food labeled "not for human consumption" to being stripped of identity and reduced to a number. With over 80% recidivism in the U.S., Abby points to countries like Norway, where restorative practices and the "principle of normalcy" have dramatically reduced reoffense. The episode challenges listeners to rethink what justice should look like through a biblical lens: not just punishment, but humanizing correction rooted in love. It's a conversation that bridges theology, criminology, and real human stories, urging the church to see prisoners not as disposable, but as image-bearers.

Follow Abigail's work here: https://www.theology.ox.ac.uk/people/abigail-pasiuk
We are listener supported. Give to the cause here: https://hebraicthought.org/give
For more articles: https://thebiblicalmind.org/

Social links:
Facebook: https://www.facebook.com/HebraicThought
Instagram: https://www.instagram.com/hebraicthought
Threads: https://www.threads.net/hebraicthought
X: https://www.twitter.com/HebraicThought
Bluesky: https://bsky.app/profile/hebraicthought.org

Chapters:
00:00 Abigail's Journey to Oxford
08:26 The PhD Experience at Oxford
17:18 Research Focus: Mass Incarceration and Justice
27:09 Critique of the Prison System and Alternatives

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

Jeff joins us to unpack what it really means to "own the Pareto frontier," why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:

* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the "bigger model, more data, better results" mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier "Pro" models and low-latency "Flash" models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and "outrageously large" networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:

* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's "Software Engineering Advice from Building Large-Scale Distributed Systems" Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on "Important AI Trends" @Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

Jeff Dean
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google
* https://google.com
* https://deepmind.google

Full Video Episode

Timestamps

00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic "Latency numbers every programmer should know"
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 — Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks

Transcript

Alessio Fanelli [00:00:04]: Hey
everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space. Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier. Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there. Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together. Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah. Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users.
And I think initially when you worked on the TPU, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, you'd need to double your CPU count. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it? Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or six-months-ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader use cases. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both. And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either-or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah. Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with distillation in 2014. Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah. Alessio Fanelli [00:03:30]: A long time ago.
But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models and, you know, how do you reevaluate them? How do you think about in the next generation of model, what is worth revisiting? Like, yeah, they're just kind of like, you know, you worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah.Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images. You get much better performance. If you then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that and train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today we're instead of having an ensemble of 50 models. We're having a much larger scale model that we then distill into a much smaller scale model.Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is you can, RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually sometimes... 
It might be lossy in other areas and it's kind of like an uneven technique, but you can probably distill it back and you can, I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think like that, that whole capability merging without loss, I feel like it's like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen much papers about it.Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set and you can get utility out of making many passes over that data set because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed. Is you can get, you know, very close to your largest model performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people because it enables us to kind of, for multiple Gemini generations now, we've been able to make the sort of flash version of the next generation as good or even substantially better than the previous generations pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.Shawn Wang [00:07:02]: So, Dara asked, so it was the original map was Flash Pro and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother load?Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our pro scale model and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have and also inference time scaling. 
It can also be a useful thing to improve the capabilities of the model. Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economy of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day. Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up. Shawn Wang [00:07:50]: No, I mean, economics wise, because Flash is so economical, you can use it for everything. Like it's in Gmail now. It's in YouTube. Like it's, yeah, it's in everything. Jeff Dean [00:08:02]: We're using it more in our search products, in AI Mode and AI Overviews. Shawn Wang [00:08:05]: Oh, my God. Flash powers AI Mode. Oh, my God. Yeah, that's, yeah, I didn't even think about that. Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also lower latency. And I think latency is actually a pretty important characteristic for these models because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do so. So, you know, if you're going to ask the model to do something until it actually finishes what you ask it to do, because you're going to ask now, not just write me a for loop, but like write me a whole software package to do X or Y or Z. And so having low latency systems that can do that seems really important. And Flash is one direction, one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs. The interconnect between chips on the TPUs is actually quite, quite high performance and quite amenable to, for example, long context kind of attention operations, you know, having sparse models with lots of experts.
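The distillation recipe Dean describes, training a small model against the logits of a large one rather than only hard labels, can be sketched in a few lines. This is a minimal illustration in the spirit of the Hinton, Vinyals & Dean formulation; the temperature, blend weight, and toy logits below are arbitrary choices for the example, not values from any Gemini model:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing the teacher's
    # "dark knowledge" about how plausible each wrong class is.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      temperature=2.0, alpha=0.5):
    """Blend of soft-target KL (vs. teacher logits) and hard-label CE."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # KL(teacher || student), averaged over the batch.
    kl = np.mean(np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)),
                        axis=-1))
    # Standard cross-entropy against the one-hot hard labels.
    p_hard = softmax(student_logits)
    ce = np.mean(-np.log(p_hard[np.arange(len(hard_labels)), hard_labels]))
    # The soft term is scaled by T^2 so its gradients stay balanced.
    return alpha * (temperature ** 2) * kl + (1 - alpha) * ce

teacher = np.array([[5.0, 1.0, -2.0]])       # confident teacher logits
good_student = np.array([[4.0, 0.5, -1.5]])  # roughly agrees with teacher
bad_student = np.array([[-2.0, 5.0, 1.0]])   # disagrees with teacher
labels = np.array([0])
assert distillation_loss(good_student, teacher, labels) < \
       distillation_loss(bad_student, teacher, labels)
```

Because the teacher's softened distribution carries information about how wrong each alternative is, the student gets a richer training signal than one-hot labels alone, which is why many passes over the same data remain useful.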
These kinds of things really, really matter a lot in terms of how do you make them servable at scale. Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for, like, the Pro-to-Flash distillation, kind of like one generation delayed? I almost think about it like this: the Pro model today saturates some set of tasks, so next generation, that same task will be saturated at the Flash price point. And I think for most of the things that people use models for, at some point the Flash model in two generations will be able to do basically everything. And how do you make it economical to, like, keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that. Jeff Dean [00:09:59]: I mean, I think that's true if your distribution of what people are asking the models to do is stationary, right? But I think what often happens is as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true, not just of coding, but of, you know, now, you know, can you analyze all the, you know, renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a much more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in advance of what people ask the models to do. And that also then gives us insight into, okay, where do things break down?
How can we improve the model in these, these particular areas, uh, in order to sort of, um, make the next generation even better.Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or like test sets they use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's like 99 instead of 97. Like, how do you have to keep pushing the team internally to it? Or like, this is what we're building towards. Yeah.Jeff Dean [00:11:26]: I mean, I think. Benchmarks, particularly external ones that are publicly available. Have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I, I like to think of the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for, uh, whatever it is, the benchmark is trying to assess and get it up to like 80, 90%, whatever. I, I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, cuz it's sort of, it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data or very related kind of data being, being in your training data. Um, so we have a bunch of held out internal benchmarks that we really look at where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have. Um, yeah. Yeah. Um, that it doesn't have now, and then we can work on, you know, assessing, you know, how do we make the model better at these kinds of things? Is it, we need different kind of data to train on that's more specialized for this particular kind of task. 
Do we need, um, you know, a bunch of, uh, you know, architectural improvements or some sort of, uh, model capability improvements, you know, what would help make that better?Shawn Wang [00:12:53]: Is there, is there such an example that you, uh, a benchmark inspired in architectural improvement? Like, uh, I'm just kind of. Jumping on that because you just.Jeff Dean [00:13:02]: Uh, I mean, I think some of the long context capability of the, of the Gemini models that came, I guess, first in 1.5 really were about looking at, okay, we want to have, um, you know,Shawn Wang [00:13:15]: immediately everyone jumped to like completely green charts of like, everyone had, I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.Jeff Dean [00:13:23]: I mean, I think, um, and once you're set, I mean, as you say that needed single needle and a half. Hey, stack benchmark is really saturated for at least context links up to 1, 2 and K or something. Don't actually have, you know, much larger than 1, 2 and 8 K these days or two or something. We're trying to push the frontier of 1 million or 2 million context, which is good because I think there are a lot of use cases where. Yeah. You know, putting a thousand pages of text or putting, you know, multiple hour long videos and the context and then actually being able to make use of that as useful. Try to, to explore the über graduation are fairly large. But the single needle in a haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic, take all this content and produce this kind of answer from a long context that sort of better assesses what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. 
It's interesting because I think the more meta level I'm trying to operate at here is you have a benchmark. You're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say. Exactly the kind of thing. Yeah, you're going to win. Short term. Longer term, I don't know if that's going to scale. You might have to undo that.Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is, can I attend to the internet while I answer my question? Right? But that's not going to happen. I think that's going to be solved by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that to a trillion tokens, let alone, you know, a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You would have attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find. You could attend to the form for a single video, but across many videos, you know, on a personal Gemini level, you could attend to all of your personal state with your permission. So like your emails, your photos, your docs, your plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens? Right. In a meaningful way. 
Yeah.Shawn Wang [00:16:26]: But by the way, I think I did some math and it's like, if you spoke all day, every day for eight hours a day, you only generate a maximum of like a hundred K tokens, which like very comfortably fits.Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos.Shawn Wang [00:16:46]: Well, also, I think that the classic example is you start going beyond language into like proteins and whatever else is extremely information dense. Yeah. Yeah.Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video sort of human-like and audio, audio, human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Yeah. Like LIDAR sensor data from. Yes. Say, Waymo vehicles or. Like robots or, you know, various kinds of health modalities, x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed to the fact that this is an interesting modality and has certain meaning in the world. Where even if you haven't trained on all the LIDAR data or MRI data, you could have, because maybe that's not, you know, it doesn't make sense in terms of trade-offs of. You know, what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Yeah. Because it sort of tempts the model that this is a thing.Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic and something I just get to ask you all the questions I always wanted to ask, which is fantastic. Like, are there some king modalities, like modalities that supersede all the other modalities? So a simple example was Vision can, on a pixel level, encode text. 
And DeepSeek had this DeepSeek-OCR paper that did that. Vision. And Vision has also been shown to maybe incorporate audio because you can do audio spectrograms, and that's, that's also like a Vision-capable thing. Like, so, so maybe Vision is just the king modality and like. Yeah. Jeff Dean [00:18:36]: I mean, Vision and Motion are quite important things, right? Motion. Well, like video as opposed to static images, because I mean, there's a reason evolution has evolved eyes like 23 independent ways, because it's such a useful capability for sensing the world around you, which is really what we want these models to be able to do: interpret the things we're seeing or the things we're paying attention to and then help us in using that information to do things. Yeah. Shawn Wang [00:19:05]: I think motion, you know, I still want to shout out, I think Gemini is still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice. Jeff Dean [00:19:15]: Yeah. Yeah. I mean, it's actually, I think people kind of are not necessarily aware of what the Gemini models can actually do. Yeah. Like I have an example I've used in one of my talks. It was like a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has like Michael Jordan hitting some jump shot at the end of the finals and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are, what the date is when they happened, and a short description. And so you get like now an 18-row table of that information extracted from the video, which is, you know, not something most people think of as like a turn-video-into-a-SQL-like-table task. Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of like, you mentioned tending to the whole internet, right?
Google, it's almost built because a human cannot tend to the whole internet and you need some sort of ranking to find what you need. Yep. That ranking is like much different for an LLM because you can expect a person to look at maybe the first five, six links in a Google search, versus for an LLM, should you expect to have 20 links that are highly relevant? Like how do you internally figure out, you know, how do we build the AI mode that is like maybe like much broader search and span versus like the more human one? Yeah. Jeff Dean [00:20:47]: I mean, I think even pre-language model based work, you know, our ranking systems would be built to start with a giant number of web pages in our index, many of them not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds in order to get down to ultimately what you show, which is, you know, the final 10 results or, you know, 10 results plus other kinds of information. And I think an LLM based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000-ish documents, with the, you know, maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the tasks that the user has asked? And I think, you know, you can imagine systems where you have, you know, a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models.
Then you have some system that helps you narrow down from 30,000 to the 117, with maybe a slightly more sophisticated model or set of models. And then maybe the final model, the one that looks at the 117 things, might be your most capable model. So I think it's going to be some system like that, one that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, but you are searching the internet, yet you're finding a very small subset of things that are relevant. Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in Google search history that, well, you know, BERT was basically immediately put inside of Google search, and that improved results a lot, right? Like, I don't have any numbers off the top of my head, but I'm sure you guys do; those are obviously the most important numbers to Google. Yeah. Jeff Dean [00:23:08]: I mean, I think going to an LLM-based representation of text and words and so on enables you to get out of the explicit hard notion of particular words having to be on the page, and really get at the notion that the topic of this page or this paragraph is highly relevant to this query. Yeah. Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic systems. Yeah. Like, it's Google, it's YouTube. YouTube has this semantic ID thing where every token, or every item in the vocab, is a YouTube video or something that predicts the video using a codebook, which is absurd to me at YouTube's size. Jeff Dean [00:23:50]: And then most recently Grok also, for xAI, which is like, yeah.
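The funnel Dean describes, trillions of tokens narrowed to roughly 30,000 candidates and then down to 117 documents by progressively more expensive models, can be sketched as a retrieval cascade. A minimal illustration; the scorers and cutoffs below are placeholders I made up, not Google's actual signals:

```python
def cheap_score(query, doc):
    # Stage 1: very lightweight lexical overlap, run over huge numbers of docs.
    q = set(query.split())
    return len(q & set(doc.split())) / (len(q) or 1)

def medium_score(query, doc):
    # Stage 2: a slightly costlier stand-in (overlap reweighted by doc length).
    return cheap_score(query, doc) * (1 + 1 / (1 + abs(len(doc.split()) - 10)))

def expensive_score(query, doc):
    # Stage 3: the "most capable model" would go here; reuse the medium
    # scorer purely for illustration.
    return medium_score(query, doc)

def cascade(query, corpus, k1=30_000, k2=117, k3=10):
    # Spend more compute per document at each stage, on fewer documents.
    stage1 = sorted(corpus, key=lambda d: cheap_score(query, d), reverse=True)[:k1]
    stage2 = sorted(stage1, key=lambda d: medium_score(query, d), reverse=True)[:k2]
    return sorted(stage2, key=lambda d: expensive_score(query, d), reverse=True)[:k3]
```

The point is the shape, not the scorers: each stage bounds how many documents the next, more expensive stage ever sees.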
I mean, I'll call out that even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query. Shawn Wang [00:24:06]: So do you have like a history of, like, what's the progression? Oh yeah. Jeff Dean [00:24:09]: I mean, I actually gave a talk at, I guess, the web search and data mining conference in 2009, where, although we never actually published any papers about the origins of Google search, we went through four or five or six generations of redesigning the search and retrieval system, from about 1999 through 2004 or 2005. And that talk is really about that evolution. And one of the things that really happened in 2001 was we were working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general, because if you don't have the page in your index, you're going to not do well. And then we also needed to scale our capacity, because our traffic was growing quite extensively. And so we had, you know, a sharded system where you have more and more shards as the index grows. You have like 30 shards, and then if you want to double the index size, you make 60 shards, so that you can bound the latency with which you respond to any particular user query. And then as traffic grows, you add more and more replicas of each of those. And so we eventually did the math and realized that in a data center where we had, say, 60 shards and, you know, 20 copies of each shard, we now had 1,200 machines with disks. And we did the math and we're like, hey, one copy of that index would actually fit in memory across 1,200 machines. So in 2001, we put our entire index in memory, and what that enabled from a quality perspective was amazing.
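The arithmetic behind that 2001 decision is easy to reproduce. A back-of-the-envelope sketch: the 60 shards and 20 replicas are from the conversation, while the index size and per-machine RAM figures below are illustrative assumptions, not stated numbers:

```python
def serving_machines(shards, replicas_per_shard):
    # Disk-based serving: every shard is replicated to handle traffic.
    return shards * replicas_per_shard

def one_copy_fits(index_size_gb, machines, ram_per_machine_gb):
    # Could a single copy of the index be striped across all that RAM?
    return index_size_gb <= machines * ram_per_machine_gb

machines = serving_machines(60, 20)       # 60 shards x 20 replicas = 1200 machines
fits = one_copy_fits(2000, machines, 2)   # assumed ~2 TB index vs ~2 GB RAM/machine
```

Once the fleet exists for traffic reasons anyway, the aggregate RAM comes along for free, which is what made the in-memory design suddenly practical.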
Before, you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three- or four-word query, because now you can add synonyms like restaurant and restaurants and cafe and, you know, things like that. Bistro and all these things. And you can suddenly start really getting at the meaning of the word, as opposed to the exact form the user typed in. And that was, you know, 2001, very much pre-LLM, but really it was about softening the strict definition of what the user typed in order to get at the meaning. Alessio Fanelli [00:26:47]: What are principles that you use to design these systems, especially when, I mean, in 2001, the internet is like doubling, tripling every year in size. And I think today you kind of see that with LLMs too, where every year the jumps in size and capabilities are just so big. Are there just any, you know, principles that you use to think about this? Yeah. Jeff Dean [00:27:08]: I mean, I think, you know, first, whenever you're designing a system, you want to understand what are the design parameters that are going to be most important, you know? So, you know, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? What happens if traffic were to double or triple? You know, will that system work well?
And I think a good design principle is that you're going to want to design a system so that the most important characteristics could scale by factors of five or ten, but probably not beyond that, because often what happens is, if you design a system for X and something suddenly becomes a hundred X, that would enable a very different point in the design space, one that would not make sense at X but all of a sudden at a hundred X makes total sense. So like going from a disk-based index to an in-memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the state on disk that those machines actually can hold a full copy of the index in memory. Yeah. And that all of a sudden enabled a completely different design that wouldn't have been practical before. Yeah. So I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most. Surprising. It used to be once a month. Shawn Wang [00:28:55]: Yeah. Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in like sub one minute. Okay. Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right? Jeff Dean [00:29:04]: Because all of a sudden, news-related queries, you know, if you've got last month's news index, it's not actually that useful. Shawn Wang [00:29:11]: News is a special beast. Was there any... like, you could have split it onto a separate system. Jeff Dean [00:29:15]: Well, we did. We launched a Google News product, but you also want news-related queries that people type into the main index to also be updated. Shawn Wang [00:29:23]: So, yeah, it's interesting.
And then you have to, like, classify whether the page... you have to decide which pages should be updated, and at what frequency. Oh yeah. Jeff Dean [00:29:30]: There's a whole system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often, because the likelihood they change might be low, but the value of having them updated is high. Shawn Wang [00:29:50]: Yeah, yeah. Uh, well, you know, this mention of latency and saving things to disk reminds me of one of your classics, which I have to bring up, which is latency numbers every programmer should know. Was there a general story behind that? Did you just write it down? Jeff Dean [00:30:06]: I mean, this has like eight or ten different kinds of metrics, things like: how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send, you know, a packet from the US to the Netherlands or something? Shawn Wang [00:30:21]: Why the Netherlands, by the way? Is that because of Chrome? Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands. Um, so, I mean, I think this gets to the point of being able to do the back-of-the-envelope calculations. So these are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing of the result page, how would I do that? I could pre-compute the image thumbnails. I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk seeks would I do? And you can actually do thought experiments in, you know, 30 seconds or a minute with the basic numbers at your fingertips.
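The thumbnail thought experiment Dean mentions can be run directly against the classic "latency numbers" list. A sketch using the traditional order-of-magnitude figures (the exact values drift with hardware generations; these are the well-known approximations, not measurements):

```python
# Rough classic latency figures, in nanoseconds.
NS = {
    "l1_cache_ref": 0.5,
    "branch_mispredict": 5,
    "main_memory_ref": 100,
    "read_1mb_from_memory": 250_000,        # ~250 us
    "disk_seek": 10_000_000,                # ~10 ms
    "read_1mb_from_disk": 30_000_000,       # ~30 ms sequential
    "round_trip_to_netherlands": 150_000_000,  # ~150 ms
}

def thumbnail_page_estimate_ns(n_images, from_disk):
    # Serving a results page of thumbnails: pre-computed and held in memory,
    # versus fetched from disk (assume one seek plus ~1 MB read per image).
    if from_disk:
        per_image = NS["disk_seek"] + NS["read_1mb_from_disk"]
    else:
        per_image = NS["read_1mb_from_memory"]
    return n_images * per_image

mem_ns = thumbnail_page_estimate_ns(30, from_disk=False)   # ~7.5 ms total
disk_ns = thumbnail_page_estimate_ns(30, from_disk=True)   # ~1.2 s total
```

Thirty seconds of this kind of arithmetic tells you the disk-based design is two orders of magnitude slower before you write any code.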
Uh, and then as you sort of build software using higher-level libraries, you kind of want to develop the same intuitions for how long it takes to, you know, look up something in this particular kind of... Shawn Wang [00:31:21]: I'll see you next time. Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder if you have any, if you were to update your... Jeff Dean [00:31:58]: I mean, I think it's really good to think about calculations you're doing in a model, either for training or inference. Jeff Dean [00:32:09]: Often a good way to view that is: how much state will you need to bring in from memory, either on-chip SRAM, or HBM, the accelerator-attached memory, or DRAM, or over the network? And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because, depending on your precision, I think it's like sub one picojoule. Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah. Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how you make the most energy-efficient system. And then moving data from the SRAM on the other side of the chip, not even off the chip, but on the other side of the same chip, can be, you know, a thousand picojoules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching. Because if you move, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you better make use of that thing that you moved many, many times. So that's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good. Shawn Wang [00:33:40]: Yeah. Yeah.
Right. Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one-picojoule multiply. Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching. Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Yeah. Ideally, you'd like to use batch size one because the latency would be great. Shawn Wang [00:33:56]: The best latency. Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah. Shawn Wang [00:34:04]: Is there a similar trick like, like you did with, you know, putting everything in memory? Like, you know, I think obviously Groq has caused a lot of waves with betting very hard on SRAM. I wonder if that's something that you already saw with the TPUs, right? To serve at your scale, you probably sort of saw that coming. Like, what hardware innovations or insights were formed because of what you're seeing there? Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. Um, I think for serving some kinds of models, you pay a lot higher cost and latency bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you actually get quite good throughput improvements and latency improvements from doing that. And so you're now striping your smallish-scale model over, say, 16 or 64 chips. And if you do that and it all fits in SRAM, that can be a big win. So yeah, that's not a surprise, but it is a good technique. Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like, how much do you decide where the improvements have to go?
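The picojoule arithmetic behind batching can be written down in one line. A toy model using the illustrative figures from the conversation, roughly 1,000 pJ to move a weight into the multiplier versus roughly 1 pJ for the multiply itself:

```python
def energy_per_useful_multiply(batch_size, move_pj=1000.0, multiply_pj=1.0):
    # A weight moved once is reused across the whole batch dimension,
    # so the movement cost amortizes over batch_size multiplies.
    return move_pj / batch_size + multiply_pj

cost_b1 = energy_per_useful_multiply(1)     # 1001 pJ per useful multiply
cost_b256 = energy_per_useful_multiply(256) # under 5 pJ per useful multiply
```

Batch size one pays the full movement cost on every multiply; at batch 256 the same move is shared 256 ways, which is exactly the trade against latency Dean describes.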
So like, this is a good example of, like, is there a way to bring the thousand picojoules down to 50? Like, is it worth designing a new chip to do that? The extreme is when people say, oh, you should burn the model onto an ASIC, and that's kind of the most extreme thing. How much of it is worth doing in hardware when things change so quickly? Like, what's the internal discussion? Yeah. Jeff Dean [00:35:57]: I mean, we have a lot of interaction between, say, the TPU chip design and architecture team and the higher-level modeling experts, because you really want to take advantage of being able to co-design what future TPUs should look like based on where we think the ML research puck is going, in some sense. Because, you know, as a hardware designer for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center. And then it has to have a reasonable lifetime as a chip, which takes you three, four, five years out. So you're trying to predict two to six years out what ML computations people will want to run, in a very fast-changing field. And so having people with interesting ML research ideas of things we think will start to work in that timeframe, or will be more important in that timeframe, really enables us to get, you know, interesting hardware features put into TPU N+2, where TPU N is what we have today. Shawn Wang [00:37:10]: Oh, the cycle time is plus two. Jeff Dean [00:37:12]: Roughly. Wow. Because, I mean, sometimes you can squeeze some changes into N+1, but, you know, bigger changes are going to require that the chip be earlier in its lifetime design process. Um, so whenever we can do that, it's generally good.
And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something, you know, 10 times as fast. And if it doesn't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Sometimes it's a very big change and we want to be pretty sure it's going to work out, so we'll do lots of careful ML experimentation to show us this is actually the way we want to go. Yeah. Alessio Fanelli [00:37:58]: Is there a reverse of, like, we already committed to this chip design, so we cannot take the model architecture that way because it doesn't quite fit? Jeff Dean [00:38:06]: Yeah. I mean, you definitely have things where you're going to adapt what the model architecture looks like so that it's efficient on the chips that you're going to have for both training and inference of that generation of model. So I think it kind of goes both ways. Um, you know, sometimes you can take advantage of, say, lower-precision things that are coming in a future generation. So you might train at that lower precision, even if the current generation doesn't quite do that. Mm. Shawn Wang [00:38:40]: Yeah. How low can we go in precision? Because people are saying, like, ternary... Jeff Dean [00:38:43]: Yeah, I mean, I'm a big fan of very low precision, because that saves you a tremendous amount, right? Because it's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. Um, you know, I think people have gotten a lot of mileage out of having very low bit precision things, but then having scaling factors that apply to a whole bunch of those weights. Shawn Wang [00:39:15]: Interesting. So low precision, but scaled-up weights. Yeah. Huh. Yeah.
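The "very low precision plus scaling factors" idea Dean sketches is essentially block-wise quantization. A minimal sketch, assuming int4-style integer levels and one float scale per small block of weights; the block size and values here are illustrative, not any TPU's or Gemini's actual scheme:

```python
def quantize_blocks(weights, block_size=4):
    # Store each block as (float scale, list of small ints in -7..7).
    blocks = []
    for i in range(0, len(weights), block_size):
        block = weights[i:i + block_size]
        scale = max(abs(w) for w in block) / 7 or 1.0  # avoid a zero scale
        blocks.append((scale, [round(w / scale) for w in block]))
    return blocks

def dequantize_blocks(blocks):
    # Reconstruct approximate weights: integer level times the block's scale.
    return [scale * q for scale, qs in blocks for q in qs]

w = [0.12, -0.5, 0.33, 0.07, 2.0, -1.5, 0.9, 0.01]
restored = dequantize_blocks(quantize_blocks(w))
```

Each weight costs only a few bits, while the shared per-block scale keeps the reconstruction error bounded by half a quantization step of that block.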
Never considered that. Yeah. Interesting. Uh, while we're on this topic, you know, the concept of precision at all is weird when we're sampling, you know? At the end of this, we're going to have all these chips that do very good math, and then we're just going to throw a random number generator at the start. So, I mean, there's a movement towards energy-based models and processors. I'm just curious, obviously you've thought about it, but what's your commentary? Jeff Dean [00:39:50]: Yeah. I mean, I think there's a bunch of interesting trends there. Energy-based models is one. You know, diffusion-based models, which don't sequentially decode tokens, is another. Um, you know, speculative decoding is a way that you can get an equivalent very small... Shawn Wang [00:40:06]: Draft. Jeff Dean [00:40:07]: ...batch factor. Like, you predict eight tokens out, and that enables you to increase the effective batch size of what you're doing by a factor of eight, even, and then you maybe accept five or six of those tokens. So you get a five X improvement in the amortization of moving weights into the multipliers to do the prediction for the tokens. So these are all really good techniques, and I think it's really good to look at them from the lens of energy, real energy, not energy-based models, and also latency and throughput, right? If you look at things from that lens, it guides you to solutions that are going to be, you know, better from the perspective of being able to serve larger models, or, you know, equivalent-size models more cheaply and with lower latency. Shawn Wang [00:41:03]: Yeah.
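The five-or-six-out-of-eight acceptance arithmetic can be made concrete. A sketch under a simplifying assumption of my own: each drafted token independently matches the target model with probability p, and acceptance stops at the first mismatch (real speculative decoding schemes are more subtle than this):

```python
def expected_accepted(p, k):
    # The i-th drafted token is accepted only if all of the first i match,
    # so its acceptance probability is p**i.
    return sum(p ** i for i in range(1, k + 1))

def amortized_move_pj(move_pj, accepted_per_step):
    # One verification pass moves the big model's weights once but yields
    # several tokens, versus one token per pass without drafting.
    return move_pj / accepted_per_step

acc = expected_accepted(0.95, 8)        # roughly 6.4 tokens per verification pass
per_token = amortized_move_pj(1000.0, acc)
```

With a good drafter (high p), one expensive weight-move is shared across five or six emitted tokens, which is exactly the amortization improvement Dean describes.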
Well, I think, um, it's appealing intellectually. I haven't seen it really hit the mainstream, but I do think there's some poetry in the sense that, you know, we don't have to do a lot of shenanigans if we fundamentally design it into the hardware. Yeah, yeah. Jeff Dean [00:41:23]: I mean, there are also the more exotic things, like analog-based computing substrates as opposed to digital ones. I think those are super interesting because they can potentially be low power. But I think you often end up wanting to interface that with digital systems, and you end up losing a lot of the power advantages in the digital-to-analog and analog-to-digital conversions you end up doing at the boundaries and periphery of that system. Um, I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency, with much better and specialized hardware for the models we care about. Shawn Wang [00:42:05]: Yeah. Alessio Fanelli [00:42:06]: Um, any other interesting research ideas that you've seen, or maybe things that you cannot pursue at Google that you would be interested in seeing researchers take a stab at? I guess you have a lot of researchers. Yeah, I guess you have enough. Jeff Dean [00:42:21]: Our research portfolio is pretty broad. I would say, um, in terms of research directions, there's a whole bunch of open problems in how you make these models reliable and able to do much longer, more complex tasks that have lots of subtasks. How do you orchestrate, you know, maybe one model that's using other models as tools, in order to build things that can accomplish, collectively, much more significant pieces of work than you would ask a single model to do. Um, so that's super interesting.
How do you get more verifiable... you know, how do you get RL to work for non-verifiable domains? I think that's a pretty interesting open problem, because I think that would broaden out the capabilities of the models beyond the improvements that you're seeing in math and coding. If we could apply those to other, less verifiable domains, because we've come up with RL techniques that actually enable us to do that effectively, that would really make the models improve quite a lot, I think. Alessio Fanelli [00:43:26]: I'm curious. Like, when we had Noam Brown on the podcast, he said, um, they already proved you can do it with deep research. Um, you kind of have it with AI mode; in a way it's not verifiable. I'm curious if there's any thread that you think is interesting there. Like, what is it? Both are like information retrieval of JSON. So I wonder if the retrieval is the verifiable part that you can score, or what are... yeah, yeah. How would you model that problem? Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even retrieving. Can you have another model that says: are these things you retrieved relevant? Or can you rate these 2,000 things you retrieved to assess which ones are the 50 most relevant, or something? Um, I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be, you know, a critic, as opposed to an actual retrieval system. Yeah. Shawn Wang [00:44:28]: Um, I do think there is that weird cliff where it feels like we've done the easy stuff, and now the next part is super hard. But it always feels like that every year. It's like, oh, we know, and the next part is super hard and nobody's figured it out. And exactly with this RLVR thing, where everyone's talking about, well, okay, how do we...
...do the next stage of the non-verifiable stuff. And everyone's like, I don't know, you know, LLM judge. Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there's lots and lots of smart people thinking about creative solutions to some of the problems that we all see. Because I think everyone sees that the models, you know, are great at some things, and they fall down around the edges of those things, and are not as capable as we'd like in those areas. And then coming up with good techniques and trying those, and seeing which ones actually make a difference, is sort of what the whole research aspect of this field is pushing forward. And I think that's why it's super interesting. You know, if you think about two years ago, we were struggling with GSM8K problems, right? Like, you know, Fred has two rabbits. He gets three more rabbits. How many rabbits does he have? That's a pretty far cry from the kinds of mathematics the models can do now, where you're doing IMO and Erdős problems in pure language. Yeah. Yeah. Pure language. So that is a really, really amazing jump in capabilities in, you know, a year and a half or something. And I think, for other areas, it'd be great if we could make that kind of leap. And, you know, we don't exactly see how to do it for some areas, but we do see it for some other areas, and we're going to work hard on making that better. Yeah. Shawn Wang [00:46:13]: Yeah. Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI. We need that. Shawn Wang [00:46:20]: That would be. As far as content creators go. Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess many people do. Shawn Wang [00:46:27]: It does. Yeah. It doesn't matter. People do judge books by their covers, as it turns out. Um, just to draw a bit on the IMO gold.
Um, I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. Yeah. What's your reflection? Like, I think this question about the merger of symbolic systems and LLMs was very much a core belief. And then somewhere along the line, people just said, nope, we'll do it all in the LLM. Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me, because, you know, humans manipulate symbols, but we probably don't have a symbolic representation in our heads. Right? We have some distributed representation, neural-net-like in some way, of lots of different neurons and activation patterns firing when we see certain things. And that enables us to reason and plan, and, you know, do chains of thought, and roll them back: now that that approach for solving the problem doesn't seem like it's going to work, I'm going to try this one. And, you know, in a lot of ways we're emulating what we intuitively think is happening inside real brains in neural-net-based models. So it never made sense to me to have completely separate, discrete, symbolic things, and then a completely different way of thinking about those things. Shawn Wang [00:47:59]: Interesting. Yeah. I mean, it maybe seems obvious to you, but it wasn't obvious to me a year ago. Yeah. Jeff Dean [00:48:06]: I mean, I do think that IMO effort, with, you know, translating to Lean and using Lean, and also a specialized geometry model, and then the next year switching to a single unified model that is roughly the production model with a little bit more inference budget, is actually, you know, quite good, because it shows you that the capabilities of that general model have improved dramatically, and now you don't need the specialized model.
This is actually very similar to the 2013-to-2016 era of machine learning, right? Like, it used to be that people would train separate models for each different problem, right? I want to recognize street signs, so I train a street sign recognition model. Or I want to do speech recognition, so I have a speech model, right? I think now the era of unified models that do everything is really upon us. And the question is how well those models generalize to new things they've never been asked to do, and they're getting better and better. Shawn Wang [00:49:10]: And you don't need domain experts. Like, one of my... so I interviewed ETA, who was on that team. And he was like, yeah, I don't know how they work. I don't know where the IMO competition was held. I don't know the rules of it. I just trained the models. Yeah. Yeah. And it's kind of interesting that people with this universal skill set of machine learning, you just give them data and give them enough compute, and they can kind of tackle any task, which is the bitter lesson, I guess. I don't know. Yeah. Jeff Dean [00:49:39]: I mean, I think general models will win out over specialized ones in most cases. Shawn Wang [00:49:45]: Uh, so I want to push there a bit. I think there's one hole here, which is this concept of, maybe, the capacity of a model. Like, abstractly, a model can only contain the number of bits that it has. And, uh, God knows, Gemini Pro is like one to ten trillion parameters, we don't know. But the Gemma models, for example, right? Like, a lot of people want the open source local models, and those have some knowledge which is not necessary, right? Like, they can't know everything. Like, you have the...
The luxury of the big model is that it should be capable of everything. But when you're distilling and you're going down to the small models, you know, you're actually memorizing things that are not useful. Yeah. And so, like, how do we, I guess, do we want to extract that? Can we divorce knowledge from reasoning, you know? Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space, right? Like, you might prefer something that is more generally useful in more settings than this obscure fact that it has. Um, so there's always a tension there. At the same time, you also don't want your model to be completely detached from, you know, knowing stuff about the world, right? Like, it's probably useful to know how long the Golden Gate Bridge is, just as a general sense of how long bridges are, right? And it should have that kind of knowledge. It maybe doesn't need to know how long some teeny little bridge in some other more obscure part of the world is, but it does help it to have a fair bit of world knowledge, and the bigger your model is, the more you can have. Uh, but I do think combining retrieval with reasoning, and making the model really good at doing multiple stages of retrieval... Yeah. Shawn Wang [00:51:49]: ...and reasoning through the intermediate retrieval results, is going to be a pretty effective way of making the model seem much more capable. Because if you think about, say, a personal Gemini, yeah, right? Jeff Dean [00:52:01]: Like, we're not going to train Gemini on my email.
Probably we'd rather have a single model that we can then use, being able to retrieve from my email as a tool, and have the model reason about it, and retrieve from my photos or whatever, and then make use of that and have multiple stages of interaction. That makes sense. Alessio Fanelli [00:52:24]: Do you think the vertical models are an interesting pursuit? Like, when people are like, oh, we're building the best healthcare LLM, we're building the best law LLM, are those kind of like short-term stopgaps, or? Jeff Dean [00:52:37]: No, I mean, I think vertical models are interesting. Like, you want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain. For healthcare, say, or for, say, robotics: we're probably not going to train Gemini on all possible robotics data we could train it on, because we want it to have a balanced set of capabilities. Um, so we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that and then train it on more robotics data. And then maybe that would hurt its multilingual translation capability, but improve its robotics capabilities. And we're always making these kinds of trade-offs in the data mix that we train the base Gemini models on. You know, we'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, um, you know, Perl programming. You know, it'll still be good at Python programming, because we'll include enough of that, but there are other long-tail computer languages or coding capabilities that may suffer, or multimodal reasoning capabilities may suffer.
Cause we didn't get to expose it to as much data there, but it's really good at multilingual things. So I, I think some combination of specialized models, maybe more modular models. So it'd be nice to have the capability to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare, uh, module, that all can be knitted together to work in concert and called upon in different circumstances. Right? Like if I have a health-related thing, then it should enable using this health module in conjunction with the main base model to be even better at those kinds of things. Yeah. Shawn Wang [00:54:36]: Installable knowledge. Yeah. Jeff Dean [00:54:37]: Right. Shawn Wang [00:54:38]: Just download it as a, as a package. Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, uh, a hundred billion tokens or a trillion tokens of health data. Yeah. Shawn Wang [00:54:51]: And for listeners, I think, uh, I will highlight the Gemma 3n paper, where there was a little bit of that, I think. Yeah. Alessio Fanelli [00:54:56]: Yeah. I guess the question is like, how many billions of tokens do you need to outpace the frontier model improvements? You know, it's like, if I have to make this model better at healthcare and the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred? And if I need a trillion healthcare tokens, they're probably not out there, you know. I think that's really like the... Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain, so there's a lot of healthcare data that, you know, we don't have access to, appropriately, but there's a lot of, you know, uh, healthcare organizations that want to train models on their own data, data that is not public healthcare data.
Um, so I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are going to be, you know, more bespoke, but probably, uh, might be better than a general model trained on, say, public data. Yeah. Shawn Wang [00:55:58]: Yeah. I, I believe, uh, by the way, also this is somewhat related to the language conversation. Uh, I think one of your, your favorite examples was you can put a low-resource language in the context and it just learns. Yeah. Jeff Dean [00:56:09]: Oh, yeah, I think the example we used was Kalamang, which is truly low resource because it's only spoken by, I think, 120 people in the world and there's no written text. Shawn Wang [00:56:20]: So, yeah. So you can just do it that way. Just put it in the context. Yeah. Yeah. You can put your whole data set in the context, right? Jeff Dean [00:56:27]: If you, if you take a language like, uh, you know, Somali or something, there is a fair bit of Somali text in the world, uh, or Ethiopian Amharic or something. Um, you know, we're probably not putting all the data from those languages into the Gemini base training. We put some of it, but if you put more of it, you'll improve the capabilities of those models. Shawn Wang [00:56:49]: Yeah. Jeff Dean [00:56:49]:

Thoughts on the Market
The Future of North American Trade


Feb 11, 2026 (4:30)


With the U.S.-Mexico-Canada Agreement coming up for review, our Head of Public Policy Research Ariana Salvatore unpacks whether our 2025 call for deeper trade integration still holds. Read more insights from Morgan Stanley. ----- Transcript ----- Ariana Salvatore: Welcome to Thoughts on the Market. I'm Ariana Salvatore, Head of Public Policy Research for Morgan Stanley. Today I'll be talking about our expectations for the upcoming USMCA review, and how the landscape has shifted from last year. It's Wednesday, February 11th at 4pm in London. As we highlighted last fall, the US-Mexico-Canada Agreement is approaching its first mandatory review in 2026. At the time, we argued that the risks were skewed modestly to the upside. Structural contingencies built into the agreement, we think, cap downside risk and tilt most outcomes toward preserving and, over time, deepening North American trade integration. That framing, we think, remains broadly intact. But some developments over the past few months suggest that the timing and the structure of that deeper integration could end up looking a little bit different than we initially expected. We still see a scenario where negotiators resolve targeted frictions and make limited updates, but we're increasingly mindful that some of the more ambitious policymaker goals – for example, new chapters on AI, critical minerals or more explicit guardrails on Chinese investment in Mexico – may be harder to formalize ahead of the mid-2026 deadline. So, what does the base case as we framed it last year still look like? We continue to expect an outcome that preserves the agreement and resolves several outstanding disputes – auto rules of origin, labor enforcement procedures, and select digital trade provisions. On the China question, our view from last year also still holds. We expect incremental steps by Mexico to reduce trans-shipment risk and better align with U.S.
trade priorities, though likely without a fully institutionalized enforcement mechanism by mid-2026. And remember, the USMCA's 10-year escape clause keeps the agreement in force at least through 2036, meaning the probability of a disruptive trade shock is structurally quite low. What may be shifting is not the direction of travel, but the pace and the form. A more comprehensive agreement may ultimately come, but possibly with a longer runway or through side agreements rather than updates to the USMCA text itself. Of course, those come with an enforcement risk just given the lack of congressional backing. We still expect the formal review to conclude around mid-2026, albeit with a growing possibility that deeper institutional alignment happens further out or via parallel frameworks. It also is possible that into that deadline all three sides decide to extend negotiations out further into the future, extending the uncertainty for even longer. So what does it all mean for macro and markets? For Mexico, maintaining tariff-free access to the U.S. continues to be essential. The base case supports ongoing manufacturing integration, especially in autos and electronics. But without the newer, more strategic chapters that policymakers have discussed, the agreement would leave Mexico in a position that it's accustomed to – stable but short of a full nearshoring acceleration. This aligns with our view from last year, but we now see clearer near-term risks to the thesis of rapid, deeper institutional trade integration. For FX, the peso benefits from reduced uncertainty, but the effect is likely gradual. The absence of tangible progress on adding to the original deal suggests a more muted near-term impulse. For Canada, the implications are similarly two-sided. Near-term volatility around the review is likely underpriced, but a limited agreement should eventually lead to medium-term USD-CAD downside.
On the economics front, last year we argued that the review would reinforce North America as a manufacturing bloc, even if it didn't fully resolve supply chain diversification from China. We think that remains true today, but with the added nuance that some of the more ambitious integration pathways may be pushed further out or structured outside of the formal USMCA chapters. So, bottom line: our base case remains a measured, pragmatic outcome that reduces uncertainty but preserves the core benefits of North American trade and supports growth across key asset classes. But it also increasingly looks like an outcome that may leave some strategic opportunities on the table for now, setting the stage for deeper alignment later – on a slightly longer horizon, or through a more flexible framework. Thanks for listening. As a reminder, if you enjoy Thoughts on the Market, please take a moment to rate and review us wherever you listen. And share Thoughts on the Market with a friend or colleague today.

The Health Ranger Report
Brighteon Broadcast News, Feb 10, 2026 – Elon's Moon Madness, Trump's Fed Switcheroo and Why AI Might Accidentally Kill You


Feb 10, 2026 (112:49)


Stay informed on current events, visit www.NaturalNews.com - Elon Musk's Moon Announcement and AI Developments (0:00) - AI Avatars and AI Takeover in 2026 (6:19) - AI's Mission-Driven Cognitive Behavior (12:32) - The End of Humanity and AI's Role (23:21) - Elon Musk's Moon City Announcement (36:04) - The Weaponization of Space and Historical Context (1:14:07) - The Role of Operation Paperclip and Nazi Influence (1:16:36) - Mach Speeds and Energy Calculations (1:18:59) - Weaponization of the Moon (1:21:47) - Strategic Importance of the Moon (1:27:18) - Technological Advancements and Military Applications (1:33:55) - Potential Targets and Consequences (1:35:13) - Claude Bot Malware Incident (1:35:34) - Impact and Aftermath of Claude Bot (1:41:42) - Lessons Learned and Future Risks (1:47:28) - Alternatives to Claude Bot and AI Tools (1:49:56) - Final Thoughts and Call to Action (1:52:19) Watch more independent videos at http://www.brighteon.com/channel/hrreport  ▶️ Support our mission by shopping at the Health Ranger Store - https://www.healthrangerstore.com ▶️ Check out exclusive deals and special offers at https://rangerdeals.com ▶️ Sign up for our newsletter to stay informed: https://www.naturalnews.com/Readerregistration.html Watch more exclusive videos here:

Thoughts on the Market
A Thematic Look at Market Volatility


Feb 10, 2026 (10:06)


Our Global Head of Thematic and Sustainability Research Stephen Byrd and U.S. Thematic and Equity Strategist Michelle Weaver lay out Morgan Stanley's four key Research themes for 2026, and how those themes could unfold across markets for the rest of the year. Read more insights from Morgan Stanley. ----- Transcript ----- Stephen Byrd: Welcome to Thoughts on the Market. I'm Stephen Byrd, Global Head of Thematic and Sustainability Research. Michelle Weaver: And I'm Michelle Weaver, U.S. Thematic and Equity Strategist. Stephen Byrd: I was recently on the show to discuss Morgan Stanley's four key themes for 2026. Today, a look at how those themes could actually play out in the real world over the course of this year. It's Tuesday, February 10th at 10am in New York. So one of the biggest challenges for investors right now is separating signal from noise. Markets are reacting to headlines by the minute, but the real drivers of long-term returns tend to move much more slowly and much more powerfully. That's why thematic analysis has been such an important part of how we think about markets, particularly during periods of high volatility. For 2026, our framework is built around four key themes: AI and tech diffusion, the future of energy, the multipolar world, and societal shifts. In other words, three familiar themes and one meaningful evolution from last year. So Michelle, let's start at the top. When investors hear four key themes, what's different about the 2026 framework versus what we laid out in 2025? Michelle Weaver: Well, like you mentioned before, three of our four key themes are the same as last year, so we're gonna continue to see important market impacts from AI and tech diffusion, the future of energy and the multipolar world. But our fourth key theme, societal shifts, is really an expansion of our prior key theme, longevity, from last year. And while three of the four themes are the same broad categories, the way they impact the market is going to evolve.
And these themes don't exist in isolation. They collide and they intersect with one another, having other important market implications. And we'll talk about many of those intersections today as they relate to multiple themes. Let's start with AI. How does the AI and tech diffusion theme specifically evolve since last year? Stephen Byrd: Yeah. You know, you mentioned earlier the evolution of all of our themes, and that was certainly the case with AI and tech diffusion. What I think we'll see in 2026 is a few major evolutions. So, one is a concept that we think of as two worlds of LLM progress and AI adoption; and let me walk through what I mean by that. On LLM progress, we do think that the handful of American LLM developers that have 10 times the compute they had last year are going to be training and producing models of unprecedented capability. We do not think the Chinese models will be able to keep up because they simply do not have the compute required for the training. And so we will see two worlds, very different approaches. That said, the Chinese models are quite excellent in terms of providing low cost solutions to a wide range of very practical business cases. So that's one case of two worlds when we think about the world of AI and tech diffusion. Another is that essentially we could see a really big gap between what you can do with an LLM and what the average user is actually doing with LLMs. Now there're going to be outliers where really leaders will be able to fully utilize LLMs and achieve fairly substantial and breathtaking results. But on average, that won't be the case. And so you'll see a bit of a lag there. That said, I do think when investors see what those frontier capabilities are, I think that does eventually lead to bullishness. So that's one dynamic. Another really big dynamic in 2026 is the mismatch between compute demand and compute supply. 
We dove very deeply into this in our note, and essentially where we come out is we believe, and our analysis supports this, that the demand for compute is going to be systematically much higher than the supply. That has all kinds of implications. Compute becomes a very precious resource, both at the company level, at the national level. So those are a couple of areas of evolution.So Michelle, let's shift over to the future of energy, which does feel very different today than it did a year ago. Can you kind of walk through what's changed? Michelle Weaver: Well, we absolutely still think that power is one of the key bottlenecks for data center growth. And our power modeling work shows around a 47 gigawatt shortfall before considering innovative time to power solutions. We get down to around a 10 to 20 percent shortfall in power needed in the U.S. though, even after considering those solutions. So power is still very much a bottleneck. But the power picture is becoming even more challenged for data centers, and that's largely because of a major political overhang that's emerging. Consumers across the U.S. have seen their electricity bills rise and are increasingly pointing to data centers as the culprit behind this. I really want to emphasize though this is a nuanced issue and data center power demand is driving consumer bills higher in some areas like the Mid-Atlantic. But this isn't the case nationwide and really depends on a number of factors like data center density in the region and whether it's a regulated or unregulated utility market.But public perception has really turned against data centers and local pushback is causing planned data centers to be canceled or delayed. And you're seeing similar opinions both across political affiliations and across different regional areas. So yes, in some areas data centers have impacted consumer power bills, but in other areas that hasn't been the case. 
But this is good news though, for companies that offer off-grid power generation, who are able to completely insulate consumers because they're not connecting to the grid.Stephen, the multipolar theme was already strong last year. Why has it become even more central for 2026? Stephen Byrd: Yeah, you're right. It was strong in 2025. In fact, of our 21 categories of stocks, the top three performing were really driven by multipolar world dynamics. Let me walk through three areas of focus that we have for multipolar world in 2026. Number one is an aggressive U.S. policy agenda, and that's going to show up in a number of ways. But examples here would be major efforts to reshore manufacturing, a real evolution in military spending towards a wide range of newer military technologies, reducing power prices and inflation more broadly. And also really focusing on trying to eliminate dependency on China for rare earths. So that's the first big area of focus. The second is around AI technology transfer. And this is quite closely linked to rare earths. So here's the dynamic as we think about U.S. and China. China has a commanding position in rare earths. The United States has a leading position in access to computational resources. Those two are going to interplay quite a bit in 2026. So, for example, we have a view that in 2026, when those American models, these LLMs achieve these step changes up in capabilities that China cannot match, we think that it's very likely that China may exert pressure in terms of rare earths access in order to force the transfer of technology, the best AI technology to China. So that's an example of this linkage between AI and rare earths. And the last dynamic, I'd say broadly, would be the politics of energy, which you described quite well. I think that's going to be a big multipolar world dynamic everywhere around the world. A focus on how much of an impact our data centers are having – whether it's water access, price of power, et cetera. 
What are the impacts to jobs? And that's going to show up in a variety of policy actions in 2026. Michelle Weaver: Mm-hmm. Stephen Byrd: So Michelle, the last of our four key themes is societal shifts, and you walked through that briefly before. This expands on our prior longevity work. What does this broader framing capture? Michelle Weaver: Societal shifts will include important topics from longevity still. So, things like preparing for an aging population and AI in healthcare. But the expansion really lets us look at the full age range of the demographic spectrum, and we can also now start thinking about what younger consumers want. It also allows us to look at other income-based demographics, like what's been going on with the K-economy, which has been an important theme around the world. And a really critical element, though, of this new theme is AI's impact on the labor market. Last year we did a big piece called The Future of Work. And in it we estimated that around 90 percent of jobs would be impacted by AI. I want to be clear: That's not to say that 90 percent of jobs would be lost to AI or automated by AI. But rather some task or some component of that job could be automated or augmented using AI. And so you might have, you know, the jobs of today looking very different five years from now. Workers are adaptable and, and we do expect many to reskill as part of this evolving job landscape. We've talked about the evolution of our key themes, but now let's focus a little on the results. So how have these themes actually performed from an investment standpoint? Stephen Byrd: Yeah. I was very happy with the results in 2025. We looked across our categories of thematic stocks; we have 21 categories of thematic stocks within our four big themes. On average in 2025, our thematic stock categories outperformed MSCI World by 16 percent and the S&P 500 by 27 percent respectively. So, I was very happy with that result.
When you look at the breakdown, it is interesting in terms of the categories that did really well. As I mentioned, the top three were driven by multipolar world. That is Critical Minerals, AI Semis, and Defense. But after that you can see a lot of AI in Energy show up. Power in AI was a big winner. Nuclear Power did extremely well. So, we did see other categories, but I did find it really interesting that multipolar world really did top the charts in 2025. Michelle Weaver: Mm-hmm. Stephen Byrd: Michelle, thanks for taking the time to talk. Michelle Weaver: Great speaking with you, Stephen. Stephen Byrd: And thanks for listening. If you enjoy Thoughts on the Market, please leave us a review wherever you listen and share the podcast with a friend or colleague today.

The John Batchelor Show
S8 Ep435: HEADLINE: Exotic Theories and the Ongoing Quest. GUEST: Govert Schilling. SUMMARY: The conversation explores anomalies like dark-matter-free galaxies and alternatives like primordial black holes, highlighting the enduring mystery of the universe's composition.


Feb 9, 2026 (10:09)


HEADLINE: Exotic Theories and the Ongoing Quest. GUEST: Govert Schilling. SUMMARY: The conversation explores anomalies like dark-matter-free galaxies and alternatives like primordial black holes, highlighting the enduring mystery of the universe's composition. 1952

Thoughts on the Market
Why Latin America's ‘Trifecta' Could Reshape Global Portfolios


Feb 9, 2026 (4:56)


Our Chief LatAm Equity Strategist Nikolaj Lippmann discusses why Latin America may be approaching a rare “Spring” moment – where geopolitics, peaking rates, and elections set the scene for an investment-led growth cycle with meaningful market upside. Read more insights from Morgan Stanley. ----- Transcript ----- Nikolaj Lippmann: Welcome to Thoughts on the Market. I'm Nikolaj Lippmann, Morgan Stanley's Chief Latin America Equity Strategist. If you've ever felt like Latin America is too complicated to follow, today's episode is for you. It's Monday, February 9th at 10am in New York. The big idea in our research is simple. Latin America is facing a trifecta of change that could set up a very different investment story from what investors have gotten used to. We could be moving towards an investment or CapEx cycle in the shadow of the global AI CapEx cycle, and this is a stark departure from prior consumer cycles in Latin America. Latin America's GDP today is about $6 trillion. Yet Latin American equities account for just about 80 basis points of the main global index, the MSCI All Country World equity benchmark. In plain English, it's really easy for investors to overlook such a vast region. But the narrative seems to be changing thanks to three key factors. Number one, shifting geopolitics in this increasingly multipolar world. We can see this with trade rules, security priorities, supply chains that are getting rewritten. Capital and investment will often move alongside these changing rules. Clearly, as we can all see, U.S. priorities in Latin America have shifted, and with them have local priorities and incentives. Second, interest rates may very well have been peaking and could decline into [20]26. When borrowing costs fall, it just becomes easier to fund factories, infrastructure, AI, and expansion into all kinds of different investments, which become more feasible.
What is more, we see a big shift in the size and growth of domestic capital markets in almost every country in Latin America – something that happens courtesy of reform and is certainly new versus prior cycles. And finally, elections that could lead to an important policy shift across Latin America. We see signs of movement towards greater fiscal responsibility in many parts of the region, with upcoming elections in Colombia and Brazil. We have already seen new policymakers in Argentina, Chile, Mexico, depart from prior populism. So, when we put all this together -- geopolitics, rates and local elections -- you get to the core of our thesis, a possible LatAm spring; meaning a decisive break from the status quo towards fiscal consolidation, monetary easing, and structural reform. And we think that that could be a potential move that restores some confidence and attracts private capital. In our spring scenario, we see interest rates coming down, not rising, in a scenario of higher growth to 6 percent in Brazil and Mexico, 7 percent in Argentina, and just 4 percent in Chile. This helps the rerating of the region. There's another powerful factor that I think many investors overlook, and that is a key difference versus prior cycles, as already mentioned. And that's domestic savings. Local portfolios today are much bigger, with much deeper capital markets, and they're heavily skewed towards fixed income. 75 percent of Latin American portfolios are in fixed income versus 25 percent in equity. In Brazil, the number's even higher with 90 to 95 percent in fixed income. If this shifts even halfway towards equity, it can deepen and support local capital markets; it supports valuation. For the region as a whole, sectors most impacted by this transformation would be Financial Services, Energy, Utilities, IT and Healthcare. Up until now, I think Latin America has been viewed as a region where a lot could go wrong. We asked the reverse question. What could go right?
If the trifecta lines up: geopolitics, peaking rates, and elections that enable a more investment-friendly policy and CapEx cycle, Latin America could shift from being seen mainly as a supplier of commodities and labor to a far more investment-driven engine of growth. That's why investors should put Latin America on the radar now and not wait until spring is already in full bloom. Thanks for listening. If you enjoy the show, please leave us a review wherever you listen to the podcast and share Thoughts on the Market with a friend or colleague today.

Rational Boomer Podcast
CONSIDER ALTERNATIVES - 02/08/2026 - VIDEO SHORT


Feb 8, 2026 (1:59)


Consider Alternatives

Thoughts on the Market
For Better or Warsh


Feb 6, 2026 (12:14)


Our Global Head of Fixed Income Research Andrew Sheets and Global Chief Economist Seth Carpenter unpack the inner workings of the Federal Reserve to illustrate the challenges that Fed chair nominee Kevin Warsh may face. Read more insights from Morgan Stanley. ----- Transcript ----- Andrew Sheets: Welcome to Thoughts on the Market. I'm Andrew Sheets, Global Head of Fixed Income Research at Morgan Stanley. Seth Carpenter: And I'm Seth Carpenter, Morgan Stanley's Global Chief Economist and Head of Macro Research. Andrew Sheets: And today on the podcast, a further discussion of a new Fed chair and the challenges they may face. It's Friday, February 6th at 1 pm in New York. Seth, it's great to be here talking with you, and I really want to continue a conversation that listeners have been hearing on this podcast over this week about a new nominee to chair the Federal Reserve: Kevin Warsh. And you are the perfect person to talk about this, not just because you lead our economic research and our macro research, but you've also worked at the Fed. You've seen the inner workings of this organization and what a new Fed chair is going to have to deal with. So, maybe just for some broad framing, when you saw this announcement come out, what were some of the first things to go through your mind? Seth Carpenter: I will say first and foremost, Kevin Warsh's name was one of the names that had regularly come up when the White House was providing names of people they were considering in lots of news cycles. So, I think the first thing that's critically important from my perspective is: not a shock, right? Sort of a known quantity. Second, when we think about these really important positions, there's a whole range of possible outcomes. And I would've said that of the four names that were in the final set of four that we kept hearing about in the news a lot.
You know, some differences here and there across them, but none of them was substantially outside of what I would think of as mainstream sort of thinking. Nothing excessively unorthodox at all like that. So, in that regard as well, I think it should keep anybody from jumping to any big conclusions that there's a huge change that's imminent. I think the other thing that's really important is the monetary policy of the Federal Reserve really is made by a committee. The Federal Open Market Committee and committee matters in these cases. The Fed has been under lots of scrutiny, under lots of pressure, depending on how you want to put it. And so, as a result, there's a lot of discussion within the institution about their independence, making sure they stick very scrupulously to their congressionally given mandate of stable prices, full employment. And so, what does that mean in practice? That means in practice, to get a substantially different outcome from what the committee would've done otherwise… So, the market is pricing; what's the market pricing for the funds rate at the end of this year? About 3.2 percent. Andrew Sheets: Something like that. Yeah. Seth Carpenter: Yeah. So that's a reasonable forecast. It's not too far away from our house view. For us to end up with a policy rate that's substantially away from that – call it 1 percentage, 2 percentage points away from that. I just don't see that as likely to happen. Because the committee can be led, can be swayed by the chair, but not to the tune of 1 or 2 percentage points. And so, I think for all those reasons, there wasn't that much surprise and there wasn't, for me, a big reason to fully reevaluate where we think the Fed's going. Andrew Sheets: So let me actually dig into that a little bit more because I know our listeners tune in every day to hear a lot about government meetings. 
But this is a case where that really matters because I think there can sometimes be a misperception around the power of this position. And it's both one of the most public important positions in the world of finance. And yet, as you mentioned, it is overseeing a committee where the majority matters. And so, can you take us just a little bit inside those discussions? I mean, how does the Fed Chair interact with their colleagues? How do they try to convince them and persuade them to take a particular course of action? Seth Carpenter: Great question. And you're right, I sort of spent a bunch of time there at the Fed. I started when Greenspan was chair. I worked under the Bernanke Fed. And of course, for the end of that, Janet Yellen was the vice chair. So, I've worked with her. Jay Powell was on the committee the whole time. So, the cast of characters quite familiar and the process is important. So, I would say a few things. The chair convenes the meetings; the chair creates the agenda for the meeting. The chair directs the staff on what the policy documents are that the committee is going to get. So, there's a huge amount of influence, let's say, there. But in order to actually get a specific outcome, there really is a vote. And we only have to look back a couple weeks to the last FOMC meeting when there were two dissents against the policy decision. So, dissents are not super common. They don't happen at every single meeting, but they're not unheard of by any stretch of the imagination either. And if we go back over the past few years, lots going on with inflation and how the economy was going was uncertain. Chair Powell took some dissents. If we go back to the financial crisis Chair Bernanke took a bunch of dissents. If we go back even further through time, Paul Volcker, when he was there trying to staunch the flow of the high inflation of the 1970s, faced a lot of resistance within his committee. And reportedly threatened to quit if he couldn't get his way. 
And had to be very aggressive in trying to bring the committee along. So, the chair has to find a way to bring the committee along with the plan that the chair wants to execute. Lots of tools at their disposal, but not endless power or influence. Does that make sense? Andrew Sheets: That makes complete sense. So, maybe my final question, Seth, is this is a tough job. This is a tough job in… Seth Carpenter: You mean your job and my job, or… Andrew Sheets: [Laughs] Not at all. The chair of the Fed. And it seems especially tricky now. You know, inflation is above the Fed's target. Interest rates are still elevated. You know, certainly mortgage rates are still higher than a lot of Americans are used to over the last several years. And asset prices are high. You know, the valuation of the equity market is high. The level of credit spreads is tight. So, you could say, well, financial conditions are already quite easy, which can create some complications. I am sure Kevin Warsh is receiving lots of advice from lots of different angles. But, you know, if you think about what you've seen from the Fed over the years, what would be your advice to a new Fed chair – and to navigate some of these challenges? Seth Carpenter: I think first and foremost, you are absolutely right. This is a tough job in the best of times, and we are in some of the most difficult and difficult to understand macroeconomic times right now. So, you noted interest rates being high, mortgage rates being high. There's very much an eye of the beholder phenomenon going on here. Now you're younger than I am. The first mortgage I had. It was eight and a half percent. Andrew Sheets: Hmm. Seth Carpenter: I bought a house in 2000 or something like that. So, by those standards, mortgage rates are actually quite low. So, it really comes down to a little bit of what you're used to. And I think that fact translates into lots of other places. So, inflation is now much higher than the committee's target. 
Call it 3 percent core inflation on PCE, rather than the 2 percent inflation target. Now, on the one hand that's clearly missing their target, and the Fed has been missing their target for years. And we know that tariffs are pushing up inflation, at least for consumer goods. And Chair Powell and this committee have said they get that. They think that inflation will be temporary, and so they're going to look through that inflation. So again, there's a lot of judgment going on here. The labor market is quite weak. Andrew Sheets: Hmm. Seth Carpenter: We don't have the latest month's worth of job market data because of the government shutdown; that'll be delayed by a few days. But we know that at the end of last year, non-farm payrolls were running well below 50,000. Under most circumstances, you would say that is a clear indication of a super weak economy. But! But if we look at aggregate spending data, GDP, private domestic final purchases, consumer spending, CapEx spending. It's actually pretty solid right now. And so again, that sense of judgment; what's the signal you're going to look for? That's very, very difficult right now, and that's part of what the chair is going to have to do to try to bring the committee together, in order to come to a decision. So, one intellectually coherent argument is – the main way you could get strong aggregate demand, strong spending numbers, strong GDP numbers, but with pretty tepid labor force growth, is if productivity is running higher. And if productivity is going higher because of AI, for example, over time you could easily expect that to be disinflationary. And if it's disinflationary, then you can cut interest rates now and not worry as much as you would normally about high inflation. And so, the result could be a lower path for policy rates. So that's one version of the argument that I suspect you're going to hear. On the other hand, inflation is high and it's been high for years. So what does that mean? Well. 
History suggests that if inflation stays too high for too long, inflation psychology starts to change, and the way businesses set… Andrew Sheets: Mm-hmm. Seth Carpenter: …their own prices can get a little bit loosey-goosey. They might not have to worry as much about consumers being as picky, because everybody's got used to these price changes. Consumers might become less picky because, well, they're kind of sick of shopping around. They might be more willing to accept those higher prices, and that's how things snowball. So, I do think that the new chair is going to face a particularly difficult situation in leading a committee in particularly challenging times. But I've gone on for a long, long time there. And one of the things that I love about getting to talk to you, Andrew, is the fact that you also talk to lots of investors all around the world. You're based in London. And so when the topic of the new Fed chair comes up, what are the questions that you're getting from clients? Andrew Sheets: So, I think that there are a few questions that stand out. I mean, I think a dominant question among investors was around the stability of the U.S. dollar. And so, you could say a good development on the back of Kevin Warsh's nomination is that the market response to that has been the price action you would associate with more stability. You've seen the dollar rise; you've seen precious metals prices fall. You've seen equity markets and credit spreads be very stable. So, I think so far everything in the market reaction is, to the point that you raised, consistent with this still being orthodox policy. Every Fed chair is different, but still more similar than different. Now, I think where it gets more divergent in client opinions is just – what are we going to see from the Fed? Are we going to see a real big change in policy? And I think that this is where there are very different views of Kevin Warsh from investors. 
Some say, ‘Well, he's in the past talked about fighting inflation more aggressively, which would imply tighter policy.' And he's also talked more recently about the productivity gains from AI and how that might support lower interest rates. So, I think that there's going to be a lot of interest when he starts to speak publicly, when we see testimony in front of the Senate. I think the final piece, which again I think people do not have as fully formed an opinion on yet, is – how does he lead the Fed if the data is unexpected? And you know, you mentioned inflation, and Morgan Stanley has this forecast that owners' equivalent rent, a really key part of inflation, might be a little bit higher than expected, which might be a distortion coming off of the government shutdown and its impact on data. But there's some real uncertainty about the inflation path over the near term. And so, in short, I think investors are going to give the benefit of the doubt for now. I think they're going to lean more into this idea that it will be generally consistent with the Fed easing policy over time, and generally consistent with a steeper curve, for now. But I think there's a lot we're going to find out over the next couple of weeks and months. Seth Carpenter: Yeah. No, I agree with you. Andrew, I have to say, I'm glad you're here in New York. It's always great to sit down and talk to you. Let's do it again before too long. Andrew Sheets: Absolutely, Seth. Thanks for taking the time to talk. And to our audience, thank you as always for your time. If you find Thoughts on the Market useful, let us know by leaving a review wherever you listen. And also tell a friend or colleague about us today.
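Seth's mortgage-rate aside in the conversation above (an eight-and-a-half percent first mortgage around 2000 versus what borrowers are anchored to today) can be made concrete with the standard fixed-rate amortization formula. The sketch below is illustrative only; the $300,000 principal, 30-year term, and 6.5 percent comparison rate are assumptions for the example, not figures from the episode:

```python
def monthly_payment(principal, annual_rate, years):
    """Fixed-rate mortgage payment: P * r / (1 - (1 + r)^-n),
    where r is the monthly rate and n the number of payments."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

# Hypothetical $300,000, 30-year loan at the circa-2000 rate Seth
# mentions versus an assumed more recent rate, for comparison.
print(f"at 8.5%: {monthly_payment(300_000, 0.085, 30):,.0f}/month")
print(f"at 6.5%: {monthly_payment(300_000, 0.065, 30):,.0f}/month")
```

Under these assumptions the gap is roughly $400 a month on the same loan, which is the "eye of the beholder" point: whether a rate feels high depends on the payment a borrower is used to.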

The John Batchelor Show
S8 Ep419: Eric Berger details NASA's choice between expensive legacy contracts and cheaper commercial alternatives like Blue Origin for a necessary Mars communication satellite, weighing cost efficiency against institutional inertia.

The John Batchelor Show

Play Episode Listen Later Feb 5, 2026 1:14


Eric Berger details NASA's choice between expensive legacy contracts and cheaper commercial alternatives like Blue Origin for a necessary Mars communication satellite, weighing cost efficiency against institutional inertia.

Thoughts on the Market
The Fed's Course Under a New Chair

Thoughts on the Market

Play Episode Listen Later Feb 5, 2026 11:00


Our Global Head of Macro Strategy Matthew Hornbach and Chief U.S. Economist Michael Gapen discuss the path for U.S. interest rates after the nomination of Kevin Warsh for next Fed chair. Read more insights from Morgan Stanley. ----- Transcript ----- Matthew Hornbach: Welcome to Thoughts on the Market. I'm Matthew Hornbach, Global Head of Macro Strategy. Michael Gapen: And I'm Michael Gapen, Morgan Stanley's Chief U.S. Economist. Matthew Hornbach: Today we'll be talking about the Federal Open Market Committee meeting that occurred last week. It's Thursday, February 5th at 8:30 am in New York. So, Mike, last week we had the first Federal Open Market Committee meeting of 2026. What were your general impressions from the meeting? And how did it compare to what you had thought going in? Michael Gapen: Well, Matt, I think that the main question for markets was how hawkish a hold or how dovish a hold this would be. As you know, it was widely expected the Fed would be on hold. The incoming data had been fairly solid. Inflation wasn't all that concerning, and most of the employment data suggested things had stabilized. So, it was clear they were going to pause. The question was would they pause or would they be on pause, right? And in our view, it was more of a dovish hold. And by that, it suggests to us, or they suggested to us, I should say, that they still have an easing bias and rates should generally move lower over time. So, that really was the key takeaway for me. Would they signal a prolonged pause and perhaps suggest that they might be done with the easing cycle? Or would they say, yes, we've stopped for now, but we still expect to cut rates later? Perhaps when inflation comes down, and therefore kind of retain a dovish bias or an easing bias in the policy rate path. So, to me, that was the main takeaway. Matthew Hornbach: Of course, as we all know, there are supposed to be some personnel changes on the committee this year. 
And Chair Powell was asked several questions to try to get at the future of this committee and what he himself was going to do personally. What was your impression of his response, and what were the takeaways from that part of the press conference? Michael Gapen: Well, clearly, he's been reluctant to, say, pre-announce what he may do when his term as chair ends in May. But his term as a governor extends into 2028. So, he has options. He could leave; normally that's what happens. But he could also stay, and he's never really made his intentions clear on that part, maybe for personal or professional reasons. But he has his own reasons, and that's fine. And I do think the recent subpoena by the DOJ has changed the calculus. At least my own view is that it makes it more likely that he stays around. It may be easier for him to act in response to that subpoena by being on staff. It's a request for additional information; he needs access to that information. I think you could construct a reasonable scenario under which, ‘Well, I have to see this through, therefore, I may stay around.' But maybe he hasn't come to that conclusion yet. And then stepping back, that just complicates the whole picture in the sense that we now know the administration has put forward Kevin Warsh as the new Fed chair. Will he be replacing the seat that Jay Powell currently sits in? Will he be replacing the seat that Stephen Miran is sitting in? So yes, we have a new name being put forward, but it's not exactly clear where that slot will be, and what the composition of the committee will look like. Matthew Hornbach: Well, you beat me to the punch on mentioning Kevin Warsh… Michael Gapen: I kind of assumed that's where you were going. Matthew Hornbach: It was going to be my next question. I'm curious as to what you think that means for Fed policy later this year, if anything. And what it might mean more medium term? Michael Gapen: Yeah. 
Well, first of all, congratulations to Mr. Warsh on the appointment. In terms of what we think it means for the outlook for the Fed's reaction function and interest rate policy, we doubt that there will be a material change in the Fed's reaction function. His previous public remarks don't suggest his views on interest rate policy are substantively outside the mainstream, or at least outside the collective that's already in the FOMC. Some people would prefer not to ease. The majority of the committee still sees a couple more rate cuts ahead of them. Warsh is generally aligned with that, given his public remarks. But then also, all the reserve bank presidents have been renominated. There's an ongoing Supreme Court case about the ability of the administration to fire Lisa Cook. If that is not successful, then Kevin Warsh will arrive in an FOMC where there are 16 other people who all get a say. So, the chair's primary responsibility is to build a consensus; to herd the cats, so to speak. To communicate to markets and communicate to the public. So, if Mr. Warsh wanted to deviate substantially from where the committee was, he would have to build a consensus to do that. So, we think, at least in the near term, the reaction function won't change. It'll be driven by the data, whether the labor market holds up, whether inflation decelerates as expected. So, we don't look for material change. Now you also asked about the medium term. I do think where his views differ, at least with respect to current Fed policy, is on the size of the Fed's balance sheet and its footprint in financial markets. So, he has argued over time for a much smaller balance sheet. He's called the Fed's balance sheet bloated. He has said that it creates distortions in markets, which mean interest rates could be higher than they otherwise would be. And so, I think if there is a substantive change in Fed policy going forward, it could be there, on the balance sheet. 
But what I would just say on that is it'll likely take a lot of coordination with Treasury. It will likely take changes in rules, regulations, the supervisory landscape. Because if you want to reduce the balance sheet further without creating volatility in financial markets, you have to find a way to reduce bank demand for it. So, this will take time, it'll take study, it'll take patience. I wouldn't look for big material changes right out of the box. So Matt, what I'd like to do, if I could, is flip it back to you. Warsh was certainly one of the expected candidates, right? So, his name is not a surprise. But as we know, financial markets one day were thinking it'd be one candidate, and the next day somebody else. How did you see markets reacting to the announcement of Mr. Warsh as the next Fed chair? And then maybe put that in context of where markets were coming out of the last FOMC meeting. Matthew Hornbach: Yeah, so the markets that moved the most were not the traditional, very large macro markets like the interest rate marketplace or the foreign exchange market. The markets that moved the most were the prediction markets. These newer markets offer investors the ability to wager on different outcomes for a whole variety of events around the world. But when it comes to the implications of a Kevin Warsh-led Fed for the bigger macro markets like interest rates and currencies, the question really comes down to what changes and how quickly. If the Fed's balance sheet policies are going to take a while to implement, those are not going to have an immediate effect, at least not an effect that is easily seen with the human eye. But other types of policy change could, his communication policy, for example. One of the points that you raised in your recent note, Mike, was how Kevin Warsh favored less communication than perhaps some of the recent Federal Open Market Committees had with the public. 
And so, if there is some kind of a retrenchment from the type of over-communication to the marketplace, from either committee members or non-voters, that could create a bit more volatility in the marketplace. Of course, the Fed has been one of the central banks that does not like to surprise the markets in terms of its monetary policy making. And so, that contrasts with other central banks in the G10. For example, the Swiss National Bank tends to surprise quite a lot. The Reserve Bank of Australia tends to surprise markets. More often, certainly, than the Fed does. So, to the extent that there's some change in communication strategy going forward, that could lead to more volatile interest rate and currency markets. And that then could cause investors to demand more risk premium to invest in those markets. If you previously were comfortable owning a longer duration Treasury security because you felt very comfortable with the future path of Fed policy, then a Kevin Warsh led Fed – if it decides to change the communication strategy – could naturally lead investors to demand more risk premium in their investments. And that, of course, would lead to a steeper U.S. Treasury curve, all else equal. So that would be one of the main effects that I could see happen in markets as a result of some potential changes that the Fed may consider going forward. So, Mike, with that said, this was the first FOMC meeting of the year, and the next meeting arrives in March. I guess we'll just have to wait between now and then to see if the Fed is on hold for a longer period of time or whether the data convince them to move as soon as the March meeting. Thanks for taking time to talk, Mike. Michael Gapen: Great speaking with you, Matt. Matthew Hornbach: And thanks for listening. If you enjoy Thoughts on the Market, please leave us a review wherever you listen and share the podcast with a friend or colleague today.

Thoughts on the Market
Affordability Takes Center Stage in U.S. Policy

Thoughts on the Market

Play Episode Listen Later Feb 4, 2026 6:13


Affordability is back in focus in D.C. after the brief U.S. shutdown. Our Deputy Global Head of Research Michael Zezas and Head of Public Policy Research Ariana Salvatore look at some proposals in play. Read more insights from Morgan Stanley. ----- Transcript ----- Michael Zezas: Welcome to Thoughts on the Market. I'm Michael Zezas, Deputy Global Head of Research for Morgan Stanley. Ariana Salvatore: And I'm Ariana Salvatore, Head of Public Policy Research. Michael Zezas: Today we're discussing the continued focus on affordability, and how to parse signals from the noise on different policy proposals coming out of D.C. It's Wednesday, February 4th at 10am in New York. Ariana Salvatore: President Trump signed a bill yesterday, ending the partial government shutdown that had been in place for the past few days. But affordability is still in focus. It's something that our clients have been asking about a lot. And we might hear more news when the president delivers his State of the Union address on February 24th and possibly delivers his budget proposal, which should be around the same time. So, needless to say, it's still a topic that investors have been asking us about and one that we think warrants a little bit more scrutiny. Michael Zezas: But maybe before we get into how to think about these affordability policies, we should hit on what we're seeing as the real pressure points in the debate. Ariana, you recently did some work with our economists. What were some of your findings? Ariana Salvatore: So, Heather Berger and the rest of our U.S. econ[omics] team highlighted three groups in particular that are feeling more of the affordability crunch, so to speak. That's lower income consumers, younger consumers, and renters or recent home buyers. Lower income households have experienced persistently higher inflation and more recently weaker wage growth. Younger consumers were hit hardest when inflation peaked and are more exposed to higher borrowing costs. 
And lastly, renters and recent buyers are dealing with much higher shelter burdens that aren't fully captured in standard inflation metrics. Now, the reason I laid all that out is because these are also the cohorts where the president's approval ratings have seen the largest declines. Michael Zezas: Right. And so, it makes sense that those are the groups where the administration might be targeting some of these affordability initiatives. Ariana Salvatore: That's right. But that's not the only variable that they're solving for. Broadly speaking, we think that the president and Republicans in Congress really need to solve for four things when it comes to affordability policies. First, targeting the ‘right' cohorts, which are those, as we mentioned, that have either moved furthest away from the president politically or have been the most under pressure. Second, feasibility, right? So even if Republicans can agree on certain policies, getting them procedurally through Congress can still be a challenge. Third, timing – just because the legislative calendar is so tight ahead of the November elections. And fourth, speed of disbursement. So basically, how long it would take these policies to translate to an uplift for consumers ahead of the elections. Michael Zezas: So, thinking through each of these constraints, starting with how easy it might be to actually get some of these policies done, most of the policies that are being proposed on the housing side require congressional approval. In terms of cohorts, the proposals seem most likely to be aimed at lower-income and younger voters. And in terms of timing, we know the legislative calendar is tight ahead of the midterms, and policymakers want to pursue things that can be enacted quickly and show up for voters as soon as possible. Ariana Salvatore: So, using that lens, we think the most realistic near-term tools are probably mostly executive actions. 
Think agency directives and potential changes to tariff policy. If we do see a second reconciliation bill emerge, it will probably move more slowly but likely cover some of those housing-related tax credit changes. But of course, not all these policies would move the needle in the same way. What do we think matters most from a macro perspective? Michael Zezas: So, what our economists have argued is that the affordability policies being discussed – tax credits, subsidies, payment pauses – could be meaningful at a micro level for targeted households, but for the most part, they don't materially change the macro outlook. The exception might be tariffs; that probably has the broadest and most sustained impact on affordability because it directly affects inflation. Lower tariffs would narrow inflation differentials across cohorts, support real income growth and make it easier for the Fed to cut rates. Ariana Salvatore: Right. And just to add a finer point on that, I think directionally speaking, this is where we've seen the administration moving in recent months. Remember, towards the end of last year, the Trump administration placed an exemption on a lot of agricultural imports. And just the other day, we heard news that the trade deal with India was finalized, reducing the overall tariff rate to 18 percent from about 50 percent prior. Michael Zezas: Okay. So, putting it all together for what investors need to know. We see three key takeaways. First, even absent new policy, our economists expect some improvement in affordability this year as inflation decelerates and rate cuts come into view. And specifically, when we talk about improvements in affordability, what our economists are referring to is income growth consistently outpacing inflation, lowering required monthly payments. Second, most proposed affordability policies are unlikely to generate a meaningful macro growth impulse, so investors shouldn't overreact to headline announcements. 
And third, the cohort divergence matters for equities. Pressure on lower-income and younger consumers helps explain why parts of consumer discretionary have lagged, while higher-income-exposed segments have remained more resilient. So, if inflation continues to cool, especially via tariff relief, that's what would broaden the consumer recovery and potentially create better returns for some of the sectors in the equity markets that have underperformed. Ariana Salvatore: Right, and from the policy side, I would say this probably isn't the last time we'll be talking about affordability. It's politically salient. The policy responses are likely to be targeted and incremental, and this should continue to remain a top focus for voters heading into November. Michael Zezas: Well, Ariana, thanks for taking the time to talk. Ariana Salvatore: Great speaking with you, Mike. Michael Zezas: And as a reminder, if you enjoy Thoughts on the Market, please take a moment to rate and review us wherever you listen. And share Thoughts on the Market with a friend or colleague today.