Podcasts about Foundation series

Series of science-fiction books by Isaac Asimov

  • 121 podcasts
  • 1,005 episodes
  • 28m avg duration
  • 1 new episode monthly
  • Latest episode: Apr 12, 2025
Foundation series

Popularity chart: 2017–2024

Best podcasts about Foundation series

Latest podcast episodes about Foundation series

Our Big Dumb Mouth
OBDM1285 - From Star Wars to Shower Heads | Assassination Plots and Alien Diversions

Our Big Dumb Mouth

Play Episode Listen Later Apr 12, 2025 119:03


00:00:00 – Audio Setup, Star Wars Fan Debate: Show starts with discussion about audio stream issues and Joe's absence due to birthday plans. Hosts joke about birthday priorities, algorithmic rage culture, and how Joe gets his news. Transition to Star Wars fandom; debate about the original Star Wars theatrical cut and George Lucas's revisions.
00:10:00 – Star Wars Original Cut Return & Box Office Potential: Conversation about the 1977 Star Wars cut being screened in London. Discussion of fan-made restorations, black-market versions, and speculation on financial success if Disney re-released the original cuts. Debate on how Disney could regain fan goodwill and how current Star Wars content has fared.
00:20:00 – Andor, Mandalorian, and Disney Star Wars Strategy: Talk shifts to upcoming Star Wars projects like the Andor season and a potential Mandalorian/Grogu movie. Hosts analyze Disney's strategy of rebuilding trust with legacy content. Praise for Andor's adult storytelling and deeper political themes.
00:30:00 – Foundation Series on Apple TV: Discussion of Isaac Asimov's "Foundation" series and Apple TV's adaptation. Themes include psychohistory, AI, identity, consciousness, and predictions of societal collapse. Hosts note how the show echoes modern algorithmic culture and explore its sci-fi world-building.
00:40:00 – Attempted Trump Assassinations: Sudden pivot to multiple recent alleged plots to assassinate Trump. Detailed reports of one involving RPGs and Ukrainian connections, and another from a radicalized teen in Wisconsin. Discussion of FBI involvement, lack of transparency, and media underreporting.
00:50:00 – More Trump Plot Details & Cryptid in Colorado: A bizarre plot by a teenager to start a white supremacist revolution is discussed. Hosts cover a cryptid sighting in Colorado captured on video, speculating whether it's a mangy bear, raccoon, or hybrid creature. Debunking official explanations, speculating on the cryptid's nature.
01:00:00 – Bill Maher's Dinner with Trump: Recap of Bill Maher visiting Trump at the White House, arranged by Kid Rock. Maher reflects on how Trump was unexpectedly gracious, self-aware, and down-to-earth. Reaction from Maher's audience and criticism from liberals for "humanizing" Trump.
01:10:00 – Maher's Final Thoughts, Hotel Living Trend in China: Maher defends his experience with Trump, stressing honest reporting over partisan spin. Hosts pivot to a trend in China where young people live full-time in hotels due to high rent, convenience, and mental health reasons. Questions raised about the affordability and long-term viability of this lifestyle.
01:20:00 – Hotel Life Logistics & Open Phones: Continued discussion of hotel-living logistics, shared bathrooms, and budgeting. Personal anecdotes about sleeping under sinks while touring with bands. Start of the call-in segment: light banter about gaming, coffee alternatives like yerba mate, and general chatter with listeners.
01:30:00 – JFK Files, UAPs in New Jersey, Local Politics: Callers discuss disappointment with the newly released JFK assassination files. Speculation about UFO sightings in New Jersey and government cover-ups. Final topics touch on financial markets, disdain for DC and California politics, and the importance of local engagement.
01:40:00 – Dave Smith vs Douglas Murray on Israel: Recap of a recent podcast featuring libertarian Dave Smith and conservative Douglas Murray debating Israel. Hosts critique Murray's weak arguments and inability to engage meaningfully, despite Smith being poised and informed. Commentary on how the Israel debate is often one-sided and emotionally charged, even among self-proclaimed free speech advocates.
01:50:00 – Trump's Water Pressure Obsession: Segment dives into Trump's long-running feud with low-flow toilets and weak shower heads. Trump signed an executive order to reverse Obama-era regulations, wanting "strong showers" for washing his "beautiful hair." Hosts riff on the absurdity and comedy of these priorities, imagining Trump's rants in Seinfeld-style voiceovers.
02:00:00 – Episode Wrap: Bye Bye!

Copyright Disclaimer: Under Section 107 of the Copyright Act 1976, allowance is made for "fair use" for purposes such as criticism, comment, news reporting, teaching, scholarship, and research.

CONTACT LINKS
  • Phone: 614-388-9109
  • Skype: ourbigdumbmouth
  • Website: http://obdmpod.com
  • Twitch: https://www.twitch.tv/obdmpod
  • Full Videos at Odysee: https://odysee.com/@obdm:0
  • Twitter: https://twitter.com/obdmpod
  • Instagram: obdmpod
  • Email: ourbigdumbmouth at gmail
  • RSS: http://ourbigdumbmouth.libsyn.com/rss
  • iTunes: https://itunes.apple.com/us/podcast/our-big-dumb-mouth/id261189509?mt=2

Send OBDM Bitcoin: 14DGZFByT5U35ZVVvo9SpzbJV6bHuNVJRa
Send OBDM Ether: 0x9A16c85CcB3A1B3c8073376b316Cd45F4B359413
Send OBDM Stellar: GB3LGRWRLLPCWPKJSYNGMUQIZWCQ35UD3LCQIZJRPTFJOHHM7G4AOOKI
Send OBDM DogeCoin: D6XLEX89ybc55B4eQqz4cyfoctSaorFK9w

Life Church - RVA

In Week 8 of the Foundation Series, Pastor Thompson shared this valuable teaching on generosity: WHY WE GIVE. There is such a blessing in discovering the power of giving generously and cheerfully, allowing God's blessing to flow through us so that we are blessed and can, in turn, be a blessing to others.

Life Church - RVA
Understanding The Bible

Life Church - RVA

Play Episode Listen Later Feb 9, 2025


During week 6 of our Foundation Series, Pastors Buddy & Robin Thompson came together to share the power of Understanding the Bible, offering principles to help us grasp God's Word and the power it has when included in our daily lives.

Out of Spec Podcast
Tesla Lowers Cybertruck Lease Price, Gets $7,500 Tax Credit & Gives Free Wraps

Out of Spec Podcast

Play Episode Listen Later Feb 4, 2025 11:24


Tesla is rolling out new incentives for the Cybertruck, making it more appealing than ever. Buyers can now take advantage of a $7,500 tax credit, lower lease prices, and even free Supercharging and free XPEL wraps for Foundation Series models. But are these deals a sign of growing demand or an effort to move excess inventory?

Shoutout to our sponsors. For more information, find their links below:
  • Fort Collins Kia: Visit focokia.com for full details. Disclaimer: *Delivery covers up to $1,000.

Find us on all of these places:
  • YouTube: https://www.youtube.com/outofspecpodcast
  • Apple Podcasts: https://podcasts.apple.com/us/podcast/out-of-spec-podcast/id1576636119
  • Spotify: https://open.spotify.com/show/0tKIQfKL9oaHc1DLOTWvbd
  • Amazon: https://music.amazon.com/podcasts/473692b9-05b9-41f9-9b38-9f86fbdabee7/OUT-OF-SPEC-PODCAST

For further inquiries please email podcast@outofspecstudios.com

Hosted on Acast. See acast.com/privacy for more information.

Electrek
Tesla self-driving computer failure, Cybertruck issues, Honda/Nissan merger, and more

Electrek

Play Episode Listen Later Dec 20, 2024 61:12


In the Electrek Podcast, we discuss the most popular news in the world of sustainable transport and energy. In this week's episode, we discuss Tesla's self-driving computer failure, problems with the Cybertruck, the Honda/Nissan merger, and more.

The show is live every Friday at 4 p.m. ET on Electrek's YouTube channel. As a reminder, we'll have an accompanying post, like this one, on the site with an embedded link to the live stream. Head to the YouTube channel to get your questions and comments in. After the show ends at around 5 p.m. ET, the video will be archived on YouTube and the audio on all your favorite podcast apps: Apple Podcasts, Spotify, Overcast, Pocket Casts, Castro, RSS.

We now have a Patreon if you want to help us avoid more ads and invest more in our content. We have some awesome gifts for our patrons and more coming.

Here are a few of the articles that we will discuss during the podcast:
  • Tesla is having a major issue with its self-driving computer inside new cars
  • Tesla is buffing Foundation Series badges off Cybertrucks to sell them as regular trucks
  • Tesla finds 'cell-dent' issues in Cybertrucks, starts replacing battery packs
  • Tesla delivers electric semi trucks to another customer, confirms efficiency
  • Tesla (TSLA) sales in Europe are down 14% year-to-date and it's time to worry
  • Tesla Shanghai starts mass production of Model Y refresh in Jan, says local media
  • Tesla looks for new comms manager, is a PR dept coming back?
  • Rivian (RIVN) made a 'secret' deal with the UAW: Here's what that means for the EV maker
  • Cadillac's new Vistiq electric SUV starts at under $80,000, but higher trims can get pricey
  • Honda and Nissan close in on an EV merger to survive the industry's rapid shift to electric
  • NIO's low-cost Onvo EV will arrive in its first European market in early 2025

Here's the live stream for today's episode starting at 4:00 p.m. ET (or the video after 5 p.m. ET): https://www.youtube.com/watch?v=fgSoluS3bGA

Quick Charge
Cybertruck gets un-founded, Tesla computers fail, and Trump has a plan

Quick Charge

Play Episode Listen Later Dec 17, 2024


On today's episode of Quick Charge, it's bad news for Tesla as the nonexistent demand for its Foundation Series Cybertruck models leads Musk to desperate tactics. Meanwhile, the company's new AI4 computers have major safety concerns, and Trump is moving to kill EV tax credits. We've also got record-breaking sales from BYD to prove that the US President doesn't matter when it comes to global EV adoption, a record-setting electric semi truck order from Canada, and a big solar project coming online in America's dairyland.

Source Links
  • Tesla is buffing Foundation Series badges off Cybertrucks to sell them as regular trucks
  • Tesla is having a major issue with its self-driving computer inside new cars
  • Trump transition team has fleshed out its plan to destroy the US EV market
  • BYD's largest production plant has already built 1 million EVs in 2024 as annual sales soar
  • Labatt Breweries places single largest Volvo VNR Electric truck order in Canada
  • Wisconsin's most powerful solar farm just got the go-ahead

Prefer listening to your podcasts? Audio-only versions of Quick Charge are now available on Apple Podcasts, Spotify, TuneIn, and our RSS feed for Overcast and other podcast players. New episodes of Quick Charge are recorded, usually, Monday through Thursday (and sometimes Sunday). We'll be posting bonus audio content from time to time as well, so be sure to follow and subscribe so you don't miss a minute of Electrek's high-voltage daily news!

Got news? Let us know! Drop us a line at tips@electrek.co. You can also rate us on Apple Podcasts and Spotify, or recommend us in Overcast to help more people discover the show.

READ MORE: Nikola hydrogen semi seems belly-up as production halt rumors swirl.

The Automotive Troublemaker w/ Paul J Daly and Kyle Mountsier
Tesla's Cybertruck Canada Push, Longest Range EVs, Brick and Mortar Boom

The Automotive Troublemaker w/ Paul J Daly and Kyle Mountsier

Play Episode Listen Later Dec 16, 2024 14:12 Transcription Available


Shoot us a Text.

With barely 2 weeks left in the year, we're covering the extreme measures Tesla is taking to sell Cybertrucks. Plus, we look at the EVs with the longest range and explore how brick-and-mortar experiences are booming.

Show Notes with links:

Tesla is taking unprecedented steps to address waning demand for its Cybertruck by repurposing its premium Foundation Series models and turning its attention to the Canadian market.
  • Foundation Series Cybertrucks, the first produced, were priced $20K higher and bundled with premium features, including laser-etched badging.
  • Many of these trucks remain unsold, prompting Tesla to remove the exclusive features and sell them as standard models at a reduced price.
  • Tesla is shipping over 800 Cybertrucks to Canada for homologation (certifying that a product or vehicle meets the necessary standards to be used in a specific region or country), believing the Canadian market offers stronger sales prospects than the U.S.
  • The company is also leveraging discounts, adding the Cybertruck to its referral program, and slashing lease prices in a bid to boost interest.
  • These efforts are straining Tesla's service and collision centers, leading to longer wait times and logistical challenges for existing customers.

Range anxiety continues to be a significant barrier for many EV shoppers, but manufacturers are addressing these concerns with innovative designs and longer ranges, positioning EVs as practical and compelling options for mainstream consumers.
  • Mainstream consumers are shifting focus from sustainability narratives to tangible benefits like lower ownership costs, charging convenience, and exciting driving experiences, with range emerging as the decisive factor in EV adoption.
  • The top 10 EVs with the longest range are: Lucid Air Grand Touring - 516 miles; Chevy Silverado EV First Edition RST - 440 miles; Rivian R1T Adventure Dual Max - 420 miles; Rivian R1S Adventure Dual Max - 410 miles; Tesla Model S AWD - 402 miles; Tesla Model 3 Long Range Rear-Wheel Drive - 363 miles; Hyundai IONIQ 6 - 361 miles; Mercedes-EQ EQS Sedan - 352 miles; Mercedes-EQ EQS SUV - 339 miles; Tesla Model X AWD - 335 miles.

In-store shopping isn't just back, it's booming. Strengthened by year-round innovations and experiences, brick-and-mortar stores are delivering unmatched value for both consumers and brands.
  • In-store holiday sales are projected to grow 2.5%-3.5% this year, with total spending reaching up to $989 billion, a significant increase from 2023.
  • Immersive attractions, like Natick Mall's Santa's Elevator Express, create unique opportunities for brand discovery. Over 5,000 families booked visits before the event even opened, boosting foot traffic by 50%.
  • Experiences like batting cages at Dick's House of Sport and lifestyle zones like The Green at Oxmoor Center keep shoppers engaged beyond the holidays.
  • Gen Z, the "omnishopping" generation

Hosts: Paul J Daly and Kyle Mountsier
Get the Daily Push Back email at https://www.asotu.com/
JOIN the conversation on LinkedIn at: https://www.linkedin.com/company/asotu/
Read our most recent email at: https://www.asotu.com/media/push-back-email

The Real Moms Playbook
13 Christ and Coffee

The Real Moms Playbook

Play Episode Listen Later Nov 25, 2024 27:38


Ashleigh and Stacey, owners of the Breakfast Club Coffee Co, share their business, struggles they've overcome, the power of prayer, and how to be a part of the Share Jesus Movement.

Good Foundations Roadmap Call - grab this if you get stuck this season
ALIGN | DEFINE | FLOW | GROW
The Foundation Series helps you begin your intentional transformation towards a peaceful and calm motherhood. The Foundation Series includes ALIGN (values), DEFINE (home - stressful spaces), FLOW (adjusting your time for freedom) and GROW (your start to financial fitness) at a discount.
Join The Tribe Waitlist to get into the community that helps you simplify and succeed with accountability and support!
Our Sponsor - The Breakfast Club Coffee Co
Foundations Flight Coffee Sampler
Highlander Grogg
The Lou Brew
The Hill
Prayer Requests

The Real Moms Playbook
12 Peace, Purpose and Productivity in Motherhood

The Real Moms Playbook

Play Episode Listen Later Nov 18, 2024 8:56 Transcription Available


On today's episode, Lisa brings the whole Foundation Series together.

Good Foundations Roadmap Call - grab this if you get stuck this season
ALIGN | DEFINE | FLOW | GROW
The Foundation Series helps you begin your intentional transformation towards a peaceful and calm motherhood. The Foundation Series includes ALIGN (values), DEFINE (home - stressful spaces), FLOW (adjusting your time for freedom) and GROW (your start to financial fitness) at a discount.
Join The Tribe Waitlist to get into the community that helps you simplify and succeed with accountability and support!
Our Sponsor - The Breakfast Club Coffee Co
Foundations Flight Coffee Sampler
Heavenly Hazelnut

The Real Moms Playbook
11 Harmony with Kids

The Real Moms Playbook

Play Episode Listen Later Nov 11, 2024 8:30 Transcription Available


On today's episode, Lisa shares 3 tips on creating harmony with your kids.

Good Foundations Roadmap Call - grab this if you get stuck this season
Accountability with Kids
The Foundation Series helps you begin your intentional transformation towards a peaceful and calm motherhood. The Foundation Series includes ALIGN (values), DEFINE (home - stressful spaces), FLOW (adjusting your time for freedom) and GROW (your start to financial fitness) at a discount.
Join The Tribe Waitlist to get into the community that helps you simplify and succeed with accountability and support!
Our Sponsor - The Breakfast Club Coffee Co
Foundations Flight Coffee Sampler

The Real Moms Playbook
10 Accountability Makeover with Kids

The Real Moms Playbook

Play Episode Listen Later Nov 11, 2024 10:50 Transcription Available


Show Notes:
On today's episode, Lisa shares how to begin accountability with your kids.

Good Foundations Roadmap Call - grab this if you get stuck this season
Accountability with Kids
The Foundation Series helps you begin your intentional transformation towards a peaceful and calm motherhood. The Foundation Series includes ALIGN (values), DEFINE (home - stressful spaces), FLOW (adjusting your time for freedom) and GROW (your start to financial fitness) at a discount.
Join The Tribe Waitlist to get into the community that helps you simplify and succeed with accountability and support!
Our Sponsor - The Breakfast Club Coffee Co
Foundations Flight Coffee Sampler
What's in my Cup? Highlander Grogg

The Real Moms Playbook
9 Simple Money Habits for Clarity and Calm

The Real Moms Playbook

Play Episode Listen Later Oct 28, 2024 7:46 Transcription Available


On today's episode, Lisa shares three strategy steps for good money habits.

Good Foundations Roadmap Call - grab this if you get stuck this season
Grab GROW (your foundation for financial success) or get the whole Foundation Series at a discount!
The Foundation Series helps you begin your intentional transformation towards a peaceful and calm motherhood. The Foundation Series includes ALIGN (values), DEFINE (home - stressful spaces), FLOW (adjusting your time for freedom) and GROW (your start to financial fitness) at a discount.
Join The Tribe Waitlist to get into the community that helps you simplify and succeed with accountability and support!
Our Sponsor - The Breakfast Club Coffee Co
Foundations Flight Coffee Sampler
What's in my Cup? Java Estates - Eden Collection

Tesla Welt - Der deutschsprachige Tesla Podcast
Tesla Welt - 386 - Rumor: Model Y Juniper production, DHL plans to introduce the Tesla Semi truck, Optimus update, and more

Tesla Welt - Der deutschsprachige Tesla Podcast

Play Episode Listen Later Oct 23, 2024 34:22


0:00 Intro & thank-you
1:33 DHL goes Semi Truck?!
9:02 Tesla Semi Truck: current status. To Hinrichs Zane: https://x.com/HinrichsZane
10:28 Just a party trick? What the Tesla Bot Optimus can really do. Tesla releases an update!
19:03 Cybertruck Foundation Series discontinued
20:24 Test production of the new Model Y "Juniper" started?
26:03 Tesla's efficiency with wireless charging
28:08 The Tesla Safety Report is out
33:10 Outro

You can support my work on the Tesla Welt podcast by using the following partner links:
David's Tesla referral code: https://ts.la/david63148
SHOP4TESLA: Get 5% off all products with the code "teslawelt": https://www.shop4tesla.com/?ref=TeslaWelt *
HOLY: Get 10% off all products with the code "TESLAWELT": https://de.weareholy.com/?ref=teslawelt *
CARBONIFY: THG quota premium, transparent and fair: https://carbonify.de/?utm_source=youtube&utm_medium=video&utm_campaign=Teslawelt *
Or grab a shirt from the Tesla Welt merch shop: https://teslawelt.myspreadshop.de/
The English Elon Musk biography by Walter Isaacson: https://amzn.to/3sETBBi *
The German version here: https://amzn.to/45HZfkF *
Links marked with * are affiliate links; this is paid advertising. Buying via an affiliate link supports the channel and of course costs you nothing extra!
For direct support, become a Tesla Welt channel member and receive exclusive perks: https://www.youtube.com/channel/UCK0nQCNCloToqNKhbJ1QGfA/join or donate directly via PayPal to feedback@teslawelt.de
Follow me on X (Twitter): https://twitter.com/teslawelt
Music: "My Little Kingdom" by Golden Duck Orchestra

The Real Moms Playbook
8 Realigning Your Money Mindset

The Real Moms Playbook

Play Episode Listen Later Oct 21, 2024 7:58 Transcription Available


On today's episode, Lisa shares three tips for realigning your money mindset.

Good Foundations Roadmap Call - grab this if you get stuck this season
Grab GROW (your foundation for financial success) or get the whole Foundation Series at a discount!
The Foundation Series helps you begin your intentional transformation towards a peaceful and calm motherhood. The Foundation Series includes ALIGN (values), DEFINE (home - stressful spaces), FLOW (adjusting your time for freedom) and GROW (your start to financial fitness) at a discount.
Join The Tribe Waitlist to get into the community that helps you simplify and succeed with accountability and support!
Our Sponsor - The Breakfast Club Coffee Co
Foundations Flight Coffee Sampler
What's in my Cup? The Lou Brew

Tailosive EV
Ep. 199 - Ending Foundation Series

Tailosive EV

Play Episode Listen Later Oct 19, 2024 78:06


Join Drew, Randy, and Mike as they discuss Cybertruck price and range officially dropping, the CyberCab going against the Aptera, and Randy's tires getting screwed. Randy's Channel: https://www.youtube.com/@RandyNexus Published: 10-19-2024, Recorded: 10-17-2024

The Real Moms Playbook
7 Time Management Mindset: 3 Shifts You Can Make for More Productive Days

The Real Moms Playbook

Play Episode Listen Later Oct 14, 2024 10:14 Transcription Available


On today's episode, Lisa shares 3 shifts to make today for your time management.

Good Foundations Roadmap Call - grab this if you get stuck this season
Grab FLOW (adjusting your time for freedom) to begin creating your time for freedom, or get the whole Foundation Series at a discount!
The Foundation Series helps you begin your intentional transformation towards a peaceful and calm motherhood. The Foundation Series includes ALIGN (values), DEFINE (home - stressful spaces), FLOW (adjusting your time for freedom) and GROW (your start to financial fitness) at a discount.
Join The Tribe Waitlist to get into the community that helps you simplify and succeed with accountability and support!
Our Sponsor - The Breakfast Club Coffee Co
Foundations Flight Coffee Sampler
What's in my Cup? Highlander Grogg

Cornerstone Brighton Sermons
Foundations Week 5 Kingdom Mission Sunday Service

Cornerstone Brighton Sermons

Play Episode Listen Later Oct 9, 2024 33:20


Pastor Chris Winans concludes our Foundation Series with our fourth core value of Kingdom Mission. Using scripture, Pastor Chris discusses how to discern the will of God, which aligns us with Jesus Christ. Specifically, we need to see as Jesus sees and work as Jesus works: we are called to "see" every single individual regardless of the labels that society has placed on them, and we are called to love them as Christ loves them, being fully aware of the spiritual nature of every individual, including our own selves.
www.cornerstonebrighton.com

The Real Moms Playbook
6 Rhythms for Success in your Routine

The Real Moms Playbook

Play Episode Listen Later Oct 7, 2024 12:42 Transcription Available


On today's episode, Lisa shares peaceful and purposeful rituals for busy moms.

Good Foundations Roadmap Call - grab this if you get stuck this season
Grab FLOW (adjusting your time for freedom) to begin creating your time for freedom, or get the whole Foundation Series at a discount!
The Foundation Series helps you begin your intentional transformation towards a peaceful and calm motherhood. The Foundation Series includes ALIGN (values), DEFINE (home - stressful spaces), FLOW (adjusting your time for freedom) and GROW (your start to financial fitness) at a discount.
Join The Tribe Waitlist to get into the community that helps you simplify and succeed with accountability and support!
Our Sponsor - The Breakfast Club Coffee Co
Foundations Flight Coffee Sampler
What's in my Cup? Camp Grounds

The Real Moms Playbook
5 Home Systems that Work

The Real Moms Playbook

Play Episode Listen Later Sep 23, 2024 16:53 Transcription Available


On today's episode, Lisa shares why systems will be your success within the home and how to create them.

Good Foundations Roadmap Call
Grab DEFINE or get the whole Foundation Series at a discount!
The Foundation Series helps you begin your intentional transformation towards a peaceful and calm motherhood. The Foundation Series includes ALIGN (values), DEFINE (home - stressful spaces), FLOW (adjusting your time for freedom) and GROW (your start to financial fitness) at a discount.
Join The Tribe Waitlist to get into the community that helps you simplify and succeed with accountability and support!
Our Sponsor - The Breakfast Club Coffee Co
Foundations Flight Coffee Sampler
What's in my Cup? The Hill - St. Louis Collection

The Real Moms Playbook
4 Creating Peace in Your Home

The Real Moms Playbook

Play Episode Listen Later Sep 16, 2024 13:53 Transcription Available


On today's episode, Lisa shares why stuff is stressful and how to begin creating peace within your home.

Good Foundations Roadmap Call
Why Mess Causes Stress
Grab DEFINE or get the whole Foundation Series at a discount!
The Foundation Series helps you begin your intentional transformation towards a peaceful and calm motherhood. The Foundation Series includes ALIGN (values), DEFINE (home - stressful spaces), FLOW (adjusting your time for freedom) and GROW (your start to financial fitness) at a discount.
Join The Tribe Waitlist to get into the community that helps you simplify and succeed with accountability and support!
Our Sponsor - The Breakfast Club Coffee Co
Foundations Flight Coffee Sampler
What's in my Cup? Mexican Altura - The Eden Collection

The Real Moms Playbook
3 Establishing Your Priorities

The Real Moms Playbook

Play Episode Listen Later Sep 9, 2024 11:01 Transcription Available


On today's episode, Lisa shares how to establish your priorities in a simple, sustainable way.

Good Foundations Roadmap Call
Lisa's Monday Task List
Grab ALIGN or get the whole Foundation Series at a discount!
The Foundation Series helps you begin your intentional transformation towards a peaceful and calm motherhood. The Foundation Series includes ALIGN (values), DEFINE (home - stressful spaces), FLOW (adjusting your time for freedom) and GROW (your start to financial fitness) at a discount.
Join The Tribe Waitlist to get into the community that helps you simplify and succeed with accountability and support!
Our Sponsor - The Breakfast Club Coffee Co
Foundations Flight Coffee Sampler
What's in my Cup? The Lou Brew

The Real Moms Playbook
1 Refill Your Motherhood Cup: Let's Build Good Foundations

The Real Moms Playbook

Play Episode Listen Later Sep 9, 2024 9:38 Transcription Available


On today's episode, Lisa shares why your motherhood cup is empty and how you will transform your actions this season.

Good Foundations Roadmap Call
Ready to dive in and take action? Grab the Foundation Series to begin your intentional transformation towards a peaceful and calm motherhood. The Foundation Series includes ALIGN, DEFINE, FLOW and GROW at a discount.
Join The Tribe Waitlist to get into the membership community that helps you simplify and succeed with accountability and support!
Our Sponsor - The Breakfast Club Coffee Co
Foundations Flight Coffee Sampler
What's in my Cup? Highlander Grogg

The Real Moms Playbook
2 Our Top Values + Leading with Them

The Real Moms Playbook

Play Episode Listen Later Sep 9, 2024 15:24 Transcription Available


On today's episode, Lisa shares how we determine what's most important to us and set the stage to lead our daily lives in a purposeful way.

Good Foundations Roadmap Call
Grab ALIGN or get the whole Foundation Series at a discount!
The Foundation Series helps you begin your intentional transformation towards a peaceful and calm motherhood. The Foundation Series includes ALIGN (values), DEFINE (home - stressful spaces), FLOW (adjusting your time for freedom) and GROW (your start to financial fitness) at a discount.
Join The Tribe Waitlist to get into the community that helps you simplify and succeed with accountability and support!
Our Sponsor - The Breakfast Club Coffee Co
Foundations Flight Coffee Sampler
What's in my Cup? Java Estates

The Automotive Troublemaker w/ Paul J Daly and Kyle Mountsier
Fain vs Trump, Who Actually Wants a Cybertruck?, Rivian & Lucid, Generational Sock Insights

The Automotive Troublemaker w/ Paul J Daly and Kyle Mountsier

Play Episode Listen Later Aug 9, 2024 14:01


Shoot us a Text.

We've got that little something extra to help you through your Friday, as we're talking about Rivian and Lucid's Q2 struggles (did someone say dealer network?), Tesla's Cybertruck struggles, Shawn Fain vs Donald Trump, and a little fun to launch you into the weekend with some sock anecdotes.

Rivian and Lucid continue to face significant financial challenges in Q2, despite receiving major investments. Both EV makers report substantial losses, raising concerns about their future sustainability amid competitive pressure from Tesla.
  • Rivian lost $32,705 per vehicle delivered and reported a $1.46 billion net loss in Q2, a 21% increase from last year, despite a 9% rise in vehicle sales.
  • Lucid lost a whopping $112,688 per vehicle and posted a $790 million net loss in Q2. Despite record sales of its Air sedan, the company's financial performance remains deeply concerning.

Tesla's Cybertruck has been a subject of fascination and debate since its unveiling, with over 1 million reservations in the bag. However, the reality of converting those reservations into sales is proving to be more challenging.
  • Tesla lowered the deposit requirement to $100, raising concerns about the seriousness of these reservations.
  • The "Foundation Series" Cybertruck is currently being sold at $100,000, bundling all options together.
  • Reports indicate Tesla is struggling to find buyers for these high-priced trucks, even reaching out to recent reservation holders.

In a heated exchange of words, UAW President Shawn Fain takes aim at Donald Trump, calling him a "scab" who "doesn't know sh*t about the auto industry." Meanwhile, Trump has seemingly shifted his stance on electric vehicles after Elon Musk's endorsement.
  • UAW President Shawn Fain publicly denounced Trump at a speech in Detroit, calling him a "scab" and accusing him of being clueless about the auto industry.
  • At a weekend rally, Trump admitted he supports electric vehicles: "I'm for electric cars. I have to be, because Elon endorsed me very strongly. So I have no choice." Trump also noted that he supports electric vehicles "for a small slice." He went on to add: "You want to have gas-propelled cars, you want to have hybrids, you want to have every kind of a car imaginable."

A new YouGov survey reveals intriguing generational differences in sock preferences, particularly among Gen Z. Here's what the survey found about the sock choices of 5,015 U.S. adults:
  • Ankle socks are the most popular overall, with 41% of Americans preferring them.
  • No-show socks are favored by 15%, knee-high socks by 3%, and 5% of respondents prefer going sockless.
  • Tall socks have a strong fan base among Gen Z, with 41% f

Hosts: Paul J Daly and Kyle Mountsier
Get the Daily Push Back email at https://www.asotu.com/
JOIN the conversation on LinkedIn at: https://www.linkedin.com/company/asotu/
Read our most recent email at: https://www.asotu.com/media/push-back-email

Ride the Lightning: Tesla Motors Unofficial Podcast
Episode 466: Tesla's Q2 Production and Delivery Numbers Are In

Ride the Lightning: Tesla Motors Unofficial Podcast

Play Episode Listen Later Jul 7, 2024 105:30


Tesla posts its vehicle production and delivery numbers for the just-completed second quarter, and the news is very good. Plus: Tesla gives an update on Cybertruck's Foundation Series program…sort of, Quicksilver paint is now available on Model 3s built at Giga Shanghai, and more! Special guest cohost: Larry Hryb. If you enjoy the podcast and would like to support my efforts, please check out my Patreon at https://www.patreon.com/teslapodcast and consider a monthly pledge. Every little bit helps and there are stacking bonuses in it for you at each pledge level, like early access to each episode at the $5 tier and the weekly Lightning Round bonus mini-episode (AND the early access!) at the $10 tier! And don't forget to leave a message on the Ride the Lightning hotline anytime with a question, comment, or discussion topic for next week's show! The toll-free number to call or Skype is 1-888-989-8752. The Tesla raffle from ChesedChicago is back! For your chance to win a Tesla (including a Cybertruck!) or $50,000 cash, head to https://ccraffle.com, where you can get $25 off two tickets or $500 off of 15 tickets by using the promo code “RTL” (without the quotes). Get your tickets before July 11 to be included in the bonus early bird raffle! Go to xcelerateauto.com/xcare to find the extended warranty policy that's right for you and your Tesla, and don't forget to use the discount code “Lightning” for $100 off your purchase. P.S. Get 15% off your first order of awesome aftermarket Tesla accessories at AbstractOcean.com by using the code RTLpodcast at checkout. Grab the SnapPlate front license plate bracket for any Tesla at https://everyamp.com/RTL/ (don't forget the coupon code RTL). 

Ride the Lightning: Tesla Motors Unofficial Podcast
Episode 458: Cybertruck Gets New Interior and Wheel/Tire Options

Ride the Lightning: Tesla Motors Unofficial Podcast

Play Episode Listen Later May 12, 2024 78:25


Cybertruck adds two significant new options in the Design Studio for Foundation Series buyers, Tesla appears to be prepping a Siri-like voice assistant for the fleet, more big companies get their hands on the Tesla Semi, and more! If you enjoy the podcast and would like to support my efforts, please check out my Patreon at https://www.patreon.com/teslapodcast and consider a monthly pledge. Every little bit helps and there are stacking bonuses in it for you at each pledge level, like early access to each episode at the $5 tier and the weekly Lightning Round bonus mini-episode (AND the early access!) at the $10 tier! And don't forget to leave a message on the Ride the Lightning hotline anytime with a question, comment, or discussion topic for next week's show! The toll-free number to call or Skype is 1-888-989-8752. Go to xcelerateauto.com/xcare to find the extended warranty policy that's right for you and your Tesla, and don't forget to use the discount code “Lightning” for $100 off your purchase. P.S. Get 15% off your first order of awesome aftermarket Tesla accessories at AbstractOcean.com by using the code RTLpodcast at checkout. Grab the SnapPlate front license plate bracket for any Tesla at https://everyamp.com/RTL/ (don't forget the coupon code RTL). 

The Korea Society
A Conversation with Min Jin Lee - Y. T. Hwang Family Foundation Series on Ethics & Common Values

The Korea Society

Play Episode Listen Later May 9, 2024 63:27


May 8, 2024 - With the ever-growing need to understand ourselves and humanity as a whole, it is necessary to examine the concepts of morality, ethics and universal values as guiding principles of the human condition. With generous support from Y.T. Hwang Family Foundation, The Korea Society presents a Series on Ethics and Common Values. This series promotes the understanding of central themes of our human existence - morality, ethics, personal responsibility, compassion and civility - through a series of lectures by distinguished speakers and conversations with extraordinary individuals who exemplify the universal values in line with the mission of Y. T. Hwang Family Foundation and The Korea Society. The Korea Society and Y. T. Hwang Family Foundation are proud to present Min Jin Lee in a conversation with Kyung B. Yoon. Min Jin Lee is the author of Free Food for Millionaires and Pachinko, a finalist for the National Book Award. Lee is the recipient of the 2022 Manhae Grand Prize for Literature, the Bucheon Diaspora Literary Award, and the Samsung Happiness for Tomorrow Award for Creativity. She has received fellowships in Fiction from the Guggenheim Foundation, the Radcliffe Institute of Advanced Study at Harvard, and the New York Foundation for the Arts. Lee has been inducted into the Hall of Fame for the New York Foundation for the Arts, New York State Writers, and the Bronx High School of Science. She has been honored by the Columbia University Weatherhead East Asian Institute, the Asian American Journalists Association, the Korean American Community Foundation, the Council of Korean Americans, the Queens Public Library, and the Korean Community Center. Her essays have appeared in The New York Times, The Wall Street Journal, The New Yorker, The New York Review of Books, The Chosun Ilbo, Vogue, and Food & Wine. She has introduced the Penguin Classics edition of The Great Gatsby. In 2023, Lee served as the Editor of The Best American Short Stories. 
She is at work on her third novel, American Hagwon and a nonfiction work, Name Recognition. She is a Writer-in-Residence at Amherst College and serves as a trustee of PEN America and a director of the Authors Guild. Lee lives in Harlem with her family. Kyung B. Yoon is the President and CEO (as well as co-founder) of the Korean American Community Foundation (KACF), the first and largest philanthropic organization in the U.S. dedicated to strengthening Korean American communities. Her career in poverty alleviation, development economics, and media encompasses her roles as the Executive Producer of Television at the World Bank Institute and a correspondent for WNYW-Fox Channel 5 where she made history as the first Korean American broadcast reporter in NYC. Kyung is currently a contributing reporter to CUNY-TV's Asian American Life, which is broadcast nationally on PBS stations and for which she received an Emmy nomination. She has previously served as the board chair of Philanthropy New York and Asian Americans and Pacific Islanders in Philanthropy, as a trustee of the New York Foundation, and as a board member of the United Way of New York City. For more information, please visit the link below: https://www.koreasociety.org/arts-culture/item/1817-y-t-hwang-family-foundation-series-on-ethics-common-values-a-conversation-with-min-jin-lee

Speaking of Teens
#131: The Foundation Series Day 2 - Your Teen's Emotional Behavior

Speaking of Teens

Play Episode Listen Later Apr 23, 2024 15:05


This is Day 2 of an 8-part series to explain the foundational principles of parenting teens. These are the "basics" that you need to understand to parent your teenagers with less conflict and more connection (plus have more influence in their lives and even change their behavior). Today, I continue talking about the changes occurring in your teenager's brain. Specifically, I'll explain how their brain's reward system is in hyperdrive, which can cause them to become involved in activities and situations that you never would have thought possible when they were younger.

Show Notes for other resources and sources
Transcript
Find our FREE Parenting Guides Here

"I just wanted to let you know that I'm so thankful for your podcast! ...I'm so happy I discovered it!" - Speaking of Teens Listener

If you feel the same way, please consider rating and reviewing my show! This helps people know the show is worth their time to listen. Tap here to go to Apple Podcasts, and scroll down until you see the STARS to tap on the last star, then tap on "Write a Review" and let me know what you love about the show. If you're listening in Spotify, you can also rate the show by going to the main episode page, tapping the 3 dots to the right of the follow button, tapping rate show, and tapping the 5th star! Thank you in advance for helping me help more parents!

I drop new episodes every Tuesday and Friday, so please tap Follow on the main episode page so they'll be ready for you in your app. You can reach out to me with ideas for the show or guest suggestions here. Thanks so much for listening!

Check out PARENT CAMP - a monthly membership where you will learn how to strengthen your relationship and decrease the conflict with your teens and tweens (while improving their behavior), plus expert advice on everything from drug use to screen time and everything in between.

Join our Facebook Group for Free Support for Parents and others who care for Teens (and get immediate access to all the parenting guides above!)
Connect with us on Facebook or Instagram
Get the FREE GUIDE, "Emotional Awareness Strategies"

Speaking of Teens
#130: The Foundation Series Day 1 - Your Teen's Brain Is The Problem

Speaking of Teens

Play Episode Listen Later Apr 19, 2024 14:13


Do you ever think, "Why would they do that?" "Why are they so irrational?" "Why can't they just do what they're supposed to do when they're supposed to do it?" You and every other parent with a kid between the ages of 10 and 25 have, no doubt, had these thoughts several times a day, most days of the week! It's baffling and frustrating to think your teen or tween could do better and chooses not to. But is that actually what's going on?

Join me in this first episode of an 8-part series to explain the foundational principles you need to understand to parent your teenagers with less conflict and more connection, plus have more influence in their lives and even change their behavior. I kick off the series explaining why your teen has less control over their thoughts, feelings and behavior than you think.

Show Notes for other resources and sources
Transcript
Find our FREE Parenting Guides Here

"I just wanted to let you know that I'm so thankful for your podcast! ...I'm so happy I discovered it!" - Speaking of Teens Listener

If you feel the same way, please consider rating and reviewing my show! This helps people know the show is worth their time to listen. Tap here to go to Apple Podcasts, and scroll down until you see the STARS to tap on the last star, then tap on "Write a Review" and let me know what you love about the show. If you're listening in Spotify, you can also rate the show by going to the main episode page, tapping the 3 dots to the right of the follow button, tapping rate show, and tapping the 5th star! Thank you in advance for helping me help more parents!

I drop new episodes every Tuesday and Friday, so please tap Follow on the main episode page so they'll be ready for you in your app. You can reach out to me with ideas for the show or guest suggestions here. Thanks so much for listening!

Check out PARENT CAMP - a monthly membership where you will learn how to strengthen your relationship and decrease the conflict with your teens and tweens (while improving their behavior), plus expert advice on everything from drug use to screen time and everything in between.

Join our Facebook Group for Free Support for Parents and others who care for Teens (and get immediate access to all the parenting guides above!)
Connect with us on Facebook or Instagram
Get the FREE GUIDE, "Emotional Awareness Strategies"

Sense of Soul Podcast
Dream Empowerment

Sense of Soul Podcast

Play Episode Listen Later Apr 5, 2024 40:39


Today on Sense of Soul we have Megan Mary, a dreamworker who specializes in the analysis of women's dreams to promote transformative personal growth and enlightenment. Founder of Women's Dream Analysis and the Women's Dream Enlightenment podcast, she is an intuitive, introvert, mystic and author. After being diagnosed with three chronic illnesses, she experienced a spiritual awakening. Megan works with clients all over the world offering dream interpretation, women's empowerment, transformative journeys and spiritual awakening guidance. She holds Master of Arts and Bachelor of Arts degrees in English and has studied psychology, theory, astronomy, religion and philosophy at the collegiate level. She is a member of the International Association for the Study of Dreams and currently is part of their Dream Study Group and Foundation Series (2023-2024). She has also been featured as an expert source in Authority Magazine, Parade Magazine and VeryWell Mind. She is passionate about helping other women connect with the inner guidance and wisdom in their own dreams in a safe and compassionate space. She believes the symbolic imagery in our dreams is a key; once unlocked, we can tap into our own innate transformative power, evolve our consciousness and ultimately discover our life purpose. She is also the host of Women's Dream Enlightenment, a podcast for women, which features deep discussions, dream interpretation and spiritual stories of awakening. You can visit her website at www.womensdreamanalysis.com for more information about Megan and her services.

Pinterest: https://www.pinterest.com/womensdreamanalysis/
Youtube: https://www.youtube.com/@womensdreamanalysis
Linkedin: https://www.linkedin.com/company/womensdreamanalysis/

End of show notes:

Learn more about Sense of Soul Podcast: https://www.senseofsoulpodcast.com
Check out my NEW affiliate deals! 
https://www.mysenseofsoul.com/sense-of-soul-affiliates-page Follow Sense of Soul on Patreon, and join to get ad free episodes, circles, mini series and more! https://www.patreon.com/senseofsoul

The Basketball Podcast
Episode 304: John Krikorian, Game Philosophy, Core Principles, and Foundation Series

The Basketball Podcast

Play Episode Listen Later Jan 24, 2024 52:42


Guest: John Krikorian, Christopher Newport Head Coach

Christopher Newport head coach John Krikorian shares insights on game philosophy, core principles, and his foundation series. Krikorian led Christopher Newport to remarkable success, including its first three Final Four appearances and first National Championship in the 2022-2023 season. Krikorian has led the Captains to nine NCAA Tournaments in 12 seasons, including the last seven tournaments. CNU is the only school in the nation to win at least one game in each of the last seven NCAA tourneys, and Krikorian now has a career mark in tournament games of 23-7. His overall record at CNU is 290-65 (.817) through his first dozen years at the helm. He was named the National Coach of the Year by the NABC (National Association of Basketball Coaches) following his national title in 2023. Krikorian's teams at CNU have won at least 18 games per season since he took over, and have advanced to the conference championship game every year for the last nine seasons in the Capital and Coast-To-Coast Athletic Conference. 
His overall head coaching record is 355-107.

Breakdown
1:00 - National Championship
3:30 - Game Competitive Spirit
6:00 - Cliques and Outliers
10:00 - Empowering Everyone
13:00 - Stereotype of Leadership
18:00 - Risk Tolerance
22:00 - Empowering Players as Leaders
25:56 - 27:08 - Hoopsalytics Ads
27:08 - Breathing Practices
30:00 - Balance
32:00 - Foundation Series
36:00 - Defensive Philosophy
38:00 - Adjustments on Defense
41:00 - Drop Coverage
43:00 - Offensive Philosophy
45:00 - Spacing Template
47:00 - Conclusion

John Krikorian's Bio: https://en.wikipedia.org/wiki/John_Krikorian
Linkedin: https://www.linkedin.com/in/john-krikorian-96345a183/

Basketball Immersion
Website: http://basketballimmersion.com/
Twitter: https://twitter.com/bballimmersion?lang=en
YouTube: https://www.youtube.com/user/basketballimmersion
Facebook: https://facebook.com/basketballimmersion

Immersion Videos:
Check out all our all-access practice and specialty clinics: https://www.immersionvideos.com

Catch The Fire - Weekly Sermon Podcast (Toronto Airport)

In this podcast, Ash Smith, the Senior Pastor of Catch The Fire Toronto, wraps up the “Foundation Series” of 2023.

Catch The Fire - Weekly Sermon Podcast (Toronto Airport)

This is a part of our “Foundation Series” that was shared by Robert MacIntosh. Enjoy!

The Korea Society
Y. T. Hwang Family Foundation Series on Ethics & Common Values Inaugural Lecture by Dr. Jim Yong Kim

The Korea Society

Play Episode Listen Later Jan 11, 2024 71:39


January 10, 2024 - With the ever-growing need to understand ourselves and humanity as a whole, it is necessary to examine the concepts of morality, ethics and universal values as guiding principles of the human condition. With generous support from Y.T. Hwang Family Foundation, The Korea Society is launching a new lecture and conversation series titled Series on Ethics and Common Values. This series promotes the understanding of central themes of our human existence - morality, ethics, personal responsibility, compassion and civility - through a series of lectures by distinguished speakers and conversations with extraordinary individuals who exemplify the universal values in line with the mission of Y. T. Hwang Family Foundation and The Korea Society. The Korea Society and Y. T. Hwang Family Foundation are proud to present Dr. Jim Yong Kim, who will deliver the inaugural lecture of the new series. For more information, please visit the link below: https://www.koreasociety.org/arts-culture/item/1767-y-t-hwang-family-foundation-series-on-ethics-common-values-inaugural-lecture-by-dr-jim-yong-kim

Start Building Podcast
08. Foundation Series - From Lawyer to Leader (w/ Connor Crook, Diamondback Tool Co.)

Start Building Podcast

Play Episode Listen Later Jan 10, 2024 40:06


The Foundation Series is a set of interviews with leaders, focused on topics related to building a strong foundation for your business. In this episode, we chat with Connor Crook, CEO of Diamondback Tool Co. We talk about his journey from Lawyer to Leader, and how his leadership style has helped build the right team as they've grown to be the most sought-after tool belt on the market. Diamondback Tool Co. is a manufacturer of premium construction work gear including tool belts, vests, bags and more. Their modular system delivers value through increased productivity, comfort and durability to customers around the world. Be sure to check out www.startbuildingsomething.com.

Thank you to our Sponsors!

Kantent | augmented reality made easy. Kantent WebAR allows customers to view your products in their space quickly and easily on any device without the need to download an app. Need 3D models? Kantent can help you with that! Kantent is WebAR for everyone. Starting at just $25/product per month, you can have a web visualizer on your website that shows your products in 3D and allows consumers to view that product in Augmented Reality in their own home. Augmented Reality is proven to increase sales and decrease returns. Kantent makes it easy, affordable and fun to add Augmented Reality to your marketing and sales playbook. Want to learn more? Email: hello@kantent.co

aly | photo-realistic 3D renderings. ALY has streamlined and simplified the process for custom interactive and shareable photo-realistic 3D renderings of interior and exterior spaces. Whether you are a custom home builder, a multi-family developer, a restaurant or commercial developer, interior designer, or simply anyone who wants to see their project in high-fidelity photo-realistic images, ALY is the marketing and sales solution you've been looking for. www.alyvr.com Want to see a demo or start a quote on a project? Email: tim@alyvr.com

Influencers & Revolutionaries
Tom Lombardo ‘Review: The Most Inspiring/Informative Futures Books'

Influencers & Revolutionaries

Play Episode Listen Later Dec 29, 2023 60:48


Series Four

This episode of The New Abnormal podcast features the renowned futurist Tom Lombardo, Director at the Center for Future Consciousness, Exec Board Member of the World Futures Studies Federation, and Editor at the Journal of Futures Studies. He returns to the series to give an overview of his choices of the top futures books in science fiction (author/topic clustered), roughly chronologically sequenced as follows:

Late Nineteenth Century Classics (Future of Human Society): Albert Robida: The Twentieth Century, Jules Verne: Paris in the Twentieth Century, & John Jacob Astor: A Journey in Other Worlds
Systematic/Philosophical Futures - SF/FS Synthesis: H.G. Wells: The Time Machine, The Sleeper Awakes, Men Like Gods, & The Shape of Things to Come
Cosmic Futures: Olaf Stapledon: Last and First Men & Star Maker
Early Twentieth Century Classics: Aldous Huxley: Brave New World, Yevgeny Zamyatin: We, Laurence Manning: The Man Who Awoke, & William Hope Hodgson: The Night Land
Heinlein & Asimov Futures: Beyond this Horizon, Waldo, & The Past through Tomorrow Series & The Foundation Series
Robot Futures: Isaac Asimov: The Caves of Steel and Jack Williamson: The Humanoids
Alien Futures: Adrian Tchaikovsky: Children of Time, Abraham Merritt: The Metal Monster, Sheri Tepper: Grass, China Miéville: Embassytown & Jeff Vandermeer: Annihilation
Transcendent Poignant Futures: Clifford Simak: City & Walter Miller: A Canticle for Leibowitz
New Wave Futures: John Brunner (Future of Everything): Stand on Zanzibar & Robert Silverberg (Psychedelic Future): Son of Man
Cyberpunk Futures: Bruce Sterling: Schismatrix & Rudy Rucker: The Ware Tetralogy
Human Futures: Greg Bear: Queen of Angels & Darwin's Radio, Stapledon's Odd John, & Alfred Bester: The Demolished Man
Outer Space Futures: Doc Smith: The Skylark and Chronicles of the Lensmen Series, Larry Niven: Ringworld, Vernor Vinge: A Fire Upon the Deep, Alastair Reynolds: Revelation Space, James S. A. Corey: Leviathan Wakes & Iain Banks: Matter
Future of Everything: Dan Simmons: The Hyperion Cantos (others see below)
Cosmic/Scientific Futures: Stephen Baxter: Vacuum Diagrams (The Xeelee Saga) & The Time Ships
Philosophical/Scientific/Technological High Powered Futures: Greg Egan: Permutation City, Diaspora, & Schild's Ladder
Cultural Futures/Future of Everything: Ian McDonald: River of Gods, Brasyl, and The Dervish House & Cixin Liu: The Three-Body Problem Trilogy
Singularity Hi-Tech Future: Charles Stross: Accelerando & Ernest Cline: Ready Player One
David Brin Futures: Earth, The Uplift War, and Existence
Ecological/Comprehensive/Utopian Constructive Future: Kim Stanley Robinson: Mars Trilogy/2312 & The Ministry of the Future
Neal Stephenson Futures: Snow Crash, The Diamond Age, and Seveneves

So…we hope you enjoy the podcast!

Electrek
Electrek's Vehicle of the Year, Cybertruck Foundation Series, GM Bolt EUV only, and more

Electrek

Play Episode Listen Later Dec 8, 2023 62:48


On the Electrek Podcast, we discuss the most popular news in the world of sustainable transport and energy. In this week's episode, we discuss Electrek's Vehicle of the Year, Cybertruck Foundation Series, GM Bolt EUV coming back, and more. Sponsored by Electrek's own merch store: we are launching a brand new merch store with some Electrek swag just in time for the holidays! The show is live every Friday at 4 p.m. ET on Electrek's YouTube channel. As a reminder, we'll have an accompanying post, like this one, on the site with an embedded link to the live stream. Head to the YouTube channel to get your questions and comments in. After the show ends at around 5 p.m. ET, the video will be archived on YouTube and the audio on all your favorite podcast apps: Apple Podcasts Spotify Overcast Pocket Casts Castro RSS We now have a Patreon if you want to help us avoid more ads and invest more in our content. We have some awesome gifts for our Patreons and more coming. Here are a few of the articles that we will discuss during the podcast: Tesla Model Y is Electrek's vehicle of the year Tesla starts selling fully-loaded Cybertruck ‘Foundation Series' for $120,000 Elon Musk: low-cost Tesla is advanced, manufacturing is going to be revolutionary Tesla Holiday update leaks, and it's a bit of a weak one Tesla's head of Dojo supercomputer is out, possibly over issues with next-gen Tesla shares 48V architecture with other automakers to move the industry Tesla's Swedish strike is spreading through Europe Tesla is officially losing half $7,500 tax credit on two Model 3 trims Ford Mustang Mach-E to lose EV tax credit GM says next-gen Bolt will be EUV-only as the SUV virus infects the industry Lucid Motors (LCID) updates its 2024 model year Airs, including lower prices and a RWD Pure Toyota unveils new Urban electric SUV to rival Volvo's EX30 as affordable EV option Fisker (FSR) dials back its 2023 production targets yet again as it fights to keep going Here's the live stream for 
today's episode starting at 4:00 p.m. ET (or the video after 5 p.m. ET): https://www.youtube.com/watch?v=GU22Ptqyx_o

InsideEVs - Electric Vehicle News
196: 1.2 Million-Mile Tesla Model S, Cybertruck Foundation Series Pricing Announced

InsideEVs - Electric Vehicle News

Play Episode Listen Later Dec 8, 2023 66:29


InsideEVs is proud to present episode 196 of its weekly podcast. Available on the InsideEVs YouTube channel and all major podcast platforms – Apple Podcasts, Spotify, Google Podcasts, iHeart Radio, and Tune In. We also stream the show live on Facebook, Twitch, Twitter, and YouTube on Friday at 9:30 AM EST. Appearing on this episode are Laycee “Miss GoElectric,” an insightful veteran of the InsideEVs Podcast and her own media empire; Hazel Southwell, who has been doing science-y deep thinking and reporting for outlets ranging from ESPN to Ars Technica; Alex Goy, who is an all-around motoring person and a talented presenter; and Patrick George, Editor in Chief of InsideEVs. This week we will discuss the 1.2 million-mile Tesla Model S, the pricing for the Cybertruck Foundation Series, as well as GM confirming that the next-gen Chevy Bolt will be available as an EUV only. 

胡聊科技
Does the Cybertruck have a hidden fourth version, the Foundation Series? Tesla's car for under a million (NTD). Toyota's new EV. Tesla's Shanghai Gigafactory expansion.

胡聊科技

Play Episode Listen Later Dec 6, 2023 8:26


Toyota's all-new compact electric SUV, coming in 2026? The Fiat 500e returns to the US at just $33K. Tesla's $25K (USD) car, the cheapest electric vehicle. Tesla's Shanghai Gigafactory expansion. Does the Tesla Cybertruck have a hidden fourth version? Buy me a coffee to support the show: http://buymeacoff.ee/bosshu YouTube: 胡老闆 BossHu IG: @master_bosshu FB: 胡老闆 --- Send in a voice message: https://podcasters.spotify.com/pod/show/bosshu/message Support this podcast: https://podcasters.spotify.com/pod/show/bosshu/support

The Podium
Foundation Series: Recap Q&A

The Podium

Play Episode Listen Later Oct 30, 2023 47:00


From exercise and training to sleep and nutrition, Patrick Morris and Dr. Kevin Sprouse have covered it all in the Foundation Series. In this episode, they answer burning questions from our listeners to fill in any gaps in your training. Patrick and Dr. Sprouse discuss fueling strategies tailored for endurance athletes, the intricacies of periodizing nutrition and weight throughout a training year, and the science behind using melatonin to tackle shift-work sleep issues. Plus, gain a deeper understanding of how standard blood test results should be interpreted for athletes. If you've been following the Foundation Series, this episode ties it all together! Tune in to learn actionable takeaways and practical suggestions for easy wins in each area.

Ready to level up your performance nutrition? Check out Kodiak Cakes' new Peak Oatmeal. Head over to kodiakcakes.com/podium to get your bundle and reach your peak!

Patrick Morris on Instagram | Coaching
Dr. Sprouse on Instagram
- - - - - - - - -
Check us out at Podium Sports Medicine Website | Instagram
Subscribe: Apple Podcast | Spotify
Show Produced by Palm Tree Pod Co.

The Podium
Foundation Series: Nutrition

The Podium

Play Episode Listen Later Oct 16, 2023 57:23


What role does nutrition play in creating a solid performance foundation? The answer might be more complex than you think. Hosts Dr. Kevin Sprouse and Patrick Morris tackle the essential metrics used in nutritional scoring, from calorie balance and protein intake to fiber consumption and Omega-3 levels assessed through the Omega-3 index test. They also shed light on body composition measured with DEXA scans, continuous glucose monitoring metrics, and fasting insulin levels. However, rather than fixating on micromanaging individual nutrients, Kevin and Patrick underscore the importance of adopting a holistic approach, emphasizing the significance of quality whole foods. Learn how small behavioral or intake changes in your nutrition can lead to significant long-term health improvements, all through the lens of their unique scoring system.

Ready to level up your performance nutrition? Check out Kodiak Cakes' new protein waffles. Head over to kodiakcakes.com/podium to get your bundle and reach your peak!

Patrick Morris on Instagram | Coaching
Dr. Sprouse on Instagram
- - - - - - - - -
Check us out at Podium Sports Medicine Website | Instagram
Subscribe: Apple Podcast | Spotify
Show Produced by Palm Tree Pod Co.

techzing tech podcast
386: TZ Discussion - Going Tribal

techzing tech podcast

Play Episode Listen Later Sep 21, 2023 118:45


Justin and Jason discuss walking as a weight-loss strategy and Justin's health-journey hiccups, Walter Isaacson's Elon Musk biography, why Justin changed his Discord photo, the latest with Math Academy, why Justin hasn't been working on List/Nitro and whether he's planning to continue with the project, Balaji's latest Gray Tribe ideas, what Colby accomplished over the summer with Galactic Conquerors and how he's been preparing to take some advanced, for-credit math exams, and Apple's streaming adaptation of the Foundation series. Artwork by https://sonsofcrypto.com. Join our Discord, chat with us and fellow listeners! https://discord.gg/2EbBwdHHx8

The Living Joyfully Podcast
LJ027: Self-Awareness: Assume Positive Intent [Conflicts]

The Living Joyfully Podcast

Play Episode Listen Later Sep 21, 2023 20:39


We're back with a new episode in our Conflicts series, and we're talking about assuming positive intent. It's so common to take someone's words or actions personally and assume that they are trying to irritate, thwart, or hurt us. This happens because we naturally see things from our own perspective. But going into a conversation with those assumptions is pretty much guaranteed to put the other person on the defensive, making productive conversation and connection basically impossible. Assuming positive intent means assuming everyone is doing the best they can in the moment, and that mindset shift can improve our communication and strengthen our relationships.

We hope today's episode sparks some fun insights for you, and we invite you to dive deeper with our Episode Questions. Join us on Instagram or YouTube to continue the conversation and share your reflections.

Let's dig deep, challenge paradigms, choose connection, and live joyfully!

You can follow us on Instagram or YouTube. Explore our courses and coaching at https://livingjoyfullyshop.com/.

EPISODE QUESTIONS

1. Think back to a time when someone gave you the benefit of the doubt and contrast that with a time when someone assumed the worst in you. How did you feel? How did you react? How did it impact your relationship with that person moving forward?

2. Think of some recent exchanges - were you feeling defensive? Did you notice the other person defending? Think about how assuming positive intent could have changed that.

3. This week, notice the stories you're telling yourself about other people's actions. How often are you assuming positive intent? Do you find it hard to do? Why?

4. Think of a recent exchange with someone in which you felt defensive. Did you notice the other person defending in response? How long were you stuck there? How might assuming positive intent and holding space to learn more have changed how things played out?

5.
Are there particular people in your life to whom you don't typically give the benefit of the doubt? Try on assuming positive intent for the next while. How does that shift things?

TRANSCRIPT

ANNA: Hello and welcome to the Living Joyfully Podcast. Navigating relationships can sometimes be challenging because people are so different. Thanks for joining us as we dive into tools, strategies, and paradigm shifts to help you decrease conflicts and increase connection in your most important relationships.

If you're new to the podcast, we encourage you to go back and listen from the beginning, particularly the episodes in our Foundation Series. In them, we talk about our favorite fundamental relationship ideas and tools. If you hear us mentioning a concept over and over again, chances are it has its own episode in the Foundation Series. You can also visit our shop and find the Foundation Series in a podcast collection bundle to be emailed to you weekly, including transcripts and questions.

You can find the link in the show notes, or you can go to livingjoyfullyshop.com. There you can also find information about our coaching, as well, so if you'd like to talk through things that are happening in your relationship and find a healing path forward, that's the place to go. We both work with individuals and couples, and again, link in the show notes, or you can go to livingjoyfullyshop.com.

So, this episode is part of our Conflict Series and our mini-series inside of that about developing our self-awareness. So, today we're diving into assuming positive intent. This principle is a quick tool that helps us stay connected and open.

I think, culturally, we tend to assign negative intent. Our first thought is that someone is doing something to thwart us or irritate us or that they don't have a clue. But so often, that's not the case.
And whether it is or isn't, going into a conversation with those assumptions is pretty much guaranteed to put the other person on the defensive, which makes having any sort of productive and connecting conversation basically impossible.

And as with so many things we talk about, this plays into being the person we want to be in the world. I want to assume the best in people, because I've seen when I do that, it's often what I find. We are all doing the best we can at any given moment. And that best can change dramatically based on the contextual pieces of life.

When we are under-resourced, our ability to think clearly and act with intention is clouded. I want to be a person that allows space and grace for that, because I know there have been plenty of times that I've been there and I've needed that from others. So, assuming positive intent can be assuming that the person is doing the best they can in this moment with the circumstances as they are.

PAM: Absolutely. I just love that piece about how doing their best can look very different from one day to the next, or one moment to the next. It's not about thinking what their theoretical best looks like, measuring them in this moment against what they would do if they were feeling fully resourced, fully rested, fully fed, in a great frame of mind, and so on. They really are doing their best in this moment. This is what it looks like. It's the best they can muster. Let's meet them there with as much grace and compassion as we can muster.

Over the years, assuming positive intent has become such a helpful touchstone for me when it comes to relationships, particularly with partners, kids, longtime friends, where we have a history. And I can be quick to assume I understand them and tell myself a story about why they're saying or doing something.

And as you said, I am apt to tell a negative version of the story about the situation or to feel put upon or ignored or misjudged. And it's not surprising.
We are looking at the world through our eyes and evaluating what's happening around us through that lens. How does this affect me? But what assuming positive intent does is remind me that there is almost always more to the story than just my perspective. Knowing that they're doing the best they can right now, whatever that looks like to me at first, encourages me to widen my lens and get curious. So, so, so many times over the years, this buffer step has saved me from actively jumping in, misinterpreting things, blaming others, which all create even more rifts in our relationship that need repair.

ANNA: I mean, it is such a great reminder to look through their eyes, which I know we talk about a lot, but it's so important. It just helps so much.

And we're going to make some assumptions. But starting from that place of assuming the best, or at the very least, giving the benefit of the doubt, just sets the stage for us to learn more and to not fall into that blaming or writing stories that can get us off track.

And another piece I think that helps with assuming positive intent is to understand that underneath every behavior is a need. We had an episode on this idea as it relates to parenting, episode 25, and as we mentioned there, this is true for everyone. We try to meet our needs through our behaviors, and while sometimes it's a somewhat linear process, "I'm thirsty and I'm going to get a drink now," sometimes it's a bit harder to recognize, especially from the outside.

But part of assuming positive intent is understanding that the person you're dealing with is trying to meet a need. At that particular moment, your needs might not be aligned, but if we can slow things down and give some space to find the underlying needs, that's the space where we can find solutions.
That surface-level conflict that seems insurmountable and at complete odds, that can just melt away as we figure out the needs involved and address those.

So, let's say a person's working for you and they haven't turned in a report. Instead of assuming that they're irresponsible or don't care, look for that underlying need. Have an open conversation with the energy of wanting to understand. Maybe you find out that they've had several fires they've been putting out that took priority, or they didn't understand the request, or that they were waiting for some information from a third party before they could finish it.

Being open and not jumping to conclusions gives you a chance to find out what's happening under the behavior of not turning in the report, and then you can both work together to solve the problem at hand instead of creating friction or a rupture by making a harsher assumption. And there may be things that need to be addressed or systems that need to be changed, but you're only going to get there if you can have that open conversation where the person's not on the defensive and really telling you what's going on.

And part of what we can practice with our partners, children, and the people in our lives is providing additional information. But the space for that to feel safe is in the space of assuming positive intent. There, we can have these clarifying conversations. We can explain how things are feeling to us and really hear what the other person's experiencing.

PAM: Yeah, exactly. Because if we assume that the first story that pops into our head is the right one, so often what we're doing is putting the other person in the position of having to correct us.
And that is a hard thing to do in any relationship, whether with a loved one or a supervisor, or even a newer acquaintance.

So, assuming positive intent helps us cultivate that space for further conversation where we can just learn more about what's up, where we can discover the underlying needs they were trying to meet with whatever words, action, or behavior they used. The valuable thing about focusing on the needs is that there are often multiple ways to meet them, some of which may have less negative impact on others.

So, we can also share our needs in this context, and all this bigger picture information helps us work towards a plan that everyone involved is reasonably comfortable with.

And I wanted to mention, while it may seem that assuming positive intent and having these conversations takes up precious time we don't feel we have, not doing it is likely to take up maybe even more time down the road, as we continue to butt heads, because we're missing some fundamental understanding of each other's needs and goals. Then you add the time to repair the relationship. Or if you don't, the extra time things take in the future because one or both of you are dragging your feet because you just want to avoid engaging with each other in the first place.

ANNA: Oh my gosh, so much. I'd much rather spend the time upfront in a connecting conversation with an eye to understanding each other, rather than dealing with hurt feelings and misunderstandings on the back end.

And I really think, in the end, it's more efficient, because we're actually getting to the needs and solving any roadblocks, versus pressing ahead with made-up stories and assigning malicious intent that ends up creating these huge disconnects that take time and effort to heal, and we still may not be addressing the need underneath. And so, it just keeps repeating.

Another big aspect of this is releasing any defensiveness on our part. A person's actions say way more about them than about us.
They give us a clue as to what's going on for them, and when we assume positive intent, they have the space and the desire to let us in on what those things are. But if we react with defensiveness, communication just shuts down every time and it becomes this attack-and-defend, tit-for-tat dynamic or a stalemate, and then we're stuck. So, we aren't learning any more about the needs driving the behavior or what contextual pieces might be at play. We're not learning anything about those pieces that are so critical. And all of this draws out the conflict and doesn't move us towards solutions.

So, assuming positive intent leaves space to get to the bottom of things faster without sparking that defensiveness in the other person, and we can own our own pieces, too, to not get defensive. And I think we can all think of how nice it feels when someone gives us the benefit of the doubt and doesn't assume the worst, even if we're not at our best, or especially if we've made a mistake. Because usually, we're so hard on ourselves. We're beating ourselves up about the mistakes. So, then having that compounded just creates this cycle. Recognizing that's at play just makes it easier for me to give that gift to other people in my life, whether we're in a close relationship or it's just transactional.

For me, again, it boils down to being the person I wanna be in the world. And the bonus is that it really just makes everything go so much more smoothly. We move through and often avoid conflict, and we get to the root of things without that defensiveness that can feel so unpleasant and without those misunderstandings that can cause a lot of hurt feelings.

PAM: Yeah, so much. Things unfold more smoothly and often more quickly when people aren't feeling judged and defensive. And it makes sense.
Getting stuck in that repetition of attack, defend, attack, defend, slows things down so much, while also not getting to the root of the issue or the underlying needs.

And along those lines, I find it helpful to remember that assuming positive intent isn't about instead telling myself a positive story and acting from there, because that is still making it about me and my interpretation, my need to infer a story and to be right about it.

But as you said, Anna, their actions really are all about them. It's their story. So instead, for me, assuming positive intent is more about knowing there's a story and not jumping to conclusions, particularly the negative ones, because that just makes moving through the moment even more challenging. Getting curious instead of getting stuck in defensiveness helps create that space for the kinds of honest, non-judgmental conversations that will help everyone better understand the needs at play and find interesting ways to meet them.

ANNA: Yeah, I think that assuming positive intent, it's just a way to give some space around things. We aren't writing a story at all. We're acknowledging that there's more to the situation than just what we're seeing. There always is more. There just always is. And leaving space for that. Asking for clarification without any negative energy or agenda just puts us in the best position to learn more and move forward.

And to say it again, we are all doing the best we can in any given moment. Keeping that in mind, assuming positive intent helps us uncover the needs that are driving the behaviors that we're seeing.

All of which helps us stay connected to the important people in our life and avoid unnecessary conflict with them or anyone we come across.

PAM: I just go back to that for the nth time already, but doing the best we can in any given moment, I think it can be challenging for people to believe.
Like, "I've seen them handle this so much better before."

ANNA: Or, "They should be able to." When we catch ourselves saying, "They should be able to," that's a red flag.

PAM: That's always a great clue. But also when, in our mind, we're like, "Okay, I could do this, which would be better. But I do this other thing anyway. It's what I reach for." So, even if theoretically we could choose something better in the moment and we don't, that's still okay. We may not be able to express why we made the choice in the moment. But we made that choice in the moment. And maybe these conversations after will help us better understand ourselves, better understand what was going on in that moment.

It might help us recognize some other weight we were carrying or some other thing that was going on that we just couldn't take that extra 10 seconds to think of something else to do and we just needed to do this thing in the moment. So, we don't need to judge things as best. We don't need to figure out any scale or spectrum of what could be better, better, better, better. This is what happened in the moment, and oh my gosh, I can meet you there. And we can just have conversations.

ANNA: And figure out the next steps, because we never know, and there are so many contextual pieces. I'll just say it over and over again. We cannot judge a relationship without taking into account these contextual pieces that change people's behaviors for a myriad of reasons. We see it in ourselves, like you said. And so, just watching for those words, the shoulds or the judgment, or the kind of standing back and then realizing, hey, that's really disconnecting and I'm not getting the full story. And when we open up for those conversations, that's when we can learn. Do we have a systems problem here? Do we have a communications problem here? Do we just have a we're-all-hungry problem here?
Let's get some food and then we'll tackle this afterwards.

It can be from the simple to the complex, but you're never going to get at what it is if you don't assume the positive intent, start having the space for the conversation, and then have that clear communication between one another.

PAM: Yeah, exactly. And back to what you say, the person that I want to be in the world. And as far as I can reach for that in the moment, giving myself that same grace and compassion we want to give to the other person.

ANNA: For sure. Okay, so, we're going to give some questions to reflect on this week.

So, number one, think back to a time when someone gave you the benefit of the doubt and contrast that with a time when someone assumed the worst in you. How did you feel? How did you react? How did it impact your relationship with that person moving forward? Because we've all gotten both sides of this, and so, I think we can all think of some examples and just really sit with, "Hey, how did that feel and how would it have felt differently?"

And number two, think of some recent exchanges where you or the other person was feeling defensive. Think about how assuming positive intent could have changed that. And so, for me, defensiveness is just that red flag, either on their part, or if I'm recognizing it in someone else or seeing it in myself. It's like, okay, we can change that energy. We can change the way this conversation is going, because neither one of us needs to feel defensive. We're here to understand.

PAM: Defensiveness is such a great clue.

ANNA: Yes. Such a great clue.

PAM: It's pretty easy to feel once you're starting to look for it. So, that's what we're trying to encourage here, is just to start noticing these things, even just that little bubble of oof, there it is.

ANNA: Right. It's just that little, there it is. And even if you can't make that change in that moment, recognizing it to reflect on it later, then you can notice, like, okay, I see what's getting me there.
Now maybe I can think of some steps to not go to that place of defensiveness.

Okay. So, this week, number three, notice the stories you're telling yourself about other people's actions. How often are you assuming positive intent? Do you find it hard to do? And why? Are you writing some stories? Are you assigning some more malicious intent? I think that will be really interesting to just see, because I think, like we talked about earlier, it comes pretty naturally. We're just running through and it happens. And so, just that awareness gives us that little pause, that little space.

Okay. And four, think of a recent exchange with someone in which you felt defensive. Did you notice that the other person was defending in response? How long were the two of you stuck there? How might assuming positive intent and holding space to learn more have changed how that played out and how that tit-for-tat was going?

And number five, are there particular people in your life to whom you don't typically give the benefit of the doubt? Try on assuming positive intent for the next bit and just see, does that shift things in what can be some difficult relationships or some areas where you get stuck? It's just something to play with and, again, it will give you more information about that relationship and about some ways that maybe you can tweak a few things.

PAM: To me, that trying on things, seeing how they go, just doing it for a little while and seeing how things unfold, that is such a valuable approach for me. Rather than, like, oh, I should be assuming positive intent. I'm going to do this all the time or I've failed. None of that helps me either as I'm learning this stuff and trying to figure it out and play with it. I need the experiences, the gathering of experiences, for me to understand how it's working.
Because when I see something, like you said, play out over the years, we both have, it becomes something we've chosen to adopt because we found it to be a helpful tool. So, we're sharing it as a helpful tool, not as a rule that you must do this now.

ANNA: There are no edicts or "have tos." It really is play with it and see if it shifts things, because it also may just open up other ideas that shift things, or other conversations with the people in your life where you're learning more about one another. And to me, that's the goal. Learning about ourselves, learning about one another, and just improving our relationships along the way.

All right, so thank you so much for listening, and we will see you next time. Take care.

PAM: Bye!

The Podium
Foundation Series: Training and Exercise

The Podium

Play Episode Listen Later Sep 18, 2023 67:40


How do exercise and training impact overall health and performance? Join Kevin and Patrick as they delve into the crucial distinction between exercise and training, and explore the significance of key metrics. From tracking your exercise hours per week to maximizing your VO2 max, strength training, endurance training, recovery, and minimizing sedentary time - all the game-changing insights you need for optimal results are right here in our Foundation Series kickoff.

Check out our sponsor PERC Coffee | Use code PODIUM15 to receive 15% off your next order
PERC Coffee Instagram
Patrick Morris on Instagram | Coaching
Dr. Sprouse on Instagram
- - - - - - - - -
Check us out at Podium Sports Medicine Website | Instagram
Subscribe: Apple Podcast | Spotify
Show Produced by Palm Tree Pod Co.

The Hermetic Hour
Doc Smith's Lensmen -- The Original Jedi (rebroadcast)

The Hermetic Hour

Play Episode Listen Later Sep 15, 2023 68:00


On Thursday, April 30th, 2020, the Hermetic Hour with host Poke Runyon will present a discussion and review of the 1934 to 1954 science-fiction Lensman series by Edward Elmer Smith, PhD, a scientist in the food industry specializing in pastry, whose major accomplishment in food engineering was making powdered sugar adhere to doughnuts, and whose major accomplishment in science-fiction writing was the creation of a sub-genre called "space opera." His Lensman series and its concepts and themes influenced Frank Herbert's Dune, Roddenberry's Star Trek, and Lucas' Star Wars. It even re-influenced the screen version of one of Smith's inspirations, Burroughs' 1912 John Carter, when Burroughs's Therns were rewritten by Andrew Stanton as Smith's evil "Eddorians" and their medallions given the powers of an Arisian Lens. Smith developed the concepts of the multiverse, laser and particle weapons, and supercomputers years before they appeared. His concept of the Lensmen as an incorruptible Galactic police force, guided by secret masters from a hidden planet, seems to have been inspired by Theosophy's "Ascended Masters" from Tibet and King Arthur's knights of the Round Table and the Holy Grail. The Lensmen are obviously the origin of Star Wars' Jedi. Another imitator of Doc Smith was Isaac Asimov with his Foundation series. Asimov was so successful with his Foundation series that he beat out Doc Smith for the 1966 Hugo award for "the best all-time science-fiction series." But at least they declared that Doc's epic was runner-up. So if you would like to look deeper into this, and even review what happened when Doc ran one of his Lensmen for president and how Clarissa MacDougall became the first Lenswoman, tune in and we'll activate the lens.

Entrepreneur Perspectives
Foundation | Balancing Summer Work, Estate Planning Questions, and LTC Insights

Entrepreneur Perspectives

Play Episode Listen Later Jun 15, 2023 31:35


Engaging in your work while also enjoying the summer (the main topic!), exploring the intricacies of life insurance, and navigating the current state of the long-term care market are all something of a balancing act. We have to talk about it… this is a Foundation series episode of Entrepreneur Perspectives. How do you manage ...

The post Foundation | Balancing Summer Work, Estate Planning Questions, and LTC Insights appeared first on KazSource.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
MPT-7B and The Beginning of Context=Infinity — with Jonathan Frankle and Abhinav Venigalla of MosaicML

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later May 20, 2023 66:43


We are excited to be the first podcast in the world to release an in-depth interview on the new SOTA in commercially licensed open source models - MosaicML MPT-7B!

The Latent Space crew will be at the NYC Lux AI Summit next week, and have two meetups in June. As usual, all events are on the Community page! We are also inviting beta testers for the upcoming AI for Engineers course. See you soon!

One of GPT-3's biggest limitations is context length - you can only send it up to 4,000 tokens (3k words, 6 pages) before it throws a hard error, requiring you to bring in LangChain and other retrieval techniques to process long documents and prompts. But MosaicML recently open sourced MPT-7B, the newest addition to their Foundation Series, with context length going up to 84,000 tokens (63k words, 126 pages).

This transformer model, trained from scratch on 1 trillion tokens of text and code (compared to 300B for Pythia and OpenLLaMA, and 800B for StableLM), matches the quality of LLaMA-7B. It was trained on the MosaicML platform in 9.5 days on 440 GPUs with no human intervention, costing approximately $200,000. Unlike many open models, MPT-7B is licensed for commercial use and it's optimized for fast training and inference through FlashAttention and FasterTransformer.

They also released 3 finetuned models starting from the base MPT-7B:

* MPT-7B-Instruct: finetuned on dolly_hhrlhf, a dataset built on top of dolly-5k (see our Dolly episode for more details).
* MPT-7B-Chat: finetuned on the ShareGPT-Vicuna, HC3, Alpaca, Helpful and Harmless, and Evol-Instruct datasets.
* MPT-7B-StoryWriter-65k+: finetuned with a context length of 65k tokens on a filtered fiction subset of the books3 dataset. While 65k is the advertised size, the team has gotten up to 84k tokens in response when running on a single node of A100-80GB GPUs. ALiBi is the dark magic that makes this possible.
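For readers curious how ALiBi enables that extrapolation: instead of positional embeddings, it adds a head-specific penalty to each attention logit that grows linearly with the distance between query and key. The sketch below is a minimal, illustrative Python version of the bias computation from the ALiBi paper, not MosaicML's actual implementation:

```python
def alibi_slopes(n_heads: int) -> list[float]:
    # Head-specific slopes form a geometric sequence: 2^(-8/n), 2^(-16/n), ...
    # (the scheme from the ALiBi paper for power-of-two head counts)
    return [2 ** (-8 * (i + 1) / n_heads) for i in range(n_heads)]

def alibi_bias(seq_len: int, n_heads: int) -> list[list[list[float]]]:
    # bias[h][i][j] = -slope_h * (i - j) for key position j <= query position i.
    # This is added to attention logits before softmax, so distant tokens are
    # penalized linearly; no positional embeddings are needed, which is what
    # allows running past the context length seen in training.
    slopes = alibi_slopes(n_heads)
    return [[[-s * (i - j) if j <= i else 0.0 for j in range(seq_len)]
             for i in range(seq_len)]
            for s in slopes]

# Example: with 8 heads, slopes halve from 1/2 down to 1/256.
print(alibi_slopes(8)[:3])  # [0.5, 0.25, 0.125]
```

Because the penalty is a fixed function of distance rather than a learned table, nothing in the model is tied to a maximum position, which is why StoryWriter can be pushed from 65k to 84k tokens at inference time.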
Turns out The Great Gatsby is only about 68k tokens, so the team used the model to create new epilogues for it!

On top of the model checkpoints, the team also open-sourced the entire codebase for pretraining, finetuning, and evaluating MPT via their new MosaicML LLM Foundry. The table we showed above was created using LLM Foundry's in-context-learning eval framework itself!

In this episode, we chatted with the leads of MPT-7B at Mosaic: Jonathan Frankle, Chief Scientist, and Abhinav Venigalla, Research Scientist who spearheaded the MPT-7B training run. We talked about some of the innovations they've brought into the training process to remove the need for 2am on-call PagerDutys, why the LLM dataset mix is such an important yet dark art, and why some of the traditional multiple-choice benchmarks might not be very helpful for the type of technology we are building.

Show Notes

* Introducing MPT-7B
* Cerebras
* Lottery Ticket Hypothesis
* Hazy Research
* ALiBi
* Flash Attention
* FasterTransformer
* List of naughty words for C4 https://twitter.com/code_star/status/1661386844250963972
* What is Sparsity?
* Hungry Hungry Hippos
* BF16 FP

p.s. yes, MPT-7B really is codenamed LLongboi!

Timestamps

* Introductions [00:00:00]
* Intro to Mosaic [00:03:20]
* Training and Creating the Models [00:05:45]
* Data Choices and the Importance of Repetition [00:08:45]
* The Central Question: What Mix of Data Sets Should You Use? [00:10:00]
* Evaluation Challenges of LLMs [00:13:00]
* Flash Attention [00:16:00]
* Fine-tuning for Creativity [00:19:50]
* Open Source Licenses and Ethical Considerations [00:23:00]
* Training Stability Enhancement [00:25:15]
* Data Readiness & Training Preparation [00:30:00]
* Dynamic Real-time Model Evaluation [00:34:00]
* Open Science for Affordable AI Research [00:36:00]
* The Open Approach [00:40:15]
* The Future of Mosaic [00:44:11]
* Speed and Efficiency [00:48:01]
* Trends and Transformers [00:54:00]
* Lightning Round and Closing [01:00:55]

Transcript

Alessio: [00:00:00] Hey everyone.
Welcome to the Latent Space podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners. I'm joined by my co-host, Swyx, writer and editor of Latent Space.

Swyx: Hey, and today we have Jonathan and Abhi from MosaicML. Welcome to our studio.

Jonathan: Guys, thank you so much for having us. Thanks so much.

Swyx: How's it feel?

Jonathan: Honestly, I've been doing a lot of podcasts during the pandemic, and it has not been the same.

Swyx: No, not the same actually. So you have on your bio that you're primarily based in Boston.

Jonathan: New York. New York, yeah. My Twitter bio was a probability distribution over locations.

Swyx: Exactly, exactly. So I DMd you because I was obviously very interested in MPT-7B, and I was like, for the 0.2% of the time that you're in San Francisco, can you please come to a podcast studio, and you're like, I'm there next week.

Jonathan: Yeah, it worked out perfectly.

Swyx: We're really lucky to have you. I'll read off a few intros that people should know about you and then you can fill in the blanks.

So Jonathan, you did your BS and MS at Princeton in programming languages and then found your way into ML for your PhD at MIT, where you made a real splash with the lottery ticket hypothesis in 2018, which people can check up on. I think you've done a few podcasts about it over the years, which has been highly influential, and we'll talk about sparse models at Mosaic. You have also had some side [00:01:30] quests. You taught programming for lawyers, and you did some law and privacy stuff in DC, and also did some cryptography stuff. Um, and you've been an assistant professor at Harvard before earning your PhD.

Jonathan: I've yet to start.

Swyx: You've yet to start. Okay. But you just got your PhD.

Jonathan: I technically just got my PhD. I was at Mosaic, which delayed my defense by about two years. I was at 99% done for two years.
Got the job at Harvard, Mosaic started, and I had better things to do than write my dissertation for two years.

Swyx: You know, you know, this is very out of order.

Jonathan: Like, oh, completely out of order, completely backwards. Go talk to my advisor about that. He's also an advisor at Mosaic and has been from the beginning. And, you know, go talk to him about finishing on time.

Swyx: Great, great, great. And just to fill it out, Abhi, you did your BS and MS at MIT, you were a researcher at Cerebras, and you're now a research scientist at Mosaic. Just before we go into Mosaic stuff, I'm actually very curious about Cerebras and, uh, just that, that space in general. Um, what are they doing that people should know about?

Abhinav: Yeah, absolutely. Um, I think the biggest thing about Cerebras is that they're really building, you know, kind of the next-gen computing platform beyond, like, GPUs.

Um, they're trying to build a system that uses an entire wafer, you know, rather than cutting up a wafer into smaller chips, and trying to train a model on that entire system, or actually more recently on many such wafers. Um, so it's, and it's really extraordinary. I think it's like the first time ever that kind of wafer-scale computing has ever really worked. And so it's a really exciting time to be there, trying to figure out how we can map ML workloads to work, um, on a much, much bigger chip.
Yeah, so it's been maybe six months, 12 months in the making. We kind of started working on LLMs back in the summer of last year. Um, and then we came out with this blog post where we profiled a lot of LLMs and saw, hey, the cost of training is actually a lot lower than what people might think. Um, and then since then, you know, being inspired by Meta's release of the LLaMA models and lots of other open source work, we started working towards, well, what if we were to release a really good 7 billion parameter model? And that's what MPT is.

Alessio: You know, we mentioned some of the podcasts you had done, Jonathan. I think in one of them you mentioned Mosaic was not planning on building a model and releasing it, and obviously you eventually did. So what are some of the things that got you there? Obviously LLaMA, you mentioned, was an inspiration. You now have both the training and the inference products that you offer. Was this more of a research challenge, in a way, that you wanted to do? Or how did the idea come to be?

Jonathan: I think there were a couple of things. So we still don't have a first-class model. We're not an OpenAI where, you know, our business is people coming to use our one great model. Our business is built around customers creating their own models. But at the end of the day, if customers are gonna create their own models, we have to have the tools to help them do that, and to know that those tools work, we have to create our own models to start. We have to know that we can do something great if customers are gonna do something great.
And one too many people may have challenged me on Twitter about the fact that, you know, Mosaic claims all these amazing numbers. Not to call out Ross Wightman here, but I believe he said at some point, you know, show us the pudding. Um, and so Ross, you know, please let me know how the pudding tastes. But in all seriousness, I think there is something to that. This is a demo in some sense. This is to say: we did this in 9.5 days, for a really reasonable cost, straight through, without intervention. $200K. Um, you can do this too.

Swyx: Uh, and just to reference the numbers that you're putting out: last year you were making a lot of noise for training GPT-3 under $450K, which was your initial estimate. Um, and then it went down to $100K, and Stable Diffusion at $160K going down to less than $50K as well.

Jonathan: So I will be careful about that $100K number. That's certainly the challenge I've given Abhi to hit. I wouldn't make the promise that we've hit it yet, but, you know, it's certainly a target that we have. And, you know, Abhi may kill me for saying this, but I don't think it's crazy.

TRAINING AND CREATING THE MODELS [00:05:45]

Swyx: So we definitely want to get into the estimation math, right? Like, what needs to happen for those big order-of-magnitude changes in infrastructure costs. But let's stick to the MPT-7B story for now. Yeah. Tell us everything. You have three different models, one of them essentially state-of-the-art on context length. Let's talk about the process of training them, the decisions that you made. Um, I could go into individual details, but I just wanna let you rip.

Abhinav: Yeah. So I mean, I think, uh, we started off with the base model, which is, for all practical purposes, a recreation of LLaMA 7B. Um, so it's a 7 billion parameter model trained on a trillion tokens.
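(As a rough sanity check on the figures quoted here, the standard approximation is that training costs about 6 FLOPs per parameter per token. The sketch below uses that rule; the MFU figure and the per-GPU-hour price are illustrative assumptions, not Mosaic's actual numbers.)

```python
# Back-of-envelope training cost using the common ~6 * params * tokens
# FLOPs approximation. MFU and $/GPU-hour below are assumed for
# illustration only.
params = 7e9          # MPT-7B parameter count
tokens = 1e12         # 1T training tokens
flops = 6 * params * tokens            # ~4.2e22 FLOPs total

a100_peak = 312e12    # A100 bf16 peak FLOP/s
mfu = 0.4             # assumed model FLOPs utilization
gpu_hours = flops / (a100_peak * mfu) / 3600
cost = gpu_hours * 2.0                 # assumed $2 per A100-hour

print(round(gpu_hours))  # ~93,483 GPU-hours
print(round(cost))       # ~$187K, in the ballpark of the ~$200K quoted
```

Under these assumptions the estimate lands close to the $200K and 9.5-day figures mentioned in the conversation, which is a useful cross-check on the 6ND rule.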
Um, and our goal was, you know, we should do it efficiently. We should be able to do it kind of hands-free, so we don't have to babysit the runs as we're doing them. And it could be a launching point for these fine-tuned models. And those fine-tuned models, you know, on the one hand they're kind of really fun for the community, like the StoryWriter model, which has a 65,000-token context window, and you can even extrapolate beyond that. Um, but they're also kind of just inspirations, really. So you could start with an MPT-7B base and then build your own custom model downstream. If you want a long-context code model, you could do that with our platform. If you wanted one for a particular language, you could do that too. But yeah, so we picked the three variants, chat and instruct and StoryWriter, kind of as inspirations, looking at what people were doing in the community today. Yeah.

Alessio: And what's the beginning of the math to come up with, you know, how many tokens you wanna train it on, how many parameters you want in a model? 7 billion and 30 billion seem to be kind of like two of the magic numbers going around right now.

Abhinav: Yeah, definitely. Yeah, I think there are these scaling laws which tell you how to best spend your training compute if that's all you cared about. So if you wanna spend $200,000 exactly in the most efficient way, there'd be a recipe for doing that. Um, and we usually go by the Chinchilla laws. Now, for these models, we actually didn't quite do that, because we wanted to make sure that people could actually run these at home and that they [00:07:30] were good for inference.
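(The Chinchilla recipe Abhi refers to works out to roughly 20 training tokens per parameter for a compute-optimal run; a quick sketch of how far past that point MPT-7B went, using the numbers quoted in this episode:)

```python
# Chinchilla-style rule of thumb: ~20 training tokens per parameter for a
# compute-optimal run. MPT-7B deliberately trained far past this point
# ("over-training") so the model is better for its size at inference time.
params = 7e9                      # 7B parameters
chinchilla_tokens = 20 * params   # ~140B tokens, the "Chinchilla point"
actual_tokens = 1e12              # the 1T tokens MPT-7B trained on

print(chinchilla_tokens / 1e9)            # 140.0 (billions of tokens)
print(actual_tokens / chinchilla_tokens)  # ~7.14x past the optimal point
```

That ~7x ratio is the "almost seven times longer than you normally would" figure mentioned just below.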
So we trained them kind of beyond those Chinchilla points, so that we're almost over-training them. I think there's a joke going around online that they're like "longboi," and that came up internally because we were training them for really, really long durations. So for that 7B model, the Chinchilla point might be 140 billion tokens. Instead, we trained on a trillion, so almost seven times longer than you normally would.

Swyx: So "longboi" was the code name. So is it the training method? Is it the scaling law that you're trying to coin, or is it the code name for the 64k model?

Jonathan: Uh, it was just an internal joke for training on way more tokens than you would via Chinchilla. Okay. Um, we can coin it "longboi," and it really stuck. But just so you know, "llongboi" is spelled with two Ls at the beginning. Yeah. Cause, you know, we wanted the LLaMA thing in there as well. Yeah, yeah, yeah. Our darn CEO, we have to rein him in, that guy. I'm gonna take away his Twitter password at some point. Um, but you know, he had to let that one out publicly. And then I believe there was a YouTube video where someone happened to see it mentioned before the model came out and called it the "long G boy" or something like that. So, you know, now it's out there in the world. It's out there. It's like Sydney, you can't put it back in.

Swyx: There's a beautiful picture, which I think Naveen tweeted out, which shows a longboi on a whiteboard.

Jonathan: That was the origin of longboi. In fact, the legs of the llama were the two Ls of the longboi.

DATA CHOICES AND THE IMPORTANCE OF REPETITION [00:08:45]

Swyx: Well, talk to me about your data choices, right? Like, this is your passion project. Like, what can you tell us about it?

Jonathan: Yeah, I think Abhi wanted to kill me by the end for trying to use all the GPUs on data and none of them on actually training the model.
Um, at the end of the day, we know that you need to train these models on [00:09:00] lots of data, but there are a bunch of things we don't know. Number one is what kinds of different data sources matter. The other is how much does repetition really matter? And repetition can really be broken down into how much does quality versus quantity matter. Suppose I had the world's best 10 billion tokens of data. Would it be better to train on that a hundred times, or better to train on a trillion tokens of low-quality, fresh data? And obviously there's a middle point in between that's probably the sweet spot. But how do you even know what good-quality data is? So, yeah, nobody knows, and I think the more time I spent on this (we have a whole data team, so me and several other people), the more I came away thinking, gosh, we know nothing. Gosh, if I were back in academia right now, I would definitely go and write a paper about this, because I have no idea what's going on.

Swyx: You would write a paper about it. I'm interested in such a paper. I haven't come across any that exists. Could you frame the central question of such a paper?

THE CENTRAL QUESTION: WHAT MIX OF DATA SETS SHOULD YOU USE? [00:10:00]

Jonathan: Yeah. The central question is: what mix of data sets should you use? Okay. Actually, you know, you had mentioned my law school stuff. I went back to Georgetown Law, where I used to teach, um, in the midst of creating this model, and I actually sat down with a class of law students and asked them. I gave them our exact data sets, our data mixes, um, like how many tokens we had, and I said, create the best data set for your model. They knew nothing about large language models; they just knew that data goes in and it's going to affect the behavior. Um, and I was like, create a mix, and they basically covered all the different trade-offs.
Um, you probably want a lot of English-language [00:10:30] text to start with. You get that from the web. But do you want it to be multilingual? If so, you're gonna have a lot less English text, and maybe it'll be worse. Do you wanna have code in there? There are all these beliefs that code leads to models being better at logical reasoning, of which I've seen zero evidence. I mean, Replit really made a great code model, but code being in the training set leading to better chain-of-thought reasoning on the language side? People claim this all the time, but I've still never seen any real evidence, beyond the fact that one of the generations of the GPT-3 model supposedly started from code-davinci. Yes. And so there's a belief that, you know, maybe that helped. But again, no evidence. You know, there's a belief that spending a lot of time on good sources like Wikipedia is good for the model. Again, no evidence. At the end of the day, we tried a bunch of different data mixes, and the answer was that there are some that are better or worse than others. We did find that The Pile, for example, was a really solid data mix, but, you know, there were stronger data mixes by our evaluation metrics. And I'll get back to the evaluation question in a minute, cuz that's a really important one. This data set called C4, which is what the original T5 model was trained on, is weirdly good. And when I posted on this on Twitter, Stella Biderman from EleutherAI mentioned this, and I think someone else mentioned it as well: C4 does really well in the metrics, and we have no idea why. We de-duplicated it against our evaluation set, so it's not like it memorized the data. It is just one web scrape from 2019. If you actually look at the T5 paper and see how it was pre-processed, it looks very silly. Mm-hmm.
They removed anything that had the word JavaScript in it, because they didn't want to get "enable JavaScript" [00:12:00] warnings. They removed anything with curly braces, cuz they didn't wanna get JavaScript in it. They looked at this list of bad words, um, and removed anything that had those bad words. If you actually look at the list of bad words, words like "gay" are on that list, and so it is a very problematic list of words. But that was the cleaning that leads to a data set that seems to be unbeatable. So that, to me, says that we know nothing about data. We in fact used a data set called mC4 as well, where they supposedly did the same pre-processing as C4, just on more web crawls. The English portion is much worse than C4, for reasons that completely escape us. So in the midst of all that, I basically set two criteria. One was I wanted to be at least as good as mC4 English, like, make sure that we're not making things actively worse. And mC4 English is a nice step up over other stuff that's out there. And two was to go all in on diversity after that, making sure that we had some code, we had some scientific papers, we had Wikipedia, because people are gonna use this model for all sorts of different purposes. But I think the most important thing, and I'm guessing Abhi had a million opinions on this, is you're only as good as your evaluation. And we don't know how to evaluate models for the kind of generation we ask them to do. So past a certain point, you have to kinda shrug and say, well, my evaluation's not even measuring what I care about. Mm-hmm. So let me just make reasonable choices.

EVALUATION CHALLENGES OF LLMs [00:13:00]

Swyx: So you're saying MMLU, BIG-bench, that kind of stuff is not convincing for you?

Jonathan: A lot of this stuff is, you've got two kinds of tasks. Some of these are more multiple-choice-style tasks, where there is a right answer.
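(The T5/C4 cleaning heuristics Jonathan describes can be sketched as a toy filter. This illustrates only the rules mentioned in the conversation, not the actual T5 pipeline, and the blocklist here is a placeholder, not the real list:)

```python
# Toy sketch of C4-style document filtering: drop anything with curly
# braces, anything mentioning JavaScript, and anything containing a word
# from a blocklist. BAD_WORDS is a stand-in for the real (much longer,
# and problematic) list discussed above.
BAD_WORDS = {"badword1", "badword2"}  # placeholder blocklist

def keep_document(text: str) -> bool:
    lowered = text.lower()
    if "{" in text or "}" in text:      # drop likely-JavaScript content
        return False
    if "javascript" in lowered:          # drop "enable JavaScript" warnings
        return False
    if any(w in lowered.split() for w in BAD_WORDS):
        return False
    return True

docs = [
    "A clean paragraph of English prose.",
    "Please enable JavaScript to view this page.",
    "function f() { return 1; }",
]
print([keep_document(d) for d in docs])  # [True, False, False]
```

Crude as these rules are, this is roughly the level of heuristic that produced a data set the episode describes as "unbeatable," which is Jonathan's point about how little is understood here.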
Um, either you ask the model to spit out A, B, C, or D, or, if you're more [00:13:30] sophisticated, you look at the perplexity of each possible answer and pick the one the model is most likely to generate. But we don't ask these models to do multiple-choice questions; we ask them to do open-ended generation. There are also open-ended generation tasks, like summarization, where you compare using things like a BLEU score or a ROUGE score, which are known to be very bad ways of comparing text. At the end of the day, there are a lot of great summaries of a paper. There are a lot of great ways to do open-form generation, and so humans are, to some extent, the gold standard. But humans are very expensive. It turns out we can't put them into our eval pipeline and just have them look at our model every, you know, 10 minutes. Not yet. Maybe soon. Um, are you volunteering, Abhi?

Abhinav: I just know we have a great eval team who's helping us build new metrics. So if they're listening...

Jonathan: But, you know, evaluation of large language models is incredibly hard, and I don't think any of these metrics really truly capture what we expect from the models in practice.

Swyx: Yeah. And we might draw wrong conclusions. There's been a debate recently about the emergence phenomenon, whether or not it's a mirage, right? I don't know if you guys have opinions about that.

Abhinav: Yeah, I've seen that paper, and even just plots from different people, where, like, maybe it's just an artifact of log scaling, or of the metrics: you know, we're measuring accuracy, which is this very harsh zero-one thing, rather than something more continuous. But yeah, similar to what Jonathan was saying about evals.
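(A toy sketch of the likelihood-based multiple-choice scoring Jonathan describes: score each candidate answer by its average token log-probability and pick the most likely one. The numbers below are made up for illustration; in practice they would come from a real language model's forward pass.)

```python
# Multiple-choice eval via likelihood: the model never "chooses" anything;
# we just compare how probable each candidate answer is under the model.
def pick_answer(candidates):
    # Length-normalized log-likelihood per candidate; higher is better.
    # (Length normalization keeps long answers from being unfairly penalized.)
    scored = {ans: sum(lps) / len(lps) for ans, lps in candidates.items()}
    return max(scored, key=scored.get)

# Hypothetical per-token log-probs for three candidate answers.
candidates = {
    "A": [-0.2, -0.3],
    "B": [-1.5, -2.0],
    "C": [-0.9, -0.4, -0.8],
}
print(pick_answer(candidates))  # A
```

This works when a right answer exists; as Jonathan notes, it says nothing about open-ended generation quality, which is where BLEU and ROUGE fall short.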
Like, there's one issue of just the diversity of eval metrics: when we put these models up, even the chat ones, the instruct ones, people are using them for such a variety of tasks. There's just almost no way to get ahead of time, like, measurements along every individual dimension. And then also, particularly at the 7B scale, [00:15:00] um, these models still are not super great yet at the really hard tasks, like some of the hardest tasks in MMLU and stuff. So sometimes they're barely scoring above random chance, you know, on really, really hard tasks. So potentially, as we aim for higher and higher quality models, some of these things will be more useful to us. But we kind of had to develop MPT-7B flying a little bit blind on what we knew was coming out, just going off of, you know, a small set of common sense reasoning tasks, and of course comparing those metrics against other open source models.

Alessio: I think fast training and inference was one of the goals, right? So there's always the trade-off between doing the hardest thing and doing all the other things quickly.

Abhinav: Yeah, absolutely. I mean, even at the 7B scale, you know, people are trying to run these things on CPUs at home, people are trying to port these to their phones. Basically prioritizing the fact that the small scale would lead to broader adoption, that was a big thing going on.

Alessio: Yeah. And you mentioned, um, FlashAttention and FasterTransformer as two of the core things. Can you maybe explain some of the benefits, and maybe why other models don't use them?

FLASH ATTENTION [00:16:00]

Abhinav: Yeah, absolutely. So FlashAttention is basically a faster implementation of full attention, a mathematical equivalent, developed by some of our collaborators at Stanford.
Uh, the Hazy Research lab. Hazy Research, yeah, exactly.

Jonathan: What does the name Hazy Research mean?

Abhinav: I actually have no idea.

Swyx: I have no clue. All these labs have fun names. I always like the stories behind them.

Abhinav: Yeah, absolutely. We really, really liked FlashAttention. We had it integrated into our repo even as [00:16:30] early as September of last year. And it really just helps, you know, with training speed and also inference speed, and we baked that into the model architecture. And this is kind of unique amongst all the other Hugging Face models you see out there. So with ours, you can actually toggle between normal torch attention, which will work anywhere, and FlashAttention, which will work on GPUs right out of the box. And that way I think you get almost a 2x speedup at training time, and somewhere between a 50% and 100% speedup at inference time as well. So again, we really, really wanted people to use these and feel an improvement, and we have the team to help deliver that.

Swyx: Another part of your choices was ALiBi position encodings, which people are very interested in; maybe a lot of people just sort of take position encodings as a given. But there's actually a lot of active research, and honestly, it's very opaque as well. Like, people don't know how to evaluate encodings, including position encodings. Could you explain ALiBi and your choice of it?

Abhinav: Yeah, for sure. The ALiBi and FlashAttention things all kind of go together in interesting ways, and even with training stability too. What ALiBi does, really, is eliminate the need to have positional embeddings in your model.
Where previously, if you're at token position one, you have a particular embedding that you add, and you can't really go beyond your max position, which usually is about 2,000. With ALiBi, you get rid of that. Instead, you just add a bias to the attention map itself that's kind of like a slope. And if at inference time you wanna go much, much larger, you just stretch that slope out over a longer number of positions. And because the slope is continuous and you can interpret it, it all works out. Now, one of [00:18:00] the funny things we found is that FlashAttention saved so much memory and improved performance so much that, even as early as last year, we were profiling models with very long context lengths, up to the 65k that you've seen in the release. We just never really got around to using it, cuz we didn't really know what we might use it for, and also it's very hard to train stably. So we started experimenting with ALiBi integration, and then we suddenly found that, oh wow, stability improves dramatically, and now we can actually use it together with ALiBi at long context lengths. That's how we got to our StoryWriter model, where we can stably train these models out to very, very long context lengths and use them performantly.

Jonathan: Yeah.

Swyx: And it's also why you don't have a firm number. Most people now have a firm number on the context length. Now you're just like, eh, 65k to 84k.

Abhinav: Oh yeah, there's a big debate: is it 64k or 65k? 65k plus.

Swyx: Just do powers of two. So 64 isn't, you know...

Jonathan: Right, right. Yeah. But, I mean, technically the context length is infinite. If you give me enough memory, um, you know, we can just keep going forever. We had a debate over what number to say is the longest that we could handle. We picked 84k. It's the longest I expect people to see easily in practice.
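(A minimal NumPy sketch of the ALiBi idea Abhi describes: no positional embeddings, just a per-head linear distance penalty added to the attention scores. The slope schedule below is the common powers-of-two one for 8 heads; real implementations vary in details, and causal masking of future positions would be applied separately.)

```python
import numpy as np

def alibi_bias(n_heads: int, seq_len: int) -> np.ndarray:
    """Per-head linear attention bias: each head penalizes attention to a
    key in proportion to how far back it is from the query."""
    # Geometric slope schedule: 1/2, 1/4, ..., 1/256 for 8 heads.
    slopes = np.array([2.0 ** -(i + 1) for i in range(n_heads)])
    # dist[i, j] = j - i, clipped so positions at or after the query get 0
    # bias (the causal mask, not shown, removes future positions entirely).
    dist = np.arange(seq_len)[None, :] - np.arange(seq_len)[:, None]
    dist = np.minimum(dist, 0)
    return slopes[:, None, None] * dist[None]   # shape (heads, seq, seq)

bias = alibi_bias(n_heads=8, seq_len=4)
print(bias.shape)     # (8, 4, 4)
print(bias[0, 3, 0])  # -1.5: head 0 penalizes a key 3 positions back
```

Because the bias depends only on relative distance, the same formula extends to sequence lengths never seen in training, which is the "stretch the slope out" behavior that lets StoryWriter extrapolate past its 65k training window.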
But, you know, we played around for even longer than that, and I don't see why we couldn't go longer.

Swyx: Yeah. Um, and so for those who haven't read the blog post, you put The Great Gatsby in there and asked it to write an epilogue, which seemed pretty impressive.

Jonathan: Yeah. There are a bunch of epilogues floating around internally at Mosaic. That wasn't my favorite; I think we all have our own favorites. But there are a bunch of really, really good ones. There was one where, you know, it's Gatsby's funeral, and then Nick starts talking to Gatsby's ghost, and Gatsby's father shows up, and then he's [00:19:30] at the police station with Tom. It was very plot-heavy, like, this is what comes next. And a bunch of them were just very Fitzgerald-esque, you know, beautiful writing. Um, but it was cool to just see that, wow, the model seemed to actually be working with all this input. Yeah, yeah. Like, it's exciting. You can think of a lot of things you could do with that kind of context length.

FINE-TUNING FOR CREATIVITY [00:19:50]

Swyx: Is there a trick to fine-tuning for a creative task rather than, um, a factual task?

Jonathan: I don't know what it is, but probably, yeah. I think, you know, Alex, the person who did this, did fine-tune the model explicitly on books. The goal was to try to get a model that was really a story writer. But beyond that, I'm not entirely sure. Actually, it's a great question. Well, no, I'll ask you back: how would you measure that?

Swyx: Uh, God, human feedback is the solve to all things. Um, I think there is a labeling question, right? Uh, in computer vision, we had a really, really good episode with Roboflow on the Segment
Anything Model, where you actually start with human feedback on, I think, something like 0.5% of the overall final labels that you need. But then you sort of augment them, and then you fully automate them, um, which I think could be applied to text. It seems intuitive, and probably people like Snorkel have already raced ahead on this stuff, but I just haven't seen it applied in the language domain yet.

Jonathan: I mean, there are a lot of things that seem like they make a lot of sense in machine learning that never work, and a lot of things that make zero sense that seem to work. So, you know, I've given up trying to even predict. Yeah, yeah. Until I see the data or try it, I just kind of shrug my shoulders and, you know, hope for the best. Bring data or else, right? Yeah, [00:21:00] exactly.

Alessio: On the fine-tuning on books: Books3 is one of the big data sets, and there was the whole Twitter thing about it. And, you know, I used to be a community moderator at genius.com, and we ran into a lot of this: well, if you're explaining lyrics, do you have the right to redistribute the lyrics? I know you ended up changing the license on the model from commercial use permitted.

Swyx: Yeah, let's get into that. I'm not sure they actually did.

Jonathan: So we flipped it for about a couple hours.

Swyx: Um, okay. Can we introduce the story from the start, just for people who are out of the loop?

Jonathan: Yeah. So I can tell the story very simply. The Books3 data set does contain a lot of books. And it is, as I discovered, um, a data set that provokes very strong feelings from a lot of folks. Um, well, from one person in particular, in fact. Um, and that's about it. But it turns out one person who wants a lot of attention can, you know, get enough attention that we're talking about it now.
And so we had a discussion internally after that conversation, and we talked about flipping the license, and, you know, very late at night I thought, maybe it's a good thing to do. And then we decided it was probably better to just stand pat; the license is still Apache 2.0. And one of the conversations we had was, and we hadn't thought about this cuz we had our heads down, but the Hollywood writers' strike took place basically the moment we released the model. Mm-hmm. Um, we were releasing a model that could do AI-generated creative content, and that is one of the big sticking points during the strike.

Swyx: Oh, the optics are not good.

Jonathan: So the optics aren't good, and that's not what we want to convey. This is really a demo of the ability to do really long sequence lengths, and, boy, you know, [00:22:30] that's not timing that we appreciated. And so we talked a lot internally that night, like, okay, we've had time to read the news, we've had time to take a breath, we don't really love this. We came to the conclusion that it's better to just leave it as it is now and learn the lesson for the future. But certainly that was one of my takeaways: there's a societal context around this stuff that it's easy to forget when you're in the trenches just trying to get the model to train. And, you know, in hindsight, I might've gone with a different thing than a story writer. I might've gone with, you know, a coder, because we seem to have no problem putting programmers out of work with these models.

Swyx: Oh yeah. Please, please, you know, take away this stuff from me.

OPEN SOURCE LICENSES AND ETHICAL CONSIDERATIONS [00:23:00]

Jonathan: Right. You know, so I think, really, the copyright concerns I leave to the lawyers.
Um, if I learned one thing teaching at a law school, it was that I'm not a lawyer, and all this stuff is complicated. Especially since open source licenses were not designed for this kind of world. They were designed for a world of forcing people to be more open, not forcing people to be more closed. And I think, you know, that was part of the impetus here: trying to use licenses to make things more closed. Um, which is, I think, against the grain of the open source ethos. So that struck me as a little bit strange. But I think the most important part is, you know, we wanna be thoughtful and we wanna do the right thing. And in that case, you know, I hope that, with all that interesting licensing fun you saw, we're trying to be really thoughtful about this, and it's hard. I learned a lot from that experience.

Swyx: There's also, I think, an open question of fair use, right? Is training on words fair use? Because you don't have a monopoly on words, but on certain arrangements of words you do. And who is to say how much is memorization by a model versus actually learning and internalizing, and then sometimes happening to land at the [00:24:00] same result?

Jonathan: And if I've learned one lesson, I'm not gonna be the person to answer that question. Right, exactly. And so my position is, you know, we will try to make this stuff open and available. Yeah. And, you know, let the community make decisions about what they are or aren't comfortable using. Um, and at the end of the day, you know, it still strikes me as a little bit weird that someone is trying to use these open source licenses to close the ecosystem, and not to make things more open. That's very much against the ethos of why these licenses were created.

Swyx: So the official Mosaic position, I guess, is: before you use MPT-7B for anything commercial, check with your own lawyers; don't just trust Mosaic's lawyers.

Jonathan: Yeah, okay. Yeah.
You know, our lawyers are not your lawyers. Exactly. And, you know, make the best decision for yourself. We've tried to be respectful of the content creators, and, you know, at the end of the day, this is complicated. This is new law; it's law that hasn't been established yet. Um, but it's a place where we're gonna continue to try to do the right thing. Um, and I think one of the commenters, and I really appreciated this, said, well, they're trying to do the right thing, but nobody knows what the right thing even is to do. You know, I guess the most "right" thing would've been to literally not release a model at all. But I don't think that would've been the best thing for the community either.

Swyx: Cool. Well, thanks. Well handled. Uh, we had to cover it, just cause...

Jonathan: Oh, yes, no worries. A big piece of news. It's been on my mind a lot.

TRAINING STABILITY ENHANCEMENT [00:25:15]

Swyx: Yeah. Well, you've been very thoughtful about it. Okay. So a lot of these other ideas, in terms of architecture, FlashAttention, ALiBi, and the data sets, were contributions from the rest of the, let's just call it, open community of machine learning advancements. Uh, but Mosaic in [00:25:30] particular had some stability improvements to mitigate loss spikes, quote unquote, which I took to mean your existing set of tools. I don't wanna put words in your mouth, but when you say things like "please enjoy my empty logbook," how much of an oversell is that? How much is that marketing versus how much is that reality?

Abhinav: Oh yeah. That one's real. Yeah. It's fully end-to-end. Um, and I think...

Swyx: So maybe, like, what specific features of MosaicML?

Abhinav: Totally, totally. Yeah. I think I'll break it into two parts. One is training stability, right?
Knowing that your model's gonna basically get to the end of the training without loss spikes. Um, and I think, you know, at the 7B scale, for some models it's not that big of a deal. But as you train for longer and longer durations, we found that it's trickier and trickier to avoid these loss spikes. And so we actually spent a long time figuring out, you know, what can we do about our initialization, about our optimizers, about the architecture, that basically prevents these loss spikes. And, you know, even in our training run, if you zoom in, you'll see small intermittent spikes, but they recover within a few hundred steps. And so that's kind of the magical bit. Our line-one defense is that we recover from loss spikes just naturally, right? Mm-hmm. Our line-two defense was that we used determinism and really smart resumption strategies, so that if something catastrophic happened, we could resume very quickly, like a few batches before, and apply some of those interventions. So we had these kinds of preparations, like a plan B, but we didn't have to use them at all for the MPT-7B training. So that was kind of a lucky break. And the third part of getting all the way to the empty logbook is having the right training infrastructure. [00:27:00] So this is basically one of the big selling points of the platform. When you try to train these models on hundreds of GPUs, and not many people outside of deep industry research know this, the GPUs fail a lot. Um, I would say almost once every thousand A100-days. So for us, on a big 512-GPU cluster, every two days, basically, the run will fail. Um, and this is either due to GPUs, like, "falling off the bus" (that's a real error we see), or networking failures, or something like that.
And so in those situations, what people have normally done is they'll have an on-call team that's just sitting around the clock, 24/7, on Slack, in case something goes wrong. And then they'll basically try to inspect the cluster, take out the nodes that are broken, restart the run, and it's a huge pain. Like, we ourselves did this for a few months. And as a result of that, because we're building such a platform, we step by step automated every single one of those processes. So now, when a run fails, we have this automatic kind of watchdog that's watching. It'll basically stop the job, test the nodes, cordon off any that are broken, and relaunch it. And because our software is all deterministic and has fast resumption, it just continues on gracefully. So within that log, you can see sometimes, I think maybe at like 2:00 AM or something, the run failed, and within a few minutes it's back up and running, and all of us are just sleeping peacefully.

Jonathan: I do wanna say that was hard-won. Mm-hmm. Um, certainly this is not how things were going, you know, many months ago. For hardware failures, we had on-calls who were getting up at two in the morning to figure out which node had died, for what reason, restart the job, and cordon the node. [00:28:30] Um, we were seeing catastrophic loss spikes really frequently, even at the 7B scale, that were just completely derailing runs. And so this was step by step, just ratcheting our way there, as Abhi said, to the point where many models are training at the moment, and I'm sitting here in the studio and not worrying one bit about whether the runs are gonna continue. Yeah.

Swyx: I'm not so much of a data center hardware kind of guy, but isn't there existing software to do this for CPUs? And, like, what's different about this domain?
Does this question make sense at all?

Jonathan: Yeah. So when I think about, like, I think back to all the Google fault tolerance papers I read, you know, as an undergrad or grad student, mm-hmm, about, you know, building distributed systems. A lot of it is that, you know, each CPU is doing, say, an individual unit of work. You've got a database that's distributed across your cluster. You wanna make sure that one CPU failing, or one machine failing, can't, you know, delete data. So you replicate it. You know, you have protocols like Paxos where you've literally got state machines that are replicated with, you know, with leaders and backups and things like that. And in this case, you're performing one giant computation where you cannot afford to lose any node. If you lose a node, you lose model state. If you lose a node, you can't continue. It may be that in the future we actually, you know, create new versions of a lot of our distributed training libraries that do have backups and where data is replicated, so that if you lose a node, you can detect what node you've lost and just continue training without having to stop the run, you know, pull from a checkpoint, yeah, restart again on different hardware. But for now, we're certainly in a world where if anything dies, that's the end of the run and you have to go back and recover from it. [00:30:00]

DATA READINESS & TRAINING PREPARATION [00:30:00]

Abhinav: Yeah. Like, I think a big word there is synchronous data parallelism, right? So, like, we're basically saying that on every step, every GPU is gonna do some work. They're gonna stay in sync with each other and average their gradients and continue. Now, there are algorithmic techniques to get around this. Like, you could say, oh, if a GPU dies, just forget about it. All the data that it was gonna see, we'll just forget about it.
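The synchronous data parallelism being described, where every worker computes a gradient on its own data shard and then all workers average so they stay in sync, can be sketched in plain Python. This is a stand-in for the all-reduce that frameworks perform across GPUs; the toy least-squares objective is invented purely for illustration:

```python
# Minimal sketch of one synchronous data-parallel step: each "GPU"
# computes a gradient on its shard, the gradients are averaged (the
# all-reduce), and every worker applies the identical update.

def local_gradient(weights, shard):
    # Toy gradient of mean squared error for a 1-D linear model on this shard.
    return sum(2 * (weights * x - y) * x for x, y in shard) / len(shard)

def synchronous_step(weights, shards, lr=0.1):
    grads = [local_gradient(weights, s) for s in shards]  # one per worker
    avg = sum(grads) / len(grads)                         # all-reduce (average)
    return weights - lr * avg                             # same update everywhere

# Two shards of (x, y) pairs all satisfying y = 2x, split across "workers".
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
w = 0.0
for _ in range(100):
    w = synchronous_step(w, shards)
print(round(w, 3))  # prints 2.0
```

The fragility Jonathan describes falls out of this structure: the averaged update only exists if every worker reports in, so losing one node stalls the whole step unless you either drop that worker's data (giving up determinism) or restore it from a checkpoint.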
We're not gonna train on it. But we don't like to do that currently because, um, it makes us give up determinism, stuff like that. Maybe in the future, as you go to extreme scales, we'll start looking at some of those methods. But at the current time, it's like, we want determinism. We wanted to have a run that we could perfectly replicate if we needed to. And the goal is to figure out how to run it on a big cluster without humans having to babysit it.

Alessio: So as you mentioned, these models are kind of the starting point for a lot of your customers. You have an inference product, you have a training product. You previously had a Composer product that is now, kind of, not rolled into, but you have a superset of it, which is the LLM Foundry. How are you seeing that change, you know, from the usual MLOps stack and how people trained things before, versus now, when they're starting from, you know, one of these MPT models and going from there? Like, what should teams think about as they come to you and start their journey?

Jonathan: So I think there's a key distinction to make here, which is, you know, when you say starting from MPT models, you can mean two things. One is actually starting from one of our checkpoints, which I think very few of our customers are actually going to do, and one is starting from our configuration. You can look at our friends at Replit for that, where, you know, MPT was in progress when Replit [00:31:30] came to us and said, hey, we need a 3 billion parameter model by next week on all of our data. We're like, well, here you go. This is what we're doing, and if it's good enough for us, um, hopefully it's good enough for you. And that's basically the message we wanna send to our customers.
MPT is basically clearing a path all the way through, where they know that they can come bring their data, they can use our training infrastructure, they can use all of our amazing orchestration and other tools that Abhi just mentioned for fault tolerance. They can use Composer, which is, you know, still at the heart of our stack. And then the LLM Foundry is really the specific model configuration. They can come in and they know that thing is gonna train well, because we've already done it multiple times.

Swyx: Let's dig in a little bit more on what people should have ready before they come talk to you. So: data, architecture, the evals that they're looking at, et cetera.

Abhinav: Yeah, I mean, I think we'll accept customers at any kind of stage in their pipeline. You know, I'd say there's archetypes of people who have built products around some of these API companies and reach a stage or maturity level where it's like, we want our own custom models now, either for the purpose of reducing cost, right? Like, our inference service is quite a bit cheaper than using APIs. Or because they want some kind of customization that you can't really get from the other API providers. I'd say the most important things to have before training a big model: you know, you wanna have good eval metrics, you know, some kind of score that you can track as you're training your models and scaling up, that can tell you you're progressing. And it's really funny, like, a lot of times customers will be really excited about training the models, right? It's really fun to, like, launch jobs on hundreds of GPUs. It's super fun. But then they'll be like, but wait, what are we gonna measure? Not just the training loss, right? It's gotta be more than that. [00:33:00] So eval metrics is a good prereq. Also, you know, your data: you know, either coming with your own pre-training or fine-tuning data, and having a strategy to clean it, or we can help clean it too.
I think we're building a lot of tooling around that. And I think once you have those two kinds of inputs, and sort of the budget that you want, we can pretty much walk you through the rest of it, right? Like, that's kind of what we do. Recently we helped build CRFM's model for biomedical language a while back.

Jonathan: Um, that's the Center for Research on Foundation Models.

Abhi: Exactly, exactly.

Jonathan: Spelling it out for people. Of course.

Abhinav: No, absolutely. Yeah, yeah. No, you've done more of these than I have. Um, I think, uh, basically it's sort of, we can help you figure out what model I should train to scale up, so that when I go for my big run, my hero run, it's predictable. You can feel confident that it's gonna work, and you'll kind of know what quality you're gonna get out before you have to spend, like, a few hundred thousand dollars.

DYNAMIC REAL-TIME MODEL EVALUATION [00:34:00]

Alessio: Reza from Replit was on the podcast last week, and, uh, they had HumanEval and then AmjadEval, which is, like, vibe-based.

Jonathan: And I do think the vibe-based eval cannot be, you know, underrated. Really, I mean, at the end of the day, we did poke at our models and do vibe checks, and as we monitored our models, one of our evals was, we just had a bunch of prompts and we would watch the answers as the model trained and see if they changed. Cuz honestly, you know, I don't really believe that any of these eval metrics capture what we care about. Mm-hmm. But when you ask it, uh, you know, I don't know, I think one of our prompts was to suggest games for a three-year-old and a seven-year-old that would be fun to play. Like, that was a lot more [00:34:30] valuable to me personally, to see how that answer evolved and changed over the course of training. So, you know, and HumanEval, just to clarify for folks, HumanEval is an automated evaluation metric.
There's no humans in it at all. It's really badly named. I got so confused the first time that someone brought that to me, and I was like, no, we're not bringing humans in. It's like, no, it's automated. They just gave it a bad name, and there's only a hundred-some examples in it or something.

Abhinav: Yeah. And it's for code specifically, right?

Jonathan: Yeah. It's a weird, confusing name that I hate. But, you know, when other metrics are called HellaSwag, like, you know, you just gotta roll with it at this point.

Swyx: You're doing live evals now. So one of the tweets that I saw from you was that it's, uh, important that you do it parallelized. Uh, maybe you kind of wanna explain what you guys did.

Abhinav: Yeah, for sure. So with LLM Foundry, there's many pieces to it. There's obviously the core training piece, but there's also, you know, tools for evaluation of models. And we've kind of had, I think, like, the fastest evaluation framework. Um, basically it's multi-GPU compatible, it runs with Composer, it can support really, really big models. So basically our framework runs so fast that even as our models are training, we can run these metrics live during the training. So, like, if you have a dashboard, like Weights & Biases, you can kind of watch all these eval metrics. We have, like, 15 or 20 of them, honestly, that we track during the run, and they add negligible overhead. So we can actually watch as our models go and feel confident. Like, it's not like we wait until the very last day to test if the model is good or not.

Jonathan: That's amazing. Yeah. I love that we've gotten this far into the conversation and we still haven't talked about efficiency and speed. Those are usually our two watchwords at Mosaic, which is, you know, that's great. That says that we're [00:36:00] doing a lot of other cool stuff. But at the end of the day, um, you know, cost comes first.
If you can't afford it, it doesn't matter. And so, you know, getting things down cheap enough that we can monitor in real time, getting things down cheap enough that we can even do it in the first place: that's the basis for everything we do.

OPEN SCIENCE FOR AFFORDABLE AI RESEARCH [00:36:00]

Alessio: Do you think a lot of the questions that we have around, you know, what datasets we should use and things like that are just because training was so expensive before, that we just haven't run enough experiments to figure that out? And is that one of your goals, trying to make it cheaper so that we can actually get the answers?

Jonathan: Yeah, that's a big part of my personal conviction for being here. I think I'm still, in my heart, the second-year grad student who was jealous of all his friends who had GPUs when he didn't, and couldn't train any models except on his laptop. And, I mean, the lottery ticket experiments began on my laptop. I had to beg for one K80 so that I could run MNIST. And I'm still that person deep down in my heart. And I'm a believer that, you know, if we wanna do science and really understand these systems, understand how to make them work well, understand how they behave, understand what makes them safe and reliable, we need to make it cheap enough that we can actually do science. And science involves running dozens of experiments. When I finally, you know, cleaned out my GCS bucket from my PhD, I deleted a million model checkpoints. I'm not kidding. There were over a million model checkpoints. That is the kind of science we need. You know, that's just what it takes. In the same way that if you're in a biology lab, you don't just grow one cell and say, like, eh, the drug seems to work on that cell. Like, there's a lot more science you have to do before you really know.

Abhinav: Yeah.
And I think one of the special things about Mosaic's kind of [00:37:30] position as well is that we have so many customers all trying to train models that we basically have the incentive to devote all these resources and time to do this science. Because when we learn which pieces actually work and which ones don't, we get to help many, many people, right? And so that kind of aggregation process, I think, is really important for us. I remember way back there was a paper from Google that basically investigated batch sizes or something like that. And it was this paper that must have cost a few million dollars to run all the experiments. And it was just like, wow, what a benefit to the whole community. Now we all get to learn from that, and we get to save. We don't have to spend those millions of dollars anymore. So I think, um, that kind of Mosaic-scale science, like the insights we get on data, on pre-training, on architecture, on all these different things, um, that's why customers come to us.

Swyx: Yeah, you guys did some really good stuff on PubMedGPT as well. That's the first time I heard of you. And that's also published to the community.

Abhinav: Yeah, that one was really fun. We were like, well, no one's really trained fully-from-scratch domain-specific models before. Like, what if we just did a biomed one? Would it still work? And, uh, yeah, I was really excited that it did. Um, we'll probably have some follow-up soon, I think, later this summer.

Jonathan: Yeah. Yes. Stay tuned on that. Um, but I will say, just in general, it's a really important value for us to be open in some sense. We have no incentive not to be open. You know, we make our money off of helping people train better. There's no cost to us in sharing what we learn with the community. Cuz really, at the end of the day, we make our money off of those custom models and great infrastructure and putting all the pieces together.
That's honestly where the Mosaic name came from. Not off of, like, oh, we've got, you know, this one cool secret trick [00:39:00] that we won't tell you, or, you know, closing up. You know, in the past couple weeks I've talked to my friends at places like Brain, or, you know, what used to be Brain, now Google DeepMind. Oh, RIP Brain. Yeah, RIP Brain. I spent a lot of time there and it was really a formative time for me. Um, so I miss it. But, you know, I kind of feel like we're one of the biggest open research labs left in industry, which is a very sad state of affairs, because we're not very big. Um, but, can you say how big the team is, actually? Yeah, we're about 15 researchers. So we're tiny compared to, you know, the huge army of researchers I remember at Brain, or at FAIR, or at DeepMind back, you know, when I was there during their heydays. Um, you know, but everybody else is kind of, you know, closed up and isn't saying very much anymore. Yeah. And we're gonna keep talking and we're gonna keep sharing, and, you know, we will try to be that vanguard to the best of our ability. We're very small, and I can't promise we're gonna do what those labs used to do in terms of scale or quantity of research, but we will share what we learn, and we will try to create resources for the community. Um, I dunno, I just, I believe in openness fundamentally. I'm an academic at heart, and it's sad to me to watch that go away from a lot of the big labs.

THE OPEN APPROACH [00:40:15]

Alessio: We just had a live pod about the, you know, "no moat" post that came out, and it was one of the first times I really dove into LoRA and some of these new techniques. Like, how are you thinking about what it's gonna take for the open approach to really work? Obviously today, GPT-4 is still, you know, the state-of-the-art model for a [00:40:30] lot of tasks.
Do you think some of the innovation in kind of fine-tuning methods that we have today is enough, if enough people like you guys are running these research groups that are open? Or do you think we still need a step-function improvement there?

Jonathan: I think one important point here is the idea of coexistence. I think when you look at, I don't know, who won, Linux or Windows? The answer is yes. Microsoft bought GitHub and has a Windows Subsystem for Linux. Linux runs a huge number of our servers, and Microsoft is still a wildly profitable company, probably the most successful tech company right now. So who won, open source or closed source? Yes. Um, and I think that's a similar world that we're gonna be in here, where, you know, it's gonna be different things for different purposes. I would not run Linux on my laptop personally, cuz I like connecting to wifi and printing things. But I wouldn't run Windows on one of my servers. And so I do think what we're seeing with a lot of our customers is: do they choose OpenAI or Mosaic? Yes. There's a purpose for each of these. You have to send your data off to somebody else with OpenAI's models. That's a risk. GPT-4 is amazing, and I would never promise someone that if they come to Mosaic, they're gonna get a GPT-4-quality model. That's way beyond our means and not what we're trying to do anyway. But there's also a whole world of, you know, domain-specific models, context-specific models, that are really specialized, proprietary, trained on your own data, that can do things that you could never do with one of these big models. You can customize in crazy ways. Like, GPT-4 is not gonna hit 65K context length for a very long time, cuz they've already trained that [00:42:00] model, and, you know, they haven't even released the 32K version yet. So we can, you know, do things differently by being flexible. So I think the answer to all this is yes. But we can't let the open source ecosystem disappear.
And that's the scariest thing for me. I hear a lot of talk in academia about, you know, whatever happened to that academic research on this field called information retrieval? Well, in 1999 it disappeared. Why? Because Google came along, and who cares about information retrieval research when, you know, you have a Google-scale, you know, web-scale database? So, you know, there's a balance here. We need to have both.

Swyx: I wanna applaud you there. We'll maybe edit in a little, like, crowd applause at that line. Cuz I think that, um, that is something that, as a research community, as people interested in progress, we need to see these things, instead of just, uh, seeing marketing papers advertising GPT-4.

Jonathan: Yeah. I think, you know, to get on my soapbox for 10 more seconds. Go ahead. When I talk to policymakers about, you know, the AI ecosystem, the usual fear that I bring up is: innovation will slow because of lack of openness. I've been complaining about this for years, and it's finally happened. Hmm. Why was Google sharing, you know, these papers? Why was OpenAI sharing these papers? There are a lot of reasons. You know, I have my own beliefs, but it's not something we should take for granted that everybody's sharing the work that they do. And it turns out, well, I think we took it for granted for a while, and now it's gone. I think it's gonna slow down the pace of progress. In a lot of cases, each of these labs has a bit of a monoculture, and being able to pass ideas [00:43:30] back and forth was a lot of what kept, you know, scientific progress moving. So it's imperative, not just, you know, for the open source community and for academia, but for the progress of technology, that we have a vibrant open source research community.

THE FUTURE OF MOSAIC [00:44:11]

Swyx: There's a preview of the ecosystem commentary that we're gonna do. But I wanna close out some stuff on Mosaic. You launched a bunch of stuff this month.
A lot of stuff. Uh, actually, I was listening to you on Gradient Dissent, uh, and other podcasts we know and love, and you said you were not gonna do inference. And last week you were like, here's MosaicML Inference. Oops. So maybe just, at a high level, what was MosaicML, and what is it growing into? Like, how do you conceptualize this?

Jonathan: Yeah, and I will say, when Gradient Dissent was recorded, we weren't doing inference and had no plans to do it. It took a little while for the podcast to get out. Um, in the meantime, basically, you know, one thing I've learned at a startup, and I'm sure Abhi can comment on this as well: focus is the most important thing. We have done our best work when we've been focused on doing one thing really well, and our worst work when we've tried to do lots of things. Yeah. So we didn't want to do inference. We didn't want to have had to do inference. Um, and at the end of the day, our customers were begging us to do it, because they wanted a good way to serve the models and they liked our ecosystem. And so in some sense, we got dragged into it kicking and screaming. We're very excited to have a product. We're going to put our best foot forward and make something really, truly amazing. But, you know, that's something that we were reluctant to do. You know, our customers convinced us it would be good for our business. It's been wonderful for business, and we are gonna put everything into this. But, you know, back when Gradient Dissent came out, or when we recorded it, I [00:45:00] was thinking, like, oh God, focus is the most important thing. I've learned that the hard way multiple times at Mosaic. Abhi can tell you, like, you know, I've made a lot of mistakes by not focusing enough. Um, boy, inference, that's a whole second thing, and a whole different animal from training.
And at the end of the day, when we founded the company, our belief was that inference was relatively well served at that time. There were a lot of great inference companies out there. Um, training was not well served, especially efficient training. And we had something to add there. I think we've discovered that, as the nature of the models has changed, the nature of what we had to add to inference changed a lot, and there became an opportunity for us to contribute something. But that was not the plan. But now we do wanna be the place that people come when they wanna train these big, complex, difficult models and know that it's gonna go right the first time, and they're gonna have something they can serve right away. Um, you know, really the Replit example: you know, with 10 days to go, saying, hey, can you please train that model? And, you know, three or four days later the model was trained, and we were just having fun doing interesting fine-tuning work on it for the rest of the 10 days, you know. That also requires good inference.

Swyx: That's true, that's true. Like, so, running evals and fine-tuning. I'm just putting my business hat on, and, you know, Alessio as well. Like, uh, I've actually had fights with potential co-founders about this, about the primary business almost being training, right? Like, essentially a one-time cost.

Jonathan: Who told you it was a one-time cost? Who told you that?

Swyx: No, no, no, no. Correct me.

Jonathan: Yeah. Let me correct you in two ways. Um, as our CEO Naveen would say if he were here: when you create version 1.0 of your software, do you then fire all the engineers? Of [00:46:30] course not. Like, MPT has a thousand different things we wanted to do that we never got to. So, you know, there will be future models.

Abhinav: And the data that it's been trained on is also changing over time too, right?
If you wanna ask it anything about, I guess, like, May of 2023, we'll have to retrain it further, and so on, right? And I think this is especially true for customers who run the kind of things that need to be up to date on world knowledge. So I think, like, you know, the other thing I would say too is that the models we have today are certainly not the best models we'll ever produce, right? They're gonna get smaller, they're gonna get faster, they're gonna get cheaper, they're gonna get lower latency, they're gonna get higher quality, right? And so you always want the next-gen version of MPT, and the one after that, and the one after that. There's a reason that even the GPT series goes 3, 4, and we know there's gonna be a 5, right? Um, so I also don't see it as a one-time cost.

Jonathan: Yeah. And, if you wanna cite a stat on this, there are very, very
