Podcasts about PowerPoints

  • 550 PODCASTS
  • 2,103 EPISODES
  • 28m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Feb 3, 2026 LATEST

POPULARITY (trend chart, 2019–2026)

Best podcasts about PowerPoints

Latest podcast episodes about PowerPoints

The Steve Harvey Morning Show
Follow Your Passion: The Queen of AI's mission is to empower the African American community to become leaders in AI.


Feb 3, 2026 · 32:16 · Transcription Available


Two-time Emmy and three-time NAACP Image Award-winning television executive producer Rushion McDonald interviewed Alicia Lyttle.

SUMMARY OF THE ALICIA LYTTLE INTERVIEW
From “Money Making Conversations Master Class” with Rushion McDonald

1. Purpose of the Interview
The purpose of this interview was to:
• Showcase Alicia Lyttle, CEO and co-founder of Air Innovations, known widely as the “Queen of AI.”
• Educate small business owners, entrepreneurs, and nonprofits on how to leverage AI for growth.
• Highlight her mission to empower the African American community to not only keep up with AI—but lead in it.
• Demonstrate how AI tools can transform operations, content creation, finances, and productivity in minutes instead of months.
• Inspire listeners through her entrepreneurial journey, professional pivots, and personal resilience.

2. High-Level Summary
Alicia Lyttle returns to the show two years after her last appearance, now positioned at the forefront of the global AI movement. She explains how her work has shifted from annual summits to monthly AI Business Summits, teaching tens of thousands of entrepreneurs how to use AI hands-on for content, marketing, operations, and scaling. She breaks down how simple tools—such as NotebookLM, ChatGPT, Jasper, Gemini, and HeyGen—can turn a single piece of content into newsletters, PowerPoints, videos, study guides, and more. She stresses that AI is now accessible, especially with free versions like ChatGPT.

Alicia also shares her origin story in AI, beginning with a 15-year-old speaker at Walmart Tech Live describing IBM Watson. This sparked her fascination and ultimately led her to pivot her entire company toward full-time AI training and consulting by 2022—despite skepticism from her peers. She details the massive growth of her brand, including 21,000+ live summit attendees and explosive social media expansion. The interview also addresses AI's role in finance, healthcare, government, and job disruption, and how individuals can future-proof themselves. Her personal story of overcoming a restrictive ex-husband who told her she would “never speak again” underscores her powerful message: no one should silence your gifts. Now she speaks to thousands, leads major events, and helps others build new careers in AI.

3. Key Takeaways
A. AI Is Evolving Fast—and So Must We
AI is changing so quickly that entrepreneurs cannot afford to wait for annual updates. This is why Alicia shifted to monthly training summits. People need ongoing education to stay competitive.

B. Hands-On AI Education Is the Key
Alicia doesn't just lecture—she walks participants through real demonstrations: uploading YouTube links, creating summaries, and generating emails, mind maps, PowerPoints, quizzes, videos, and more, all from a single input. Her approach eliminates fear and teaches entrepreneurs how to use AI immediately.

C. Accessibility Has Changed the Game
The release of ChatGPT, especially the free version, democratized AI. Before that, tools like IBM Watson were too complex and expensive. Now anyone with a laptop and an internet connection can build websites, write content, or automate business flows in minutes.

D. The African American Community Must Lead—Not Follow
Alicia emphasizes that historically, Black communities have been “last in line” in tech innovation, but this AI era presents a once-in-a-generation opportunity to jump ahead. She sees it as her mission to speak everywhere Black entrepreneurs are to ensure they seize this moment.

E. AI Will Replace Tasks—But People Can Future-Proof Themselves
Jobs are already shifting, and companies are laying off non-AI-literate employees. Alicia urges people to become AI-fluent, join AI committees at work, pursue certification, and use AI to become their company's internal expert. “There's no maybe—you have to learn AI,” she warns.

F. AI Is Transforming Every Sector: Finance, Healthcare, Government
She provides insights on AI receptionists (“Monica” and “Leslie”) that boost customer interaction to 92%, financial analysis using secure ChatGPT setups, AI mental health companions, and government calls for national AI leadership.

G. Alicia Monetizes Through Education, Certification & Consulting
Her business model includes free monthly summits, paid masterclasses, corporate consulting, AI certifications, and live Atlanta workshops. She teaches others to become AI consultants too.

H. Her Personal Triumph Story Inspires Thousands
A powerful moment comes when she recounts her ex-husband saying: “There's only one quarterback on a team—and you will never speak again.” Yet today, 1,200+ people attend her live events, and tens of thousands join her virtual trainings. Her success proves resilience and purpose overcome adversity.

4. Key Quotes
On AI opportunity: “Never has there been a better time in history to start, build, or scale a business than right now.”
On training entrepreneurs: “Open your laptops… use the same prompt I use. See what results you get.”
On the power of AI tools: “You can take one episode and repurpose it into all these different content ways.”
On pivoting her entire company: “In 2022, I said we're closing this business and going all in on AI.”
On being Black in tech: “My mission is to make sure our community is not left behind—but ahead of the curve.”
On personal resilience: “You will be speaking on the best stages… people will come to see you.” (A friend's response after she was told she'd “never speak again.”)
On future-proofing careers: “Those using AI will replace you. You have to learn how to leverage AI.”
On AI as a human-first technology: “AI plus human intelligence—that's what takes things to the next level.”

#SHMS #STRAW #BEST
Support the show: https://www.steveharveyfm.com/
See omnystudio.com/listener for privacy information.

Strawberry Letter
Follow Your Passion: The Queen of AI's mission is to empower the African American community to become leaders in AI.


Feb 3, 2026 · 32:16 · Transcription Available


Two-time Emmy and three-time NAACP Image Award-winning television executive producer Rushion McDonald interviewed Alicia Lyttle.

SUMMARY OF THE ALICIA LYTTLE INTERVIEW
From “Money Making Conversations Master Class” with Rushion McDonald

1. Purpose of the Interview
The purpose of this interview was to:
• Showcase Alicia Lyttle, CEO and co-founder of Air Innovations, known widely as the “Queen of AI.”
• Educate small business owners, entrepreneurs, and nonprofits on how to leverage AI for growth.
• Highlight her mission to empower the African American community to not only keep up with AI—but lead in it.
• Demonstrate how AI tools can transform operations, content creation, finances, and productivity in minutes instead of months.
• Inspire listeners through her entrepreneurial journey, professional pivots, and personal resilience.

2. High-Level Summary
Alicia Lyttle returns to the show two years after her last appearance, now positioned at the forefront of the global AI movement. She explains how her work has shifted from annual summits to monthly AI Business Summits, teaching tens of thousands of entrepreneurs how to use AI hands-on for content, marketing, operations, and scaling. She breaks down how simple tools—such as NotebookLM, ChatGPT, Jasper, Gemini, and HeyGen—can turn a single piece of content into newsletters, PowerPoints, videos, study guides, and more. She stresses that AI is now accessible, especially with free versions like ChatGPT.

Alicia also shares her origin story in AI, beginning with a 15-year-old speaker at Walmart Tech Live describing IBM Watson. This sparked her fascination and ultimately led her to pivot her entire company toward full-time AI training and consulting by 2022—despite skepticism from her peers. She details the massive growth of her brand, including 21,000+ live summit attendees and explosive social media expansion. The interview also addresses AI's role in finance, healthcare, government, and job disruption, and how individuals can future-proof themselves. Her personal story of overcoming a restrictive ex-husband who told her she would “never speak again” underscores her powerful message: no one should silence your gifts. Now she speaks to thousands, leads major events, and helps others build new careers in AI.

3. Key Takeaways
A. AI Is Evolving Fast—and So Must We
AI is changing so quickly that entrepreneurs cannot afford to wait for annual updates. This is why Alicia shifted to monthly training summits. People need ongoing education to stay competitive.

B. Hands-On AI Education Is the Key
Alicia doesn't just lecture—she walks participants through real demonstrations: uploading YouTube links, creating summaries, and generating emails, mind maps, PowerPoints, quizzes, videos, and more, all from a single input. Her approach eliminates fear and teaches entrepreneurs how to use AI immediately.

C. Accessibility Has Changed the Game
The release of ChatGPT, especially the free version, democratized AI. Before that, tools like IBM Watson were too complex and expensive. Now anyone with a laptop and an internet connection can build websites, write content, or automate business flows in minutes.

D. The African American Community Must Lead—Not Follow
Alicia emphasizes that historically, Black communities have been “last in line” in tech innovation, but this AI era presents a once-in-a-generation opportunity to jump ahead. She sees it as her mission to speak everywhere Black entrepreneurs are to ensure they seize this moment.

E. AI Will Replace Tasks—But People Can Future-Proof Themselves
Jobs are already shifting, and companies are laying off non-AI-literate employees. Alicia urges people to become AI-fluent, join AI committees at work, pursue certification, and use AI to become their company's internal expert. “There's no maybe—you have to learn AI,” she warns.

F. AI Is Transforming Every Sector: Finance, Healthcare, Government
She provides insights on AI receptionists (“Monica” and “Leslie”) that boost customer interaction to 92%, financial analysis using secure ChatGPT setups, AI mental health companions, and government calls for national AI leadership.

G. Alicia Monetizes Through Education, Certification & Consulting
Her business model includes free monthly summits, paid masterclasses, corporate consulting, AI certifications, and live Atlanta workshops. She teaches others to become AI consultants too.

H. Her Personal Triumph Story Inspires Thousands
A powerful moment comes when she recounts her ex-husband saying: “There's only one quarterback on a team—and you will never speak again.” Yet today, 1,200+ people attend her live events, and tens of thousands join her virtual trainings. Her success proves resilience and purpose overcome adversity.

4. Key Quotes
On AI opportunity: “Never has there been a better time in history to start, build, or scale a business than right now.”
On training entrepreneurs: “Open your laptops… use the same prompt I use. See what results you get.”
On the power of AI tools: “You can take one episode and repurpose it into all these different content ways.”
On pivoting her entire company: “In 2022, I said we're closing this business and going all in on AI.”
On being Black in tech: “My mission is to make sure our community is not left behind—but ahead of the curve.”
On personal resilience: “You will be speaking on the best stages… people will come to see you.” (A friend's response after she was told she'd “never speak again.”)
On future-proofing careers: “Those using AI will replace you. You have to learn how to leverage AI.”
On AI as a human-first technology: “AI plus human intelligence—that's what takes things to the next level.”

#SHMS #STRAW #BEST
See omnystudio.com/listener for privacy information.

Best of The Steve Harvey Morning Show
Follow Your Passion: The Queen of AI's mission is to empower the African American community to become leaders in AI.


Feb 3, 2026 · 32:16 · Transcription Available


Two-time Emmy and three-time NAACP Image Award-winning television executive producer Rushion McDonald interviewed Alicia Lyttle.

SUMMARY OF THE ALICIA LYTTLE INTERVIEW
From “Money Making Conversations Master Class” with Rushion McDonald

1. Purpose of the Interview
The purpose of this interview was to:
• Showcase Alicia Lyttle, CEO and co-founder of Air Innovations, known widely as the “Queen of AI.”
• Educate small business owners, entrepreneurs, and nonprofits on how to leverage AI for growth.
• Highlight her mission to empower the African American community to not only keep up with AI—but lead in it.
• Demonstrate how AI tools can transform operations, content creation, finances, and productivity in minutes instead of months.
• Inspire listeners through her entrepreneurial journey, professional pivots, and personal resilience.

2. High-Level Summary
Alicia Lyttle returns to the show two years after her last appearance, now positioned at the forefront of the global AI movement. She explains how her work has shifted from annual summits to monthly AI Business Summits, teaching tens of thousands of entrepreneurs how to use AI hands-on for content, marketing, operations, and scaling. She breaks down how simple tools—such as NotebookLM, ChatGPT, Jasper, Gemini, and HeyGen—can turn a single piece of content into newsletters, PowerPoints, videos, study guides, and more. She stresses that AI is now accessible, especially with free versions like ChatGPT.

Alicia also shares her origin story in AI, beginning with a 15-year-old speaker at Walmart Tech Live describing IBM Watson. This sparked her fascination and ultimately led her to pivot her entire company toward full-time AI training and consulting by 2022—despite skepticism from her peers. She details the massive growth of her brand, including 21,000+ live summit attendees and explosive social media expansion. The interview also addresses AI's role in finance, healthcare, government, and job disruption, and how individuals can future-proof themselves. Her personal story of overcoming a restrictive ex-husband who told her she would “never speak again” underscores her powerful message: no one should silence your gifts. Now she speaks to thousands, leads major events, and helps others build new careers in AI.

3. Key Takeaways
A. AI Is Evolving Fast—and So Must We
AI is changing so quickly that entrepreneurs cannot afford to wait for annual updates. This is why Alicia shifted to monthly training summits. People need ongoing education to stay competitive.

B. Hands-On AI Education Is the Key
Alicia doesn't just lecture—she walks participants through real demonstrations: uploading YouTube links, creating summaries, and generating emails, mind maps, PowerPoints, quizzes, videos, and more, all from a single input. Her approach eliminates fear and teaches entrepreneurs how to use AI immediately.

C. Accessibility Has Changed the Game
The release of ChatGPT, especially the free version, democratized AI. Before that, tools like IBM Watson were too complex and expensive. Now anyone with a laptop and an internet connection can build websites, write content, or automate business flows in minutes.

D. The African American Community Must Lead—Not Follow
Alicia emphasizes that historically, Black communities have been “last in line” in tech innovation, but this AI era presents a once-in-a-generation opportunity to jump ahead. She sees it as her mission to speak everywhere Black entrepreneurs are to ensure they seize this moment.

E. AI Will Replace Tasks—But People Can Future-Proof Themselves
Jobs are already shifting, and companies are laying off non-AI-literate employees. Alicia urges people to become AI-fluent, join AI committees at work, pursue certification, and use AI to become their company's internal expert. “There's no maybe—you have to learn AI,” she warns.

F. AI Is Transforming Every Sector: Finance, Healthcare, Government
She provides insights on AI receptionists (“Monica” and “Leslie”) that boost customer interaction to 92%, financial analysis using secure ChatGPT setups, AI mental health companions, and government calls for national AI leadership.

G. Alicia Monetizes Through Education, Certification & Consulting
Her business model includes free monthly summits, paid masterclasses, corporate consulting, AI certifications, and live Atlanta workshops. She teaches others to become AI consultants too.

H. Her Personal Triumph Story Inspires Thousands
A powerful moment comes when she recounts her ex-husband saying: “There's only one quarterback on a team—and you will never speak again.” Yet today, 1,200+ people attend her live events, and tens of thousands join her virtual trainings. Her success proves resilience and purpose overcome adversity.

4. Key Quotes
On AI opportunity: “Never has there been a better time in history to start, build, or scale a business than right now.”
On training entrepreneurs: “Open your laptops… use the same prompt I use. See what results you get.”
On the power of AI tools: “You can take one episode and repurpose it into all these different content ways.”
On pivoting her entire company: “In 2022, I said we're closing this business and going all in on AI.”
On being Black in tech: “My mission is to make sure our community is not left behind—but ahead of the curve.”
On personal resilience: “You will be speaking on the best stages… people will come to see you.” (A friend's response after she was told she'd “never speak again.”)
On future-proofing careers: “Those using AI will replace you. You have to learn how to leverage AI.”
On AI as a human-first technology: “AI plus human intelligence—that's what takes things to the next level.”

#SHMS #STRAW #BEST
Steve Harvey Morning Show Online: http://www.steveharveyfm.com/
See omnystudio.com/listener for privacy information.

Business English Pod :: Learn Business English Online
BEP 103c – English Presentations Charts and Trends 1: Visuals


Feb 1, 2026 · 16:44


https://traffic.libsyn.com/secure/bizpod/BEP103c-Charts-1.mp3

Welcome back to Business English Pod for today's lesson on using visuals and describing charts and trends in an English presentation.

We've all sat through boring presentations with PowerPoints that are just slide after slide of too much text. If all you're doing is reading off your slides, then why give a presentation at all? And if your audience falls asleep, then you've effectively communicated nothing. If you really want to grab people's attention, you use visuals. That could mean not just pictures, but graphs and charts. There's no better way to represent data than with graphs. But the graph doesn't do all the work for you. You still need to give it life and make it a seamless part of your overall presentation. The first thing you might do is introduce the point you want to make before you use the visual. And remember that your audience might already have some understanding of the topic, so you should acknowledge that. You can also make it dramatic by using foreshadowing and highlighting important points. And just like in any presentation, it's good to use clear transitions between points and slides.

In today's dialog, we'll hear a presentation from Pat, a director with a cell, or mobile, phone manufacturer called Ambient. He's presenting to the company's sales team about how they've regained market share after a rough couple of years. We will hear how Pat uses visuals to enhance his presentation.

Listening Questions
1. At the start of his presentation, what does Pat say they will focus on?
2. When talking about the company called Sirus, what does Pat “draw people's attention” to?
3. What does Pat say to transition to showing information about Ambient?

Premium Members: PDF Transcript | Quizzes | PhraseCast | Lesson Module
Download: Podcast MP3

The post BEP 103c – English Presentations Charts and Trends 1: Visuals first appeared on Business English Pod :: Learn Business English Online.

Inside EMS
Oh, baby: Birth, breakthroughs and the Broselow tape blunder


Jan 30, 2026 · 28:16


Dr. Peter Antevy returns to the Inside EMS co-host seat this week, filling in for Kelly Grayson and bringing some serious pediatric firepower to the conversation. Host Chris Cebollero dives right into the latest buzz around the Broselow tape recall — yes, again — as Dr. Antevy unpacks what went wrong, why it matters and what EMS agencies should be doing about it now. He also shares exciting details on his brand-new, field-focused Newborn Resuscitation & Obstetrics course (NROC), built by EMS for EMS. Designed with two hours of online content (zero PowerPoints!) and a short, in-house skills lab, this course aims to tackle one of the most nerve-wracking call types. No more dragging medics to the hospital for NRP classes that don't translate to street-level care.

Also on deck: OB deserts, delayed cord clamping, why you might need to Saran-wrap a newborn (seriously), and what AI can — and can't — do for EMS. This one's packed with practical pearls, myth-busting insights and a whole lot of passion for pediatric education.

Quotable takeaways from Dr. Peter Antevy
“EMS is one specialty that AI will never take away, as far as like the human-to-human contact. We resuscitate people, we treat people who are seizing. AI will never do that. That's a good thing.”
“Academics and the hospital folks don't recognize the value that EMS brings to the table. They think we're ambulance drivers. It's time for them to wake up and recognize that we are the people who deliver babies. We are the people who resuscitate grandma, grandpa and the little kid.”

Enjoying Inside EMS? Email theshow@ems1.com to share feedback or suggest guests for future episodes.

Systemize Your Success Podcast
Why Your Staff Training Isn't Sticking—and How to Make It Actually Work with Dr Carrie Graham, Founder of COG Learning & Solutions, LLC | Ep 262


Jan 29, 2026 · 62:33


Leveraging AI
262 | Build an army of agents using Claude's Skills with no technical background with Isar Meitis


Jan 27, 2026 · 27:43 · Transcription Available


The W. Edwards Deming Institute® Podcast
Where is Quality Really Made? An Insider's View of Deming's World


Jan 26, 2026 · 54:35


In this episode, Bill Scherkenbach, one of W. Edwards Deming's closest protégés, and host Andrew Stotz discuss why leadership decisions shape outcomes far more than frontline effort. Bill draws on decades of firsthand experience with Deming and with businesses across industries. Through vivid stories and practical insights, the conversation challenges leaders and learners alike to rethink responsibility, decision-making, and what it truly takes to build lasting quality. Bill's powerpoint is available here. TRANSCRIPT 0:00:02.2 Andrew Stotz: My name is Andrew Stotz, and I'll be your host as we dive deeper into the teachings of Dr. W. Edwards Deming. Today, I'm continuing my discussions with Bill Scherkenbach, a dedicated protégé of Dr. Deming since 1972. Bill met with Dr. Deming more than a thousand times and later led statistical methods and process improvement at Ford and GM at Dr. Deming's recommendation. He authored the Deming Route to Quality and Productivity at Deming's behest and at 79, still champions his mentor's message: Learn, have fun, and make a difference. The discussion for today is, I think we're going to get an answer to this question. And the question is: Where is quality made? Bill, take it away.   0:00:44.9 Bill Scherkenbach: Where is quality made? I can hear the mellifluous doctor saying that. And the answer is: In the boardroom, not on the factory floor. And over and over again, he would say that it's the quality of the decisions that the management make that can far outweigh anything that happens on the shop floor. And when he would speak about that, he would first of all, because he was talking to the auto industry, he would talk about who's making carburetors anymore. "Nobody's making carburetors because it's all fuel injectors," he would say. And anyone who has been following this, another classic one is: Do you ever hear of a bank that failed? Do you think that failed because of mistakes in tellers' windows or calculations of interest? Heck no. But there are a whole bunch of other examples that are even more current, if you will. I mean, although this isn't that current, but Blockbuster had fantastic movies, a whole array of them, the highest quality resolutions, and they completely missed the transition to streaming. And Netflix and others took it completely away from them because of mistakes made in the boardroom. You got more recently Bed Bath & Beyond having a great product, a great inventory.   0:02:51.4 Bill Scherkenbach: But management took their eyes off of it and looked at, they were concerned about stock buybacks and completely lost the picture of what was happening. It was perfect. It was a great product, but it was a management decision. WeWork, another company supplying office places. It was great in COVID and in other areas, but through financial mismanagement, they also ended up going bust. And so there are, I mean, these are examples of failures, but as Dr. Deming also said, don't confuse success with success. If you think you're making good decisions, you got to ask yourself how much better could it have been if you tried something else. So, quality is made in the boardroom, not on the factory floor.   0:04:07.9 Andrew Stotz: I had an interesting encounter this week and I was teaching a class, and there was a guy that came up and talked to me about his company. His company was a Deming Prize from Japan winner. And that was maybe 20, 25 years ago. They won their first Deming Prize, and then subsidiaries within the company won it. 
So the actual overall company had won something like nine or 10 Deming Prizes over a couple decades. And the president became...   0:04:43.5 Bill Scherkenbach: What business are they in?   0:04:45.5 Andrew Stotz: Well, they're in...   0:04:47.0 Bill Scherkenbach: Of winning prizes?   0:04:48.7 Andrew Stotz: Yeah, I mean, they definitely, the CEO got the distinguished individual prize because he was so dedicated to the teachings of Dr. Deming. And he really, really expanded the business well, the business did well. A new CEO took over 15 years ago, 10 years ago, and took it in another direction. And right now the company is suffering losses and many other problems that they're facing. And I asked the guy without talking about Deming, I just asked him what was the difference between the prior CEO and the current one or the current regimes that have come in. And he said that with the prior CEO, it was so clear what the direction was. Like, he set the direction and we all knew what we were doing. And I just thought now, as you talk about the quality is made at the boardroom, it just made me really think back to that conversation and that was what he noticed more than anything. Yeah well, we were really serious about keeping the factory clean or we used statistics or run charts, that was just what he said, I thought that was pretty interesting.   0:06:06.7 Bill Scherkenbach: Absolutely. And that reminds me of another comment that Dr. Deming was vehement about, and that was the management turnover. Turnovers in boardrooms every 18 months or so, except maybe in family businesses. But that's based on the quality of decisions made in the boardroom. How fast do you want to turn over the CEOs and that C-suite? So it's going to go back to the quality is made in the boardroom.   0:06:50.0 Andrew Stotz: Yeah, and I think maybe it's a good chance for me to share the slide that you have. And let's maybe look at that graphic. Does that make sense now?   0:07:00.9 Bill Scherkenbach: Sure, for sure.   0:07:02.2 Andrew Stotz: Let's do that. Let's do that. Hold on. All right.   0:07:15.8 Bill Scherkenbach: Okay, okay, okay. You can see on the top left, we'll start the story. I've got to give you a background. This was generated based on my series of inputs and prompts, but this was generated by NotebookLM and based on the information I put in, this is what they came up with.   0:07:48.6 Andrew Stotz: Interesting.   0:07:50.1 Bill Scherkenbach: Based on various information, which I think did a fairly decent job. In any event, we're going to talk about all of these areas, except maybe the one where it says principles for active leadership, because that was the subject of a couple of our vlogs a while ago, and that is the three foundational obligations. And so the thing is that quality, even though Dr. Deming said it was made in the boardroom, one of the problems is that management did not know what questions to ask, and they would go, and Dr. Deming railed against MBWA, management by walking around, primarily because management hadn't made the transition to really take on board what Dr. Deming was talking about in profound knowledge. And that is, as you've mentioned, setting that vision, continually improving around it, and pretty much absolutely essential was to reduce fear within the organization.   0:09:25.9 Bill Scherkenbach: And so management by walking around without profound knowledge, which we've covered in previous talks, only gets you dog and pony shows. 
And with the fear in the organization, you're going to be carefully guided throughout a wonderful story. I mentioned I was in Disney with some of my granddaughters over the holidays, and they tell a wonderful story, but you don't ever see what's behind the scenery. And management never gets the chance because they really haven't had the opportunity to attain profound knowledge. So that's one of the things. I want to back up a little bit because Dr. Deming would... When Dr. Deming said quality is made at the top, he only agreed to help companies where the top management invited him, he wasn't out there marketing. If they invited him to come in, he would first meet with them and they had to convince him they were serious about participating, if not leading their improvement. And given that, that litmus test, he then agreed to work with them. Very few companies did he agree to on that. And again as we said, the quality of the decisions and questions and passion that determine the successfulness of the company. And so.   0:11:40.0 Andrew Stotz: It made me think about that letter you shared that he was saying about that there was, I think it was within the government and government department that just wasn't ready for change and so he wasn't going to work with it. I'm just curious, like what do you think was his... How did he make that judgment?   0:12:00.0 Bill Scherkenbach: Well, it wasn't high enough. And again, I don't know how high you'd have to go in there. But quite honestly, what we spoke about privately was in politics and in the federal government, at least in the US, things change every four years. And so you have management turnover. And so what one manager, as you described, one CEO is in there and another one comes in and wants to do it their way, they're singing Frank Sinatra's My Way. But that's life….   0:12:49.3 Andrew Stotz: Another great song.   0:12:50.7 Bill Scherkenbach: Another, yes.   0:12:52.1 Andrew Stotz: And it's not like he was an amateur with the government.   0:12:57.5 Bill Scherkenbach: No.   0:13:00.3 Andrew Stotz: He had a lot of experience from a young age, really working closely with the government. Do you think that he saw there was some areas that were worth working or did he just kind of say it's just not worth the effort there or what was his conclusions as he got older?   0:13:16.9 Bill Scherkenbach: Well, as he got older, it might, it was the turnover in management. When he worked for Agriculture, although agriculture is political, and he worked for Census Bureau back when he worked there, it wasn't that political, it's very political now. But there was more a chance for constancy and more of a, their aim was to do the best survey or census that they could do. And so the focus was on setting up systems that would deliver that. But that's what his work with the government was prior to when things really broke loose when he started with Ford and GM and got all the people wanting him in.   0:14:27.0 Andrew Stotz: I've always had questions about this at the top concept and the concept of constancy of purpose. And I'm just pulling out your Deming Route to Quality and Productivity, which, it's a lot of dog ears, but let's just go to chapter one just to remind ourselves. And that you started out with point number one, which was create constancy of purpose towards improvement of product and service with the aim to become competitive, stay in business and provide jobs. 
One of my questions I always kind of thought about that one was that at first I just thought he was saying just have a constancy of purpose. But the constancy of purpose is improvement of product and service.   0:15:13.6 Bill Scherkenbach: Well, yes and no. I mean, that's what he said. I believe I was quoting what his point number one was. And as it developed, it was very important to add, I believe, point number five on continual improvement. But constancy of purpose is setting the stage, setting the vision if you will, of where you want to take the company. And in Western management, and this is an area where there really is and was a dichotomy between Western and Eastern management. But in Western management, our concept of time was short-term. Boom, boom, boom, boom. And he had a definite problem with that. And that's how you could come up with, well, we're going to go with this fad and that fad or this CEO and that CEO. There was no thinking through the longer term of, as some folks ask, "What is your aim? Who do you think your customer base is now?" Don't get suckered into thinking that carburetors are always going to be marketable to that market base. And so that's where he was going with that constancy of purpose. And in the beginning, I think that was my first book you're quoting, but also, in some of his earlier works, he also spoke of consistency of purpose, that is reducing the variation around that aim, that long-term vision, that aim.   0:17:19.2 Bill Scherkenbach: Now, in my second book, I got at least my learning said that you've got to go beyond the logical understanding and your constancy of purpose needs to be a mission, values and questions. And those people who have listened to the previous vlogs that we've had, those are the physiological and emotional. And I had mentioned, I think, that when I went to GM, one of the things I did was looked up all the policy letters and the ones that Alfred Sloan wrote had pretty much consistency of three main points. One, make no mistake about it, this is what we're going to do. Two, this is why we're going to do it, logical folks who need to understand that. And to give a little bit of insight on how he was feeling about it. Sometimes it was value, but those weren't spoken about too much back then. But it gave you an insider view, if you will. And so I looked at that, maybe I was overlooking. But I saw a physiological and emotional in his policy letters.   
And their first step with that maybe was, let's just say, the big step was expanding to the US. Now, in order to expand to the US successfully, it's going to take 10, maybe 20 years. In the beginning, the cars aren't going to fit the market, you're going to have to adapt and all that. So I can understand first, let's imagine that somebody says our constancy of purpose is to continuously improve or let's say, not continuously, but let's just go back to that statement just to keep it clear. Let's say, create constancy of purpose towards improvement of product and service with the aim to become competitive, stay in business and provide jobs.   0:21:07.2 Andrew Stotz: So the core constancy in that statement to me sounds like the improvement. And then if we say, okay, also our vision of where we want to be with this company is we want to capture, let's say, 5% of the US market share within the next 15 years or five or 10 years. So you've got to have constancy of that vision, repeating it, not backing down from it, knowing that you're going to have to modify it. But what's the difference between a management or a leadership team in the boardroom setting a commitment to improvement versus a commitment to a goal of let's say, expanding the market into the US. How do we think about those two.   0:21:53.6 Bill Scherkenbach: Well as you reread what I wrote there, which is Dr. Deming's words and they led into the, I forget what he called it, but he led into the progression of as you improve quality, you improve productivity, you reduce costs.   0:22:33.6 Andrew Stotz: Chain reaction.   0:22:34.5 Bill Scherkenbach: Yeah, the chain reaction. That's a mini version of the chain reaction there. And at the time, that's what people should be signing up for. Now the thing is that doesn't, or at least the interpretations haven't really gone to the improvement of the board's decision-making process. I mean, where he was going for was you want to be able to do your market research because his sampling and doing the market research was able to close the loop to make that production view a system, a closed-loop system. And so you wanted to make sure that you're looking far enough out to be able to have a viable product or service and not get caught up in short-term thinking. Now, but again, short-term is relative. In the US, you had mentioned 10 or 20 years, Toyota, I would imagine they still are looking 100 years out. They didn't get suckered into the over-committing anyway to the electric vehicles. Plug-in hybrids, yes, hybrids yes, very efficient gas motors, yes. But their constancy of purpose is a longer time frame than the Western time frame.   0:24:27.1 Andrew Stotz: Yeah, that was a real attack on the structure that they had built to say when they were being told by the market and by everybody, investors, you've got to shift now, you've got to make a commitment to 100% EVs. I remember watching one of the boardroom, sorry, one of the shareholder meetings, and it's just exhausting, the pressure that they were under.   0:24:55.2 Bill Scherkenbach: Yep, yep. But there... Yeah.   0:25:00.0 Andrew Stotz: If we take a kid, a young kid growing up and we just say, look, your main objective, and my main objective with you is to every day improve. Whatever that is, let's say we're learning science.   0:25:17.3 Bill Scherkenbach: You're improving around your aim. What is your vision? What are you trying to accomplish? 
And that obviously, if you're saying a kid, that could change; otherwise there'd be an oversupply of firemen.   0:25:38.5 Andrew Stotz: So let's say that the aim was related to science. Let's say that the kid shows a really great interest in science and you're kind of coaching them along and they're like, "Help me, I want to learn everything I can in science." The aim may be a bit vague for the kid, but let's say that we narrow down that aim to say, we want to get through the main topics of science from physics to chemistry and set a foundation of science, which we think's going to take us a year to do that, let's just say. Or whatever. Whatever time frame we come up with, then every day the idea is, how do we number one improve around that aim? Are we teaching the right topics? Also, are there better ways of teaching? Like, this kid maybe learns better in the afternoon than in the morning, whereas another kid I may work with works better in another... And this kid likes five-minute modules and then some practical discussion, this kid likes an hour of going deep into something and then having an experiment. When we're talking about improvement, is the idea that we're just always trying to improve around that aim until we reach a really optimized system? Is that what we're talking about when we're talking about constancy of purpose when it comes to improving product and service?   0:27:14.4 Bill Scherkenbach: Well there's a whole process that I take my clients through in coming up with their constancy of purpose statement. And the board should be looking at what the community is doing in the next five years, 10 years, where the market is going, where politics is going, all sorts of things. And some of it. I mean, specifically in the science area, it's fairly well recognized that the time of going generation to generation to generation has gone from years to maybe weeks where you have different iterations of technology. And so that's going to complicate stuff quite honestly, because what was good today can be, as Dr. Deming said, the world could change. And that's what you've got to deal with or you're out of business. Or you're out of relevance in what you're studying. And so you have to... If you have certain interests, and the interests are driven... It's all going to be internal. Some interests are driven because that's where I hear you can make the most money or that's where I hear you can make the most impact to society, or whatever your internal interests are saying. Those are key to establishing what your aim is.   0:29:25.7 Andrew Stotz: Okay. You've got some PowerPoints and we've been talking about some of it. But I just want to pull it up and make sure we don't miss anything. I think this is the first text page, maybe just see if there's anything you want to highlight from that. Otherwise we'll move to the next.   0:29:43.0 Bill Scherkenbach: No, I think we've covered that. Yeah, yeah. And the second page. Yeah, I wanted to talk and I only mentioned it when the Lean folks and the Agile folks talk about Gemba, they're pretty much talking about getting the board out. It's the traditional management by walking around, seeing what happens. Hugely, hugely important. But one of the things, I had one of my clients. Okay, okay. No, that's in the next one.   0:30:29.4 Andrew Stotz: There you go.   0:30:30.7 Bill Scherkenbach: Okay, yeah. I had one of my clients do a reverse Gemba. 
And that is, that the strategy committee would be coming up with strategies and then handing it off to the operators to execute. And that's pretty much the way stuff was done in this industry and perhaps in many of them. But what we did was we had the operators, the operating committee, the operations committee, sit in as a peanut gallery or a, oh good grief. Well, you couldn't say a thing, you could only observe what they were doing. But it helped the operators better understand and see and feel what the arguments were, what the discussions were in the strategy, so that they as operators were better able to execute the strategy. And so not the board going out and down, but the folks that are below going up if it helps them better execute what's going on. But vice versa, management can't manage the 94%, and Dr. Deming was purposely giving people marbles, sometimes he'd say 93.4%. You know the marble story?   0:32:37.5 Andrew Stotz: I remember that [laughter]. Maybe you should tell that again just because that was a fun one when he was saying to, give them marbles, and they gave me marbles back.   0:32:45.7 Bill Scherkenbach: Yeah, yeah, yeah. Well, he said there was this professor in oral surgery that said there was an Asian mouse or cricket, whatever, that would... You put in your mouth and they would eat all of the... Be able to clean the gums of all the bacteria better than anything. And described it in detail. And that question was on the test. Okay, please describe this mouse procedure. And he said all of the people, or a whole bunch of people except one, gave him back exactly step by step what he had taught. And one said, Professor, I've talked to other professors, I've looked around, I think you're loading us, that's what Deming said. And so he made the point that teaching should not be teachers handing out marbles and collecting the same marbles they handed out. And so to some extent, he was testing, being overly precise.   0:34:12.8 Bill Scherkenbach: He wanted people to look into it, to see, go beyond as you were speaking of earlier, going beyond this shocking statement that there perhaps is some way that that really makes sense. So he wants you to study. Very Socratic in his approach to teaching in my opinion. In any event, management can't understand or make inputs on changing what the various levels of willing workers do, and you don't have to be on the shop floor, you can be in the C-suite and be willing workers depending on how your company is operating. Go ahead.   0:35:12.0 Andrew Stotz: So let me... Maybe I can, just for people that don't know, Gemba is a Japanese word that means "the actual place," right? The place where the value is created.   0:35:23.8 Bill Scherkenbach: Sure.   0:35:26.2 Andrew Stotz: And the whole concept of this was that it's kind of almost nonsense to think that you could sit up in an office and run something and never see the location of where the problem's happening or what's going on. And all of a sudden many things become clear when you go to the location and try to dig down into it. However, from Dr. Deming's context, I think what you're telling us is that if the leader doesn't have profound knowledge, all they're going to do is go to the location and chase symptoms and disrupt work, ultimately...   0:36:02.0 Bill Scherkenbach: Get the dog and pony shows and all of that stuff. And they still won't have a clue. The thing is...   
0:36:08.6 Andrew Stotz: So the objective at the board level, if they were to actually go to the place, the objective is observation of the system, of how management decisions have affected this. What is the system able to produce? And that gives them a deeper understanding to think about what's their next decision that they've got to make in relation to this. Am I capturing it right or?   0:36:40.2 Bill Scherkenbach: Well there's a lot more to it, I think, because top management, the board level, are the ones that set the vision, the mission, the values, the guiding principle, and the questions. And I think it's incumbent on the board to be able to go through the ranks and see how their constancy of purpose, the intended, where they want to take the place is being interpreted throughout the organization because, and I know it's an oversimplification and maybe a broad generalization, but middle management... Well, there are layers of management everywhere that, based on their aim to get ahead, will effectively stop communication upstream and downstream in order to fulfill their particular aim of what they want to get out of it. And so this is a chance for the top management to see, because they're doing their work, establishing the vision of the company, which is the mission, values and questions, they really should be able to go layer by layer as they're walking around seeing how those, their constancy, their intended constancy is being interpreted and executed. And so that's where beyond understanding how someone is operating a lathe or an accountant is doing a particular calculation, return on invested capital, whatever.   0:38:47.5 Bill Scherkenbach: Beyond that, I think it's important for management to be able to absolutely see what is happening. But the Gemba that I originally spoke about is just the other way. You've got the strategy people that are higher up, and you have the operations people that are typically, well, they might be the same level, but typically lower. You want the lower people to sit in on some higher meetings so they have a better idea of the intent, management's intent in this constancy of purpose. And that will help them execute, operationalize what management has put on paper or however they've got it and are communicating it. It just helps. So when I talk about Gemba, I'm talking the place where the quality is made or the action is. As the boardroom, you need to be able to have people understand and be able to see what's going on there, and all the way up the chain and all the way down the chain.   0:40:14.4 Andrew Stotz: That's a great one. I'm just visualizing people in the operations side thinking, we've got some real problems here and we don't really understand it. We've got to go to the actual place, and that's the boardroom [laughter]. It's not the factory line.   0:40:31.7 Bill Scherkenbach: Yes. Absolutely. And if the boardroom says you're not qualified, then shame on you, the boardroom, are those the people you're hiring? So no, it goes both ways, both ways.   0:40:46.8 Andrew Stotz: Now, you had a final slide here. Maybe you want to talk a little bit about some of the things you've identified here.   0:40:53.4 Bill Scherkenbach: Okay, that's getting back to, in the logical area of this, TDQA is my cycle: Theory, question, data, action. And it's based on Dr. Deming and Shewhart and Lewis saying, where do questions come from? They're based on theory. What do you do with questions? Well, the answers to questions are your data. 
And you're just not going to do nothing with data, you're supposed to take action. What are you going to do with it? And so the theory I'm going to address, the various questions I've found helpful in order to, to some extent, make the decisions better, the ability to operationalize them better and perhaps even be more creative, if you will. And so one of the questions I ask any team is, have you asked outside experts their opinion? Have you included them? Have you included someone to consistently, not consistently, but to take a contrarian viewpoint, whose job in this meeting is to play the devil's advocate? And the theory is you're looking for a different perspective as Pete Jessup at Ford came up with that brilliant view of Escher's.   0:42:47.1 Bill Scherkenbach: Different perspectives are going to help you make a better decision. And so you want to get out of the echo chamber and you want to be challenged. Every team should be able to have some of these on there. What's going to get delayed? The underlying theory or mental model is, okay, you don't have people sitting around waiting for this executive committee to come up with new things, time is a zero-sum game. What's going to get delayed and what are they willing to get delayed if this is so darn important to get done? Decision criteria. I've seen many teams where they thought that the decision would be a majority rule. They discuss and when it came down to submit it, they said, "no, no, this VP is going to make the decision." And so that completely sours the next team to do that. And so you have to be, if you're saying trust, what's your definition of trust? If the people know that someone is going to make the decision with your advice or the executive's going to get two votes and everyone else gets one, or it's just simple voting.   0:44:35.3 Bill Scherkenbach: The point is that making the decision and taking it to the next level, the theory is you've got to be specific and relied on. Team turnover, fairly simple. We spoke about executive turnover, which was a huge concern that Dr. Deming had about Western management. But at one major auto company, we would have product teams and someone might be in charge of, be a product manager for a particular model car. Well, if that person was a hard charger, and product development at the time took three and a half years, you're going to get promoted from a director level to a VP halfway through and you're going to screw up the team, other team members will be leaving as well because they have careers. You need to change the policy just to be able to say, if you agree that you're going to lead this team, you're going to lead it from start to finish, to minimize the hassle and the problems and the cost of turnover, team turnover. And this is a short list of stuff, but it's very useful to have a specific "no-fault policy."   0:46:20.6 Bill Scherkenbach: And this is where Dr. Deming speaks about reducing fear. I've seen teams who know they can really, once management turns on the spigot and says, let's really do this, this is important, the team is still hesitant to really let it go because that management might interpret that as saying, "well, what are you doing, slacking off the past year?" As Deming said, "why couldn't you do that if you could do it with no method, why didn't you do it last year?" But the fear in the organization, well, we're going to milk it. And so all of these things, it helps to be visible to everyone.   
0:47:23.0 Andrew Stotz: So, I guess we should probably wrap up and I want to go back to where we started. And first, we talked about, where is quality made? And we talked about the boardroom. Why is this such an important topic from your perspective? Why did you want to talk about it? And what would you say is the key message you want to get across from it?   0:47:47.1 Bill Scherkenbach: The key message is that management thinks quality's made in operations. And it's the quality of the... I wanted to put a little bit more meat, although there's a lot more meat, we do put on it. But the quality of the organization, I wanted to make the point depends on the quality of the decisions, that's their output that top leaders make, whether it's the board or the C-suite or any place making decisions. The quality of your decisions.   0:48:28.9 Andrew Stotz: Excellent. And I remember, this reminds me of when I went to my first Deming seminar back in 1990, roughly '89, maybe '90. And I was a young guy just starting as a supervisor at a warehouse in our Torrance plant at Pepsi, and Pepsi sent me there. And I sat in the front row, so I didn't pay attention to all the people behind me, but there was many people behind me and there was a lot of older guys. Everybody technically was pretty much older than me because when I was just starting my career. And it was almost like these javelins were being thrown from the stage to the older men in the back who were trying to deal with this, and figure out what's coming at them, and that's where I kind of really started to understand that this was a man, Dr. Deming, who wasn't afraid to direct blame at senior management to say, you've got to take responsibility for this. And as a young guy seeing all kinds of mess-ups in the factory every day that I could see, that we couldn't really solve. We didn't have the tools and we couldn't get the resources to get those tools.   0:49:47.9 Andrew Stotz: It just really made sense to me. And I think the reiteration of that today is the idea, as I'm older now and I look at what my obligation is in the organizations I'm working at, it's to set that constancy of purpose, to set the quality at the highest level that I can. And the discussion today just reinforced it, so I really enjoyed it.   0:50:11.2 Bill Scherkenbach: Well, that's great. I mean, based on that observation, Dr. Deming many times said that the master chef is the person who knows no fear, and he was a master chef putting stuff together. And we would talk about fairly common knowledge that the great artists, the great thinkers, the great producers were doing it for themselves, it just happened that they had an audience. The music caught on, the poetry caught on, the painting caught on, the management system caught on. But we're doing it for ourselves with no fear. And that's the lesson.   0:51:11.8 Andrew Stotz: Yeah. Well, I hope that there's a 24-year-old out there right now listening to this just like I was, or think about back in 1972 when you were sitting there listening to his message. And they've caught that message from you today. So I appreciate it, and I want to say on behalf of everyone at the Deming Institute, of course, thank you so much for this discussion and for people who are listening and interested, remember to go to deming.org to continue your journey. And of course, you can reach Bill on LinkedIn, very simple. He's out there posting and he's responding. 
So if you've got a question or a comment, feel free to reach out to him on LinkedIn and have a discussion. This is your host, Andrew Stotz, and I'm going to leave you with one of my favorite quotes from Dr. Deming, and it doesn't change. It is, "people are entitled to joy in work."

HR Leaders
Why AI Literacy Is Now a Business Skill Every Leader Needs

HR Leaders

Play Episode Listen Later Jan 16, 2026 14:12


In this episode of the HR Leaders Podcast, we sit down with David Sperl, Head of HR for Advanced Visualization Solutions at GE HealthCare, to unpack how HR earns real business credibility by shipping outcomes, not PowerPoints, inside a heavily regulated, science-driven environment.

David explains why AI literacy must move from theory to hands-on practice, how microlearning and shared baseline tools help drive adoption, and why leadership advocacy is essential to scale change across technical, clinical, and commercial teams. He breaks down GE HealthCare's four stages of AI adoption, how communities of practice create demand pull, and why unlearning outdated mental models is now harder than learning new ones.

Most importantly, he shares why user experience and friction removal are the real unlocks for AI in HR and business, and why the future of change isn't "change management", it's change agility.

PowerPoints: A Bible Study Guide for Juniors
Q1 Lesson 04 - Clearheaded or Beheaded

PowerPoints: A Bible Study Guide for Juniors

Play Episode Listen Later Jan 10, 2026 5:25


Have you ever waited to hear from a friend who had moved to another city? Perhaps you heard about them from others, but waited anxiously to hear from them personally. Did you begin to doubt their friendship?

PowerPoints: A Bible Study Guide for Juniors
Q1 Lesson 03 - From Prophet to Prisoner

PowerPoints: A Bible Study Guide for Juniors

Play Episode Listen Later Jan 10, 2026 5:25


Have you ever waited to hear from a friend who had moved to another city? Perhaps you heard about them from others, but waited anxiously to hear from them personally. Did you begin to doubt their friendship?

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Artificial Analysis: The Independent LLM Analysis House — with George Cameron and Micah Hill-Smith

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Jan 9, 2026 78:14


don't miss George's AIE talk: https://www.youtube.com/watch?v=sRpqPgKeXNk

From launching a side project in a Sydney basement to becoming the independent gold standard for AI benchmarking—trusted by developers, enterprises, and every major lab to navigate the exploding landscape of models, providers, and capabilities—George Cameron and Micah Hill-Smith have spent two years building Artificial Analysis into the platform that answers the questions no one else will: Which model is actually best for your use case? What are the real speed-cost trade-offs? And how open is "open" really?

We discuss:
* The origin story: built as a side project in 2023 while Micah was building a legal AI assistant, launched publicly in January 2024, and went viral after Swyx's retweet
* Why they run evals themselves: labs prompt models differently, cherry-pick chain-of-thought examples (Google Gemini 1.0 Ultra used 32-shot prompts to beat GPT-4 on MMLU), and self-report inflated numbers
* The mystery shopper policy: they register accounts not on their own domain and run intelligence + performance benchmarks incognito to prevent labs from serving different models on private endpoints
* How they make money: enterprise benchmarking insights subscription (standardized reports on model deployment, serverless vs. managed vs. leasing chips) and private custom benchmarking for AI companies (no one pays to be on the public leaderboard)
* The Intelligence Index (V3): synthesizes 10 eval datasets (MMLU, GPQA, agentic benchmarks, long-context reasoning) into a single score, with 95% confidence intervals via repeated runs
* Omissions Index (hallucination rate): scores models from -100 to +100 (penalizing incorrect answers, rewarding "I don't know"), and Claude models lead with the lowest hallucination rates despite not always being the smartest
* GDP Val AA: their version of OpenAI's GDP-bench (44 white-collar tasks with spreadsheets, PDFs, PowerPoints), run through their Stirrup agent harness (up to 100 turns, code execution, web search, file system), graded by Gemini 3 Pro as an LLM judge (tested extensively, no self-preference bias)
* The Openness Index: scores models 0-18 on transparency of pre-training data, post-training data, methodology, training code, and licensing (AI2 OLMo 2 leads, followed by Nous Hermes and NVIDIA Nemotron)
* The smiling curve of AI costs: GPT-4-level intelligence is 100-1000x cheaper than at launch (thanks to smaller models like Amazon Nova), but frontier reasoning models in agentic workflows cost more than ever (sparsity, long context, multi-turn agents)
* Why sparsity might go way lower than 5%: GPT-4.5 is ~5% active, Gemini models might be ~3%, and Omissions Index accuracy correlates with total parameters (not active), suggesting massive sparse models are the future
* Token efficiency vs. turn efficiency: GPT-5 costs more per token but solves Tau-bench in fewer turns (cheaper overall), and models are getting better at using more tokens only when needed (5.1 Codex has tighter token distributions)
* V4 of the Intelligence Index coming soon: adding GDP Val AA, Critical Point, hallucination rate, and dropping some saturated benchmarks (human-eval-style coding is now trivial for small models)

Artificial Analysis
* Website: https://artificialanalysis.ai
* George Cameron on X: https://x.com/grmcameron
* Micah Hill-Smith on X: https://x.com/_micah_h

Chapters
00:00:00 Introduction: Full Circle Moment and Artificial Analysis Origins
00:01:08 Business Model: Independence and Revenue Streams
00:04:00 The Origin Story: From Legal AI to Benchmarking
00:07:00 Early Challenges: Cost, Methodology, and Independence
00:16:13 AI Grant and Moving to San Francisco
00:18:58 Evolution of the Intelligence Index: V1 to V3
00:27:55 New Benchmarks: Hallucination Rate and Omissions Index
00:33:19 Critical Point and Frontier Physics Problems
00:35:56 GDPVAL AA: Agentic Evaluation and Stirrup Harness
00:51:47 The Openness Index: Measuring Model Transparency
00:57:57 The Smiling Curve: Cost of Intelligence Paradox
01:04:00 Hardware Efficiency and Sparsity Trends
01:07:43 Reasoning vs Non-Reasoning: Token Efficiency Matters
01:10:47 Multimodal Benchmarking and Community Requests
01:14:50 Looking Ahead: V4 Intelligence Index and Beyond
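To make the Intelligence Index mechanics described above concrete, here is a minimal, hypothetical sketch of how a composite index with 95% confidence intervals could be computed from repeated eval runs. The equal weighting, the `composite_index` helper, and the example numbers are illustrative assumptions, not Artificial Analysis's published methodology.

```python
# Sketch: combine repeated per-eval scores into one index with a rough 95% CI.
# Assumes equal weighting across evals and a normal approximation; both are
# simplifying assumptions for illustration only.
from statistics import mean, stdev

def composite_index(scores_by_eval: dict[str, list[float]]) -> tuple[float, float]:
    """Average each eval's mean score (0-100) into one index, and derive a
    95% confidence half-width from the variance of the repeated runs."""
    eval_means = [mean(runs) for runs in scores_by_eval.values()]
    index = mean(eval_means)
    # Standard error of each eval's mean, propagated through the equal-weight average.
    variances = [
        (stdev(runs) ** 2) / len(runs) if len(runs) > 1 else 0.0
        for runs in scores_by_eval.values()
    ]
    half_width = 1.96 * (sum(variances) ** 0.5) / len(variances)
    return index, half_width

# Made-up numbers: three repeated runs per eval.
runs = {"MMLU-Pro": [84.1, 83.7, 84.4], "GPQA": [71.0, 72.2, 70.6]}
idx, ci = composite_index(runs)
print(f"index = {idx:.1f} +/- {ci:.1f} (95% CI)")
```

The design point this illustrates is the one both episodes stress: a single headline number is only trustworthy if it comes with a confidence interval dialed in via repeated runs.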

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Artificial Analysis: Independent LLM Evals as a Service — with George Cameron and Micah Hill-Smith

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Jan 8, 2026 78:24


Happy New Year! You may have noticed that in 2025 we had moved toward YouTube as our primary podcasting platform. As we'll explain in the next State of Latent Space post, we'll be doubling down on Substack again and improving the experience for the over 100,000 of you who look out for our emails and website updates!

We first mentioned Artificial Analysis in 2024, when it was still a side project in a Sydney basement. They were then one of the few of Nat Friedman and Daniel Gross's AI Grant companies to raise a full seed round from them, and have now become the independent gold standard for AI benchmarking—trusted by developers, enterprises, and every major lab to navigate the exploding landscape of models, providers, and capabilities.

We have chatted with both Clementine Fourrier of HuggingFace's OpenLLM Leaderboard and (the freshly valued at $1.7B) Anastasios Angelopoulos of LMArena on their approaches to LLM evals and trendspotting, but Artificial Analysis have staked out an enduring and important place in the toolkit of the modern AI Engineer by doing the best job of independently running the most comprehensive set of evals across the widest range of open and closed models, and charting their progress for broad industry analyst use.

George Cameron and Micah Hill-Smith have spent two years building Artificial Analysis into the platform that answers the questions no one else will: Which model is actually best for your use case? What are the real speed-cost trade-offs? And how open is "open" really?

We discuss:
* The origin story: built as a side project in 2023 while Micah was building a legal AI assistant, launched publicly in January 2024, and went viral after Swyx's retweet
* Why they run evals themselves: labs prompt models differently, cherry-pick chain-of-thought examples (Google Gemini 1.0 Ultra used 32-shot prompts to beat GPT-4 on MMLU), and self-report inflated numbers
* The mystery shopper policy: they register accounts not on their own domain and run intelligence + performance benchmarks incognito to prevent labs from serving different models on private endpoints
* How they make money: enterprise benchmarking insights subscription (standardized reports on model deployment, serverless vs. managed vs. leasing chips) and private custom benchmarking for AI companies (no one pays to be on the public leaderboard)
* The Intelligence Index (V3): synthesizes 10 eval datasets (MMLU, GPQA, agentic benchmarks, long-context reasoning) into a single score, with 95% confidence intervals via repeated runs
* Omissions Index (hallucination rate): scores models from -100 to +100 (penalizing incorrect answers, rewarding "I don't know"), and Claude models lead with the lowest hallucination rates despite not always being the smartest
* GDP Val AA: their version of OpenAI's GDP-bench (44 white-collar tasks with spreadsheets, PDFs, PowerPoints), run through their Stirrup agent harness (up to 100 turns, code execution, web search, file system), graded by Gemini 3 Pro as an LLM judge (tested extensively, no self-preference bias)
* The Openness Index: scores models 0-18 on transparency of pre-training data, post-training data, methodology, training code, and licensing (AI2 OLMo 2 leads, followed by Nous Hermes and NVIDIA Nemotron)
* The smiling curve of AI costs: GPT-4-level intelligence is 100-1000x cheaper than at launch (thanks to smaller models like Amazon Nova), but frontier reasoning models in agentic workflows cost more than ever (sparsity, long context, multi-turn agents)
* Why sparsity might go way lower than 5%: GPT-4.5 is ~5% active, Gemini models might be ~3%, and Omissions Index accuracy correlates with total parameters (not active), suggesting massive sparse models are the future
* Token efficiency vs. turn efficiency: GPT-5 costs more per token but solves Tau-bench in fewer turns (cheaper overall), and models are getting better at using more tokens only when needed (5.1 Codex has tighter token distributions)
* V4 of the Intelligence Index coming soon: adding GDP Val AA, Critical Point, hallucination rate, and dropping some saturated benchmarks (human-eval-style coding is now trivial for small models)

Links to Artificial Analysis
* Website: https://artificialanalysis.ai
* George Cameron on X: https://x.com/georgecameron
* Micah Hill-Smith on X: https://x.com/micahhsmith

Full Episode on YouTube

Timestamps
* 00:00 Introduction: Full Circle Moment and Artificial Analysis Origins
* 01:19 Business Model: Independence and Revenue Streams
* 04:33 Origin Story: From Legal AI to Benchmarking Need
* 16:22 AI Grant and Moving to San Francisco
* 19:21 Intelligence Index Evolution: From V1 to V3
* 11:47 Benchmarking Challenges: Variance, Contamination, and Methodology
* 13:52 Mystery Shopper Policy and Maintaining Independence
* 28:01 New Benchmarks: Omissions Index for Hallucination Detection
* 33:36 Critical Point: Hard Physics Problems and Research-Level Reasoning
* 23:01 GDP Val AA: Agentic Benchmark for Real Work Tasks
* 50:19 Stirrup Agent Harness: Open Source Agentic Framework
* 52:43 Openness Index: Measuring Model Transparency Beyond Licenses
* 58:25 The Smiling Curve: Cost Falling While Spend Rising
* 1:02:32 Hardware Efficiency: Blackwell Gains and Sparsity Limits
* 1:06:23 Reasoning Models and Token Efficiency: The Spectrum Emerges
* 1:11:00 Multimodal Benchmarking: Image, Video, and Speech Arenas
* 1:15:05 Looking Ahead: Intelligence Index V4 and Future Directions
* 1:16:50 Closing: The Insatiable Demand for Intelligence

Transcript

Micah [00:00:06]: This is kind of a full circle moment for us in a way, because the first time Artificial Analysis got mentioned on a podcast was you and Alessio on Latent Space. Amazing.

swyx [00:00:17]: Which was January 2024. I don't even remember doing that, but yeah, it was very influential to me.
Yeah, I'm looking at AI News for Jan 17, or Jan 16, 2024. I said, this gem of a models and host comparison site was just launched. And then I put in a few screenshots, and I said, it's an independent third party. It clearly outlines the quality versus throughput trade-off, and it breaks out by model and hosting provider. I did give you s**t for missing Fireworks: how do you have a model benchmarking thing without Fireworks? But you had Together, you had Perplexity, and I think we just started chatting there. Welcome, George and Micah, to Latent Space. I've been following your progress. Congrats on... It's been an amazing year. You guys have really come together to be the presumptive new Gartner of AI, right? Which is something that...

George [00:01:09]: Yeah, but you can't pay us for better results.

swyx [00:01:12]: Yes, exactly.

George [00:01:13]: Very important.

Micah [00:01:14]: Start off with a spicy take.

swyx [00:01:18]: Okay, how do I pay you?

Micah [00:01:20]: Let's get right into that.

swyx [00:01:21]: How do you make money?

Micah [00:01:24]: Well, very happy to talk about that. So it's been a big journey the last couple of years. Artificial Analysis is going to be two years old in January 2026, which is pretty soon now. First, we run the website for free, obviously, and give away a ton of data to help developers and companies navigate AI and make decisions about models, providers, and technologies across the AI stack for building stuff. We're very committed to doing that and intend to keep doing that. We have, along the way, built a business that is working out pretty sustainably. We've got just over 20 people now and two main customer groups. So we want to be who enterprises look to for data and insights on AI, so we want to help them with their decisions about models and technologies for building stuff. And then on the other side, we do private benchmarking for companies throughout the AI stack who build AI stuff. So no one pays to be on the website. We've been very clear about that from the very start, because there's no use doing what we do unless it's independent AI benchmarking. Yeah. But it turns out a bunch of our stuff can be pretty useful to companies building AI stuff.

swyx [00:02:38]: And is it like, I am a Fortune 500, I need advisors on objective analysis, and I call you guys and you pull up a custom report for me, you come into my office and give me a workshop? What kind of engagement is that?

George [00:02:53]: So we have a benchmarking and insights subscription, which looks like standardized reports that cover key topics or key challenges enterprises face when looking to understand AI and choose between all the technologies. And so, for instance, one of the reports is a model deployment report: how to think about choosing between serverless inference, managed deployment solutions, or leasing chips and running inference yourself. That's an example of the kind of decision that big enterprises face, and it's hard to reason through; this AI stuff is really new to everybody, and so with our reports and insights subscription we try to help companies navigate that. We also do custom private benchmarking. And so that's very different from the public benchmarking that we publicize, and there's no commercial model around that. For private benchmarking, we'll at times create benchmarks and run benchmarks to specs that enterprises want. And we'll also do that sometimes for AI companies who have built things, and we help them understand what they've built with private benchmarking.
Yeah. So that's a piece mainly that we've developed through trying to support everybody publicly with our public benchmarks. Yeah.

swyx [00:04:09]: Let's talk about the tech stack behind that. But okay, I'm going to rewind all the way to when you guys started this project. You were all the way in Sydney? Yeah. Well, Sydney, Australia for me.

Micah [00:04:19]: George was in SF, but he's Australian, but he moved here already. Yeah.

swyx [00:04:22]: And I remember I had the Zoom call with you. What was the impetus for starting Artificial Analysis in the first place? You know, you started with public benchmarks. And so let's start there. We'll go to the private benchmarks. Yeah.

George [00:04:33]: Why don't we even go back a little bit to, like, why we, you know, thought that it was needed? Yeah.

Micah [00:04:40]: The story kind of begins, like, in 2022, 2023. Like, both George and I have been into AI stuff for quite a while. In 2023 specifically, I was trying to build a legal AI research assistant. It actually worked pretty well for its era, I would say. Yeah. So I was finding that the more you go into building something using LLMs, the more each bit of what you're doing ends up being a benchmarking problem. So I had, like, this multistage algorithm thing, trying to figure out what the minimum viable model for each bit was, trying to optimize every bit of it as you build that out, right? Like, you're trying to think about accuracy, a bunch of other metrics, and performance and cost. And mostly just no one was doing anything to independently evaluate all the models, and certainly not to look at the trade-offs for speed and cost. So we basically set out just to build a thing that developers could look at to see the trade-offs between all of those things, measured independently across all the models and providers. Honestly, it was probably meant to be a side project when we first started doing it.

swyx [00:05:49]: Like you didn't, like, get together and say, hey, we're going to stop working on all this stuff, this is going to be our main thing. When I first called you, I think you hadn't decided on starting a company yet.

Micah [00:05:58]: That's actually true. I don't even think we'd paused; like, George had an acquittance job, and I didn't quit working on my legal AI thing. Like, it was genuinely a side project.

George [00:06:05]: We built it because we needed it as people building in the space, and thought, oh, other people might find it useful too. So we'll buy a domain and link it to the Vercel deployment that we had and tweet about it. But very quickly it started getting attention. Thank you, Swyx, for, I think, doing an initial retweet and spotlighting it there, this project that we released. And then very quickly, though, it was useful to others, and it became more useful as the number of models released accelerated. We had Mixtral 8x7B, and it was a key one. That's a fun one. Yeah. Like, an open source model that really changed the landscape and opened up people's eyes to other serverless inference providers, and to thinking about speed, thinking about cost. And so that was key. And so it became more useful quite quickly. Yeah.

swyx [00:07:02]: What I love about talking to people like you who sit across the ecosystem is, well, I have theories about what people want, but you have data, and that's obviously more relevant. But I want to stay on the origin story a little bit more.
When you started out, I would say the status quo at the time was: every paper would come out and they would report their numbers versus competitor numbers. And that's basically it. And I remember I did the legwork. I think everyone has some knowledge of this; I think there's some version of an Excel sheet or a Google sheet where you just, like, copy and paste the numbers from every paper and post it up there. And then sometimes they don't line up, because they're independently run. And so your numbers are going to look better than... your reproductions of other people's numbers are going to look worse, because you don't hold their models correctly, or whatever the excuse is. I think Stanford HELM, Percy Liang's project, would also have some of these numbers. And I don't know if there's any other source that you can cite. If I were to start Artificial Analysis at the same time you guys started, I would have used EleutherAI's eval framework harness. Yup.

Micah [00:08:06]: Yup. That was some cool stuff. At the end of the day, running these evals, it's like, if it's a simple Q&A eval, all you're doing is asking a list of questions and checking if the answers are right, which shouldn't be that crazy. But it turns out there are an enormous number of things that you've got to control for. And I mean, back when we started the website, yeah, like, one of the reasons why we realized that we had to run the evals ourselves, and couldn't just take results from the labs, was just that they would all prompt the models differently. And when you're competing over a few points, then you can pretty easily get... You can put the answer into the model. Yeah. That, in the extreme. And you get crazy cases, like back when Google had Gemini 1.0 Ultra and needed a number that would say it was better than GPT-4, and, like, constructed, I think never published, chain-of-thought examples, 32 of them, in every topic in MMLU, to run it, to get the score. Like, there are so many things that you... They never shipped Ultra, right? That's the one that never made it out. Not widely. Yeah. Yeah. Yeah. I mean, I'm sure it existed, but yeah. So we were pretty sure that we needed to run them ourselves and just run them in the same way across all the models. Yeah. And we were also certain from the start that you couldn't look at those in isolation. You needed to look at them alongside the cost and performance stuff. Yeah.

swyx [00:09:24]: Okay. A couple of technical questions. I mean, so obviously I also thought about this, and I didn't do it because of cost. Yep. Did you not worry about costs? Were you funded already? Clearly not, but you know. No. Well, we definitely weren't at the start.

Micah [00:09:36]: So, like, I mean, we were paying for it personally at the start. There's a lot of money. Well, the numbers weren't nearly as bad a couple of years ago. So we certainly incurred some costs, but we were probably in the order of, like, hundreds of dollars of spend across all the benchmarking that we were doing. Yeah. So nothing. Yeah. It was, like, kind of fine. Yeah. Yeah. These days that's gone up an enormous amount, for a bunch of reasons that we can talk about. But yeah, it wasn't that bad, because you've got to also remember that the number of models we were dealing with was hardly any, and the complexity of the stuff that we wanted to do to evaluate them was a lot less. Like, we were just asking some Q&A-type questions, and one specific thing was that for a lot of evals initially, we were just, like, sampling an answer.
You know, like, what's the answer for this? Like, we'd just have it go into the answer directly without letting the models think. We weren't even doing chain-of-thought stuff initially. And that was the most useful way to get some results initially. Yeah.

swyx [00:10:33]: And so for people who haven't done this work, literally parsing the responses is a whole thing, right? Because sometimes the models can answer any way they see fit, and sometimes they actually do have the right answer, but they just returned the wrong format, and they will get a zero for that unless you work it into your parser. And that involves more work. And so, I mean, there's an open question whether you should give it points for not following your instructions on the format.

Micah [00:11:00]: It depends what you're looking at, right? Because if you're trying to see whether or not it can solve a particular type of reasoning problem, and you don't want to test it on its ability to do answer formatting at the same time, then you might want to use an LLM-as-answer-extractor approach to make sure that you get the answer out no matter how it's answered. But these days, it's mostly less of a problem. Like, if you instruct a model and give it examples of what the answers should look like, it can get the answers in your format, and then you can do, like, a simple regex.

swyx [00:11:28]: Yeah, yeah. And then there's other questions around, I guess, sometimes if you have a multiple choice question, sometimes there's a bias towards the first answer, so you have to randomize the responses. All these nuances, like, once you dig into benchmarks, you're like, I don't know how anyone believes the numbers on all these things. It's such dark magic.

Micah [00:11:47]: You've also got, like... You've got, like, the different degrees of variance in different benchmarks, right? Yeah. So, if you run a four-option multi-choice eval on a modern reasoning model at the temperatures suggested by the labs for their own models, the variance that you can see is pretty enormous if you only do a single run of it, especially if it has a small number of questions. So, like, one of the things that we do is run an enormous number of repeats of all of our evals when we're developing new ones and doing upgrades to our intelligence index to bring in new things. Yeah. So that we can dial in the right number of repeats, so that we can get to the 95% confidence intervals that we're comfortable with, so that when we pull that together, we can be confident in the intelligence index to at least as tight as, like, plus or minus one at 95% confidence. Yeah.

swyx [00:12:32]: And, again, that just adds a straight multiple to the cost. Oh, yeah. Yeah, yeah.

George [00:12:37]: So, that's one of many reasons that cost has gone up a lot more than linearly over the last couple of years. We report a cost to run the Artificial Analysis Intelligence Index on our website, and currently that's assuming one repeat in terms of how we report it, because we want to reflect a bit about the weighting of the index. But our cost is actually a lot higher than what we report there, because of the repeats.

swyx [00:13:03]: Yeah, yeah, yeah. And probably this is true, but just checking: you don't have any special deals with the labs. They don't discount it. You just pay out of pocket, or out of your sort of customer funds. Oh, there is a mix.
So, the issue is that sometimes they may give you a special endpoint, which is... Ah, 100%.

Micah [00:13:21]: Yeah, yeah, yeah. Exactly. So, we laser focus, on everything we do, on having the best independent metrics and making sure that no one can manipulate them in any way. There are quite a lot of processes we've developed over the last couple of years to make that true, like for the one you bring up right here: the fact that if we're working with a lab, and they're giving us a private endpoint to evaluate a model, it is totally possible that what's sitting behind that black box is not the same as what they serve on a public endpoint. We're very aware of that. We have what we call a mystery shopper policy. And we're totally transparent with all the labs we work with about this: we will register accounts not on our own domain and run both intelligence evals and performance benchmarks... Yeah, that's the job. ...without them being able to identify it. And no one's ever had a problem with that. Because, like, a thing that turns out to actually be quite a good factor in the industry is that they all want to believe that none of their competitors could manipulate what we're doing either.

swyx [00:14:23]: That's true. I never thought about that. I've been in the database industry prior, and there's a lot of shenanigans around benchmarking, right? So I'm just kind of going through the mental laundry list. Did I miss anything else in this category of shenanigans? Oh, potential shenanigans.

Micah [00:14:36]: I mean, okay, the biggest one that I'll bring up is more of a conceptual one, actually, than, like, direct shenanigans. It's that the things that get measured become the things that get targeted by the labs in what they're trying to build, right? Exactly. So that doesn't mean anything that we should really call shenanigans. Like, I'm not talking about training on the test set. But if you know that you're going to be graded on a particular thing, if you're a researcher, there are a whole bunch of things that you can do to try to get better at that thing, that preferably are going to be helpful for a wide range of how actual users want to use the thing that you're building, but will not necessarily do that. So, for instance, the models are exceptional now at answering competition maths problems. There is some relevance of that type of reasoning, that type of work, to, like, how we might use modern coding agents and stuff. But it's clearly not one for one. So the thing that we have to be aware of is that once an eval becomes the thing that everyone's looking at, scores can get better on it without that being a reflection of the overall generalized intelligence of these models getting better. That has been true for the last couple of years. It'll be true for the next couple of years. There's no silver bullet to defeat that, other than building new stuff to stay relevant and measure the capabilities that matter most to real users. Yeah.

swyx [00:15:58]: And we'll cover some of the new stuff that you guys are building as well, which is cool. Like, you used to just run other people's evals, but now you're coming up with your own. And I think, obviously, that is a necessary path once you're at the frontier: you've exhausted all the existing evals. I think the next point in history that I have for you is AI Grant, which you guys decided to join, and you moved here. What was it like? I think you were in, like, batch two? Batch four. Batch four.
Okay.

Micah [00:16:26]: I mean, it was great. Nat and Daniel are obviously great. And it's a really cool group of companies that we were in AI Grant alongside. It was really great to get Nat and Daniel on board. Obviously, they've done a whole lot of great work in the space with a lot of leading companies, and they were extremely aligned with the mission of what we were trying to do. Like, we're not quite typical of a lot of the other AI startups that they've invested in.

swyx [00:16:53]: And they were very much here for the mission of what we want to do. Did they give any advice that really affected you in some way, or was one of the events very impactful? That's an interesting question.

Micah [00:17:03]: I mean, I remember fondly a bunch of the speakers who came and did fireside chats at AI Grant.

swyx [00:17:09]: Which is also, like, a crazy list. Yeah.

George [00:17:11]: Oh, totally. Yeah, yeah, yeah. There was something about, you know, speaking to Nat and Daniel about the challenges of working through a startup, and just working through the questions that don't have, like, clear answers, and how to work through those kind of methodically, and just, like, work through the hard decisions. And they've been great mentors to us as we've built Artificial Analysis. Another benefit for us was that other companies in the batch, and other companies in AI Grant, are pushing the capabilities. Yeah. And I think that's a big part of what AI can do at this time. And so being in contact with them, making sure that Artificial Analysis is useful to them, has been fantastic for supporting us in working out how we should build out Artificial Analysis to continue being useful to those, you know, building on AI.

swyx [00:17:59]: I think to some extent, I'm of mixed opinion on that one, because to some extent, your target audience is not people in AI Grant, who are obviously at the frontier. Yeah. Do you disagree?

Micah [00:18:09]: To some extent. To some extent. But then, a lot of what the AI Grant companies are doing is taking capabilities coming out of the labs and trying to push the limits of what they can do across the entire stack for building great applications, which actually makes some of them pretty archetypical power users of Artificial Analysis: some of the people with the strongest opinions about what we're doing well and what we're not doing well, and what they want to see next from us. Yeah. Yeah. Because when you're building any kind of AI application now, chances are you're using a whole bunch of different models. You're maybe switching reasonably frequently between different models for different parts of your application, to optimize what you're able to do with them at an accuracy level and to get better speed and cost characteristics. So for many of them, no, they're not commercial customers of ours; like, we don't charge for all our data on the website. Yeah. But they are absolutely some of our power users.

swyx [00:19:07]: So let's talk about just the evals as well. So you start out from the general, like, MMLU and GPQA stuff. What's next? How do you sort of build up to the overall index? What was in V1, and how did you evolve it? Okay.

Micah [00:19:22]: So first, just as background: we're talking about the Artificial Analysis Intelligence Index, which is our synthesis metric that we pull together currently from 10 different eval data sets, to give what we're pretty confident is the best single number to look at for how smart the models are.
Obviously, it doesn't tell the whole story. That's why we publish the whole website of all the charts, to dive into every part of it and look at the trade-offs. But it's the best single number. So right now, it's got a bunch of Q&A-type data sets that have been very important to the industry, like a couple that you just mentioned. It's also got a couple of agentic data sets. It's got our own long context reasoning data set and some other use-case-focused stuff. As time goes on, the things that we're most interested in, that are going to be important to the capabilities that are becoming more important for AI and what developers are caring about, are going to be first around agentic capabilities. So surprise, surprise: we're all loving our coding agents, and how the models are going to perform at that, and at doing similar things for different types of work, is really important to us. Linking to use cases, to economically valuable use cases, is extremely important to us. And then we've got some of these things that the models still struggle with, like working really well over long contexts, that are not going to go away as specific capabilities and use cases that we need to keep evaluating.

swyx [00:20:46]: But I guess one thing I was driving at was, like, the V1 versus the V2, and how bad it got over time.

Micah [00:20:53]: Like, how we've changed the index to where we are now.

swyx [00:20:55]: And I think that reflects the change in the industry. Right. So that's a nice way to tell that story.

Micah [00:21:00]: Well, V1 would be completely saturated right now by almost every model coming out, because doing things like writing the Python functions in HumanEval is now pretty trivial. It's easy to forget, actually, I think, how much progress has been made in the last two years. Like, we obviously play the game constantly of today's version versus last week's version and the week before, and all of the small changes in the horse race between the current frontier, and who has the best, like, smaller-than-10B model right now this week. Right. And that's very important to a lot of developers and people, especially in this particular city of San Francisco. But when you zoom out: a couple of years ago, literally most of what we were doing to evaluate the models then would all be 100% solved by even pretty small models today. And that's been one of the key things, by the way, that's driven down the cost of intelligence at every tier of intelligence, which we can talk about more in a bit. So V1, V2, V3: we made things harder, we covered a wider range of use cases, and we tried to get closer to things developers care about, as opposed to just the Q&A-type stuff that MMLU and GPQA represented. Yeah.

swyx [00:22:12]: I don't know if you have anything to add there. Or we could just go right into showing people the benchmark, and, like, looking around and asking questions about it. Yeah.

Micah [00:22:21]: Let's do it. Okay. This would be a pretty good way to chat about a few of the new things we've launched recently. Yeah.

George [00:22:26]: And I think a little bit about the direction that we want to take it, and where we want to push benchmarks. Currently, the intelligence index and evals focus a lot on kind of raw intelligence, but we want to diversify how we think about intelligence. And we can talk about it, but the kind of new evals that we've built and partnered on focus on topics like hallucination. And we've got a lot of topics that I think are not covered by the current eval set that should be.
And so we want to bring that forth. But before we get into that...

swyx [00:23:01]: And so for listeners, just as a timestamp, right now, number one is Gemini 3 Pro High, then followed by Claude Opus at 70, GPT-5.1 High. You don't have 5.2 yet. And Kimi K2 Thinking. Wow. Still hanging in there. So those are the top four. That will date this podcast quickly. Yeah. Yeah. I mean, I love it. I love it. No, no. 100%. Look back this time next year and go, how cute. Yep.

George [00:23:25]: Totally. A quick view of that is, okay, there's a lot. I love it. I love this chart. Yeah.

Micah [00:23:30]: This is such a favorite, right? Yeah. And in almost every talk that George or I give at conferences and stuff, we always put this one up first, to situate where we are in this moment in history. This, I think, is the visual version of what I was saying before about zooming out and remembering how much progress there's been. If we go back to just over a year ago, before o1, before Claude Sonnet 3.5, we didn't have reasoning models or coding agents as a thing. And the game was very, very different. If we go back even a little bit before then, we're in the era where, when you look at this chart, OpenAI was untouchable for well over a year. And, I mean, you would remember that time period well: there being very open questions about whether or not AI was going to be competitive, like, full stop; whether or not OpenAI would just run away with it; whether we would have a few frontier labs and no one else would really be able to do anything other than consume their APIs. I am quite happy overall that the world that we have ended up in is one where... Multi-model. Absolutely. And strictly more competitive every quarter over the last few years. Yeah. This year has been insane. Yeah.

George [00:24:42]: You can see it. This chart with everything added is hard to read currently. There are so many dots on it, but I think it reflects a little bit what we felt, like how crazy it's been.

swyx [00:24:54]: Why 14 as the default? Is that a manual choice? Because you've got ServiceNow in there, which is a less traditional name. Yeah.

George [00:25:01]: It's models that we're kind of highlighting by default in our charts, in our intelligence index. Okay.

swyx [00:25:07]: You just have a manually curated list of stuff.

George [00:25:10]: Yeah, that's right. But something that I actually don't think every Artificial Analysis user knows is that you can customize our charts and choose which models are highlighted. Yeah. And so if we take off a few names, it gets a little easier to read.

swyx [00:25:25]: Yeah, yeah. A little easier to read. Totally. Yeah. But I love that you can see the o1 jump. Look at that. September 2024. And the DeepSeek jump. Yeah.

George [00:25:34]: Which got close to OpenAI's leadership. They were so close. I think, yeah, we remember that moment. Around this time last year, actually.

Micah [00:25:44]: Yeah, yeah, yeah. I agree. Yeah, well, a couple of weeks off. It was Boxing Day in New Zealand when DeepSeek V3 came out. And we'd been tracking DeepSeek and a bunch of the other global players that were less known over the second half of 2024, and had run evals on the earlier ones and stuff. I very distinctly remember Boxing Day in New Zealand, because I was with family for Christmas and stuff, running the evals and getting back result by result on DeepSeek V3. So this was the first of their V3 architecture, the 671B MoE.
That was the moment where we were sure that DeepSeek was no longer just one of many players, but had jumped up to be a thing. The world really noticed when they followed that up with the RL working on top of v3 and R1 succeeding a few weeks later. But the groundwork for that absolutely was laid with just extremely strong base model, completely open weights that we had as the best open weights model. So, yeah, that's the thing that you really see in the game. But I think that we got a lot of good feedback on Boxing Day. us on Boxing Day last year.George [00:26:48]: Boxing Day is the day after Christmas for those not familiar.George [00:26:54]: I'm from Singapore.swyx [00:26:55]: A lot of us remember Boxing Day for a different reason, for the tsunami that happened. Oh, of course. Yeah, but that was a long time ago. So yeah. So this is the rough pitch of AAQI. Is it A-A-Q-I or A-A-I-I? I-I. Okay. Good memory, though.Micah [00:27:11]: I don't know. I'm not used to it. Once upon a time, we did call it Quality Index, and we would talk about quality, performance, and price, but we changed it to intelligence.George [00:27:20]: There's been a few naming changes. We added hardware benchmarking to the site, and so benchmarks at a kind of system level. And so then we changed our throughput metric to, we now call it output speed, and thenswyx [00:27:32]: throughput makes sense at a system level, so we took that name. Take me through more charts. What should people know? Obviously, the way you look at the site is probably different than how a beginner might look at it.Micah [00:27:42]: Yeah, that's fair. There's a lot of fun stuff to dive into. Maybe so we can hit past all the, like, we have lots and lots of emails and stuff. The interesting ones to talk about today that would be great to bring up are a few of our recent things, I think, that probably not many people will be familiar with yet. So first one of those is our omniscience index. So this one is a little bit different to most of the intelligence evils that we've run. We built it specifically to look at the embedded knowledge in the models and to test hallucination by looking at when the model doesn't know the answer, so not able to get it correct, what's its probability of saying, I don't know, or giving an incorrect answer. So the metric that we use for omniscience goes from negative 100 to positive 100. Because we're simply taking off a point if you give an incorrect answer to the question. We're pretty convinced that this is an example of where it makes most sense to do that, because it's strictly more helpful to say, I don't know, instead of giving a wrong answer to factual knowledge question. And one of our goals is to shift the incentive that evils create for models and the labs creating them to get higher scores. And almost every evil across all of AI up until this point, it's been graded by simple percentage correct as the main metric, the main thing that gets hyped. And so you should take a shot at everything. There's no incentive to say, I don't know. So we did that for this one here.swyx [00:29:22]: I think there's a general field of calibration as well, like the confidence in your answer versus the rightness of the answer. Yeah, we completely agree. Yeah. Yeah.George [00:29:31]: On that. And one reason that we didn't do that is because. Or put that into this index is that we think that the, the way to do that is not to ask the models how confident they are.swyx [00:29:43]: I don't know. Maybe it might be though. 
You put it in, like, a JSON field, say "confidence", and maybe it spits out something. Yeah. You know, we have done a few evals podcasts over the years, and when we did one with Clementine of Hugging Face, who maintains the open source leaderboard, this was one of her top requests: some kind of hallucination slash lack-of-confidence calibration thing. And so, hey, this is one of them.

Micah [00:30:05]: And, I mean, like anything that we do, it's not a perfect metric, or the whole story of everything that you think about as hallucination. But yeah, it's pretty useful and has some interesting results. Like, one of the things that we saw in the hallucination rate is that Anthropic's Claude models are at the very left-hand side here, with the lowest hallucination rates out of the models that we've evaluated Omniscience on. That is an interesting fact. I think it probably correlates with a lot of the previously not-really-measured vibes stuff that people like about some of the Claude models. Is the dataset public, or is there a held-out set? There's a held-out set for this one. So we have published a public test set, but we've only published 10% of it. The reason is that for this one specifically, it would be very, very easy to have data contamination, because it is just factual knowledge questions. We'll update it over time to also prevent that, but yeah, we've kept most of it held out so that we can keep it reliable for a long time. It leads us to a bunch of really cool things, including breaking it down quite granularly by topic. And so we've got some of that disclosed on the website publicly right now, and there's lots more coming in terms of our ability to break out very specific topics. Yeah.

swyx [00:31:23]: I would be interested. Let's dwell a little bit on this hallucination one. I noticed that Haiku hallucinates less than Sonnet, which hallucinates less than Opus. And yeah, would that be the other way around in a normal capability environment? I don't know. What do you make of that?

George [00:31:37]: One interesting aspect is that we've found that there's not really a strong correlation between intelligence and hallucination, right? That's to say that how smart the models are in a general sense isn't correlated with their ability, when they don't know something, to say that they don't know. It's interesting that Gemini 3 Pro Preview was a big leap here over Gemini 2.5 Flash and 2.5 Pro... and let me add Pro quickly here.

swyx [00:32:07]: I bet Pro's really good. Uh, actually no, I meant the GPT Pros.

George [00:32:12]: Oh yeah.

swyx [00:32:13]: Because the GPT Pros are rumored... we don't know for a fact... that it's, like, eight runs and then with an LLM judge on top. Yeah.
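As an aside for readers, the negative-100-to-positive-100 scoring just described lends itself to a very small sketch. Below is a hedged, illustrative version: the function name, the exact point values, and treating "I don't know" as worth zero are assumptions based on the conversation, not the published Omniscience Index formula.

```python
# Sketch of an omniscience-style score in [-100, +100]: correct answers add a
# point, incorrect answers subtract one, and abstentions ("I don't know") are
# neutral. Weights are assumed for illustration.
def omniscience_score(results: list[str]) -> float:
    """results entries: 'correct', 'incorrect', or 'abstain'."""
    points = {"correct": 1, "incorrect": -1, "abstain": 0}
    return 100 * sum(points[r] for r in results) / len(results)

# A model that abstains when unsure beats one that guesses wrongly:
print(omniscience_score(["correct", "abstain", "abstain", "incorrect"]))    # 0.0
print(omniscience_score(["correct", "incorrect", "incorrect", "incorrect"]))  # -50.0
```

The point of the asymmetry: under plain percent-correct grading both models above would score 25%, so guessing is free; penalizing wrong answers makes abstention the rational strategy when the model doesn't know.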
Uh, I know many smart people who are confidently incorrect.George [00:33:02]: Uh, look, look at that. That, that, that is very humans. Very true. And there's times and a place for that. I think our view is that hallucination rate makes sense in this context where it's around knowledge, but in many cases, people want the models to hallucinate, to have a go. Often that's the case in coding or when you're trying to generate newer ideas. One eval that we added to artificial analysis is, is, is critical point and it's really hard, uh, physics problems. Okay.swyx [00:33:32]: And is it sort of like a human eval type or something different or like a frontier math type?George [00:33:37]: It's not dissimilar to frontier frontier math. So these are kind of research questions that kind of academics in the physics physics world would be able to answer, but models really struggled to answer. So the top score here is not 9%.swyx [00:33:51]: And when the people that, that created this like Minway and, and, and actually off via who was kind of behind sweep and what organization is this? Oh, is this, it's Princeton.George [00:34:01]: Kind of range of academics from, from, uh, different academic institutions, really smart people. They talked about how they turn the models up in terms of the temperature as high temperature as they can, where they're trying to explore kind of new ideas in physics as a, as a thought partner, just because they, they want the models to hallucinate. Um, yeah, sometimes it's something new. Yeah, exactly.swyx [00:34:21]: Um, so not right in every situation, but, um, I think it makes sense, you know, to test hallucination in scenarios where it makes sense. Also, the obvious question is, uh, this is one of. Many that there is there, every lab has a system card that shows some kind of hallucination number, and you've chosen to not, uh, endorse that and you've made your own. And I think that's a, that's a choice. Um, totally in some sense, the rest of artificial analysis is public benchmarks that other people can independently rerun. You provide it as a service here. You have to fight the, well, who are we to, to like do this? And your, your answer is that we have a lot of customers and, you know, but like, I guess, how do you converge the individual?Micah [00:35:08]: I mean, I think, I think for hallucinations specifically, there are a bunch of different things that you might care about reasonably, and that you'd measure quite differently, like we've called this a amnesty and solutionation rate, not trying to declare the, like, it's humanity's last hallucination. You could, uh, you could have some interesting naming conventions and all this stuff. Um, the biggest picture answer to that. It's something that I actually wanted to mention. Just as George was explaining, critical point as well is, so as we go forward, we are building evals internally. We're partnering with academia and partnering with AI companies to build great evals. We have pretty strong views on, in various ways for different parts of the AI stack, where there are things that are not being measured well, or things that developers care about that should be measured more and better. And we intend to be doing that. We're not obsessed necessarily with that. Everything we do, we have to do entirely within our own team. Critical point. As a cool example of where we were a launch partner for it, working with academia, we've got some partnerships coming up with a couple of leading companies. 
Those ones, obviously, we have to be careful with on some of the independence stuff, but with the right disclosure, we're completely comfortable with that. A lot of the labs have released great data sets in the past that we've used to great success independently. And so between all of those techniques, we're going to be releasing more stuff in the future. Cool.

swyx [00:36:26]: Let's cover the last couple. And then I want to talk about your trends analysis stuff, you know? Totally.

Micah [00:36:31]: So, actually, I have one little factoid on Omniscience. If you go back up to accuracy on Omniscience: an interesting thing about this accuracy metric is that it tracks, more closely than anything else that we measure, the total parameter count of models. It makes a lot of sense intuitively, right? Because this is a knowledge eval. This is the pure knowledge metric; we're not looking at the index and the hallucination rate stuff, which we think is much more about how the models are trained. This is just: what facts did they recall? And yeah, it tracks parameter count extremely closely. Okay.

swyx [00:37:05]: What's the rumored size of Gemini 3 Pro? And to be clear, not confirmed by any official source, just rumors. But rumors do fly around. Rumors. I hear all sorts of numbers. I don't know what to trust.

Micah [00:37:17]: So if you draw the line on Omniscience accuracy versus total parameters, with all the open weights models, you can squint and see that likely the leading frontier models right now are quite a lot bigger than the ones that we're seeing here, and than the one trillion parameters that the open weights models cap out at. There's an interesting extra data point that Elon Musk revealed recently about xAI: three trillion parameters for Grok 3 and 4, six trillion for Grok 5, but that's not out yet. Take those together, have a look, and you might reasonably form a view that there's a pretty good chance that Gemini 3 Pro is bigger than that, that it could be in the 5 to 10 trillion parameter range. To be clear, I have absolutely no idea, but just based on this chart, that's where you would land if you have a look at it. Yeah.

swyx [00:38:07]: And to some extent, I actually kind of discourage people from guessing too much, because what does it really matter? As long as they can serve it at a sustainable cost, that's about it. Like, yeah, totally.

George [00:38:17]: They've also got different incentives in play compared to, like, open weights models, who are thinking about supporting others in self-deployment. For the labs who are doing inference at scale, it's, I think, less about total parameters in many cases when thinking about inference costs, and more about the number of active parameters. And so there's a bit of an incentive towards larger, sparser models. Agreed.

Micah [00:38:38]: Understood. Yeah. Great. I mean, obviously, if you're a developer or a company using these things, it's exactly as you say: it doesn't matter. You should be looking at all the different ways that we measure intelligence. You should be looking at the cost to run the index, and the different ways of thinking about token efficiency and cost efficiency based on the list prices, because that's all that matters.

swyx [00:38:56]: It's not as good for the content creator rumor mill, where I can say...
Yeah.Micah [00:39:07]: But that is like on its own, actually a very interesting one, right? That is it just purely that chances are the last couple of years haven't seen a dramatic scaling up in the total size of these models. And so there's a lot of room to go up properly in total size of the models, especially with the upcoming hardware generations. Yes.swyx [00:39:29]: So, you know. Taking off my shitposting face for a minute. Yes. Yes. At the same time, I do feel like, you know, especially coming back from Europe, people do feel like Ilya is probably right that the paradigm is doesn't have many more orders of magnitude to scale out more. And therefore we need to start exploring at least a different path. GDPVal, I think it's like only like a month or so old. I was also very positive when it first came out. I actually talked to Tejo, who was the lead researcher on that. Oh, cool. And you have your own version.George [00:39:59]: It's a fantastic. It's a fantastic data set. Yeah.swyx [00:40:01]: And maybe it will recap for people who are still out of it. It's like 44 tasks based on some kind of GDP cutoff that's like meant to represent broad white collar work that is not just coding. Yeah.Micah [00:40:12]: Each of the tasks have a whole bunch of detailed instructions, some input files for a lot of them. It's within the 44 is divided into like two hundred and twenty two to five, maybe subtasks that are the level of that we run through the agenda. And yeah, they're really interesting. I will say that it doesn't. It doesn't necessarily capture like all the stuff that people do at work. No avail is perfect is always going to be more things to look at, largely because in order to make the tasks well enough to find that you can run them, they need to only have a handful of input files and very specific instructions for that task. And so I think the easiest way to think about them are that they're like quite hard take home exam tasks that you might do in an interview process.swyx [00:40:56]: Yeah, for listeners, it is not no longer like a long prompt. It is like, well, here's a zip file with like a spreadsheet or a PowerPoint deck or a PDF and go nuts and answer this question.George [00:41:06]: OpenAI released a great data set and they released a good paper which looks at performance across the different web chat bots on the data set. It's a great paper, encourage people to read it. What we've done is taken that data set and turned it into an eval that can be run on any model. So we created a reference agentic harness that can run. Run the models on the data set, and then we developed evaluator approach to compare outputs. That's kind of AI enabled, so it uses Gemini 3 Pro Preview to compare results, which we tested pretty comprehensively to ensure that it's aligned to human preferences. One data point there is that even as an evaluator, Gemini 3 Pro, interestingly, doesn't do actually that well. So that's kind of a good example of what we've done in GDPVal AA.swyx [00:42:01]: Yeah, the thing that you have to watch out for with LLM judge is self-preference that models usually prefer their own output, and in this case, it was not. 
Totally.

Micah [00:42:08]: I think the way that we're thinking about the places where it makes sense to use an LLM-as-judge approach now is quite different to some of the early LLM-as-judge stuff a couple of years ago, because some of that (and MT-Bench was a great project that was a good example of this a while ago) was about judging conversations and a lot of style-type stuff. Here, the task that the grading model is doing is quite different to the task of taking the test. When you're taking the test, you've got all of the agentic tools you're working with, the code interpreter and web search, the file system, to go through many, many turns to try to create the documents. Then on the other side, when we're grading it, we're running it through a pipeline to extract visual and text versions of the files and be able to provide that to Gemini, and we're providing the criteria for the task and getting it to pick which one more effectively meets the criteria of the task. So we've got it picking between two potential outputs, and it turns out that it's just very, very good at getting that right; it matched human preference a lot of the time. I think that's because it's got the raw intelligence, but it's combined with the correct representation of the outputs, the fact that the outputs were created with an agentic task that is quite different to the way the grading model works, and we're comparing against criteria, not just zero-shot asking the model to pick which one is better.

swyx [00:43:26]: Got it. Why is this an ELO, and not a percentage, like GDPVal?

George [00:43:31]: So the outputs look like documents, and there are video outputs or audio outputs from some of the tasks. It has to make a video? Yeah, for some of the tasks. Some of the tasks.

swyx [00:43:43]: What task is that?

George [00:43:45]: I mean, it's in the data set. Like be a YouTuber? It's a marketing video.

Micah [00:43:49]: Oh, wow. What? Like the model has to go find clips on the internet and try to put them together. The models are not that good at doing that one, for now, to be clear. It's pretty hard to do that with a code interpreter. I mean, the computer use stuff doesn't work quite well enough, and so on.

George [00:44:02]: And so there's no kind of ground truth, necessarily, to compare against to work out percentage correct. It's hard to come up with correct or incorrect there. And so it's on a relative basis, and we use an ELO approach to compare outputs from each of the models across the tasks.

swyx [00:44:23]: You know what you should do? You should pay a contractor, a human, to do the same task, and then give it an ELO, so you have a human in there. I think what's helpful about GDPVal, the OpenAI one, is that 50% is meant to be a normal human, and maybe a domain expert is higher than that, but 50% was the bar: if you've crossed 50, you are superhuman. Yeah.

Micah [00:44:47]: So we haven't grounded this score in that exactly. I agree that it can be helpful, but we wanted to generalize this to a very large number of models. It's one of the reasons that presenting it as ELO is quite helpful: it allows us to add models, and it'll stay relevant for quite a long time. I also think it can be tricky comparing these exact tasks against human performance, because the way that you would go about it as a human is quite different to how the models would go about it.
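Since there is no ground-truth percentage, ratings have to come from pairwise wins. Below is a sketch of the standard Elo update you could run over grader verdicts; the K-factor and starting rating are arbitrary illustration choices, and this is not necessarily their exact aggregation method.

# Sketch: turning pairwise grader verdicts into Elo-style ratings.
from collections import defaultdict

def expected(r_a: float, r_b: float) -> float:
    # Probability that A beats B under the Elo model.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_ratings(matches, k: float = 16.0, start: float = 1000.0):
    """matches: iterable of (winner, loser) model-name pairs from the grader."""
    ratings = defaultdict(lambda: start)
    for winner, loser in matches:
        e_w = expected(ratings[winner], ratings[loser])
        ratings[winner] += k * (1 - e_w)   # winner gains what it "wasn't expected" to win
        ratings[loser] -= k * (1 - e_w)    # loser gives up the same amount
    return dict(ratings)

print(elo_ratings([("model-a", "model-b"), ("model-a", "model-c"), ("model-c", "model-b")]))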
Yeah.

swyx [00:45:15]: I also liked that you included Llama 4 Maverick in there. Is that like just one last, like...

Micah [00:45:20]: Well, no, no, no, it is the best model released by Meta. And so it makes it into the homepage default set, still, for now.

George [00:45:31]: Another inclusion that's quite interesting is we also ran it across the latest versions of the web chatbots. And so we have...

swyx [00:45:39]: Oh, that's right.

George [00:45:40]: Oh, sorry.

swyx [00:45:41]: I, yeah, I completely missed that. Okay.

George [00:45:43]: No, not at all. So that's the one which has a checkered pattern. So that is their harness, not yours, is what you're saying. Exactly. And what's really interesting is that if you compare, for instance, Claude Opus 4.5 using the Claude web chatbot, it performs worse than the model in our agentic harness. And so in every case, the model performs better in our agentic harness than its web chatbot counterpart, the harness that they created.

swyx [00:46:13]: Oh, my backwards explanation for that would be that, well, it's meant for consumer use cases, and here you're pushing it for something else.

Micah [00:46:19]: The constraints are different, and the amount of freedom that you can give the model is different. Also, you have a cost goal. We let the models work as long as they want, basically. Yeah. Do you copy-paste manually into the chatbot? Yeah. That was how we got the chatbot reference. We're not going to be keeping those updated at quite the same scale as hundreds of models.

swyx [00:46:38]: Well, I don't know, talk to Browserbase. They'll automate it for you. You know, I have thought about, like, well, we should turn these chatbot versions into an API, because they are legitimately different agents in themselves. Yes. Right. Yeah.

Micah [00:46:53]: And that's grown a huge amount over the last year, right? The tools that are available have actually diverged, in my opinion, a fair bit across the major chatbot apps, and the number of data sources that you can connect them to has gone up a lot, meaning that your experience and the way you're using the model is more different than ever.

swyx [00:47:10]: What tools and what data connections come to mind? What's interesting, what's notable work that people have done?

Micah [00:47:15]: Oh, okay. So my favorite example on this is that until very recently, I would argue that it was basically impossible to get an LLM to draft an email for me in any useful way. Because most times that you're sending an email, you're not just writing something for the sake of writing it. Chances are the context required is a whole bunch of historical emails. Maybe it's notes that you've made, maybe it's meeting notes, maybe it's pulling something from wherever you store stuff at work. So for me, that's Google Drive, OneDrive, or our Supabase databases if we need to do some analysis on some data or something. Preferably the model can be plugged into all of those things and can go do some useful work based on it. The thing that I find most impressive currently, that I am somewhat surprised works really well in late 2025, is that I can have models use the Supabase MCP to query (read-only, of course), run a whole bunch of SQL queries to do pretty significant data analysis, make charts and stuff, and read my Gmail and my Notion. Okay, you actually use that. That's good. Is that a Claude thing?
To varying degrees, both ChatGPT and Claude. Right now, I would say that this stuff, in fairness, like barely works.

George [00:48:33]: Because people are actually going to try this after they hear it. If you get an email from Micah, odds are it wasn't written by a chatbot.

Micah [00:48:38]: So, yeah, I think it is true that I have never actually sent anyone an email drafted by a chatbot. Yet.

swyx [00:48:46]: And so you can feel it, right? And yeah, this time next year, we'll come back and see where it's going. Totally. Supabase shoutout, another famous Kiwi. I don't know if you've had any conversations with him about anything in particular on AI building and AI infra.

George [00:49:03]: We have had Twitter DMs with him, because we're quite big Supabase users and power users. And we probably do some things more manually than we should in Supabase, and their support line has been super friendly. One extra point regarding GDPVal AA is that, on the basis of the overperformance of the models compared to the chatbots, we realized that, oh, the reference harness that we built actually works quite well on generalist agentic tasks. This proves it, in a sense. And so the agent harness is very minimalist. I think it follows some of the ideas that are in Claude Code, and all that we give it is context management capabilities, a web search tool, a web browsing tool, and a code execution environment. Anything else?

Micah [00:50:02]: I mean, we can equip it with more tools, but by default, yeah, that's it. For GDPVal we give it a tool to view an image specifically, because the models can just use a terminal to pull stuff in text form into context, but to pull visual stuff into context we had to give them a custom tool. But yeah, exactly.

George [00:50:21]: So it turned out that we created a good generalist agentic harness, and we released that on GitHub yesterday. It's called Stirrup. So if people want to check it out, it's a great base for building a generalist agent for more specific tasks.

Micah [00:50:39]: I'd say the best way to use it is git clone, and then have your favorite coding agent make changes to it to do whatever you want, because it's not that many lines of code and the coding agents can work with it super well.

swyx [00:50:51]: Well, that's nice for the community to explore and share and hack on. I think in other similar environments, the Terminal-Bench guys have done sort of the same with Harbor. And so it's a bundle of: we need our minimal harness, which for them is Terminus, and we also need the RL environments or Docker deployment thing to run independently. So I don't know if you've looked at Harbor at all. Is that like a standard that people want to adopt?

George [00:51:19]: Yeah, we've looked at it from an evals perspective, and we love Terminal-Bench and host benchmarks of Terminal-Bench on Artificial Analysis. We've looked at it from a coding agent perspective, but could see it being a great basis for any kind of agent.
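As a rough illustration of what a harness that minimal looks like: a bare loop where the model either returns a final answer or requests one of a handful of tools, and the model itself controls the flow. This is not the actual Stirrup code, just the shape of the idea; call_llm and the tool functions are assumed stand-ins.

# Sketch: a minimalist agent loop (context + a few tools, model-driven flow).
import json

def run_agent(call_llm, task: str, tools: dict, max_turns: int = 50) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        # The model replies either with plain text (a final answer) or with a
        # JSON tool call like {"tool": "web_search", "args": {"query": "..."}}.
        reply = call_llm(history)
        try:
            call = json.loads(reply)
        except ValueError:
            return reply  # plain text: the model decided it is done
        history.append({"role": "assistant", "content": reply})
        result = tools[call["tool"]](**call["args"])
        history.append({"role": "user", "content": f"tool result: {result}"})
    return "max turns exceeded"

# Example wiring (the tool functions are hypothetical):
# tools = {"web_search": search_fn, "browse": browse_fn, "run_code": exec_fn}
# print(run_agent(my_llm, "Summarize today's AI news", tools))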
George: I think where we're getting to is that these models have gotten smart enough, and they've gotten better tools, so they can perform better when just given a minimalist set of tools and let run: let the model control the agentic workflow, rather than using another framework that's a bit more built out and tries to dictate the flow. Awesome.

swyx [00:51:56]: Let's cover the openness index, and then let's go into the report stuff. So that's the last of the proprietary numbers, I guess. I don't know how you sort of classify all these. Yeah.

Micah [00:52:07]: Let's call it the last of the three new things that we're talking about from the last few weeks. Because we do a mix of stuff: stuff where we're using open source, stuff where we open source what we do, and proprietary stuff that we don't always open source. The long context reasoning data set last year, we did open source. And then of all the work on performance benchmarks across the site, some of them we're looking to open source, but some of them we're constantly iterating on. So there's a huge mix, I would say, of stuff that is open source and not, across the site. So that's LCR, for people. Yeah.

swyx [00:52:41]: But let's talk about openness.

Micah [00:52:42]: Let's talk about the openness index. This here is, call it, a new way to think about how open models are. We have, for a long time, tracked whether the models are open weights and what the licenses on them are. And that's pretty useful; that tells you what you're allowed to do with the weights of a model. But there is this whole other dimension to how open models are that is pretty important and that we haven't tracked until now, and that's how much is disclosed about how a model was made. So transparency about data, pre-training data and post-training data, and whether you're allowed to use that data, and transparency about methodology and training code. So basically, those are the components. We bring them together to score an openness index for models, so that you can in one place get this full picture of how open models are.

swyx [00:53:32]: I feel like I've seen a couple of other people try to do this, but they're not maintained. I do think this does matter. I don't know what the numbers mean, apart from: is there a max number? Is this out of 20?

George [00:53:44]: It's out of 18 currently, and so we've got an openness index page, but essentially these are points: you get points for being more open across these different categories, and the maximum you can achieve is 18. So AI2, with their extremely open OLMo 3 32B Think model, is the leader, in a sense.

swyx [00:54:04]: And Hugging Face?

George [00:54:05]: Oh, with their smaller model. It's coming soon. I think we need to run the intelligence benchmarks to get it on the site.

swyx [00:54:12]: You can't have an openness index and not include Hugging Face. We love Hugging Face. We'll have that up very soon. I mean, you know, the RefinedWeb and all that stuff. It's amazing. Or is it called FineWeb? FineWeb. FineWeb.

Micah [00:54:23]: Yeah, totally. One of the reasons this is cool, right, is that if you're trying to understand the holistic picture of the models and what you can do with all the stuff the company's contributing, this gives you that picture. And so we are going to keep it up to date alongside all the models that we do the intelligence index on, on the site.
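A toy sketch of that points-based idea follows. The component names and weights below are invented for illustration (the real rubric lives on the Artificial Analysis openness index page); only the out-of-18 total comes from the conversation.

# Sketch: a points-based openness score summing to a maximum of 18.
OPENNESS_RUBRIC = {
    "weights_released": 3,            # hypothetical components and weights
    "permissive_license": 3,
    "pretraining_data_disclosed": 3,
    "posttraining_data_disclosed": 3,
    "data_usable": 2,
    "methodology_disclosed": 2,
    "training_code_released": 2,
}  # sums to 18

def openness_index(model: dict) -> int:
    """model maps rubric keys to booleans, e.g. {"weights_released": True, ...}"""
    return sum(points for key, points in OPENNESS_RUBRIC.items() if model.get(key))

olmo_like = {key: True for key in OPENNESS_RUBRIC}  # a fully open model
print(openness_index(olmo_like))  # 18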
And it's just an extra view to understand.

swyx [00:54:43]: Can you scroll down to this? The trade-offs chart. Yeah, that one. This really matters, right? Obviously, because you can b

Scrum Master Toolbox Podcast
When Remote Teams Stop Listening—The Silent Killer of Agile Collaboration | Carmela Then

Scrum Master Toolbox Podcast

Play Episode Listen Later Jan 6, 2026 18:01


Carmela Then: When Remote Teams Stop Listening—The Silent Killer of Agile Collaboration Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.   "Two minutes into it, my mind's starting to wander and I started to do my own thing." - Carmela Then   Carmela paints a vivid picture of a distributed team stretched across Sydney, New Zealand, India, and beyond—a team where communication had quietly become the enemy of progress. The warning signs were subtle at first: in meetings with 20 people on the call, only two or three would speak for the entire hour or two, with no visual aids, no PowerPoints, no drawings. The result? Within minutes, attention drifted, and everyone assumed someone else understood the message.  The speakers believed their ideas had landed; the listeners had already tuned out. This miscommunication compounded sprint after sprint until, just two months before go-live, the team was still discussing proof of concept. Trust eroded completely, and the Product Owner resorted to micromanagement—tracking developers by the hour, turning what was supposed to be an Agile team into a waterfall nightmare. Carmela points to a critical missing element: the Scrum Master had been assigned delivery management duties, leaving no one to address the communication dysfunction.  The lesson is clear—in remote, cross-cultural teams, you cannot simply talk your way through complex ideas; you need visual anchors, shared artifacts, and constant verification that understanding has truly been achieved.   In this segment, we talk about the importance of visual communication in remote teams and psychological safety.   Self-reflection Question: How do you verify that your message has truly landed with every team member, especially when working across time zones and cultures? Featured Book of the Week: How to Win Friends and Influence People by Dale Carnegie Carmela recommends How to Win Friends and Influence People by Dale Carnegie, a timeless classic that remains essential reading for every Scrum Master. As Carmela explains, "We work with people—customers are people, and our team, they are human beings as well. Whether we want it or not, we are leaders, we are coaches, and sometimes we could even be mentors." Written during the Great Depression and predating software entirely, this book emphasizes that relationships and understanding people are the foundation of personal and professional success. Carmela was first introduced to the book by a successful person outside of work who advised her not just to read it once, but to revisit it every year. For Scrum Masters navigating team dynamics, stakeholder relationships, and the human side of Agile, Carnegie's principles remain as relevant today as they were nearly a century ago.   [The Scrum Master Toolbox Podcast Recommends]

PowerPoints: A Bible Study Guide for Juniors
Q1 Lesson 02 - Guarding the Gates

PowerPoints: A Bible Study Guide for Juniors

Play Episode Listen Later Jan 3, 2026 5:25


Imagine what John the Baptist might have said to share the reason for his simple lifestyle with someone your age.

PowerPoints: A Bible Study Guide for Juniors

Have you ever been confused over what someone was trying to say? It sounded important. It sounded right, but you just needed to think about it for a while. Imagine a young man asking John the Baptist about his message.

Der KI-Podcast
PowerPoints, Christmas Letters, Feelings: What Can My AI Do in 2026?

Der KI-Podcast

Play Episode Listen Later Dec 23, 2025 52:04


What happens to an AI when nobody chats with it? Can my chatbot finally make proper PowerPoints? And why is my AI suddenly writing in Arabic? Gregor, Fritz, and Marie unpack your questions from the year 2025.

The Strategy Skills Podcast: Management Consulting | Strategy, Operations & Implementation | Critical Thinking
608: Harvard Professor and former CEO of Medtronic, Bill George, on How Leaders Should Manage Challenging Times

The Strategy Skills Podcast: Management Consulting | Strategy, Operations & Implementation | Critical Thinking

Play Episode Listen Later Dec 8, 2025 55:40


Bill George, former CEO of Medtronic and Harvard Business School Executive Fellow, explains how leaders can stay grounded, principled, and effective in chaotic times. "It's a world of chaos and it requires a very different kind of leader than in more stable times." The skills that once mattered (process control, long-term plans) are now secondary to courage, self-awareness, and moral clarity. George says most executives still lead comfortably "inside the walls" but fear the external world (media, public scrutiny, and rapid change). "Today, if you're a leader, you are a public figure. You have to face that reality." Leadership now starts with knowing your True North, your values and principles. "When everything's going your way, you start to think you're better than you are. When you lose, you learn your weaknesses." He warns: "The people who will struggle are those faking it to make it. They're trying to impress the outside world but aren't grounded inside." Purpose, not position, defines identity. "A CEO once said, 'Without a title, I'm nothing.' You won't hold that title forever. Who are you then?" True fulfillment comes from alignment between personal purpose and work. "Every business has a deep sense of purpose if it's well run. The ones that only make money, like GE, go away." He lists five traits of leaders who thrive in crisis: Face reality. Stay true to values. Adapt strategies fast. Engage your team. Go on offense when others retreat. Each requires courage. "You can't teach courage in a classroom. It has to come from within." He urges humility: "Leadership is all about relationships, it's a two-way street." His turning point came when he stopped "building a résumé" and started building people. He defines authentic leadership as growth through feedback: "I never walk into a classroom unless I'm going to learn from everyone there." And he closes with the core message: "You don't have to be CEO. If you can do great work and help others, you'll feel fulfilled. Leaders make the difference between success and failure."   Key Insights (Verbatim Quotes) 1. Chaos demands a new kind of leader. "It's a world of chaos and it requires a very different kind of leader than in more stable times." 2. Authenticity starts with grounding. "Our true north is our principles, our beliefs, and our values all rolled into one." 3. Titles are temporary. "I am not the CEO of Best Buy. …That's the title I hold. I won't hold that forever." 4. Courage separates real leaders. "You can't teach courage in a classroom. It has to come from within." 5. Purpose drives resilience. "Every business has a deep sense of purpose if it's well run." 6. Leadership is relational. "I was building a résumé, not relationships. Leadership is all about relationships." 7. Fear destroys authenticity. "A lot of people are living in fear. That's no way to live your life." 8. Great leaders empower others. "You want everyone on your team to be better than you are at what they do." 9. Growth never ends. "Anyone who's authentic knows they have to continue to grow as a human being." 10. True success is internal. "You'll never have enough power, fame, or money. You find fulfillment within."   Action Items "Face reality, starting with yourself." Look in the mirror and ask, "Maybe I'm creating this negative culture. What did I do wrong?" "Stay true to your purpose and your values." Never abandon principles when pressure rises. "Adapt your strategies and tactics." What worked yesterday may not work today. "Get your team involved." 
Say, "Hey guys, we've got a real problem. What ideas do you have to keep it going?" "Go on offense when everyone else is pulling back." Make bold moves when others retreat. "Have the courage to look yourself in the mirror." Courage starts with self-reflection. Ask, "What's the worst case? What do I have to lose?" and move forward without fear. "If one door closes, maybe another one's going to open that I never even saw." "Know who you are." Reflect on your life story, relationships, and crucibles that shaped you. "Don't get caught up in titles or money." Remember, "Without a title I'm nothing" leads nowhere. "Find a congruence between your purpose and the organization's purpose." "Every business has a deep sense of purpose if it's well run." Identify how yours helps people. "Get away from toxic leaders." If they drive you down, take credit for your work, or never support you, move on. "Work for people you feel really good about working with." "Learn all aspects of the business and how to integrate them creatively." "Pull together a cross-disciplinary team" and act as the integrator. "Have everyone on your team be better than you are at what they do." "Be the glue." Integrate experts to solve tough problems. "Care about your people first." They must know you care before they'll perform. "Get everyone into their sweet spot" — where they use all their skills and are highly motivated. "Align everyone around purpose and goals." "Challenge people to reach their full potential." Say, "I know you can do better. Let's take your game to the next level." "Get out there and be with the people." Don't hide behind PowerPoints. "Help your people do better." Work beside them. "Believe in someone who doesn't believe in themselves." Tell them, "You have this potential. Go for it." "Find someone who believes in you." A mentor, boss, or spouse who sees your potential. "As a leader, be that person who believes in others." "Face your blind spots." Ask people who care about you for honest feedback. "If you get feedback from people that care about you, take it in." "Stop building a résumé and start building relationships." "Take time for people. Ask, 'How are you doing today? What challenges are you facing?'" "Leadership is all about relationships — it's a two-way street." "Tell the truth — the good, the bad, and the ugly." "Stay away from blame." Take responsibility instead of pointing fingers. "Be transparent." Don't hide problems; fix them. "Never fake it to make it." "Keep growing as a human being." "Take feedback and adapt." Growth requires awareness of impact on others. "Believe in yourself even if you fail." Failure is learning. "Spend time reflecting on your purpose and the person you are becoming." "Help other people reach their full potential." "Measure success by how many people you help every day." "Remember: leadership is about who you are, not what title you hold." 
Get Bill's book, True North, here: https://shorturl.at/bRXsK Claim your free gift: Free gift #1 McKinsey & BCG winning resume www.FIRMSconsulting.com/resumePDF Free gift #2 Breakthrough Decisions Guide with 25 AI Prompts www.FIRMSconsulting.com/decisions Free gift #3 Five Reasons Why People Ignore Somebody www.FIRMSconsulting.com/owntheroom Free gift #4 Access episode 1 from Build a Consulting Firm, Level 1 www.FIRMSconsulting.com/build Free gift #5 The Overall Approach used in well-managed strategy studies www.FIRMSconsulting.com/OverallApproach Free gift #6 Get a copy of Nine Leaders in Action, a book we co-authored with some of our clients: www.FIRMSconsulting.com/gift

Ones Ready
Ep 538: The Official UnOfficial US Air Force Podcast

Ones Ready

Play Episode Listen Later Dec 5, 2025 67:02


Peaches and Trent roll into another beautifully unprepared episode packed with humor, straight talk, and real military insight. From the Okinawa body-slam everyone argues about to actually useful Air Force leadership lessons, fan-mail adventures, pipeline expectations, and what young candidates should really learn before joining Special Warfare, the guys keep it light, honest, and genuinely helpful. If you want a mix of Air Force culture, Special Warfare mindset, leadership truth bombs, and a laugh or two, this one delivers without the negativity spiral.
⏱️ TIMESTAMPS
00:00 Zero prep, full personality
02:00 OTS updates and gear that actually works
04:20 Fan mail roulette: from wholesome to wild
09:00 Waivers, pipelines, and realistic expectations
13:00 Life skills every future operator should master
16:30 Why commanders get roasted (and the reality behind it)
22:00 Chiefs, officers, and the leadership lessons nobody teaches
26:00 Okinawa body-slam drama — what matters and what doesn't
33:00 SOFA agreements and overseas military life
39:00 LSCO talk without panic or PowerPoints
44:00 NCO Corps: how to lead without being needy
53:00 GWOT nostalgia and lessons for the next generation
58:00 Commander's intent vs permission culture
01:04:00 LEDs, merch, and Peaches campaigning for a fresh SR shirt

Get a 6-Figure Job You Love
EP 260: 1 Shift To Win In This Market

Get a 6-Figure Job You Love

Play Episode Listen Later Dec 5, 2025 28:22


Are you secretly applying for jobs you're overqualified for because you think it's your only option? In this raw, unfiltered episode recorded at 5am, I'm saying the thing that might trigger some people, but needs to be said. I'm calling out the posts on LinkedIn where people are defending why they should be considered for lower-level roles, and why fighting for your limitations is keeping you stuck. Your brain wants certainty so badly that it's convinced you taking scraps is the "realistic" choice. But here's what I know after coaching hundreds of people: when you can't even get the job you're overqualified for, it's not because the market is impossible, it's because you're trying to solve the wrong problem. I share the story of a client who thought he "just put together some PowerPoints" (spoiler: he didn't), why employers' concerns about overqualified candidates are actually valid, and the harsh truth about what skills you're missing that worked fine 20 years ago but don't cut it now. If you're tired of shrinking yourself and ready to learn what actually works, this episode won't coddle you, but it will show you the way forward. Watch the free training, How to Finally Value Yourself and Get Paid What You Deserve in 2026: https://www.asknataliefisher.com/workshop-2026 Get full show notes and more information here: https://www.asknataliefisher.com/episode-260

PowerPoints: A Bible Study Guide for Juniors
Q4 Lesson 10 - Risking Everything

PowerPoints: A Bible Study Guide for Juniors

Play Episode Listen Later Nov 29, 2025 3:57


A piano recital, public speaking, jumping off the high dive at a pool. Do these things make you think of sweaty palms and butterfly stomachs? We all have to take risks sometimes. But Jesus took the greatest risk of all.

Chaotic Compass
️ EPISODE 109 — 40 Things I've Learned by 40 (Part 2): The Clarity Half

Chaotic Compass

Play Episode Listen Later Nov 28, 2025


The A&P Professor
Steve Sullivan on Teaching A&P Bit by Bit: Podcasts, Digital Learning, & Keeping It Human | TAPP 156

The A&P Professor

Play Episode Listen Later Nov 26, 2025 64:40


Steve Sullivan joins me for a lively conversation about podcasting, tutor videos, and digital A&P teaching. We explore how he humanizes online learning, why students crave multiple approaches, and what he's learned after 23 years of teaching. From LMS-independent course design to global podcast reach, Steve shares practical strategies and inspiring stories that can help any A&P instructor evolve their teaching. 0:00:00 | Introduction 0:00:49 | This Episode 0:02:28 | Becoming Steve Sullivan 0:06:41 | Your Teaching Voice* 0:07:30 | Why Start a Podcast? 0:14:03 | Farewell to TAPP ed* 0:15:45 | Growing a Podcast & Growing Through It 0:19:56 | Authors Alert * 0:21:05 | Digital Teaching That Actually Helps 0:30:59 | When Our Tools Disappear* 0:32:48 | A&P Tools That Fit Any Textbook 0:48:36 | Collaboration Audit* 0:49:14 | What 23 Years of A&P Reveals 1:01:10 | Innovation Check * 1:01:44 | Staying Connected * Breaks ★ If you cannot see or activate the audio player, go to: theAPprofessor.org/podcast-episode-156.html ❓ Please take the anonymous survey: theAPprofessor.org/survey ☝️ Questions & Feedback: 1-833-LION-DEN (1-833-546-6336)

Fearless Presentation
Tip #24: Consider Other Types of Visual Aids (Visuals that Aren't Slides) | 30 Public Speaking Tips

Fearless Presentation

Play Episode Listen Later Nov 24, 2025 3:34


Welcome to 30 Tips in 30 Days! Over the entire month of November, I will be releasing a short, bite-sized episode of Fearless Presentations every morning covering things that are absolutely essential to being a better presenter. Whether you've been speaking professionally for years and years or are looking to just start your public speaking journey, applying just these 30 tips I cover here will instantly and easily make you improve as a speaker. This does not mean you shouldn't use slideshows or PowerPoints. They are still the standard for a reason. But they are also not the only form of visual aid that works at conveying your presentation to people in the audience. Mixing it up with things like poster boards, props, samples, handouts, or videos can spice things up a bit and, in many cases, is actually better than just going with the safe slideshow. Show Notes: 101 Public Speaking Tips For Delivering Your Best Speech (https://www.fearlesspresentations.com/101-public-speaking-tips-for-delivering-your-best-speech/)

Keeping It Real with Jac and Ral
Build A Team Offsite People Want to Attend (and remember!)

Keeping It Real with Jac and Ral

Play Episode Listen Later Nov 23, 2025 33:57


Offsites should be the most energising day in your team's year… So why do so many feel like a hostage situation with pastries? In this episode, Jac & Ral break down exactly what makes an offsite brilliant, memorable and commercially valuable - not just another day of a stuffy room, bad slides, beige thinking and forced fun.
You'll learn:
The 5 elements every high-impact 2026 offsite must include
How to design an agenda that wakes people up (literally and mentally)
Simple ways to link your offsite to real business outcomes
Fresh ideas to inspire, align and fire up your people for the year ahead
And the common mistakes leaders make that drain energy and kill momentum
If you want your next offsite to spark clarity, creativity, connection and actual behaviour change, this is the episode. Because teams don't remember PowerPoints. They remember how the day made them think, feel and act.
----
Your support helps keep this show going — join us on Patreon. https://tinyurl.com/jacandralpatreon
New Episode Every Monday
Follow the show
https://www.instagram.com/keepingitrealwithjacandral/
https://open.spotify.com/show/5yIs5ncJGvJyXhI55Js0if?si=aCNOdB68QnOGnT0vCTPcPg
Follow Jac
https://www.linkedin.com/in/jacphillips/
https://www.instagram.com/jac.phillips.coaching/
Follow Ral
https://www.linkedin.com/in/gabrielledolan/
https://www.instagram.com/gabrielledolan.1/
Produced by Keehlan Ferrari-Brown

Loose Screws - The Elite Dangerous Podcast
Episode 310 - Banned to Open

Loose Screws - The Elite Dangerous Podcast

Play Episode Listen Later Nov 15, 2025 82:19


#310th for 14th November, 2025 or 3311! (33-Oh-Leven, not Oh-Eleven, OH-Leven)
http://loosescrewsed.com
Join us on discord! And check out the merch store! PROMO CODES https://discord.gg/3Vfap47Rea
Support us on Patreon: https://www.patreon.com/LooseScrewsED
Squad Update: (Updated by Bloom 10/16) Busted in old and fun ways. Come join us colonizing, BGS works out here. Ish. So does the colonizing. Ish.
PowerPlay Update: (unashamedly copied from KrugerFive's post in our Discord, 11/13)
Cycle 54: Soontil relics hit 600t supply this cycle and the powers jumped on to boom for huge gains. Last relics rush was 14 cycles ago.
The power of princess Aisling showed with the relics rush: +138 new systems, +8 new fortifieds.
The other powers to maximize this were Yong-Rui (+73 systems), Antal (+70), and Mahon (+60).
Delaine putting up a fight and keeping Torval behind for now (-3 systems difference).
This relic boom creates a nice battle in the FDev board between Archer, Antal, and Kaine for Archer's P6: Archer 1131 systems, Kaine 1100 systems, Antal 1089 systems.
1t trading is gone, trade is next to useless for control points, and relics are back down (but still at a healthy 120t). Next week is going to be interesting.
Kruger 5's Power Rankings - https://k5elite.com/
Niceygy's Power Points - https://elite.niceygy.net/powerpoints
Find out more in the LSN-powerplay-hub forum channel.
Dev News: HIP 87621 Permit issued
CG - HIP 87621 Exobiology Initiative begins
Pilots can support this initiative by first signing up at Exogene Sciences in the HIP 87621 system, before gathering samples of the newly discovered flora at biological sites located on several bodies of the HIP 87621 system. These samples must then be sold to Vista Genomics at Exogene Sciences, via the representative located in the station concourse.
Pilots who register at least 1 sample will receive the following rewards:
- Artemis Photon Blue Suit Pack
- Credits, depending on success tier achieved and individual contribution level.
Pilots in the top 75% and above of contributions will receive a grade 5 Artemis suit, with Improved Battery Capacity, Night Vision, Increased Sprint Duration and Improved Jump Assist modifications.
Careful, it's hot out there.
Colonization main starport effects nerfed, then retracted, a bit. Lots of people apparently (cynically) think it's so FDev can sell more Dodecs…?
Galnet News: https://community.elitedangerous.com/ (updated 11/14)
Pilots' Federation Members Enter HIP 87621
Trailblazer Fleet Withdrawn
Discussion: HIP 87621 bios, CG, leading to?

PowerPoints: A Bible Study Guide for Juniors
Q4 Lesson 08 - Journey to Jerusalem

PowerPoints: A Bible Study Guide for Juniors

Play Episode Listen Later Nov 15, 2025 4:59


How would you feel if your family planned to move to a place where you had never been before? Scared, excited, or both? This week we'll learn about some people who made a big move during the time of Ezra and Nehemiah.

PowerPoints: A Bible Study Guide for Juniors
Q4 Lesson 07 - Don't Be Shy!

PowerPoints: A Bible Study Guide for Juniors

Play Episode Listen Later Nov 8, 2025 5:25


Think about a time you were away from home and were eager to get back. Maybe it was a week at summer camp or just a couple nights staying with a friend. Imagine what it would be like to be away from home for 70 years.

Real Talk with Caleb
What if More Mental Health Providers Never Show up…?

Real Talk with Caleb

Play Episode Listen Later Nov 8, 2025 30:49


What if more mental health providers never show up? I'm not saying we don't need them; we absolutely do. But what if they just… don't, due to lack of resources? In this episode of The Informed Airman, we talk about facing that reality head-on: how to stop waiting for a rescue that might never arrive and start preparing yourself, and your tribe, for the storm that's already here. This isn't about toughness for toughness' sake. It's about ownership. About building your tribe before the fight, not during it. About forming the kind of bonds, discipline, and trust that keep you in the fight long after the first punch lands. Because resilience isn't built in PowerPoints or programs, it's built in people. So don't wait for help to show up. Be the help. Build your tribe. Face the fight. Stay strong. Stay connected. Stay Hard to Kill.

Loose Screws - The Elite Dangerous Podcast
Episode 309 - A Machiavellian Scheme

Loose Screws - The Elite Dangerous Podcast

Play Episode Listen Later Nov 6, 2025 96:06


#309th for 5th November, 2025 or 3311! (33-Oh-Leven, not Oh-Eleven, OH-Leven)
http://loosescrewsed.com
Join us on discord! And check out the merch store! PROMO CODES https://discord.gg/3Vfap47Rea
Support us on Patreon: https://www.patreon.com/LooseScrewsED
Squad Update: (Updated by Bloom 10/16) Busted in old and fun ways. Come join us colonizing, BGS works out here. Ish. So does the colonizing.
PowerPlay Update: (unashamedly copied from KrugerFive's post in our Discord, 10/30) Cycle 52: we are 1 day ahead of the cycle so… no update.
Kruger 5's Power Rankings - https://k5elite.com/
Niceygy's Power Points - https://elite.niceygy.net/powerpoints
Find out more in the LSN-powerplay-hub forum channel.
Dev News: Elite Dangerous | Developer Log - 4 November 2025
Galnet News: https://community.elitedangerous.com/ (updated 10/23) Trailblazer Fleet To Be Withdrawn
Discussion:
New Video Style
Trailblazers out of beta on 11 November: significant amount of fixes and QoL to BGS, PP, and Colonization; balance pass for on-foot combat
New star port - Dodec station - Tier 3+, available at the 11 Nov update: tech broker, higher stats all around; 50k ARX to unlock per account, deploy one instantly where allowed, and then build others as normal
New ship - Zorgon Caspian Explorer - early access in December: large pad; can load mk2 modules, including new MK2 FSD with improved neutron supercharging; retracting engine bits; small landing footprint; said something about "mind that gravity and spot interesting plating pattern on hull"
New feature pushed to early 2026 - Operations: new set of multi-phased scenarios, on foot and in space; 4 CMDRs take on a challenging obstacle; never-seen-before areas - internals of a hostile megaship; multiple operations, varying difficulty; rewards that are difficult to obtain

The Badass Womens Council
Beyond the Stage: How Events Can Build Connection and Trust

The Badass Womens Council

Play Episode Listen Later Nov 6, 2025 30:56


“Stories build trust faster than any strategy deck ever could.”
In this episode of Business is Human, Rebecca Fleetwood Hession shares how she designed her signature event “Stand Tall in Your Story” to foster genuine human connection through neuroscience-backed storytelling. She explores why traditional business gatherings often miss the mark and how emotional, story-centered experiences can transform relationships between colleagues, clients, and communities. Rebecca offers practical takeaways for leaders looking to make meetings, events, and company retreats more meaningful by trading PowerPoints for purpose and conversation for connection.
In this episode, you'll learn:
How storytelling triggers trust-building chemicals like oxytocin and strengthens relationships faster than facts or data alone
Why emotional shared experiences create psychological safety and lasting engagement within teams and client groups
Practical ways to reimagine your next meeting or event, from TED-style storytelling formats to intentional conversation design
Things to listen for:
(00:00) Intro
(02:41) The importance of celebration and connection
(04:48) Emotional and social bonding at events
(06:39) The power of storytelling
(08:40) Creating meaningful conversations
(15:27) Practical tips for hosting effective events
(26:49) Virtual event strategies
Connect with Rebecca: https://www.rebeccafleetwoodhession.com/

The MomForce Podcast Hosted by Chatbooks
Are Pets Really Worth It?

The MomForce Podcast Hosted by Chatbooks

Play Episode Listen Later Nov 4, 2025 30:31


Growing up, Vanessa's family was basically two animals away from running a full-on zoo—but parenting with pets? That's a whole different story. In this episode, she sits down with Katherine Schwarzenegger Pratt—New York Times bestselling author, animal advocate, and mom—to ask the big question: Are pets really worth it? They talk about the beauty and chaos of raising kids (and animals), how pets teach responsibility and empathy, and the heartbreak that comes when it's time to say goodbye. Katherine shares stories from her new children's book Kat and Brandy—inspired by her own childhood pony—and offers thoughtful advice for families deciding whether to add a furry (or feathered!) friend to the mix. Whether you're a lifelong "animal person" or still dodging your kids' puppy PowerPoints, this conversation will make you laugh, reflect, and maybe even rethink what "family" really means.   Katherine's book is out now! Order Kat and Brandy today!   Start printing your photos with Chatbooks!   Follow us on Instagram @vanessaquigley @chatbooks  

CPO PLAYBOOK
86 Raising Capital with Vision: $290M to See the Future

CPO PLAYBOOK

Play Episode Listen Later Nov 4, 2025 43:39


In this episode, Roman Axelrod, founder and CEO of Xpanceo, shares the bold vision behind raising capital - $290M to create smart contact lenses designed to expand human capability. From building prototypes to leading a deep-tech team, Roman opens up about the real challenges of turning science fiction into science fact. He discusses the grit it takes to lead a company operating at the edge of innovation—and how clarity, communication, and conviction become non-negotiables in the process. We explore: • The role of storytelling in raising capital for moonshot ideas • Why prototypes matter more than PowerPoints in deep tech • How to attract world-class talent in a hyper-competitive market • What makes co-founder relationships thrive under pressure • How creativity and discipline coexist in breakthrough innovation Whether you're a founder, investor, or future-focused leader, Roman's journey offers powerful lessons on building what doesn't yet exist—and getting others to believe in it. — Subscribe to CPO PLAYBOOK for more conversations at the intersection of leadership, innovation, and capital strategy. Chapters 00:00 The Vision Behind Xpanceo 10:16 Challenges and Reactions to the Idea 14:14 Building a Deep Tech Company 17:22 Motivating a High-Stakes Team 21:47 The Importance of Prototypes 23:58 Recruiting Top Talent in Deep Tech 27:35 The Co-Founder Dynamic 31:14 Communication and Leadership in High-Stakes Environments 34:42 Cultivating Creativity and Vision 39:49 Looking Ahead: The Future of Xpanceo

PowerPoints: A Bible Study Guide for Juniors
Q4 Lesson 06 - Habakkuk's Song

PowerPoints: A Bible Study Guide for Juniors

Play Episode Listen Later Nov 1, 2025 4:13


You have always had questions. Why? Why? Why? Your parents may even have told you they didn't want to hear the word “why” again. Habakkuk had questions too. When he asked God his questions, he found good reasons to trust Him. Habakkuk the prophet was trou

Loose Screws - The Elite Dangerous Podcast
Episode 308 - West Side Daxi

Loose Screws - The Elite Dangerous Podcast

Play Episode Listen Later Oct 31, 2025 74:04


#308th for 30th of October, 2025 or 3311! (33-Oh-Leven, not Oh-Eleven, OH-Leven)
http://loosescrewsed.com
Join us on discord! And check out the merch store! PROMO CODES https://discord.gg/3Vfap47Rea
Support us on Patreon: https://www.patreon.com/LooseScrewsED
Squad Update: (Updated by Bloom 10/16) Busted in old and fun ways. Come join us colonizing, BGS works out here. Ish. So does the colonizing.
PowerPlay Update: (unashamedly copied from KrugerFive's post in our Discord, 10/30)
Cycle 52: Here's to one year of powerplay 2.0 complete! Cheers!
Princess Aisling with 3 strongholds lost! First week she's gone negative since at least cycle 22.
Li Yong-Rui wins across all the boards this week with +39 new systems, including 6 new fortifieds and 1 stronghold. Possibly the most gained in PP 2.0.
Kaine upgrades 5 systems to strongholds, the most this week, and +20 systems overall.
Delaine hit with a loss of -4 systems.
Torval passes Delaine for P10 in the Nicey/KrugerFive boards. She is in sight of Delaine for P10 in the FDev boards as well, possibly passing in 2-3 weeks.
All the powers are starting to show pretty steady trends, and the FDev leaderboard is getting pretty close to being sorted. The Nicey/KrugerFive ones are pretty much sorted out now. If everyone keeps playing the same, for the FDev leaderboard my passing predictions are: Torval passes Delaine in 3 cycles; Emperor Arissa passes Mahon in 14 cycles; Kaine passes Archer in 24 cycles. Then after that may be a really long wait; 14 and 24 cycles are a pretty long time as it is. I hope this stuff around HIP 87621 shakes up powerplay some.
Kruger 5's Power Rankings - https://k5elite.com/
Niceygy's Power Points - https://elite.niceygy.net/powerpoints
Find out more in the LSN-powerplay-hub forum channel.
Galnet Update: https://community.elitedangerous.com/ (updated 10/23)
Panther Clipper Enters Full Production (Galnet News | Elite Dangerous Community Site)
Brewer Construction Campaign Achieves Targets
“Independent observers believe that the concentration of megaship traffic and the secrecy surrounding the order, together with the reported direct messages to pilots pledged to one of the twelve Powers, are strong indicators of an intention to rapidly construct a science and security enclave around HIP 87621.”
Dev News: New paint job - https://www.elitedangerous.com/store/catalog/promo
Halloween event is live, paints are the reward! I couldn't find the link ~ lark
Discussion: Gameplay: Making the case that fixing BGS and other bugs would bring some people back to the game, who are currently doing other things but would like to play Elite. Combat vs other gameplays.
Community Corner: ***** Audaxius' song: THE MUSIC OF THE HARBINGER *****

The Cloudcast
AI Data Analytics

The Cloudcast

Play Episode Listen Later Oct 29, 2025 20:26


Soham Mazumdar, CEO and Co-Founder of WisdomAI, discusses how organizations can break free from the "drowning in data but starving for insights" paradox that plagues modern enterprises. We explore his journey from Google's TeraGoogle project to co-founding and scaling Rubrik through its $5.6 billion IPO, and why he left that success to build an agentic AI approach to Business Intelligence (BI) that transforms how businesses extract value from their data investments.
SHOW: 971
SHOW TRANSCRIPT: The Cloudcast #963 Transcript
SHOW VIDEO: https://youtube.com/@TheCloudcastNET
NEW TO CLOUD? CHECK OUT OUR OTHER PODCAST - "CLOUDCAST BASICS"
SPONSORS:
[Interconnected] Interconnected is a new series from Equinix diving into the infrastructure that keeps our digital world running. With expert guests and real-world insights, we explore the systems driving AI, automation, quantum, and more. Just search “Interconnected by Equinix”.
[TestKube] TestKube is a Kubernetes-native testing platform, orchestrating all your test tools, environments, and pipelines into scalable workflows empowering Continuous Testing. Check it out at TestKube.io/cloudcast
SHOW NOTES: WisdomAI website
Topic 1 - Welcome to the show, Soham. We overlapped briefly at Rubrik. Give everyone a quick introduction and tell everyone a bit about your time at Google prior to Rubrik.
Topic 2 - You helped scale Rubrik from inception to a $5.6 billion IPO in 2024. What was the "aha moment" that made you leave that success to tackle the enterprise data analytics problem with WisdomAI?
Topic 3 - Let's define the core problem. Organizations invest heavily in modern data platforms - Snowflake, Databricks, etc. - but there is the term "drowning in data but starving for insights." What's broken in the traditional BI stack that prevents business users from getting answers?
Topic 4 - How do agentic AI and BI fit together? WisdomAI introduces the concept of "Knowledge Fabric" and agentic data insights. Break this down for us - how does this fundamentally differ from traditional dashboards and BI tools?
Topic 5 - One of the biggest challenges with GenAI in enterprise settings is hallucination. You've emphasized that WisdomAI separates GenAI from answer generation. How does your approach tackle this critical trust issue?
Topic 6 - Let's talk about data integration complexity. Your platform works with both structured and unstructured data - Snowflake, BigQuery, Redshift, but also Excel, PDFs, PowerPoints. How do you handle this "dirty" data reality that most enterprises face?
Topic 6a - With so much data, how do most organizations get started? What's a typical use case for adoption?
Topic 7 - If anyone is interested, what's the best way to get started?
FEEDBACK?
Email: show at the cloudcast dot net
Bluesky: @cloudcastpod.bsky.social
Twitter/X: @cloudcastpod
Instagram: @cloudcastpod
TikTok: @cloudcastpod

DEI After 5 with Sacha
From Boring to Engaging: Transforming Onboarding for the New Workforce

DEI After 5 with Sacha

Play Episode Listen Later Oct 28, 2025 14:49


Let's be honest—most onboarding experiences are forgettable at best and overwhelming at worst. Yet for many organizations, the way they welcome new employees hasn't kept up. Traditional onboarding often feels like a box to check—an administrative marathon of paperwork, policies, and PowerPoints. But in a world where people are craving connection, clarity, and belonging, that approach simply doesn't work anymore. As discussed in a recent DEI After 5 episode, embracing change—especially when it comes to how we onboard—can be a powerful catalyst for growth, both for individuals and organizations.
Why Onboarding Needs to Change
We know that employees decide whether they'll stay with an organization within their first few months—and for Generation Z, that decision happens even faster. According to recent data, 20% of Gen Z employees quit because of poor onboarding, and 8% leave within the first 90 days if the experience doesn't meet expectations. That's not just a retention problem—it's a culture problem. Gen Z and younger millennials are entering the workforce with a clear set of values. They want to understand what a company stands for from day one. In fact, 62% of women and 42% of men in Gen Z expect to learn about their organization's diversity, equity, and inclusion (DEI) policies during onboarding. This isn't just a “nice-to-have” feature—it's foundational to how they decide whether they belong. When onboarding fails to answer those deeper questions—Do I fit here? Is this a place where I can grow? Will my voice matter?—employees start to disengage before they've even begun.
From Administrative to Transformational
Effective onboarding is no longer about checklists—it's about connection. It's an invitation to embrace change, to build trust, and to set the tone for psychological safety from day one. Organizations that get this right are moving from “orientation sessions” to onboarding experiences—interactive, personalized, and grounded in the company's values and culture. Instead of overwhelming new hires with information, they're creating space for exploration and engagement. In the podcast, we explored how today's employees are wired for interactivity. They grew up in digital spaces that reward curiosity and participation. Sitting through hours of dense slides? That's a fast track to disengagement. In fact, 75% of Gen Z admits to skipping or fast-forwarding through boring onboarding content. Modern onboarding should mirror how people learn and connect today:
* Short, engaging videos that bring your culture and values to life.
* Interactive learning tools that reinforce understanding instead of memorization.
* Opportunities for dialogue, where new hires can safely ask questions without fear of judgment.
* Stories and experiences that show—not just tell—how your organization lives its values.
Psychological Safety Starts on Day One
A powerful theme from the podcast was the link between effective onboarding and psychological safety. When employees feel comfortable asking questions, sharing feedback, or admitting what they don't know, they're more likely to succeed—and stay. But when onboarding is rigid or transactional, it sends an early signal: “We care more about compliance than connection.” And that's where disengagement begins. By reframing onboarding as the first act of culture-building, organizations can demonstrate trust and transparency immediately.
That first impression becomes the foundation for engagement, innovation, and long-term commitment.
Embracing Change for Growth
Embracing change—whether in how we work, lead, or onboard—requires adaptability and courage. It's about stepping outside of what's comfortable to build something that actually resonates. The most successful organizations are those that view onboarding not as a one-time event, but as an evolving process of integration and growth. They understand that people don't just need information—they need belonging. When leaders create space for new hires to feel seen, supported, and empowered, they set the stage for resilience, innovation, and shared success. Change, after all, is only disruptive when we resist it. When we lean into it, it becomes the very thing that helps us grow. If you want to learn more about how to create a culture of care, foster psychological safety, and design workplaces where people thrive from day one, subscribe to our YouTube channel.
Sacha Thompson, founder of The Equity Equation, boasts 20+ years of experience spanning education, non-profit, and tech sectors. With a fervent commitment to inclusive leadership and workplace equity, Sacha specializes in fostering psychological safety for all team members. Her transformative coaching and consultancy services have earned her recognition in Forbes, Newsweek, and Business Insider. A seasoned speaker on psychological safety and leadership, Sacha is dedicated to building inclusive cultures and driving organizational success. She was most recently featured in Success, NBC News, Newsweek, and Business Insider. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit deiafter5.substack.com/subscribe

Is This Really a Thing?
Is the $1 Billion Powerpoint Really a Thing?

Is This Really a Thing?

Play Episode Listen Later Oct 27, 2025 20:49


We've all sat through bad slide decks—but what about the ones that change history? In this episode of Is This Really a Thing?, Dean Paul Jarley is joined by Jim Balaschak, Dr. Mike Pape, and Derek Saltzman to explore whether the so-called “billion-dollar PowerPoint” is myth or reality. From Airbnb and Tesla's iconic pitch decks to the role of storytelling, trust, and investor psychology, they unpack what makes a presentation powerful, what doesn't, and whether AI or new tools might one day dethrone PowerPoint.

Featured Guests
Michael Pape, Ph.D. - Dr. Phillips Entrepreneur in Residence & Professor of Practice, Management
Jim G. Balaschak - Principal, Deanja, LLC
Derek Saltzman - Co-Founder & Chief Executive Officer, Soarce

Episode Transcription

Paul Jarley: We've all sat through terrible slide decks, but every so often a PowerPoint does more than communicate. It creates value. Think of the pitch deck that launched Airbnb, the presentation that convinced investors to fund Tesla, or the strategy decks that shape billion-dollar mergers. So is the billion-dollar PowerPoint really a thing? Can a few slides actually change the course of business history, or is it just a fancy way of describing really good storytelling? This show is all about separating hype from fundamental change. I'm Paul Jarley, Dean of the College of Business here at UCF. I've got lots of questions. To get answers, I'm talking to people with interesting insights into the future of business. Have you ever wondered, Is This Really a Thing? On to our show.

To help me figure this out, I've invited three guests. Jim Balaschak is an alum of the college, in our Hall of Fame, and a serial investor. Dr. Mike Pape is an Entrepreneur in Residence here at the College of Business, and Derek Saltzman is a former winner of the Joust and co-founder of a company called Soarce. Thank you, gentlemen, for being here today. We've all seen really bad PowerPoints. Talk a little bit about what makes a great one. Jim, I'll start with you.

Jim Balaschak: A PowerPoint that catches my eye shows a big potential market and a problem they've identified that they have a solution for, one they can make money on. It's not necessarily the slides themselves; the slides just quickly convey the ideas. A lot of times, before I meet with a founder, I'm emailed the pitch deck, and going through it helps me determine whether I want to pursue this to the next step: get on a call with the founder and have them pitch it to me. I think it's a good way to open the door.

Paul Jarley: The quality of the pitch deck tells you something about how serious and well thought out this is, right? So a schlocky one can close the door more than a really good one can open it. Is that fair in your view?

Derek Saltzman: Yeah.

Paul Jarley: Derek, what do you think?

Derek Saltzman: I think there's a lot to take into consideration with the audience and the stage gate, whether you're first starting a pitch or trying to interact. There are multiple decks for multiple stage gates. At the very first intro, for instance, as Jim said, when you're trying to send a deck and land that initial meeting, it's all about a hook. Can you describe what you do in the most succinct, effective way possible to get across what the problem is, how you're solving that problem, and what the revenue potential is, like he described? Because that's what all investors are really looking for.

Once you move past that initial stage gate, you have much more detailed decks that go into your financials, your true revenue model, your business model, maybe your IP strategy, and a variety of other topics. Overall optics and clear messaging are, I'd say, the two biggest things.

Paul Jarley: Mike, what do you tell students?

Michael Pape: The way I deal with the pitch deck is to treat it as just one element of a much bigger picture.

PowerPoints: A Bible Study Guide for Juniors
Q4 Lesson 05 - Passover Party

PowerPoints: A Bible Study Guide for Juniors

Play Episode Listen Later Oct 25, 2025 4:25


Have you ever attended a weeklong camp meeting? Interesting speakers and meaningful meetings take place. You see old friends, learn new songs, and discover new things about God. Hezekiah invites his kingdom to a sort of camp meeting where a revival among

Loose Screws - The Elite Dangerous Podcast
Episode 307 - Bloomingwind Dies First

Loose Screws - The Elite Dangerous Podcast

Play Episode Listen Later Oct 24, 2025 65:55


#307rd for 23st of October, 2025 or 3311! (33-Oh-Leven, not Oh-Eleven, OH-Leven)
http://loosescrewsed.com
Join us on discord! And check out the merch store! PROMO CODES
https://discord.gg/3Vfap47Rea
Support us on Patreon: https://www.patreon.com/LooseScrewsED

Squad Update: (Updated by Bloom 10/16)
- BGS - Alec and our friends from Lave were reading the update notes for T-11 Patch 2 from last week. Reviewing the issue tracker IDs in the patch notes: BGS getting stuck is what FDev thinks is fixed, and it looks fixed.
- What wasn't fixed per the patch notes: mission influence going to the right place still doesn't seem to be fixed. So now it's half broke.
- Our friends were discussing when BGS broke. Was it PP 2.0, was it Trailblazers, when did it break? (Cockney accent) Blimey - it all began with PP 2.0, bloke! Fifteen quid for a broken game?!?!?!
- Colonization Update - The Loose Screws control IC 2602 Sector ZU-Y d103! Arai's Inheritance. It is the official platinum mining hole of the Loose Screws Network. Thank you to Volt, Edward Skeele, Uraniborg, Borked

PowerPlay Update: (unashamedly copied from KrugerFive's post in our Discord, 10/23)
Cycle 51: Can you believe we are entering into 1 year of PowerPlay 2.0 this week?
- Winters with a strong week, adding the most systems at +18 (all exploited)
- Yong-Rui again with the overall strongest week, with +5 more strongholds and +8 fortifieds
- Patreus goes -1 system overall, but -3 fortifieds
- Delaine also kept flat with 0 systems gained
Kruger 5's Power Rankings - https://k5elite.com/
Niceygy's Power Points - https://elite.niceygy.net/powerpoints
Find out more in the LSN-powerplay-hub forum channel.

Galnet Update: https://community.elitedangerous.com/ (updated 10/23)
- Megaship Movements Spark HIP 87621 Speculation: Independent observers have reported unusual activity in systems surrounding HIP 87621, intensifying rumours of covert operations in the region. Multiple reports indicate that megaships have been spotted operating near the permit-locked system over the past week. Though no Power has formally acknowledged involvement, analysts believe this early mobilisation suggests a push for influence around HIP 87621.
- CG NOTE “Merit-palooza”: The large merit awards for mining within the CG system are no longer available, due to revoking the ‘God-Handed' powerplay state. The only power within 20-30 ly is Grom (as it always was), so the ‘normal' mechanic for getting mining merits there doesn't work now. Will it come back?...

Dev News:
- New paint jobs - https://www.elitedangerous.com/store/catalog/promo
- Halloween paint jobs:
  - Spectrix for all the new ships - looks like monster teeth on the ships' backs
  - Malevolent Horror for Anaconda
  - Various ‘wisps' (haunt, poltergeist, shade, yurei, revenant, phantasm, horror)
  - Basically pumpkin faces of various sizes for the old ships
  - On-foot pumpkin outfits
  - On-foot skeleton outfits
  - On-foot ‘slimed' outfits
  - Pumpkin ship decal

Ones Ready
Ep 518: We F'd Up the A-10? Jarred Taylor & CMSgt Spreter Talk The Future of TACP

Ones Ready

Play Episode Listen Later Oct 20, 2025 59:14


Send us a text

Buckle up — Peaches sits down with Black Rifle Coffee's Jarred Taylor and AFSW's Chief Jimbo Spreter to torch the nonsense strangling the Air Force from the inside out. From the death-by-a-thousand-cuts of the A-10, to the badge redesign drama, to the TacP force reduction nobody understood, this episode pulls zero punches.

Peaches calls out leadership confusion (“Wait, you didn't know what TacPs do?!”), while JT and Jimbo laugh their way through the bureaucratic chaos that makes warriors less lethal. Expect hard truths, gallows humor, and the kind of brutally honest conversation you'll never hear in a press briefing. If you think the military's “heritage problem” ends with pilots and PowerPoints, think again. The boys talk heritage, mental toughness, rebuilding the pipeline, and why being “Instagram fit” won't save your ass when the rotors kick up and it's go time.

This one's pure Ones Ready energy: real talk, no filters, and all attitude.

⏱️ Timestamps:
00:00 – “So… the Air Force forgot what TacPs do?”
03:00 – JT & Jimbo on making the ‘Controlled' documentary and saving the legacy
07:00 – How two dudes turned chaos into a badass TacP history film
10:00 – “The A-10 ain't dead yet… but it's bleeding out”
15:00 – Inside the new badge redesign and why it pissed everyone off (again)
20:00 – Future of the TacP pipeline: less fluff, more fight
25:00 – “We don't want influencers — we want killers”
32:00 – Swimming, stress, and suffering: TacPs hit the pool
38:00 – Morale shocker: why commanders are finally happy again
43:00 – Peaches & Jimbo on the State of TacP: cutting dead weight, building killers
50:00 – The new Scout Program and the legend of Funky Bunkley
54:00 – JT's next mission: writing, war stories, and whiskey
56:00 – “Train hard, shut up, and stop believing the rumors”

Just Saying - The BRIEF Lab
Ep. 380 – The B.E.S.T. Meeting Method

Just Saying - The BRIEF Lab

Play Episode Listen Later Oct 20, 2025 15:13 Transcription Available


When Amazon infamously ditched PowerPoints in meetings and opted for a six-page written narrative format instead, many professionals took notice of the bold move. However, very few followed Amazon's lead. After reading a book on Amazon's leadership philosophy (“Working Backwards”), I decided to experiment with a new meeting format. In this podcast, I provide a […]

PowerPoints: A Bible Study Guide for Juniors
Q4 Lesson 04 - Purifying the Temple

PowerPoints: A Bible Study Guide for Juniors

Play Episode Listen Later Oct 18, 2025 4:54


Does your mom or dad ever get after you to clean your room? Is it a mess, with dirty clothes lying on the floor, piles of things you are “saving,” schoolbooks stacked up, your bed unmade? If it is, you can relate to the condition of the Temple before a ne

Loose Screws - The Elite Dangerous Podcast

#306rd for 16nd of October, 2025 or 3311! (33-Oh-Leven, not Oh-Eleven, OH-Leven)
http://loosescrewsed.com
Join us on discord! And check out the merch store! PROMO CODES
https://discord.gg/3Vfap47Rea
Support us on Patreon: https://www.patreon.com/LooseScrewsED

Squad Update: (Updated by Bloom 10/16)
- BGS is apparently fixed - we had a bunch of conflicts going back to 8/31 that have since begun and completed. Only time and testing will tell what is and isn't fixed.
- 13 systems over 75% influence - so even if things are fixed…
- Wars in Synuefe and Col 285 sectors, along with Yen Ti
- Victory in Musca Dark Region UE-W a3-0 - thank you Uraniborg and Edward Skeele
- We just expanded out of Maikoro
- We're in 319 systems, controlling 79

PowerPlay Update: (updated by Obl1v1ous)
Cycle 50: I will interpret CMDR KrugerFive's analysis so you won't have to.
- Kaine wrestled seventh place away from Antal on the FDev board.
- For the fifth week in a row our frat boy Rui has delivered the strongest performance, which even outdistanced the fancy footwork of Princess Podiatry. At this pace Yong-Rui will move to second on the Nicey/KrugerFive charts.
- And what of the FDev charts? Well, this reporter believes FDev treats Rui like an SEC referee treats Auburn football.
- Down in tenth place, it looks like Torval is about to win the Torval-Delaine staring contest.
- And at the bottom of the pile, the only growth Emperor Arissa has seen is that mole on her face, while Patreus dropped another three exploited systems.
Kruger 5's Power Rankings - https://k5elite.com/
Niceygy's Power Points - https://elite.niceygy.net/powerpoints
Find out more in the LSN-powerplay-hub forum channel.

Galnet Update: https://community.elitedangerous.com/
- Type-11 Prospector Declared a Triumph
- Strategic Order Placed with Brewer Corporation - CG with stickers for all, and 1 size 5 non-Euclidean cargo rack for the 75%ers.

Dev News: Type-11 Prospector Update 2
- Introduced new functionality to prevent 'claim sniping' (the timing rules are sketched in code after these notes). When the primary port is completed in a newly colonised system, an exclusive lock on making claims FROM that system's colonisation contact will be enforced for a short period:
  - The System Architect may exclusively make claims from the system for 30 minutes.
  - After this 30 minutes expires, anyone within the System Architect's Squadron may make a claim for the next 23.5 hours.
  - This will function even if the System Architect is in their own solo Squadron.
  - After a total of 24 hours, the exclusive claim rights lift and any player may make a claim from the newly constructed system.
  - Note that if the System Architect is not in a squadron, then only the 30-minute lockout applies.
  - The claim panel via the colonisation contact has been updated to display when a lockout is in effect, and details the remaining duration.
- COMPANION API: Facility construction effort requirements info has been included in the /market endpoint
- BUG FIXES:
  - Fixed the main cause of the faction simulation (BGS) becoming stuck in some situations. This should resolve most instances of conflicts/expansion/influence etc. not functioning as intended.
  - Fixed instances of a physics issue that could cause ships to explode
  - Fixed multiple issues with transferring cargo from your ship to fleet carrier inventory
  - Fixed instances when transferring cargo to fleet carrier inventory where the transfer can sometimes silently fail with items "lost" on both the carrier and ship until a relog (Issue ID: 78801)
  - Fixed instances of transferring all cargo at once from ship to fleet carrier inventory failing

Discussion:

Community Corner:
- Elite Dangerous and Odyssey are on sale on Steam again ($5.99 & $8.99 respectively) until Oct-21.
- More holo-skins on sale for T-11 (why no yellow?)
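Because the claim-lockout schedule above is effectively a tiny state machine, here is a minimal, hypothetical Python sketch of it. The function and parameter names are invented for illustration; this is not Frontier's code or any in-game API, just the 30-minute, 23.5-hour, and open phases as the patch notes describe them.

    from datetime import datetime, timedelta

    ARCHITECT_WINDOW = timedelta(minutes=30)   # architect-only claims
    SQUADRON_WINDOW = timedelta(hours=24)      # 30 min + 23.5 h of squadron-only claims

    def may_claim(now: datetime, port_completed: datetime,
                  is_architect: bool, in_architect_squadron: bool,
                  architect_has_squadron: bool) -> bool:
        """Can a player claim from a newly colonised system's contact?"""
        elapsed = now - port_completed
        if is_architect:
            return True                        # the architect can always claim
        if elapsed < ARCHITECT_WINDOW:
            return False                       # first 30 minutes: architect only
        if not architect_has_squadron:
            return True                        # no squadron at all: only the 30-minute lockout
        if elapsed < SQUADRON_WINDOW:
            return in_architect_squadron       # next 23.5 hours: squadron members only
        return True                            # after 24 hours total: open to everyone

Note that per the patch notes a solo Squadron still counts, so architect_has_squadron stays True even for a squadron of one; only an architect with no squadron at all drops to the bare 30-minute lockout.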

The Neurodivergent Creative Podcast
The Retreat That Changed How I Write Forever | #184

The Neurodivergent Creative Podcast

Play Episode Listen Later Oct 17, 2025 16:22


Welcome back to The Neurodivergent Creative Podcast, the cozy, chaotic corner of the internet where we unpack creativity, shame, and the messy process of making art while living in a neurodivergent brain! In this week's episode, host Caitlin Liz Fisher takes us along to their annual writing retreat—a gathering of writers, friends, and creative misfits who have built a community rooted in kindness, curiosity, and care. Between murder mysteries, unhinged PowerPoints, and chocolate tastings, Caitlin dives deep into what it really means to write your story—even when it doesn't all make it into the final draft.

What We Explore in This Episode
- The difference between story and plot, and why not everything you write needs to “fit” the final version
- How writing can be both emotional processing and artistic craft—and the freedom that comes from separating the two
- Reflections on creative community, self-trust, and being loved without fear of punishment
- Why neurodivergent writers often fear being “too much,” and how shared space can heal that
- The joy of creative play: unhinged PowerPoints, ramen nights, and the art of just having fun again

Loose Screws - The Elite Dangerous Podcast
Episode 305 - The Phillies Just Lost

Loose Screws - The Elite Dangerous Podcast

Play Episode Listen Later Oct 10, 2025 73:47


#305nd for 9rd of October, 2025 or 3311! (33-Oh-Leven, not Oh-Eleven, OH-Leven)
http://loosescrewsed.com
Join us on discord! And check out the merch store! PROMO CODES
https://discord.gg/3Vfap47Rea
Support us on Patreon: https://www.patreon.com/LooseScrewsED

Squad Update: (Updated by Bloom 10/2)
- BGS might be waking up - Kruger 5 posted something from the Sirius Gov discord where Phil from FDev dropped a message that they think they've unstuck things. We shall see.
- Screwspace: 315 systems, controlling 78. 9 systems in Boom, 4 in Investment, some paired with Civil Liberty or Unrest.
- 9 conflicts have gone live, only HIP 76575 still locked
- Musca Dark Region UE-W a3-0 - Deep Skrew One is online! Uraniborg, Volt and some others have been pushing our influence (thanks!). We're less than 4% away from a control war!
- The Training Challenge for the Squad
- Roy's Ode to Obl1v1ous and the Broken BGS: “O Screws Where Art Thou?”
  - Site with lyrics and song: https://suno.com/s/l30AO0z1ENU2Lv9u
  - WAV file: https://drive.google.com/file/d/1VlmPh7LyM3efIqg4dFQdzhNkmYv2oM7O/view?usp=drive_link

PowerPlay Update: Cycle 49
- Archer with the most new systems with +17
- Li Yong-Rui with the strongest week again, now for 4 cycles in a row
- Yong-Rui with the most new strongholds and fortifieds
- Yong-Rui may overtake Emperor Arissa for P2 in the next couple cycles on the Nicey/KrugerFive boards
- Kaine and Antal still battling for FDev's P7
- Looking at the "points per cycle" chart at www.k5elite.com, we may be heading to some stagnation in the leaderboards soon if something doesn't change.
  - In the Nicey & KrugerFive boards: once LYR takes P2 and Torval takes P10 from Delaine, if trends continue, it could be a long time before there are any other changes.
  - In the FDev board: Emperor Arissa will pass Mahon for P2 at some point (months out?) and Kaine takes P7; then the same stagnation will occur.
Kruger 5's Power Rankings - https://k5elite.com/
Niceygy's Power Points - https://elite.niceygy.net/powerpoints
Find out more in the LSN-powerplay-hub forum channel.

Galnet Update: https://community.elitedangerous.com/
- October Consortium Defends Control of HIP 87621
- Community Goal requesting Platinum, Osmium and Painite deliveries to Malzberg Vision in Andere - at Tier 5/9 - 3 ½ days left - 15,801 participants

Dev News:
- Ruby paint jobs are back
- Congrats to friend of the program Alec Turner for his Stellar screenshot

Discussion:
- Will the next Elite game feature be something else from the ‘MMO playbook'? (Any WoW, Everquest, SWG, or Destiny players in the crew here?)

Community Corner:
- EdAstro exploration density heat map - response to Lave Radio discussion: https://edastro.com/galmap/
- Alec Turner invents new photo category - “rings in front of things”

Loose Screws - The Elite Dangerous Podcast
Episode 304 - Git Gud, FDev

Loose Screws - The Elite Dangerous Podcast

Play Episode Listen Later Oct 3, 2025 120:44


#304rd for 2st of October, 2025 or 3311! (33-Oh-Leven, not Oh-Eleven, OH-Leven)
http://loosescrewsed.com
Join us on discord! And check out the merch store! PROMO CODES
https://discord.gg/3Vfap47Rea
Support us on Patreon: https://www.patreon.com/LooseScrewsED

Squad Update: (Updated by Bloom 10/2)
- Screwspace - same conflicts still stuck in a pending state
- 314 systems, controlling 78
- V640C is in Boom and Public Holiday - buying core mined minerals for a mint
- Musca Dark Region UE-W a3-0 - Deep Skrew One is online! I'm looking to push us up to the top spot before continuing my march to IC 2602 Sector ZU-Y d103 - if anyone has bounties they want to deliver 500 LY from home…

PowerPlay Update: Cycle 48
- Princess Aisling puts up +25 new systems
- Li with +3 strongholds, and +16 overall for the strongest week
- Torval in 3rd for adding systems this week with +10
- Patreus continues to slide, losing -5 systems; -34 since cycle 45
- Delaine takes the biggest hit, losing a stronghold and a fortified
- Antal appears to be deploying the battery to keep Kaine behind on the FDev board, but she is still in the mirror
Kruger 5's Power Rankings - https://k5elite.com/
Niceygy's Power Points - https://elite.niceygy.net/powerpoints
Find out more in the LSN-powerplay-hub forum channel.

Galnet Update: https://community.elitedangerous.com/
- Lakon Spaceways Launches Type-11 Prospector
- Community Goal requesting Platinum, Osmium and Painite deliveries to Malzberg Vision in Andere - at Tier 1 of 9. Rewards for participants (1 tonne): a paint job, a sticker, credits, and a community award of a 1% bump per tier in the Squadron Mining Fragment Yield perk, to a max of 20%. Projected to achieve 61%, so probably tier 5 (see the quick sketch below). Platinum 333,030 Cr./tonne, Painite 289,998 Cr./tonne, Osmium 264,306 Cr./tonne

Dev News:
- 0.0653% of the galaxy, or 261,063,785 systems, explored
- Elite Dangerous T-11 has been updated to T-11.1 to fix a holo-paint job causing crashes and some issues affecting new player onboarding
- Rookie Cosmetic Starter Bundle - Speedway Blue, Crossfire Blue and Slipstream Blue paint jobs and a ship kit for the Cobra Mk. III, free in the Elite Gamestore, along with other items on sale, including many items for new ships.

Discussion:
- T-11 Review
- New ship power creep: good, bad, indifferent?
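For anyone checking the math on that community goal, the tier projection and the yield reward reduce to two lines of arithmetic. A minimal sketch with invented function names, assuming “projected to achieve 61%” means 61% of the full nine-tier target:

    def projected_tier(fraction_of_goal: float, total_tiers: int = 9) -> int:
        """Highest tier completed if the CG lands at this fraction of its target."""
        return int(fraction_of_goal * total_tiers)   # 0.61 * 9 = 5.49, so tier 5

    def fragment_yield_bonus(tiers_completed: int) -> float:
        """Squadron Mining Fragment Yield perk: 1% per completed tier, capped at 20%."""
        return min(tiers_completed * 0.01, 0.20)

    print(projected_tier(0.61))       # 5
    print(fragment_yield_bonus(5))    # 0.05, i.e. a 5% yield bump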

Loose Screws - The Elite Dangerous Podcast
Episode 303 - Roy Needs More Sax

Loose Screws - The Elite Dangerous Podcast

Play Episode Listen Later Sep 27, 2025 69:43


#303nd for 25th of September, 2025 or 3311! (33-Oh-Leven, not Oh-Eleven, OH-Leven)
http://loosescrewsed.com
Join us on discord! And check out the merch store! PROMO CODES
https://discord.gg/3Vfap47Rea
Support us on Patreon: https://www.patreon.com/LooseScrewsED

Squad Update: (Updated by Bloom 9/25)
- Screwspace - several conflicts stuck in a pending state
- War in G 218-5 pending but new
- 6A and 7A are in Investment/Civil Liberty, V2151 Cygni is in Boom/Civil Liberty, several other systems are in Boom or Investment
- 310 star systems, controlling 78
- Musca Dark Region UE-W a3-0 - Deep Skrew One is online! I'm looking to push us up to the top spot before continuing my march to IC 2602 Sector ZU-Y d103 - if anyone has bounties they want to deliver 500 LY from home…

PowerPlay Update: Cycle 47
Kruger 5's Power Rankings - https://k5elite.com/
Niceygy's Power Points - https://elite.niceygy.net/powerpoints
Find out more in the LSN-powerplay-hub forum channel.

Galnet Update: https://community.elitedangerous.com/
- Rackham Seeks Clues to HIP 87621 Mystery - aka CG ended
- Schrodinger's Mining CG did/didn't occur
- Exobiologists Demand Answers as HIP 87621 Grows - lore update
- Type-11 latest reported buffs: more than doubled mining damage, resulting in increased fragments per second; increased limpet speed; increased Controller limpet count from 8 to 14; increased chance of enriched chunk spawning; reduced Mining Volley Repeater spread

Dev News:
- T-11 delayed a week due to suggestions from the Partner Program, of which NONE OF US ARE, FDEV, LOOKING AT YOU BRABEN!
- Renaming your ACCT is live for 500 ARX
- Mining CG consequently delayed a week - T11 paint job among the prizes for 1T of Platinum/Painite/Osmium

Discussion:
- New ship power creep: good, bad, indifferent?

Community Corner:
- CMDR Sulu story
- CMDR Chiggy story
- Special premiere / sneak peek of ‘Roy's Stories - The Musical': https://suno.com/s/3yU6sv1b8AFaYU1N