A legendary horse-like creature with a large horn projecting from its forehead
Enjoy a sleepover with the Unicorn Princess in Unicornia: The Land of the Unicorns
Existentialism, unicornicopias, and speed dating EM SCHULZ… Oh lawd, it's about to get pretty dark! These ‘90s kids are beside themselves with anticipation to announce their new home—the ParaPods network, founded by Em Schulz and Christine Schiefer of the And That's Why We Drink podcast! First up, Em joins Kalyn for a Valentine's extravaganza. Then, they get into your regularly scheduled programming with a trip through 1970s animation culture. The Last Unicorn, from its earliest origins, is a tale for those who harbor a phantom, fading magic, and are up to the daunting task of finding their place in the world. Email us at thatsprettydarkpodcast@gmail.com Give to our Patreon for extra content: patreon.com/tpdpodcast Follow us on Instagram and Facebook @thatsprettydarkpodcast Learn more about your ad choices. Visit podcastchoices.com/adchoices
Welcome back to another After Dark episode of the Iron Sights Podcast. Recorded at SHOT Show 2026, I sit down with Brian Williams — better known as the Unicorn Chief. From SWAT operator to Chief, a stint in City Hall, and back to policing, Brian's career has given him a perspective that most people in the profession never get. He's seen leadership from the street, the admin side, and the political layer in between.

In this conversation, we dig into police culture, mentorship, accountability, and why standards in training actually matter. As a USPSA Grandmaster, Brian doesn't just preach performance — he lives it. If he's not in the office handling the administrative side of the job, he's on the range sharpening his edge and pushing others to do the same.

Brian is direct. He doesn't sugarcoat. He's not interested in protecting feelings — he's interested in protecting the mission. Sit back and enjoy this After Dark episode with the Unicorn Chief, Brian Williams.

Timestamps:
00:00 Intro
08:38 Brian's Career
18:52 Leadership Path
28:42 Public Perception
40:03 Leadership Struggles
48:16 City Management
01:04:53 Leadership Defined
01:14:07 Accountability Crisis
01:22:27 Mentorship
01:27:21 Training Challenges
01:42:57 Competition & Standards
01:55:09 Execution Mode
02:02:09 Self-Care

Red Dot Fitness Train Online: http://rdftrainonline.com/
Online Membership (Full Access To All Programs & Virtual Coaching): https://www.reddotfitness.net/online-membership
Virtual Coaching: https://www.reddotfitness.net/virtual-coaching
Self-Guided Programs: https://www.reddotfitness.net/Self-Guided-Programs1

Connect With Us:
Website - https://ironsightspodcast.com/
Instagram - https://www.instagram.com/ironsightspodcast/
Facebook - https://www.facebook.com/
February 11, 2026: Your daily rundown of health and wellness news, in under 5 minutes.

Today's top stories:
Solace Health raises $130M Series C at $1B+ valuation, connecting patients with nurse advocates to navigate care and insurance
Weight Watchers partners with Pvolve to bring streaming strength workouts into medically guided weight loss and menopause care
Function Health sues Superpower over "100-plus biomarkers" claims, highlighting pressure on diagnostic platforms to define measured versus inferred data

More from Fitt: Fitt Insider breaks down the convergence of fitness, wellness, and healthcare — and what it means for business, culture, and capital.
Subscribe to our newsletter → insider.fitt.co/subscribe
Work with our recruiting firm → https://talent.fitt.co/
Follow us on Instagram → https://www.instagram.com/fittinsider/
Follow us on LinkedIn → linkedin.com/company/fittinsider
Reach out → insider@fitt.co
Today we tackle the noise surrounding the AI movement with Paul Klein, CEO and Founder of Browserbase. With a career spanning early-stage Twilio to raising $70 million in under two years for his own infrastructure startup, Paul brings much-needed critical thinking to the "AI bubble" debate. We explore the bridge between old-world sales principles and modern, developer-first GTM strategies. Paul breaks down why Product-Led Growth (PLG) should be viewed as a pipeline engine rather than just a revenue machine and explains the power of the "Logo Flywheel" in creating executive FOMO.
Inside Wirtschaft - The podcast with Manuel Koch | Stock markets and the economy in focus
In the US, more than 230 billion dollars flowed into young companies in 2025. In Germany, startups received just 6.5 billion euros. How far behind is Germany really? "The US is very much the frontrunner, the UK has caught up, and Germany is far in the background and has an enormous amount of catching up to do. Venture capital investment volume in the US in 2025 was 230 billion US dollars. A huge sum! The UK is smaller and has fewer inhabitants than Germany, yet there we see an investment total of 22 billion euros, while in Germany we are at only 6.5 billion euros. There is also a steep gap when it comes to unicorns," says Fabian Fuchs. The startup founder and investor - who has himself lived and worked in Germany, the UK, and the US - continues: "You want very good people for a startup, but at the same time you are competing with large companies like Google or Facebook, which can pay very high salaries. The key really lies in the incentive of offering employee equity. The US is already extremely smart about this. In Germany we lag completely behind, although there are plans to change that. I founded a company in the US myself - it took an hour. In Germany it takes several weeks: tax office, tax number, chamber of commerce (IHK), notary, bank." The full interview with Inside Wirtschaft editor-in-chief Manuel Koch, and more, at https://inside-wirtschaft.de
In this episode, Anna Sjönell shares her powerful journey of transformation. After many years in the fashion industry, she chose to change direction - and through Business & Dreams Academy, she reached goals she once believed were out of reach. Anna reflects on the skills, mindset shifts, and inner growth she gained along the way - learning things she never thought she was capable of learning. With honesty and warmth, she also speaks about the importance of truly investing your time, energy, and presence into the process, and how that commitment made all the difference.

Contact Anna:
- Instagram
- Website
Send a text

Host: Kendra Beavis
Guest: Sarah Dumas

What if success didn't require burnout, over-performance, or constant self-sacrifice?

In this episode of Tribe of Unicorns, Kendra Beavis sits down with Sarah Dumas for a grounded conversation about embodied leadership, confidence, and building wealth without exhaustion. Together, they explore what happens when women stop performing success and start embodying it — leading their businesses from self-worth, clarity, and alignment instead of pressure.

Sarah shares her perspective on why so many high-achieving women feel disconnected from their success, how separating life from business creates burnout, and what it looks like to lead from a fully expressed, embodied place. This conversation isn't about doing more — it's about refining how you lead yourself so your business can expand without costing you your health, presence, or joy.

This episode is for women who are done proving, pushing, and powering through — and ready to build success that actually supports their lives.

In this episode, we talk about:
What embodied leadership really means
Why confidence and self-worth shape business outcomes
Letting go of performance-based success
Aligning your mission with how you make money
Creating impact without burnout or sacrifice
Leading from power, presence, and clarity

About Sarah Dumas
Sarah Dumas is a mentor for multi-6-figure women who are done performing success and ready to embody it. For over 15 years, she's guided high-achieving entrepreneurs, leaders, and visionaries out of burnout and into embodied wealth, where their businesses expand because they do. Sarah specializes in helping powerful women lead themselves with the same precision they lead their businesses, creating legacy without sacrifice and wealth without exhaustion. Her work focuses on refinement, embodiment, and next-level leadership for women who refuse to choose between power and presence.

Resources & Links
Parents! Download Mr Jim's app Riffio to create your own stories and songs inside your favorite shows! iOS Download | Android

Listen to this podcast, audiobooks and more on Storybutton, without your kids needing to use a screened device or your phone. Listen with no fees or subscriptions. —> Order Storybutton Today
Imagine a 6'2–6'3 power athlete who can dominate the paint, handle business on the perimeter, then turn around and pitch pure gas on a softball field. That's not a recruiting bio — that's Missy Odom.

Missy is a Top-40 national prospect (Class of 2026) out of Monteverde Academy, committed to Florida State to play both basketball and softball. That automatically puts her in rare air. Call it what it is: unicorn status.

Below, we break it down SLT-style — where it started, what separates her, the mindset behind leaving Alabama for the toughest prep environment in the country, and what's next for one of the most complete athletes in America.

Missy fell in love with the game early — age 4 or 5 — running backyard games with her brothers. Not one brother who "let her play." Five brothers. She rattled them off like a starting lineup: AJ, Bubba, Jojo, Memphis, Hosman. That environment matters. When you grow up battling brothers every day, toughness isn't taught — it's inherited. Missy even admitted the one who got after her the most was Bubba… and she didn't lose much. That's where the edge starts.

Missy used one word coaches love — versatile — but she meant it the right way. She can:
Score on all three levels
Play inside or out
Defend every position
Rebound at a high level
Run the floor
Bring energy
Do the dirty work (charges, loose balls, winning plays)
That last part separates "talented" from problem.

Quick Breakdown
School: Monteverde Academy
Class: 2026
Build: 6'2–6'3, powerful, built to last
Basketball Identity: Versatile scorer + elite motor
Softball Identity: High-level pitcher (pure gas)
Superpower: Impacts winning without needing the ball every possession

Leaving Alabama wasn't a vibe change — it was a growth decision. Missy said it straight: there was a stretch where she wasn't playing the way she wanted. Balancing softball heavily meant basketball training slipped, and she felt herself falling behind. That's when her mom delivered the truth: "You want to be great? Go somewhere that's going to push you every day."

Monteverde is the gauntlet. You either shrink — or level up. Missy leveled up. Daily gym runs with:
The #1 player in the Class of 2026
Top-ranked 2025 talent
That's not ego fuel — that's reality testing. Missy said things clicked around January of last year when she realized: "I belong here. I can play with anybody." Quiet confidence. Business-like. Dangerous.

This already looks like Tallahassee:
6:45 AM wake-up
7:15 AM leave dorms
7:45 – 3:10 classes (only four)
Weights built into the school day
3:45 – 6:00 practice
Dinner + dorm life + bonding
Tuesdays & Thursdays? 6 AM workouts — up at 5:30.
That's not high school structure. That's college preparation.

One word kept showing up: consistency. Missy chose Florida State because the staff:
Showed real love
Stayed consistent
Recruited her, not just the stat line
Then came the separator. After her visit, they came to watch her pitch. That matters.

Yes — Missy confirmed FSU will allow her to play both basketball and softball. Basketball wraps → softball ramps (February through June). Elite stamina. Elite discipline.

Missy's real number is 33 — but it's retired at Florida State. Why 33? Her mom wore it playing softball.
Missy carried it in both sports. She's wearing 32 now, but the identity stays the same: family, legacy, purpose.

Top 5 artists: Rod Wave, Drake, NBA YoungBoy, Jhene Aiko, Lil Baby
Superhero: Batman (no explanation needed)
Theme song: "Invisible Scars" (yes, she quoted it)
What basketball taught her: Friendships, memories, toughness — and how the game protects her mental health whether it's a great night or a rough one.

That's maturity. No fancy answers. Just real.

If you're watching the next wave of women's hoops — and the next era of two-sport dominance — Missy Odom is the name.
Annual venture capital funding into Irish tech SMEs fell for the first time last year since 2018, according to the Irish Venture Capital Association (IVCA) VenturePulse report, published today in association with William Fry. Funding in 2025 was down by 23% to €1.1 billion. Funding in the fourth quarter fell by 46% to €291.4 million. "It's been a roller coaster year for Irish SMEs looking to raise capital," commented Caroline Gaynor, chairperson, IVCA. She said that there had been an undoubted Trump effect after uncertainty caused by tariffs led to the worst second quarter for ten years. "In addition, the fourth quarter saw a 71% retreat from the Irish market by international investors from €470m to €132.4m. This may be due to hesitation and uncertainty by US VC firms due to a number of factors, including an 'America first' focus, negativity from across the Atlantic about Europe, and the impact of a weakening dollar." However, despite these headwinds, the IVCA chairperson said that she remained positive about Irish entrepreneurs looking to raise capital in 2026. "The Government's Seed and Venture Capital Scheme 2025-29 (1) has a record allocation of €250 million, and we should see the benefits kicking in shortly. In addition, progress is being made on the Government's important enterprise scaling fund (2) as well as other policy measures to mobilise capital to Irish SMEs." She added: "Current geopolitical events have highlighted the need for us to be more self-reliant, have more access to local capital, and not be dependent on overseas investors to fund our indigenous tech sectors." Sarah-Jane Larkin, director general, IVCA, said that the fourth quarter highlighted the weakness of not being able to tap into local private capital. "A major reason for the 46% decline in fourth quarter funding was the 71% fall off in international investment." She added, "Another reason for the decrease in international funding may be that US investors may be overly focused on local AI opportunities, and certainly, the amount of money being invested there is sucking up a lot of venture capital. Unicorn status is being achieved by early-stage start-ups in generative AI in the US much quicker than in the past." Ms Larkin said that the decline in overseas funding in 2025 is reflected in the 33% fall in larger deals in the €30m + category to €540.8m. In the fourth quarter, this category fell by more than two-thirds (69%) to €111m. Funding in the €10m–€30m range for the year overall also fell, by 14% to €269.4m. The IVCA data suggests that transactions for smaller rounds held up reasonably well in 2025. Funding in the €3m-€5m category rose by 39% to €113.8m. There was a small decline (3%) in the €1-€3m range to €102.2m. Seed funding, or first rounds raised by SMEs, dropped by 5% to €141m. The top five deals in quarter four were quantum computing company, Equal 1, which raised €30m; Shorla Oncology (speciality pharmaceuticals, €25m); Aerska (biotech, €17m); Fresco (smart kitchen platform, €15m) and Luminate Medical (healthcare technology, €14m). Life science companies attracted most funding in 2025 in Ireland, raising €461m or 40% of the total. This was followed by software with €156m (14%); cybersecurity €136m (12%); AI and machine learning €104m (9%) and fintech €96m (8%). 186 deals were completed in 2025, down from 217 the previous year, a fall of 14%.
Different gym owners need different leadership books.

In this episode of "Run a Profitable Gym," Two-Brain founder Chris Cooper shares a simple plan to help you use the contents of any book to improve yourself and your business, and then he recommends specific books in each realm of leadership.

Self-Leadership (to go fast, go alone):
"Think Like a Monk" by Jay Shetty
"The Creative Act" by Rick Rubin
"Courage Is Calling" by Ryan Holiday
"Dare to Lead" by Brené Brown
"The War of Art" by Steven Pressfield
"Drive" by Daniel Pink

Team Leadership (to go far, go together):
"Be the Unicorn" by William Vanderbloemen
"Good to Great" by Jim Collins
"Vivid Vision" by Cameron Herold
"Leadershift" by John Maxwell

Peer Leadership (share the mission beyond your gym):
"Influence" by Robert Cialdini
"Building a StoryBrand" by Donald Miller
"The Go-Giver" by Bob Burg and John David Mann
"How to Win Friends and Influence People" by Dale Carnegie

Tribe Leadership (influence your community at scale):
"The Wisdom of Joseph Campbell" by Michael Toms
"The Dichotomy of Leadership" by Jocko Willink and Leif Babin
"Enchantment" by Guy Kawasaki
"Resilience" by Eric Greitens
"Tribes" by Seth Godin

Plus, from Feb. 9 to 13, 2026, only, Coop is giving away free digital copies of his 10 books for gym owners. Go to Gym Owners United (linked below), DM him with your biggest challenge, and he'll send you the right book to start with.

Links
Gym Owners United
Book a Call

1:56 - Your leadership diagnostic
6:31 - Self-leadership books
8:44 - Team leadership books
10:57 - Peer leadership books
11:58 - Tribe leadership books
Less than 1% of venture-capital-backed startups will achieve a valuation in excess of $1 billion. Despite Black-led technology startups receiving less than half of one percent of all venture funding, there are SEVERAL Black-led unicorns. On this first episode of Black History Month, we highlight a few successful companies that you may not realize were founded by Black people, and dispel the notion that Black businesses need "support" – what they really need is for us to buy their stuff.
Career paths, tech trends, and full-circle moments
What does it really take to build a unicorn-scale company from Canada? At North Star, the Unicorn Panel brings together founders behind global tech leaders to talk candidly about ambition, scaling, capital, leadership, near-death moments, and the realities behind the highlight reel. Moderated by Sophie Boulanger, this conversation goes beyond success stories and digs into the hard decisions: when not to sell, how to think about control and cap tables, scaling teams, surviving crises, and building companies with long-term impact. You'll hear firsthand lessons from founders who have built and scaled category-defining businesses across gaming, mobility, and healthcare technology.

Topics covered include:
Early exit offers vs long-term ambition
Scaling from startup to global platform
Fundraising strategy and cap table discipline
Hiring executives and leadership evolution
Crisis moments and near failures
Building and keeping tech champions in Canada
AI, moats, and the future of company building

Recorded live at North Star — Inspiring Entrepreneurs in Montréal.
Learn More about the Atlanta Retreat and grab a seat HERE.

About Business for Unicorns
Business for Unicorns helps gym owners and fitness studio operators build profitable, sustainable businesses without burning out. Founded by Mark Fisher and Michael Keeler — who built and sold the $34-million Mark Fisher Fitness — BFU provides coaching, mentorship, courses, and events for gym owners ready to grow revenue, systemize operations, and create more freedom in their lives. To learn more, check out businessforunicorns.com.

Get More BFU In Your Life:
Claim your FREE copy of Gym Marketing Secrets HERE
Follow BFU on Instagram HERE
Subscribe to MF's YouTube Channel HERE

Ready to Grow Your Gym?
If you're a gym owner with 30+ clients looking to add $5k-$10k/month in the next 90 days, book your FREE Brainstorm Call HERE.
From Palantir and Two Sigma to building Goodfire into the poster child for actionable mechanistic interpretability, Mark Bissell (Member of Technical Staff) and Myra Deng (Head of Product) are trying to turn "peeking inside the model" into a repeatable production workflow by shipping APIs, landing real enterprise deployments, and now scaling the bet with a recent $150M Series B funding round at a $1.25B valuation.

In this episode, we go far beyond the usual "SAEs are cool" take. We talk about Goodfire's core bet: that the AI lifecycle is still fundamentally broken because the only reliable control we have is data, and we post-train, RLHF, and fine-tune by "slurping supervision through a straw," hoping the model picks up the right behaviors while quietly absorbing the wrong ones. Goodfire's answer is to build a bi-directional interface between humans and models: read what's happening inside, edit it surgically, and eventually use interpretability during training so customization isn't just brute-force guesswork.

Mark and Myra walk through what that looks like when you stop treating interpretability like a lab demo and start treating it like infrastructure: lightweight probes that add near-zero latency, token-level safety filters that can run at inference time, and interpretability workflows that survive messy constraints (multilingual inputs, synthetic→real transfer, regulated domains, no access to sensitive data). We also get a live window into what "frontier-scale interp" means operationally (i.e. steering a trillion-parameter model in real time by targeting internal features), plus why the same tooling generalizes cleanly from language models to genomics, medical imaging, and "pixel-space" world models.

We discuss:

* Myra + Mark's path: Palantir (health systems, forward-deployed engineering) → Goodfire early team; Two Sigma → Head of Product, translating frontier interpretability research into a platform and real-world deployments
* What "interpretability" actually means in practice: not just post-hoc poking, but a broader "science of deep learning" approach across the full AI lifecycle (data curation → post-training → internal representations → model design)
* Why post-training is the first big wedge: "surgical edits" for unintended behaviors like reward hacking, sycophancy, and noise learned during customization, plus the dream of targeted unlearning and bias removal without wrecking capabilities
* SAEs vs probes in the real world: why SAE feature spaces sometimes underperform classifiers trained on raw activations for downstream detection tasks (hallucination, harmful intent, PII), and what that implies about "clean concept spaces"
* Rakuten in production: deploying interpretability-based token-level PII detection at inference time to prevent routing private data to downstream providers, plus the gnarly constraints: no training on real customer PII, synthetic→real transfer, English + Japanese, and tokenization quirks
* Why interp can be operationally cheaper than LLM-judge guardrails: probes are lightweight, low-latency, and don't require hosting a second large model in the loop
* Real-time steering at frontier scale: a demo of steering Kimi K2 (~1T params) live and finding features via SAE pipelines, auto-labeling via LLMs, and toggling a "Gen-Z slang" feature across multiple layers without breaking tool use
* Hallucinations as an internal signal: the case that models have latent uncertainty / "user-pleasing" circuitry you can detect and potentially mitigate more directly than black-box methods
* Steering vs prompting: the emerging view that activation steering and in-context learning are more closely connected than people think, including work mapping between the two (even for jailbreak-style behaviors)
* Interpretability for science: using the same tooling across domains (genomics, medical imaging, materials) to debug spurious correlations and extract new knowledge, up to and including early biomarker discovery work with major partners
* World models + "pixel-space" interpretability: why vision/video models make concepts easier to see, how that accelerates the feedback loop, and why robotics/world-model partners are especially interesting design partners
* The north star: moving from "data in, weights out" to intentional model design where experts can impart goals and constraints directly, not just via reward signals and brute-force post-training

Goodfire AI
* Website: https://goodfire.ai
* LinkedIn: https://www.linkedin.com/company/goodfire-ai/
* X: https://x.com/GoodfireAI

Myra Deng
* Website: https://myradeng.com/
* LinkedIn: https://www.linkedin.com/in/myra-deng/
* X: https://x.com/myra_deng

Mark Bissell
* LinkedIn: https://www.linkedin.com/in/mark-bissell/
* X: https://x.com/MarkMBissell

Full Video Episode

Timestamps
00:00:00 Introduction
00:00:05 Introduction to the Latent Space Podcast and Guests from Goodfire
00:00:29 What is Goodfire? Mission and Focus on Interpretability
00:01:01 Goodfire's Practical Approach to Interpretability
00:01:37 Goodfire's Series B Fundraise Announcement
00:02:04 Backgrounds of Mark and Myra from Goodfire
00:02:51 Team Structure and Roles at Goodfire
00:05:13 What is Interpretability? Definitions and Techniques
00:05:30 Understanding Errors
00:07:29 Post-training vs. Pre-training Interpretability Applications
00:08:51 Using Interpretability to Remove Unwanted Behaviors
00:10:09 Grokking, Double Descent, and Generalization in Models
00:10:15 404 Not Found Explained
00:12:06 Subliminal Learning and Hidden Biases in Models
00:14:07 How Goodfire Chooses Research Directions and Projects
00:15:00 Troubleshooting Errors
00:16:04 Limitations of SAEs and Probes in Interpretability
00:18:14 Rakuten Case Study: Production Deployment of Interpretability
00:20:45 Conclusion
00:21:12 Efficiency Benefits of Interpretability Techniques
00:21:26 Live Demo: Real-Time Steering in a Trillion Parameter Model
00:25:15 How Steering Features are Identified and Labeled
00:26:51 Detecting and Mitigating Hallucinations Using Interpretability
00:31:20 Equivalence of Activation Steering and Prompting
00:34:06 Comparing Steering with Fine-Tuning and LoRA Techniques
00:36:04 Model Design and the Future of Intentional AI Development
00:38:09 Getting Started in Mechinterp: Resources, Programs, and Open Problems
00:40:51 Industry Applications and the Rise of Mechinterp in Practice
00:41:39 Interpretability for Code Models and Real-World Usage
00:43:07 Making Steering Useful for More Than Stylistic Edits
00:46:17 Applying Interpretability to Healthcare and Scientific Discovery
00:49:15 Why Interpretability is Crucial in High-Stakes Domains like Healthcare
00:52:03 Call for Design Partners Across Domains
00:54:18 Interest in World Models and Visual Interpretability
00:57:22 Sci-Fi Inspiration: Ted Chiang and Interpretability
01:00:14 Interpretability, Safety, and Alignment Perspectives
01:04:27 Weak-to-Strong Generalization and Future Alignment Challenges
01:05:38 Final Thoughts and Hiring/Collaboration Opportunities at Goodfire

Transcript

Shawn Wang [00:00:05]: So welcome to the Latent Space pod.
We're back in the studio with our special MechInterp co-host, Vibhu. Welcome. Mochi, Mochi's special co-host. And Mochi, the mechanistic interpretability doggo. We have with us Mark and Myra from Goodfire. Welcome. Thanks for having us on. Maybe we can sort of introduce Goodfire and then introduce you guys. How do you introduce Goodfire today?Myra Deng [00:00:29]: Yeah, it's a great question. So Goodfire, we like to say, is an AI research lab that focuses on using interpretability to understand, learn from, and design AI models. And we really believe that interpretability will unlock the new generation, next frontier of safe and powerful AI models. That's our description right now, and I'm excited to dive more into the work we're doing to make that happen.Shawn Wang [00:00:55]: Yeah. And there's always like the official description. Is there an understatement? Is there an unofficial one that sort of resonates more with a different audience?Mark Bissell [00:01:01]: Well, being an AI research lab that's focused on interpretability, there's obviously a lot of people have a lot that they think about when they think of interpretability. And I think we have a pretty broad definition of what that means and the types of places that can be applied. And in particular, applying it in production scenarios, in high stakes industries, and really taking it sort of from the research world into the real world. Which, you know. It's a new field, so that hasn't been done all that much. And we're excited about actually seeing that sort of put into practice.Shawn Wang [00:01:37]: Yeah, I would say it wasn't too long ago that Anthopic was like still putting out like toy models or superposition and that kind of stuff. And I wouldn't have pegged it to be this far along. When you and I talked at NeurIPS, you were talking a little bit about your production use cases and your customers. And then not to bury the lead, today we're also announcing the fundraise, your Series B. $150 million. $150 million at a 1.25B valuation. Congrats, Unicorn.Mark Bissell [00:02:02]: Thank you. Yeah, no, things move fast.Shawn Wang [00:02:04]: We were talking to you in December and already some big updates since then. Let's dive, I guess, into a bit of your backgrounds as well. Mark, you were at Palantir working on health stuff, which is really interesting because the Goodfire has some interesting like health use cases. I don't know how related they are in practice.Mark Bissell [00:02:22]: Yeah, not super related, but I don't know. It was helpful context to know what it's like. Just to work. Just to work with health systems and generally in that domain. Yeah.Shawn Wang [00:02:32]: And Mara, you were at Two Sigma, which actually I was also at Two Sigma back in the day. Wow, nice.Myra Deng [00:02:37]: Did we overlap at all?Shawn Wang [00:02:38]: No, this is when I was briefly a software engineer before I became a sort of developer relations person. And now you're head of product. What are your sort of respective roles, just to introduce people to like what all gets done in Goodfire?Mark Bissell [00:02:51]: Yeah, prior to Goodfire, I was at Palantir for about three years as a forward deployed engineer, now a hot term. Wasn't always that way. And as a technical lead on the health care team and at Goodfire, I'm a member of the technical staff. And honestly, that I think is about as specific as like as as I could describe myself because I've worked on a range of things. 
And, you know, it's it's a fun time to be at a team that's still reasonably small. I think when I joined one of the first like ten employees, now we're above 40, but still, it looks like there's always a mix of research and engineering and product and all of the above. That needs to get done. And I think everyone across the team is, you know, pretty, pretty switch hitter in the roles they do. So I think you've seen some of the stuff that I worked on related to image models, which was sort of like a research demo. More recently, I've been working on our scientific discovery team with some of our life sciences partners, but then also building out our core platform for more of like flexing some of the kind of MLE and developer skills as well.Shawn Wang [00:03:53]: Very generalist. And you also had like a very like a founding engineer type role.Myra Deng [00:03:58]: Yeah, yeah.Shawn Wang [00:03:59]: So I also started as I still am a member of technical staff, did a wide range of things from the very beginning, including like finding our office space and all of this, which is we both we both visited when you had that open house thing. It was really nice.Myra Deng [00:04:13]: Thank you. Thank you. Yeah. Plug to come visit our office.Shawn Wang [00:04:15]: It looked like it was like 200 people. It has room for 200 people. But you guys are like 10.Myra Deng [00:04:22]: For a while, it was very empty. But yeah, like like Mark, I spend. A lot of my time as as head of product, I think product is a bit of a weird role these days, but a lot of it is thinking about how do we take our frontier research and really apply it to the most important real world problems and how does that then translate into a platform that's repeatable or a product and working across, you know, the engineering and research teams to make that happen and also communicating to the world? Like, what is interpretability? What is it used for? What is it good for? Why is it so important? All of these things are part of my day-to-day as well.Shawn Wang [00:05:01]: I love like what is things because that's a very crisp like starting point for people like coming to a field. They all do a fun thing. Vibhu, why don't you want to try tackling what is interpretability and then they can correct us.Vibhu Sapra [00:05:13]: Okay, great. So I think like one, just to kick off, it's a very interesting role to be head of product, right? Because you guys, at least as a lab, you're more of an applied interp lab, right? Which is pretty different than just normal interp, like a lot of background research. But yeah. You guys actually ship an API to try these things. You have Ember, you have products around it, which not many do. Okay. What is interp? So basically you're trying to have an understanding of what's going on in model, like in the model, in the internal. So different approaches to do that. You can do probing, SAEs, transcoders, all this stuff. But basically you have an, you have a hypothesis. You have something that you want to learn about what's happening in a model internals. And then you're trying to solve that from there. You can do stuff like you can, you know, you can do activation mapping. You can try to do steering. There's a lot of stuff that you can do, but the key question is, you know, from input to output, we want to have a better understanding of what's happening and, you know, how can we, how can we adjust what's happening on the model internals? How'd I do?Mark Bissell [00:06:12]: That was really good. I think that was great. 
I think it's also a, it's kind of a minefield of a, if you ask 50 people who quote unquote work in interp, like what is interpretability, you'll probably get 50 different answers. And. Yeah. To some extent also like where, where good fire sits in the space. I think that we're an AI research company above all else. And interpretability is a, is a set of methods that we think are really useful and worth kind of specializing in, in order to accomplish the goals we want to accomplish. But I think we also sort of see some of the goals as even more broader as, as almost like the science of deep learning and just taking a not black box approach to kind of any part of the like AI development life cycle, whether that. That means using interp for like data curation while you're training your model or for understanding what happened during post-training or for the, you know, understanding activations and sort of internal representations, what is in there semantically. And then a lot of sort of exciting updates that were, you know, are sort of also part of the, the fundraise around bringing interpretability to training, which I don't think has been done all that much before. A lot of this stuff is sort of post-talk poking at models as opposed to. To actually using this to intentionally design them.Shawn Wang [00:07:29]: Is this post-training or pre-training or is that not a useful.Myra Deng [00:07:33]: Currently focused on post-training, but there's no reason the techniques wouldn't also work in pre-training.Shawn Wang [00:07:38]: Yeah. It seems like it would be more active, applicable post-training because basically I'm thinking like rollouts or like, you know, having different variations of a model that you can tweak with the, with your steering. Yeah.Myra Deng [00:07:50]: And I think in a lot of the news that you've seen in, in, on like Twitter or whatever, you've seen a lot of unintended. Side effects come out of post-training processes, you know, overly sycophantic models or models that exhibit strange reward hacking behavior. I think these are like extreme examples. There's also, you know, very, uh, mundane, more mundane, like enterprise use cases where, you know, they try to customize or post-train a model to do something and it learns some noise or it doesn't appropriately learn the target task. And a big question that we've always had is like, how do you use your understanding of what the model knows and what it's doing to actually guide the learning process?Shawn Wang [00:08:26]: Yeah, I mean, uh, you know, just to anchor this for people, uh, one of the biggest controversies of last year was 4.0 GlazeGate. I've never heard of GlazeGate. I didn't know that was what it was called. The other one, they called it that on the blog post and I was like, well, how did OpenAI call it? Like officially use that term. And I'm like, that's funny, but like, yeah, I guess it's the pitch that if they had worked a good fire, they wouldn't have avoided it. Like, you know what I'm saying?Myra Deng [00:08:51]: I think so. Yeah. Yeah.Mark Bissell [00:08:53]: I think that's certainly one of the use cases. I think. Yeah. Yeah. I think the reason why post-training is a place where this makes a lot of sense is a lot of what we're talking about is surgical edits. You know, you want to be able to have expert feedback, very surgically change how your model is doing, whether that is, you know, removing a certain behavior that it has. 
So, you know, one of the things that we've been looking at or is, is another like common area where you would want to make a somewhat surgical edit is some of the models that have say political bias. Like you look at Quen or, um, R1 and they have sort of like this CCP bias.Shawn Wang [00:09:27]: Is there a CCP vector?Mark Bissell [00:09:29]: Well, there's, there are certainly internal, yeah. Parts of the representation space where you can sort of see where that lives. Yeah. Um, and you want to kind of, you know, extract that piece out.Shawn Wang [00:09:40]: Well, I always say, you know, whenever you find a vector, a fun exercise is just like, make it very negative to see what the opposite of CCP is.Mark Bissell [00:09:47]: The super America, bald eagles flying everywhere. But yeah. So in general, like lots of post-training tasks where you'd want to be able to, to do that. Whether it's unlearning a certain behavior or, you know, some of the other kind of cases where this comes up is, are you familiar with like the, the grokking behavior? I mean, I know the machine learning term of grokking.Shawn Wang [00:10:09]: Yeah.Mark Bissell [00:10:09]: Sort of this like double descent idea of, of having a model that is able to learn a generalizing, a generalizing solution, as opposed to even if memorization of some task would suffice, you want it to learn the more general way of doing a thing. And so, you know, another. A way that you can think about having surgical access to a model's internals would be learn from this data, but learn in the right way. If there are many possible, you know, ways to, to do that. Can make interp solve the double descent problem?Shawn Wang [00:10:41]: Depends, I guess, on how you. Okay. So I, I, I viewed that double descent as a problem because then you're like, well, if the loss curves level out, then you're done, but maybe you're not done. Right. Right. But like, if you actually can interpret what is a generalizing or what you're doing. What is, what is still changing, even though the loss is not changing, then maybe you, you can actually not view it as a double descent problem. And actually you're just sort of translating the space in which you view loss and like, and then you have a smooth curve. Yeah.Mark Bissell [00:11:11]: I think that's certainly like the domain of, of problems that we're, that we're looking to get.Shawn Wang [00:11:15]: Yeah. To me, like double descent is like the biggest thing to like ML research where like, if you believe in scaling, then you don't need, you need to know where to scale. And. But if you believe in double descent, then you don't, you don't believe in anything where like anything levels off, like.Vibhu Sapra [00:11:30]: I mean, also tendentially there's like, okay, when you talk about the China vector, right. There's the subliminal learning work. It was from the anthropic fellows program where basically you can have hidden biases in a model. And as you distill down or, you know, as you train on distilled data, those biases always show up, even if like you explicitly try to not train on them. So, you know, it's just like another use case of. Okay. If we can interpret what's happening in post-training, you know, can we clear some of this? Can we even determine what's there? Because yeah, it's just like some worrying research that's out there that shows, you know, we really don't know what's going on.Mark Bissell [00:12:06]: That is. Yeah. I think that's the biggest sentiment that we're sort of hoping to tackle. 
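To make the steering-vector discussion above concrete, here is a minimal sketch of activation steering in plain PyTorch/Transformers terms: a forward hook that adds a direction into one layer's residual stream. This is illustrative only, not Goodfire's Ember API; GPT-2, the layer index, the scale, and the random vector are all stand-in assumptions, and in practice the vector would come from an interpretable feature direction (for example an SAE decoder column) rather than random noise.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def add_steering_hook(layer_module, steering_vector, scale=8.0):
    """Register a hook that adds `scale * steering_vector` to the layer's hidden states."""
    def hook(module, inputs, output):
        # GPT-2 blocks return a tuple whose first element is the hidden states.
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + scale * steering_vector.to(device=hidden.device, dtype=hidden.dtype)
        if isinstance(output, tuple):
            return (hidden,) + tuple(output[1:])
        return hidden
    return layer_module.register_forward_hook(hook)

# Usage sketch with GPT-2, chosen only because it is small and public.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Stand-in steering vector; a real one would be an interpretable feature direction.
vec = torch.randn(model.config.n_embd)

handle = add_steering_hook(model.transformer.h[6], vec, scale=8.0)
ids = tok("The weather today is", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=20)[0]))
handle.remove()  # behavior reverts as soon as the hook is removed
```

Steering in this form is reversible and composable: remove the hook and the base model is untouched, which is part of why modifying "the water flowing through the pipes" is attractive next to weight edits.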
Nobody knows what's going on. Right. Like subliminal learning is just an insane concept when you think about it. Right. Train a model on not even the logits, literally the output text of a bunch of random numbers. And now your model loves owls. And you see behaviors like that, that are just, they defy, they defy intuition. And, and there are mathematical explanations that you can get into, but. I mean.Shawn Wang [00:12:34]: It feels so early days. Objectively, there are a sequence of numbers that are more owl-like than others. There, there should be.Mark Bissell [00:12:40]: According to, according to certain models. Right. It's interesting. I think it only applies to models that were initialized from the same starting Z. Usually, yes.Shawn Wang [00:12:49]: But I mean, I think that's a, that's a cheat code because there's not enough compute. But like if you believe in like platonic representation, like probably it will transfer across different models as well. Oh, you think so?Mark Bissell [00:13:00]: I think of it more as a statistical artifact of models initialized from the same seed sort of. There's something that is like path dependent from that seed that might cause certain overlaps in the latent space and then sort of doing this distillation. Yeah. Like it pushes it towards having certain other tendencies.Vibhu Sapra [00:13:24]: Got it. I think there's like a bunch of these open-ended questions, right? Like you can't train in new stuff during the RL phase, right? RL only reorganizes weights and you can only do stuff that's somewhat there in your base model. You're not learning new stuff. You're just reordering chains and stuff. But okay. My broader question is when you guys work at an interp lab, how do you decide what to work on and what's kind of the thought process? Right. Because we can ramble for hours. Okay. I want to know this. I want to know that. But like, how do you concretely like, you know, what's the workflow? Okay. There's like approaches towards solving a problem, right? I can try prompting. I can look at chain of thought. I can train probes, SAEs. But how do you determine, you know, like, okay, is this going anywhere? Like, do we have set stuff? Just, you know, if you can help me with all that. Yeah.Myra Deng [00:14:07]: It's a really good question. I feel like we've always at the very beginning of the company thought about like, let's go and try to learn what isn't working in machine learning today. Whether that's talking to customers or talking to researchers at other labs, trying to understand both where the frontier is going and where things are really not falling apart today. And then developing a perspective on how we can push the frontier using interpretability methods. And so, you know, even our chief scientist, Tom, spends a lot of time talking to customers and trying to understand what real world problems are and then taking that back and trying to apply the current state of the art to those problems and then seeing where they fall down basically. And then using those failures or those shortcomings to understand what hills to climb when it comes to interpretability research. So like on the fundamental side, for instance, when we have done some work applying SAEs and probes, we've encountered, you know, some shortcomings in SAEs that we found a little bit surprising. And so have gone back to the drawing board and done work on that. And then, you know, we've done some work on better foundational interpreter models. 
And a lot of our team's research is focused on what is the next evolution beyond SAEs, for instance. And then when it comes to like control and design of models, you know, we tried steering with our first API and realized that it still fell short of black box techniques like prompting or fine tuning. And so went back to the drawing board and we're like, how do we make that not the case and how do we improve it beyond that? And one of our researchers, Ekdeep, who just joined is actually Ekdeep and Atticus are like steering experts and have spent a lot of time trying to figure out like, what is the research that enables us to actually do this in a much more powerful, robust way? So yeah, the answer is like, look at real world problems, try to translate that into a research agenda and then like hill climb on both of those at the same time.Shawn Wang [00:16:04]: Yeah. Mark has the steering CLI demo queued up, which we're going to go into in a sec. But I always want to double click on when you drop hints, like we found some problems with SAEs. Okay. What are they? You know, and then we can go into the demo. Yeah.Myra Deng [00:16:19]: I mean, I'm curious if you have more thoughts here as well, because you've done it in the healthcare domain. But I think like, for instance, when we do things like trying to detect behaviors within models that are harmful or like behaviors that a user might not want to have in their model. So hallucinations, for instance, harmful intent, PII, all of these things. We first tried using SAE probes for a lot of these tasks. So taking the feature activation space from SAEs and then training classifiers on top of that, and then seeing how well we can detect the properties that we might want to detect in model behavior. And we've seen in many cases that probes just trained on raw activations seem to perform better than SAE probes, which is a bit surprising if you think that SAEs are actually also capturing the concepts that you would want to capture cleanly and more surgically. And so that is an interesting observation. I don't think that is like, I'm not down on SAEs at all. I think there are many, many things they're useful for, but we have definitely run into cases where I think the concept space described by SAEs is not as clean and accurate as we would expect it to be for actual like real world downstream performance metrics.Mark Bissell [00:17:34]: Fair enough. Yeah. It's the blessing and the curse of unsupervised methods where you get to peek into the AI's mind. But sometimes you wish that you saw other things when you walked inside there. Although in the PII instance, I think weren't an SAE based approach actually did prove to be the most generalizable?Myra Deng [00:17:53]: It did work well in the case that we published with Rakuten. And I think a lot of the reasons it worked well was because we had a noisier data set. And so actually the blessing of unsupervised learning is that we actually got to get more meaningful, generalizable signal from SAEs when the data was noisy. But in other cases where we've had like good data sets, it hasn't been the case.Shawn Wang [00:18:14]: And just because you named Rakuten and I don't know if we'll get it another chance, like what is the overall, like what is Rakuten's usage or production usage? 
Yeah.Myra Deng [00:18:25]: So they are using us to essentially guardrail and inference time monitor their language model usage and their agent usage to detect things like PII so that they don't route private user information.Myra Deng [00:18:41]: And so that's, you know, going through all of their user queries every day. And that's something that we deployed with them a few months ago. And now we are actually exploring very early partnerships, not just with Rakuten, but with other people around how we can help with potentially training and customization use cases as well. Yeah.Shawn Wang [00:19:03]: And for those who don't know, like it's Rakuten is like, I think number one or number two e-commerce store in Japan. Yes. Yeah.Mark Bissell [00:19:10]: And I think that use case actually highlights a lot of like what it looks like to deploy things in practice that you don't always think about when you're doing sort of research tasks. So when you think about some of the stuff that came up there that's more complex than your idealized version of a problem, they were encountering things like synthetic to real transfer of methods. So they couldn't train probes, classifiers, things like that on actual customer data of PII. So what they had to do is use synthetic data sets. And then hope that that transfer is out of domain to real data sets. And so we can evaluate performance on the real data sets, but not train on customer PII. So that right off the bat is like a big challenge. You have multilingual requirements. So this needed to work for both English and Japanese text. Japanese text has all sorts of quirks, including tokenization behaviors that caused lots of bugs that caused us to be pulling our hair out. And then also a lot of tasks you'll see. You might make simplifying assumptions if you're sort of treating it as like the easiest version of the problem to just sort of get like general results where maybe you say you're classifying a sentence to say, does this contain PII? But the need that Rakuten had was token level classification so that you could precisely scrub out the PII. So as we learned more about the problem, you're sort of speaking about what that looks like in practice. Yeah. A lot of assumptions end up breaking. And that was just one instance where you. A problem that seems simple right off the bat ends up being more complex as you keep diving into it.Vibhu Sapra [00:20:41]: Excellent. One of the things that's also interesting with Interp is a lot of these methods are very efficient, right? So where you're just looking at a model's internals itself compared to a separate like guardrail, LLM as a judge, a separate model. One, you have to host it. Two, there's like a whole latency. So if you use like a big model, you have a second call. Some of the work around like self detection of hallucination, it's also deployed for efficiency, right? So if you have someone like Rakuten doing it in production live, you know, that's just another thing people should consider.Mark Bissell [00:21:12]: Yeah. And something like a probe is super lightweight. Yeah. It's no extra latency really. Excellent.Shawn Wang [00:21:17]: You have the steering demos lined up. So we were just kind of see what you got. I don't, I don't actually know if this is like the latest, latest or like alpha thing.Mark Bissell [00:21:26]: No, this is a pretty hacky demo from from a presentation that someone else on the team recently gave. So this will give a sense for, for technology. So you can see the steering and action. 
Honestly, I think the biggest thing that this highlights is that as we've been growing as a company and taking on kind of more and more ambitious versions of interpretability related problems, a lot of that comes to scaling up in various different forms. And so here you're going to see steering on a 1 trillion parameter model. This is Kimi K2. And so it's sort of fun that in addition to the research challenges, there are engineering challenges that we're now tackling. Cause for any of this to be sort of useful in production, you need to be thinking about what it looks like when you're using these methods on frontier models as opposed to sort of like toy kind of model organisms. So yeah, this was thrown together hastily, pretty fragile behind the scenes, but I think it's quite a fun demo. So screen sharing is on. So I've got two terminal sessions pulled up here. On the left is a forked version that we have of the Kimi CLI that we've got running to point at our custom hosted Kimi model. And then on the right is a set up that will allow us to steer on certain concepts. So I should be able to chat with Kimi over here. Tell it hello. This is running locally. So the CLI is running locally, but the Kimi server is running back to the office. Well, hopefully should be, um, that's too much to run on that Mac. Yeah. I think it's, uh, it takes a full, like each 100 node. I think it's like, you can. You can run it on eight GPUs, eight 100. So, so yeah, Kimi's running. We can ask it a prompt. It's got a forked version of our, uh, of the SG line code base that we've been working on. So I'm going to tell it, Hey, this SG line code base is slow. I think there's a bug. Can you try to figure it out? There's a big code base, so it'll, it'll spend some time doing this. And then on the right here, I'm going to initialize in real time. Some steering. Let's see here.Mark Bissell [00:23:33]: searching for any. Bugs. Feature ID 43205.Shawn Wang [00:23:38]: Yeah.Mark Bissell [00:23:38]: 20, 30, 40. So let me, uh, this is basically a feature that we found that inside Kimi seems to cause it to speak in Gen Z slang. And so on the left, it's still sort of thinking normally it might take, I don't know, 15 seconds for this to kick in, but then we're going to start hopefully seeing him do this code base is massive for real. So we're going to start. We're going to start seeing Kimi transition as the steering kicks in from normal Kimi to Gen Z Kimi and both in its chain of thought and its actual outputs.Mark Bissell [00:24:19]: And interestingly, you can see, you know, it's still able to call tools, uh, and stuff. It's um, it's purely sort of it's it's demeanor. And there are other features that we found for interesting things like concision. So that's more of a practical one. You can make it more concise. Um, the types of programs, uh, programming languages that uses, but yeah, as we're seeing it come in. Pretty good. Outputs.Shawn Wang [00:24:43]: Scheduler code is actually wild.Vibhu Sapra [00:24:46]: Yo, this code is actually insane, bro.Vibhu Sapra [00:24:53]: What's the process of training in SAE on this, or, you know, how do you label features? I know you guys put out a pretty cool blog post about, um, finding this like autonomous interp. Um, something. Something about how agents for interp is different than like coding agents. I don't know while this is spewing up, but how, how do we find feature 43, two Oh five. 
Yeah.Mark Bissell [00:25:15]: So in this case, um, we, our platform that we've been building out for a long time now supports all the sort of classic out of the box interp techniques that you might want to have like SAE training, probing things of that kind, I'd say the techniques for like vanilla SAEs are pretty well established now where. You take your model that you're interpreting, run a whole bunch of data through it, gather activations, and then yeah, pretty straightforward pipeline to train an SAE. There are a lot of different varieties. There's top KSAEs, batch top KSAEs, um, normal ReLU SAEs. And then once you have your sparse features to your point, assigning labels to them to actually understand that this is a gen Z feature, that's actually where a lot of the kind of magic happens. Yeah. And the most basic standard technique is look at all of your d input data set examples that cause this feature to fire most highly. And then you can usually pick out a pattern. So for this feature, If I've run a diverse enough data set through my model feature 43, two Oh five. Probably tends to fire on all the tokens that sounds like gen Z slang. You know, that's the, that's the time of year to be like, Oh, I'm in this, I'm in this Um, and, um, so, you know, you could have a human go through all 43,000 concepts andVibhu Sapra [00:26:34]: And I've got to ask the basic question, you know, can we get examples where it hallucinates, pass it through, see what feature activates for hallucinations? Can I just, you know, turn hallucination down?Myra Deng [00:26:51]: Oh, wow. You really predicted a project we're already working on right now, which is detecting hallucinations using interpretability techniques. And this is interesting because hallucinations is something that's very hard to detect. And it's like a kind of a hairy problem and something that black box methods really struggle with. Whereas like Gen Z, you could always train a simple classifier to detect that hallucinations is harder. But we've seen that models internally have some... Awareness of like uncertainty or some sort of like user pleasing behavior that leads to hallucinatory behavior. And so, yeah, we have a project that's trying to detect that accurately. And then also working on mitigating the hallucinatory behavior in the model itself as well.Shawn Wang [00:27:39]: Yeah, I would say most people are still at the level of like, oh, I would just turn temperature to zero and that turns off hallucination. And I'm like, well, that's a fundamental misunderstanding of how this works. Yeah.Mark Bissell [00:27:51]: Although, so part of what I like about that question is you, there are SAE based approaches that might like help you get at that. But oftentimes the beauty of SAEs and like we said, the curse is that they're unsupervised. So when you have a behavior that you deliberately would like to remove, and that's more of like a supervised task, often it is better to use something like probes and specifically target the thing that you're interested in reducing as opposed to sort of like hoping that when you fragment the latent space, one of the vectors that pops out.Vibhu Sapra [00:28:20]: And as much as we're training an autoencoder to be sparse, we're not like for sure certain that, you know, we will get something that just correlates to hallucination. You'll probably split that up into 20 other things and who knows what they'll be.Mark Bissell [00:28:36]: Of course. Right. Yeah. 
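As a rough illustration of the pipeline Mark describes (gather activations, train a sparse autoencoder, then label features by their max-activating examples), here is a toy top-k SAE and a labeling helper. Dimensions, names, and the heap-based selection are assumptions for the sketch, not Goodfire's internal tooling; in practice the SAE is trained with a reconstruction loss over activations from a large, diverse corpus, and the top examples are typically handed to an LLM for auto-labeling.

```python
import heapq
import torch
import torch.nn as nn

class TopKSAE(nn.Module):
    """Toy top-k sparse autoencoder over model activations."""
    def __init__(self, d_model: int, n_features: int, k: int):
        super().__init__()
        self.enc = nn.Linear(d_model, n_features)
        self.dec = nn.Linear(n_features, d_model)
        self.k = k

    def encode(self, acts: torch.Tensor) -> torch.Tensor:
        pre = torch.relu(self.enc(acts))                       # (n_examples, n_features)
        topk = torch.topk(pre, self.k, dim=-1)
        sparse = torch.zeros_like(pre)
        return sparse.scatter(-1, topk.indices, topk.values)   # keep only the top-k features

    def forward(self, acts: torch.Tensor) -> torch.Tensor:
        return self.dec(self.encode(acts))                     # reconstruction for an MSE loss


def top_activating_examples(sae, activations, texts, feature_id, n=10):
    """Return the n texts on which `feature_id` fires most strongly.

    activations: (n_examples, d_model) residual-stream activations, aligned with texts.
    """
    with torch.no_grad():
        scores = sae.encode(activations)[:, feature_id]
    return heapq.nlargest(n, zip(scores.tolist(), texts), key=lambda pair: pair[0])
```

Steering a labeled feature then amounts to adding or subtracting its decoder direction (here `sae.dec.weight[:, feature_id]`) in the residual stream, as in the hook sketch earlier.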
So there are, you know, problems with feature splitting and feature absorption. And then there are the off-target effects, right? Ideally, you would want to be very precise: if you reduce the hallucination feature, maybe suddenly your model can't write creatively anymore, and you don't want that; you want it to still write creatively but stop hallucinating facts and figures.Shawn Wang [00:28:55]: Good. So Vibhu has a paper to recommend there that we'll put in the show notes. But yeah, I guess since your demo is done, are there any other things that you want to highlight, or any other interesting features you want to show?Mark Bissell [00:29:07]: I don't think so. Yeah. Like I said, this is a pretty small snippet. The main point here that I think is exciting is that there's not a whole lot of interp being applied to models at quite this scale. Anthropic certainly has some research, and other teams as well. But it's nice to see these techniques being put into practice. I think not that long ago, the idea of real-time steering of a trillion-parameter model would have sounded pretty far-fetched.Shawn Wang [00:29:33]: Yeah. The fact that it's real time; like, you started the thing and then you edited the steering vector.Vibhu Sapra [00:29:38]: I think it's an interesting one. TBD what the actual production use case for the real-time editing would be; that's the fun part of the demo, right? You can kind of see how this could be served behind an API. Like, yes, you only have so many knobs and you can just tweak it a bit more. And I don't know how it plays in. People haven't done that much with how this works with or without prompting, or how it works with fine-tuning. There's a whole hype of continual learning, right? So there's just so much to see. Is this another parameter, or is it a parameter we just kind of leave at a default and don't use? So I don't know. Maybe someone here wants to put out a guide on how to use this with prompting and when to do what.Mark Bissell [00:30:18]: Oh, well, I have a paper recommendation I think you would love, from Ekdeep on our team, who is an amazing researcher; I just can't say enough amazing things about Ekdeep. He actually has a paper, along with some others from the team and elsewhere, that gets at the essential equivalence of activation steering and in-context learning. He thinks of everything in a cognitive neuroscience, Bayesian framework, but basically you can precisely show how prompting, in-context learning, and steering exhibit similar behaviors, and even get quantitative about the magnitude of steering you would need to induce a certain amount of behavior, similar to certain prompting, even for things like jailbreaks. It's a really cool paper. Are you saying steering is less powerful than prompting? More like you can almost write a formula that tells you how to convert between the two of them.Myra Deng [00:31:20]: And so they're, like, formally equivalent, actually, in the limit. Right.Mark Bissell [00:31:24]: So one case study of this is jailbreaks. I don't know, have you seen the stuff where you can do many-shot jailbreaking? You flood the context with examples of the behavior.
And Anthropic put out that paper.Shawn Wang [00:31:38]: A lot of people were like, yeah, we've been doing this, guys.Mark Bissell [00:31:40]: Like, yeah. What's in this in-context learning and activation steering equivalence paper is that you can predict the number of examples you will need to put in the context in order to jailbreak the model, by doing steering experiments and using this sort of equivalence mapping. That's cool. That's really cool. It's very neat. Yeah.Shawn Wang [00:32:02]: I was going to say, I can back-rationalize that this makes sense, because what context is, is basically just, you know, it updates the KV cache, kind of, and then every next-token inference is still attending over all of the context up to date. And you could, I guess, theoretically replace that with your steering. The only problem is steering typically is on one layer, maybe three layers like you did. So it's not exactly equivalent.Mark Bissell [00:32:33]: Right, right. You need to get precise about how you define steering and how you're modeling the setup. But yeah, I've got the paper pulled up here. Belief dynamics reveal the dual nature... Yeah, the title is Belief Dynamics Reveal the Dual Nature of In-Context Learning and Activation Steering. So Eric Bigelow and Dan Wurgaft, who are doing fellowships at Goodfire, are on there, and Ekdeep is the final author.Myra Deng [00:32:59]: I think, actually, to your question of what the production use case of steering is: maybe just think one level beyond steering as it is today. Imagine if you could adapt your model to be, you know, an expert legal reasoner, in almost real time, very quickly and efficiently, using human feedback or using your semantic understanding of what the model knows and where it knows that behavior. While it's not clear what the product is at the end of the day, it's clearly very valuable. Thinking about what the next interface for model customization and adaptation is, is a really interesting problem for us. We have heard from a lot of people actually interested in fine-tuning and RL for open-weight models in production. And so people are using things like Tinker, or open-source libraries, to do that, but it's still very difficult to get models fine-tuned and RL'd for exactly what you want them to do unless you're an expert at model training. And so that's something we'reShawn Wang [00:34:06]: looking into. Yeah. I hadn't thought of that. So Tinker from Thinking Machines famously uses rank-one LoRA. Is that basically the same as steering? Like, what's the comparison there?Mark Bissell [00:34:19]: Well, so in that case, you are still applying updates to the parameters, right?Shawn Wang [00:34:25]: Yeah. You're not touching the base model. You're touching an adapter. It's kind of, yeah.Mark Bissell [00:34:30]: Right. But I guess it still is more in parameter space then. Maybe it's like: are you modifying the pipes, or are you modifying the water flowing through the pipes to get what you're after? That's maybe one way to think about it.Mark Bissell [00:34:44]: I like that analogy.
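A toy way to see the pipes-versus-water distinction in code: a rank-one LoRA update edits the weight matrix itself, while activation steering leaves the weights alone and shifts the activation flowing through them. The shapes and coefficients below are arbitrary illustrations of the two intervention types, not Tinker's or Goodfire's actual implementations.

```python
# Rank-one LoRA vs. activation steering on a single linear map (toy illustration).
import torch

d_in, d_out = 1024, 1024
W = torch.randn(d_out, d_in) / d_in ** 0.5     # frozen base weight ("the pipes")
x = torch.randn(d_in)                          # an incoming activation ("the water")

# Rank-one LoRA: W_eff = W + alpha * outer(b, a); a and b would be trained.
a, b, alpha = torch.randn(d_in), torch.randn(d_out), 0.01
lora_out = W @ x + alpha * b * (a @ x)         # same as (W + alpha * outer(b, a)) @ x

# Activation steering: weights untouched; add a concept direction to the output.
direction = torch.randn(d_out)
direction = direction / direction.norm()
steered_out = W @ x + 4.0 * direction          # 4.0 is an arbitrary steering coefficient
```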
That's my mental map of it, at least, but it gets at this idea of model design and intentional design, which is something that we're very focused on. And I hope that we look back at how we're currently training and post-training models and just think what a primitive way of doing that it was. There's no intentionalityShawn Wang [00:35:06]: really in... It's just data, right? The only thing in control is what data we feed in.Mark Bissell [00:35:11]: So Dan from Goodfire likes to use this analogy: he has a couple of young kids, and he talks about, what if I could only teach my kids how to be good people by giving them cookies, or giving them a slap on the wrist when they do something wrong, without telling them why it was wrong or what they should have done differently? Just figure it out, right? Exactly. So that's RL. Yeah. Right. And, you know, it's sample-inefficient. What do they say, it's like slurping feedback, slurping supervision. Right. And so you'd like to get to the point where you can have experts giving feedback to their models that gets internalized, and steering is an inference-time way of getting at that idea. But ideally you're moving to a world whereVibhu Sapra [00:36:04]: it is much more intentional design in perpetuity for these models. Okay. This is one of the questions we asked Emmanuel from Anthropic on the podcast a few months ago. Basically the question was: you're at a research lab that does model training, foundation models, and you're on an interp team. How does it tie back? Do ideas come from the pre-training team? Do they go back? So, for those interested, you can watch that. There wasn't too much of a connection there, but it's still something they want toMark Bissell [00:36:33]: push for down the line. It can be useful for all of the above. Like, there are certainly post-hocVibhu Sapra [00:36:39]: use cases where it doesn't need to touch that. I think the other thing a lot of people forget is that this stuff isn't too computationally expensive, right? I would say, if you're interested in getting into research, mech interp is one of the most approachable fields. A lot of this, train an SAE, train a probe, the budget for it is small, and there's already a lot done. There's a lot of open-source work. You guys have done some too.Shawn Wang [00:37:04]: There are notebooks from the Gemini team, from Neel Nanda: this is how you do it, just step through the notebook.Vibhu Sapra [00:37:09]: Even if you're not that technical with any of this, you can still make progress there; you can look at different activations. But if you do want to get into training this stuff, correct me if I'm wrong, it's in the thousands of dollars. It's not that high-scale. And the same with applying it: doing it for post-training and all of this is fairly cheap on the scale of, okay, I want to get into model training but I don't have compute for pre-training-type stuff. So it's a very nice field to get into. And there are also a lot of open questions, right? Some of them have to do with, okay, I want a product, I want to solve this. But there's also just a lot of open-ended stuff that people could work on.
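On the "train a probe" point: a supervised linear probe, of the kind Mark suggested earlier for targeted behaviors like hallucination, is only a few lines with scikit-learn, which is part of why the field is so approachable. In the hedged sketch below, the activations and labels are random placeholders; in practice the rows would be cached hidden states (one per completion) and the labels would come from human or LLM-judge annotation.

```python
# Minimal supervised probe on model activations (placeholder data, see note above).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

d_model, n_examples = 1024, 2000
X = np.random.randn(n_examples, d_model)       # one row of hidden states per completion
y = np.random.randint(0, 2, size=n_examples)   # 1 = hallucinated, 0 = grounded

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", probe.score(X_te, y_te))

# The probe's weight vector is itself a direction in activation space, so it can
# double as a supervised monitoring (or steering) direction for the target behavior.
direction = probe.coef_[0] / np.linalg.norm(probe.coef_[0])
```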
That's interesting. Right. I don't know if you guys have any calls for open questions or open work, things you'd want to collaborate on or would just like to see solved, or, you know, for people listening who want to get into mech interp, because people always talk about it: what are the things they should check out to start? And of course, join you guys as well; I'm sure you're hiring.Myra Deng [00:38:09]: There's a paper, I think from Lee Sharkey, called Open Problems in Mechanistic Interpretability, which I recommend everyone who's interested in the field read. It's just a really comprehensive overview of what experts in the field think are the most important problems to be solved. I also think, to your point, it's been really inspiring to see a lot of young people getting interested in interpretability. Actually, not just young people, also scientists who have been experts in physics or biology for many years transitioning into interp, because the barrier to entry is, in some ways, low and there's a lot of information out there and ways to get started. There's this anecdote of professors at universities saying that all of a sudden every incoming PhD student wants to study interpretability, which was not the case a few years ago. So it just goes to show how exciting the field is, how fast it's moving, how quick it is to get started, and things like that.Mark Bissell [00:39:10]: And it's also just a very welcoming community. There's an open-source mech interp Slack channel, people are always posting questions, and folks in the space are always responsive if you ask things on the various forums. But yeah, the Open Problems paper is a really good one.Myra Deng [00:39:28]: For other people who want to get started, I think MATS is a great program. What's the acronym for? Machine Learning and Alignment Theory Scholars? It's like the...Vibhu Sapra [00:39:40]: Normally summer-internship style.Myra Deng [00:39:42]: Yeah, but they've been doing it year-round now. And actually a lot of our full-time staff have come through or gone through that program. It's great for anyone who is transitioning into interpretability. There are a couple of other fellows programs; we do one, as does Anthropic. And so those are great places to get started if anyone is interested.Mark Bissell [00:40:03]: Also, interp has been seen as a research field for a very long time, but I think engineers are sorely wanted for interpretability as well, especially at Goodfire but elsewhere too, as it scales up.Shawn Wang [00:40:18]: I should mention that Lee actually works with you guys, right? In the London office. And I'm adding our first-ever mech interp track at AI Europe, because I see these industry applications now emerging, and I'm pretty excited to help push that along. Yeah, I was looking forward to that. It'll effectively be the first industry mech interp conference. Yeah. I'm so glad you added that. You know, it's still a little bit of a bet; it's not that widespread. But I can definitely see this is the time to really get into it. We want to be early on things.Mark Bissell [00:40:51]: For sure. And I think the field understands this, right?
So at ICML, I think the title of the mech interp workshop this year was Actionable Interpretability, and there was a lot of discussion around bringing it to various domains. Everyone's adding pragmatic, actionable, whatever.Shawn Wang [00:41:10]: It's like, okay, well, we weren't actionable before, I guess. I don't know.Vibhu Sapra [00:41:13]: And I mean, just being at the conferences in Europe, you see the interp room at the old-school conferences; I think they had a very tiny room until they got lucky and it got doubled. But there's definitely a lot of interest, a lot of niche research. You see a lot of research coming out of universities, from students. We covered a paper last week with two unknown authors and not many citations, but, you know, you can do a lot of meaningful work there. Yeah.Shawn Wang [00:41:39]: Yeah. One thing people haven't really mentioned yet is interp for code. I think it's an abnormally important area. The conspiracy theory from two years ago, when the first SAE work came out of Anthropic, was that they would just use SAEs to turn the bad-code vector down and turn the good-code vector up. And isn't that the dream? But I guess, why is it funny? If it were realistic, it would not be funny; it would be, no, actually, we should do this. It's funny because we feel there are some limitations to what steering can do. And I think a lot of the public image of steering is the Gen Z stuff: oh, you can make it really love the Golden Gate Bridge, or you can make it speak like Gen Z. To be a legal reasoner seems like a huge stretch. Yeah. And I don't know if it will get there this way. Yeah.Myra Deng [00:42:36]: I will say we are announcing something very soon that I will not speak too much about. But this is what we've run into again and again: we don't want to be in a world where steering is only useful for stylistic things. That's definitely not what we're aiming for. But the types of interventions that you need to do to get to things like legal reasoning are much more sophisticated and require breakthroughs in learning algorithms. And that's, um...Shawn Wang [00:43:07]: And is this an emergent property of scale as well?Myra Deng [00:43:10]: I think so. Yeah. I mean, scale definitely helps. Scale allows you to learn a lot of information and reduce noise across large amounts of data. But we also think there are ways to do things much more effectively, even at scale: actually learning exactly what you want from the data and not learning the things exhibited in the data that you don't want. So we're not anti-scale, but we are also realizing that scale alone is not going to get us to the type of AI development that we want as these models get more powerful and get deployed in all these sorts of mission-critical contexts. The current life cycle of training, deploying, and evaluating models is, to us, deeply broken and has opportunities to improve. So, more to come on that very, very soon.Mark Bissell [00:44:02]: And I think that's basically a proof point that these concepts do exist.
Like, if you can manipulate them in precisely the right way, you can get the ideal combination of them that you desire. And steering is maybe the most coarse-grained peek at what that looks like, but I think it's evocative of what you could do if you had total surgical control over every concept, every parameter. Yeah, exactly.Myra Deng [00:44:30]: There were, like, bad-code features. I've got it pulled up.Vibhu Sapra [00:44:33]: Yeah. Just coincidentally, as you guys are talking.Shawn Wang [00:44:35]: This is, like, this is exactly it.Vibhu Sapra [00:44:38]: There's specifically a code-error feature that activates, and they show it's not typo detection; it's typos in code, not typical typos. And you can see it clearly activates where there's something wrong in the code. And they have malicious code, code error, a whole bunch of finer-grained sub-features broken out. Yeah.Shawn Wang [00:45:02]: Yeah. So the rough intuition for me, why I talked about post-training, was that, well, you just have a few different rollouts with all these things turned off and on and whatever, and then that's synthetic data you can kind of post-train on. Yeah.Vibhu Sapra [00:45:13]: And I think we make it sound easier than it is just by saying it; they do the real hard work.Myra Deng [00:45:19]: I mean, you guys have the right idea. Exactly. Yeah. We replicated a lot of these features in our Llama models as well. I remember there was like...Vibhu Sapra [00:45:26]: And I think a lot of this stuff is open, right? You guys opened yours, DeepMind has open-sourced a lot of SAEs on Gemma, and even Anthropic has opened a lot of this. There are a lot of resources we can probably share for people who want to get involved.Shawn Wang [00:45:41]: Yeah. And a special shout-out to Neuronpedia as well. Yes. Yeah, an amazing piece of work for visualizing those things.Myra Deng [00:45:49]: Yeah, exactly.Shawn Wang [00:45:50]: I guess I wanted to pivot a little bit onto the healthcare side, because I think that's a big use case for you guys and we haven't really talked about it yet. This is a bit of a crossover for me, because we do have a separate science pod that we're starting up for AI for science, just because it's such a huge investment category, and also I'm less qualified to do it; we actually have bio PhDs to cover that, which is great. But I want to just recap your work, maybe on the Evo 2 stuff, and then build forward.Mark Bissell [00:46:17]: Yeah, for sure. And maybe to frame up the conversation: I think another interesting lens on interpretability in general is that a lot of the techniques we've described are ways to solve the AI-human interface problem, and bidirectional communication is the goal there. So what we've been talking about with intentional design of models and, you know, steering, but also more advanced techniques, is having humans impart our desires and control into and over models. And the reverse is also very interesting, especially as you get to superhuman models, whether that's narrow superintelligence, like these scientific models that work on genomics data, medical imaging, things like that, or, down the line, superintelligence of other forms as well.
What knowledge can the AIs teach us? That's the other direction. And so some of our life-sciences work to date has been getting at exactly that question. Some of it does look like debugging these various life-sciences models: understanding whether they're actually performing well on tasks or picking up on spurious correlations. For instance, with genomics models you would like to know whether they are focusing on the biologically relevant things that you care about, or using some simpler correlate, like the ancestry of the person they're looking at. But then also, in the instances where they are superhuman, maybe they are understanding elements of the human genome that we don't have names for, or have made specific discoveries that we don't know about; that's a big goal. And we're already seeing that, right: we are partnered with organizations like Mayo Clinic, a leading research health system in the United States, Arc Institute, as well as a startup called Prima Mente, which focuses on neurodegenerative disease. And in our partnership with them, we've taken foundation models they've been training and applied our interpretability techniques to find novel biomarkers for Alzheimer's disease. So I think this is just the tip of the iceberg, but that's a flavor of some of the things we're working on.Shawn Wang [00:48:36]: Yeah, I think that's really fantastic. Obviously, we did the Chan Zuckerberg pod last year as well, and there's a plethora of these models coming out because there's so much potential and research. And it's very interesting how it's basically the same as language models, just with a different underlying dataset. It's the same exact techniques; there's no change, basically.Mark Bissell [00:48:59]: Yeah. Well, and even in other domains, right? Like robotics: I know a lot of the companies just use Gemma as the backbone and then make it into a VLA that takes these actions. It's transformers all the way down. So yeah.Vibhu Sapra [00:49:15]: Like we have MedGemma now, right? Even this week there was MedGemma 1.5, and they're training it on this stuff: 3D scans, medical domain knowledge, and all of that too. So there's a push from both sides. But I think one of the things about mech interp is that you're a little bit more cautious in some domains, right? Healthcare mainly being one: guardrails, understanding. We're more risk-averse to something going wrong there. So even just from a basic-understanding standpoint, if we're trusting these systems to make claims, we want to know why and what's going on.Myra Deng [00:49:51]: Yeah, I think there's totally a kind of deployment bottleneck to actually using foundation models for real patient usage and things like that. Say you're using a model for rare-disease prediction: you probably want some explanation as to why your model predicted a certain outcome, and an interpretable explanation at that. So that's definitely a use case. But I also think being able to extract scientific information that no human knows, to accelerate drug discovery and disease treatment and things like that, actually is a really, really big unlock for science, for scientific discovery. And you've seen a lot of startups say that they're going to accelerate scientific discovery.
And I feel like we actually are doing that through our interp techniques, and kind of almost by accident. I think we got reached out to very, very early on by these healthcare institutions, and none of us had healthcare backgrounds.Shawn Wang [00:50:49]: How did they even hear of you? A podcast.Myra Deng [00:50:51]: Oh, okay. Yeah, a podcast.Vibhu Sapra [00:50:53]: Okay, well, now's that time, you know.Myra Deng [00:50:55]: Everyone can call us.Shawn Wang [00:50:56]: Podcasts are the most important thing. Everyone should listen to podcasts.Myra Deng [00:50:59]: Yeah, they reached out. They were like, you know, we have these really smart models that we've trained, and we want to know what they're doing. And we were really early at that time, like three months old, and it was just a few of us. And we were like, oh my God, we've never used these models, let's figure it out. But it's also great proof that interp techniques scale pretty well across domains. We didn't really have to learn too much about the domain.Shawn Wang [00:51:21]: Interp is a machine learning technique, and machine learning skills transfer everywhere, right? Yeah. It's just a general insight. Yeah. Probably to finance too, I think, which would be fun. I don't know if you have anything to say there.Mark Bissell [00:51:34]: Yeah, well, just across the sciences: we've also done work on materials science. It really runs the gamut.Vibhu Sapra [00:51:40]: Yeah. Awesome. And, you know, for those who should reach out: you're obviously experts in this, but is there a call-out for people you're looking to partner with? Design partners, people to use your stuff beyond the general developer who wants to plug and play the steering stuff, more on the research side: are there ideal design partners, customers, things like that?Myra Deng [00:52:03]: Yeah, I can talk about maybe the non-life-sciences side, and then I'm curious to hear from you on the life-sciences side. We're looking for design partners across many domains: language, anyone who's customizing language models or trying to push the frontier of code or reasoning models, is really interesting to us. And then we're also interested in the frontier of modeling. There are a lot of models that work in, like, pixel space, as we call it. So if you're doing world models, video models, even robotics, where there's not a very clean natural-language interface to interact with, we think interp can really help, and we're looking for a few partners in that space.Shawn Wang [00:52:43]: Just because you mentioned the keyword
Show Notes On this week's podcast, Dan and Kris board the SAG Cartoon Express once again! This time, they stopped at a particularly crummy episode of Super Mario World. Just how bad is a cartoon that accidentally swaps Mario and Luigi's color palettes AND inexplicably makes a bunch of tiny Marios come out of a block? And why is Yoshi so annoying? Let's find out! Then, in Week Old News, Nintendo announces unreleased Virtual Boy games for Switch 2, the game industry seems to hate AI just as much as the rest of us, God of War finds its Zeus, and the Disney Afternoon finally gets Bonkers. Finally, in the Checkpoint, Kris finishes the excellent Adventure of Samsara, Dan bought a fancy coffee machine, watched Death of a Unicorn, and they both froze their butts off for the good of the show. Enjoy! See Our Faces!!! Stone Age Gamer YouTube Just Hear Our Voices Audio only version Useful Links Support us on Patreon StoneAgeGamer.com The Gratuitous Rainbow Spectrum Safe at Home Rescue Shoot the Moon Stitches Art of Angela Dean's Substack SAG's theme Song “Squared Roots” by Banjo Guy Ollie Social Stuff Join us on Discord! Twitch Geekade Facebook Stone Age Gamer Facebook Geekade Twitter Stone Age Gamer Twitter Geekade Instagram Stone Age Gamer Instagram YouTube Geekade Contact Us
$300 billion wiped from software stocks in a single week. Is SaaS officially dead or is this the buying opportunity of a decade?In this episode of Bricks, Bucks & Bytes, Owen, Patric, Dustin, and Martin break down the so-called "SaaSpocalypse" rocking global markets, welcome back Bedrock CTO Kevin Peterson fresh off a massive $270M Series B that puts his autonomous excavator company into unicorn territory, and dissect the hilarious beef between Anthropic and OpenAI playing out on the Super Bowl stage.Topics discussed:The $300B+ SaaS valuation crash and whether it's a death sentence or a market correction back to normalWhy software engineers are now "juicing" with AI coding tools and what that means for the industryBedrock's journey from Series A to unicorn status, their autonomous excavators on real job sites, and why Google's Capital G is backing themWhere smart money is moving as investors flee software stocks (hint: Europe is winning)Anthropic's brilliant Super Bowl ad mocking OpenAI's plan to put ads in ChatGPT and Sam Altman's salty response38% of Stanford students claiming disability for perks and what it says about elite institutions
Short Stories for Kids: The Magical Podcast of Story Telling
Written by Alex. Come and follow more adventures on our animated TV show on YouTube!
This week, Elle and Vee play Whoreible Life with sexologist, relationship coach, and host of In Bed with Alexa podcast, Alexa Andre. As usual, a few cards turn into larger conversations about being a unicorn, OnlyFans, orgies, the best kind of dildo, latex fantasies, receiving pleasure, and why couple privilege can make or break group sex. Messy, honest, hilarious, and very on-brand.What is Whoreible Life? Whoreible Life is a sex-positive card game designed to spark honest, funny, and sometimes unhinged conversations about sex and kink. Each card presents a prompt, scenario, or choice, and players can use the deck as an icebreaker, a storytelling tool, or, as in this episode, a Fuck/Marry/Kill framework that opens the door to deep discussion, personal revelations, and unexpected wisdom.Fuck/Marry/Kill: Full Body Latex, Sex Website Profile, Be a UnicornLatex: the pros and cons. (5:50)Sex Website Profile: should Vee make an OnlyFans? (8:08)OnlyFans, taking sexy pics of yourself, and how to do it. (12:27)Elle's Technique for a Sexy Selfie Video. (20:06)Being a Unicorn: couple privilege, and how to treat a unicorn. (21:13)Naughty Stories: Favorite Memories of Being a Third. (27:21)Learning to Receive: worshipping pussy. (34:53)Fuck/Marry/Kill: Orgy of 8+, Be Bound with Rope, Watch VR Porn (38:56)Apple Vision Pro: why Elle would kill VR porn. (43:01)Orgy stories: what makes a good orgy flow? (44:41) Inserting Toys: Dildos vs dicks vs hands vs Njoy. (52:29)Final Nuggets: “fuck first”, “eat pussy”, and “sex ends when you're satisfied”. (55:39)Support the showWhere to Find Us & How to Support the Show:
From a hair removal crystal and a life-changing contour stick, to a foundation unicorn and “Leigh Campbell in a body lotion”, we’ve truly outdone ourselves this week in the product testing department. On today’s episode of Spendy Savey, Leigh and Kelly share their best skincare, makeup, hair, body and fragrance recommendations, including affordable bodycare that has us in a chokehold, and the product Kelly didn’t know she needed, but now won’t do her makeup without. Plus, Leigh breaks down why she’s fallen back in love with a $377 rich cream, and we’ve uncovered a secret club that’ll save you hundreds of dollars on your active skincare favourites. Oh, and Kelly’s found a foundation that somehow manages to be high coverage, long-wearing, dewy AND radiant… for only $40 (or less when it’s on sale!). EVERYTHING MENTIONED SPENDY: Kelly: Smashbox Precision Contour Stick in Light, $45. Leigh: Sulwhasoo Concentrated Ginseng Rejuvenating Cream Rich 50ml, $377. SAVEY: Kelly: Vaseline Gluta Overnight Body Lotion 200mL, $11. Leigh: Pixi On-The-Glow Bronze in Soft Glow, $33. NEWBIES: Kelly: Youth Rx Ritual Melting Balm, $59 (Member Price), from skincare membership service Youth Lab Direct. Leigh: Clinique Moisture Surge Hydro-Infused Lotion 200ml, $75. SHOP MY STASH/EMPTY: Kelly: Nude By Nature Skin Radiance Foundation 30ml, $42.95. DON'T FORGET: Watch & Subscribe on YouTube, this episode drops tonight at 7pm! Catch it here. Follow us on Instagram: @youbeautypodcast Follow us on TikTok: @youbeautypod Join our You Beauty Facebook Group here GET IN TOUCH: Got a beauty question you want answered? Email us at youbeauty@mamamia.com.au or send us a voice note on Instagram! You Beauty is a podcast by Mamamia. Listen to more Mamamia podcasts here. For our product recommendations, exclusive beauty news, reviews, articles, deals and much more - sign up for our free You Beauty weekly newsletter here Subscribe to Mamamia here CREDITS: Hosts: Kelly McCarren & Leigh Campbell Producer: Sophie Campbell Audio Producer: Tegan Sadler Video Producer: Artemi Kokkaris Just so you know — some of the links in these notes are affiliate links, which means we might earn a small commission if you buy through them. It doesn’t cost you anything extra, and it helps support the show. Happy shopping! Mamamia acknowledges the Traditional Owners of the Land we have recorded this podcast on, the Gadigal people of the Eora Nation. We pay our respects to their Elders past and present, and extend that respect to all Aboriginal and Torres Strait Islander cultures.Become a Mamamia subscriber: https://www.mamamia.com.au/subscribeSee omnystudio.com/listener for privacy information.
Nick Smoot sits down with Doug Erwin, the entrepreneurship lead at EDAWN in Northern Nevada, for a wide-ranging, high-energy conversation about two things happening at once:AI is moving from “chat” to “do.” Agents are booking meetings, running workflows, and changing the cost and speed of building.Reno is quietly turning into a real startup boomtown. And EDAWN is literally putting money behind the people who want to strengthen the ecosystem.The punchline: Doug's team has a grant program called BESTI that funds community-led events, meetups, and builder gatherings. Nick shows how to submit an idea through BuildCities.com by linking a project to the BESTI challenge. What BESTIE Is (in plain English)BESTI = EDAWN money to help you host ecosystem-building events.Not top-down programming. Not “EDAWN's event.” It's community-led.Typical awards: around $1,000Larger requests: up to $5,000Purpose: reduce friction, validate community leaders, and get more builders in the room togetherExamples they want to fund: founder meetups, workshops, pitch nights, Startup Weekend, niche communities (AI, art, games, hardware), and simple gatherings that create collisions.Why This Matters Right NowDoug and Nick both make the same point in different language: the future is not coming. It is here.AI is turning into an “autonomous assistant” era, which means anyone can build faster than ever. That is awesome. It is also chaotic. The communities that win will be the ones that create places for people to:meet consistentlylearn togetherget unstuckbuild real things people actually wantNick's frame: we're splitting into two worlds.World A: amuse-yourself-to-death consumptionWorld B: disciplined creation with friendsBESTI is a lever to help people choose World B.The Best Stuff Doug SaysEcosystems are rainforests, not row crops. They need culture, collisions, and long-term consistency.The role of economic development is often to be the “ghost in the machine.” Reduce friction so the community can lead.BESTIE is partly funding, partly social validation. Sometimes people just need permission and support to lead.The Best Stuff Nick SaysIf you want to run an event, do three, not one. Your first one will be awkward. Your third one will have momentum.The real shift is from consumption-based social life (restaurants, concerts) to creation-based social life (build nights, learning nights, show-and-tell).The winning community model is Discover, Dream, Design, Deliver. Gather people through those phases so building becomes a rhythm.Reno's “Wait, What?” MomentDoug teases a big local milestone: Reno's first homegrown unicorn announcement is imminent.Nick uses it as a reminder: “overnight success” is usually two years plus a lifetime of experience, relationships, and timingHow to Apply for BESTIE (simple steps)Go to BuildCities.comCreate a profileCreate a Project for your event idea (name it, describe it, add collaborators)Search BESTIE and link your project to the challengeEDAWN gets notified, and you are in the pipelineNick's advice: keep it low lift. BBQ + builder conversation is valid. 
Pizza and folding chairs counts if the room is full of creators.Talked About in the EpisodeBrad Feld and the “entrepreneurs lead” thesisBetter ecosystem thinkingAI agents, MCP, automation, and the new workflow eraStartup Weekend as a proven collision engineThe deeper reason communities matter: loneliness, purpose, belonging, and getting unstuckGuest and ContactDoug ErwinDirector of Entrepreneurship, EDAWN (Economic Development Association of Western Nevada)Nick SmootEmail: nick@buildcities.com
In this episode of Lab Rats to Unicorns, John Flavin sits down with Dr. Andrew Pelling—trailblazing biophysicist, artist-trained scientist, and Co-Founder & Chief Scientific Officer of Spiderwort Biotechnologies. Andrew is best known for reimagining living systems, most famously by using decellularized apples and other plants as scaffolds to grow human tissue—work that helped spark an entirely new category of plant-derived biomaterials.Formerly a professor at the University of Ottawa, Andrew founded the Pelling Lab for Augmented Biology, an unconventional research environment where scientists and artists explored how physical forces—rather than genetic manipulation—shape cellular behavior. His approach focuses on stretching, compressing, and reshaping cells to unlock new biological possibilities.Andrew shares how his background in the arts shaped his scientific intuition, why curiosity-driven research led from grocery-store experiments to restoring movement in paralyzed rats, and how that breakthrough ultimately inspired the founding of Spiderwort. Along the way, he reflects on failure, leadership, and building imaginative teams—offering a compelling vision for how augmented biology could transform regenerative medicine and human health.
welcome to wall-e's tech briefing for wednesday, february 4th! join us as we explore today's essential tech stories: intel's gpu move: intel announces plans to produce gpus, traditionally dominated by nvidia, under ceo lip-bu tan, with kevork kechichian leading the initiative, as part of a strategy to expand beyond cpus. apple xcode update: apple launches xcode 26.3, integrating advanced ai tools from openai and anthropic, enhancing automation and transparency in the app development process. skyryse's unicorn status: aviation tech company skyryse secures $300 million in series c funding, reaching a $1.15 billion valuation, to revolutionize flight automation with its skyos system. french investigation into x: the paris offices of x, formerly twitter, are raided by french authorities over alleged data fraud, with elon musk summoned for questioning. paypal's new ceo: enrique lores, former board chair, takes over as ceo to steer the company through slow revenue growth and post-pandemic digital payment shifts. stay tuned for tomorrow's tech updates!
The last time I spoke to today's guest, James Watt, he fired me. It's the first thing we talk about in the episode, but I have nothing but respect for the man who built Brewdog into the Unicorn it is today. Through their impressive marketing stunts, focus on product quality and immense speed of execution, Brewdog successfully took on the beer behemoths and solidified its place in the industry. In this episode we talk about why James made a mistake hiring an executive team (including me), how they convinced a bank to give them a loan in the global financial crisis and how he almost lost £50m to a Russian bank account. Now, James has launched a new venture, Social Tip, a new way brands can connect with consumers.Sign up to our live event, The Calling, on April 21st here:https://event.uncensoredcmo.com/events/uncensoredcmo/2044861Timestamps00:00 - Intro00:58 - Why did James fire Jon?01:56 - Being in the detail and close to the customer06:07 - Brewdog founding story: how they got funding08:54 - Why Brewdog is so passionate about making a great product13:07 - How Brewdog won their Tesco listing15:58 - How a Tesco listing transformed Brewdog17:37 - The secret to an overnight success20:22 - Why constraints led to great marketing for Brewdog22:09 - Examples of Brewdog's incredible marketing stunts27:43 - Why James changed his name to Elvis29:28 - Collaborating with copycats: launching ALD IPA31:14 - When marketing stunts go wrong33:28 - How James almost lost £50 million37:48 - James Watt's favourite business books41:17 - How James Watt felt when he left Brewdog43:01 - Leadership lessons from James Watt about scaling46:05 - Dealing with public scrutiny47:26 - Why James started his new business: Social Tip54:03 - How to pitch your business to James Watt
Send us a textHost: Kendra BeavisGuest: Kayla MacDonald — Author, Podcast Host, Food Freedom & Embodiment GuideIn this episode of Tribe of Unicorns, Kendra Beavis sits down with Kayla McDonald for a grounded, imaginative conversation about food freedom, embodiment, and the power of rewriting the stories we live by.Kayla shares how writing and creativity helped her survive a painful childhood—and how those same tools now support women in building healthier relationships with food, self-worth, and self-expression. Together, they explore why food struggles are often rooted in deeper stories about safety, identity, and being told to shrink, rather than a lack of discipline or willpower.This episode blends lived experience with practical tools, including a playful journaling practice Kayla calls narrative alchemy, designed to make inner work feel lighter, more creative, and more sustainable.
Learn More about Atlanta Retreat and grab a seat HERE. About Business for Unicorns Business for Unicorns helps gym owners and fitness studio operators build profitable, sustainable businesses without burning out. Founded by Mark Fisher and Michael Keeler, who built and sold the $34-million Mark Fisher Fitness, BFU provides coaching, mentorship, courses, and events for gym owners ready to grow revenue, systemize operations, and create more freedom in their lives. To learn more, check out businessforunicorns.com. Get More BFU In Your Life: Claim your FREE copy of Gym Marketing Secrets HERE Follow BFU on Instagram HERE Subscribe to MF's YouTube Channel HERE Ready to Grow Your Gym? If you're a gym owner with 30+ clients looking to add $5k-$10k/month in the next 90 days, book your FREE Brainstorm Call HERE.
Few English writers wielded a pen so sharply as George Orwell, the quintessential political writer of the twentieth century. His literary output at once responded to and sought to influence the tumultuous times in which he lived—decades during which Europe and eventually the entire world would be torn apart by war, while ideologies like fascism, socialism, and communism changed the stakes of global politics. In this study, Stanford historian and lifelong Orwell scholar Peter Stansky incisively demonstrates how Orwell's body of work was defined by the four major conflicts that punctuated his life: World War I, the Spanish Civil War, World War II, and the Cold War. Young Orwell came of age against the backdrop of the First World War, and published his final book, Nineteen Eighty-Four, nearly half a century later, at the outset of the Cold War. The intervening three decades of Orwell's life were marked by radical shifts in his personal politics: briefly a staunch pacifist, he was finally a fully committed socialist following his involvement in the Spanish Civil War. But just before the outbreak of World War II, he had adopted a strong anti-pacifist position, stating that to be a pacifist was equivalent to being pro-Fascist. By carefully combing through Orwell's published works, notably "My Country Right or Left," The Lion and the Unicorn, Animal Farm, and his most dystopian and prescient novel Nineteen Eighty-Four, Stansky teases apart Orwell's often paradoxical views on patriotism and socialism. The Socialist Patriot: George Orwell and War (Stanford UP, 2023) is ultimately an attempt to reconcile the apparent contradictions between Orwell's commitment to socialist ideals and his sharp critique of totalitarianism by demonstrating the centrality of his wartime experiences, giving twenty-first century readers greater insight into the inner world of one of the most influential writers of the modern age. Peter Stansky is the Frances and Charles Field Professor of History, Emeritus at Stanford University. He has published extensively on the cultural, political, and literary milieu of twentieth-century Britain, including (with William Abrahams) the Orwell biographies The Unknown Orwell (1972) and Orwell: The Transformation (1980), both finalists for the National Book Award. Morteza Hajizadeh is a Ph.D. graduate in English from the University of Auckland in New Zealand. His research interests are Cultural Studies; Critical Theory; Environmental History; Medieval (Intellectual) History; Gothic Studies; 18th and 19th Century British Literature. YouTube channel. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/military-history
251 | Clawdbot/Moltbot/OpenClaw - what is that now, again? Soon even we won't be able to keep up. So we roast business ideas, and Alex describes how traditional companies can still keep up with the pace.Partner of this episode: Clockodo: https://www.clockodo.com/optimisten Voucher code: optimisten25Find a business idea that fits you perfectly: digitaleoptimisten.de/quizChapters:(00:00) Intro(09:46) Clawdbot, Moltbot, Claude vs. ChatGPT(29:10) Traditional companies vs. startups: the speed gap is widening(39:15) Roast my business idea from Til & Felix(54:40) Samuel's idea: Human Proof(1:01:18) Alex's idea: LearnCheckMore info:In this episode, Alexander and Samuel discuss the latest developments in the digital world, particularly in artificial intelligence. They reflect on the World Economic Forum in Davos, the psychology of being a sports fan, and the practical applications of AI tools like Whisper Flow and ClaudeBot. They also address the importance of trust in AI and the challenges of human-machine interaction. Finally, they discuss the role of AI in the corporate world and the future of entrepreneurship. In this episode, Samuel and Alexander also discuss the importance of traditional customer relationships compared with modern technologies, especially AI. They emphasize the importance of quality in companies and the challenges AI brings for the labor market. In addition, a business idea around trading card games is presented, followed by the idea of verifying humans on the internet to create trust in online content. Finally, an innovative solution for knowledge transfer in companies through mini-quizzes is presented.Keywords:Davos, World Economic Forum, sports, identity, AI tools, Whisper Flow, ClaudeBot, trust, human-machine interaction, digital twins, corporate world, entrepreneurship, exponential development, customer relationships, quality, AI, trading card games, verification, knowledge transfer, business ideas
Three children find themselves playing a boardgame for real. Written especially for this podcast by Alice. If you enjoyed this story, please do leave us a review. And, if you'd like to suggest an animal for a future Animal Tales story, you can do so by emailing podcast@animaltales.uk. We would love to hear from you. Animal Tales Books! Collections of Animal Tales children's stories are available to buy exclusively at Amazon. Simply search for Animal Tales Short Stories or follow this link: https://www.amazon.co.uk/dp/B0CLJQZ9C9?binding=paperback&ref=dbs_dp_sirpi Become a PREMIUM Subscriber You can now enjoy Animal Tales by becoming a Premium Subscriber. This gets you:All episodes in our catalogue advert freeBonus Premium-only episodes (one per week) which will never be used on the main podcastWe guarantee to use one of your animal suggestions in a storyYou can sign up through Apple Podcasts or through Supercast and there are both monthly and yearly plans available. Discover a brand new story every Monday, Wednesday and Friday – just for you! You can find more Animal Tales at https://www.spreaker.com/show/animal-tales-the-kids-story-podcastA Note About The AdvertsIn order to allow us to make these stories we offer a premium subscription and run adverts. The adverts are not chosen by us, but played automatically depending on the platform you listen through (Apple Podcasts, Spotify, etc) and the country you live in. The adverts may even be different if you listen to the story twice.We have had a handful of instances where an advert has played that is not suitable for a family audience, despite the podcast clearly being labelled for children. If you're concerned about an advert you hear, please contact the platform you are listening to directly. Spotify, in particular, has proven problematic in the past, for both inappropriate adverts and the volume at which the adverts play. If you find this happening, please let Spotify know via their Facebook customer care page. As creators, we want your child's experience to be a pleasurable one. Running adverts is necessary to allow us to operate, but please do consider the premium subscription service as an alternative – it's advert free.
Written by Alex. Come and follow more adventures on our animated TV show on YouTube!
Peter Stansky discusses Orwell's wartime work for the BBC and The Lion and the Unicorn advocating English socialism, arguing that Animal Farm was not anti-socialist but a critique of revolutionary leaders corrupted by absolute power who inevitably betray their ideals.
The golden trio is finally reunited by the silver doe: Join Ev, Sophia, Taavi and our guest Stephanie Bailey as we discuss chapter 19 in Harry Potter and the Deathly Hallows: The Silver Doe. Join the discussion on our website In this episode: Not your average stick! Assassin's Creed crossover Harry failing upwards Hermione needs a book club badly Voldemort is lazy and evil Unicorn's revenge arc The Mirror of Erised and spiritual alchemy Between bravery and fear Ron apologists unite! Freudian angle For more from our guest Stephanie: @bystephaniejoyce on Instagram Expecto Podronum – A podcast for all things Patronuses! @expectopod on Twitter @expectopodtronum on Instagram Expecto Podronum on Facebook expectopodronum on TikTok Expecto Podronum: Season 1, Episode 15 - Deer Resources: Christmas in the Forest of Dean (Part 1) by Beatrice Groves Christmas in the Forest of Dean (Part 2) by Beatrice Groves Pub's Jukebox: The Locket by Split Seven Ways Contact: Website: https://threebroomstickspod.com/ Facebook: https://www.facebook.com/threebroomstickspod/ Instagram: https://www.instagram.com/threebroomstickspodcast/ Twitter: https://twitter.com/threebroompod Email: 3broomstickspod@gmail.com Patreon: https://www.patreon.com/3broomsticks
Learn More about the Atlanta Retreat and grab a seat HERE. About Business for Unicorns Business for Unicorns helps gym owners and fitness studio operators build profitable, sustainable businesses without burning out. Founded by Mark Fisher and Michael Keeler—who built and sold the $34-million Mark Fisher Fitness—BFU provides coaching, mentorship, courses, and events for gym owners ready to grow revenue, systemize operations, and create more freedom in their lives. To learn more, check out businessforunicorns.com. Get More BFU In Your Life: Claim your FREE copy of Gym Marketing Secrets HERE Follow BFU on Instagram HERE Subscribe to MF's YouTube Channel HERE Ready to Grow Your Gym? If you're a gym owner with 30+ clients looking to add $5k-$10k/month in the next 90 days, book your FREE Brainstorm Call HERE.
Madness ensues towards the end of Thursday's edition of 'The Drive' as Ron The Show realizes a coach in the NFL has MUCH in common with guys like... Travis Kelce, White Chocolate, &... Clint Stoerner?! LOL
Thank you to our sponsor, Walrus! Walrus is where the world's data becomes reliable, valuable, and governable. --- In this exclusive Unchained interview, Griff Green, one of the original DAO curators and a member of the White Hat Group that helped recover funds after the 2016 DAO hack, reveals how tens of thousands of unclaimed ETH are being transformed into a long-term security fund for the Ethereum ecosystem. Nearly ten years after the most infamous exploit in crypto history, the community is repurposing its leftovers, not to rewrite history, but to prevent it from repeating itself. The new DAO Security Fund will deploy grants for Ethereum security research, infrastructure, incident response, and user protection, while also reviving DAO-based governance experiments that have fallen out of favor. Griff explains how the fund will work, why the Ethereum Foundation is involved, how staking will generate sustainable funding, and why, despite Ethereum's strength, crypto still isn't safe enough for everyday users. Guests: Griff Green, Co-Founder at Giveth, q/acc & Unicorn.eth Learn more about your ad choices. Visit megaphone.fm/adchoices
This week Julian hunts the medical mysteries behind the mystical unicorn horn, and Trace reads a loooooooot of government regulations looking to get tripped up by one tiny thing.
QUESTIONS
Julian: "How would we go about assessing the medical properties of unicorn horn?" from Matt
Trace: "How tall does an object have to be to be tripped over?" from Bryanna
Do you have an absurd question? Maybe it's a silly idea that popped into your head, a shower thought about the nature of reality, or a ridiculous musing about your favorite food? Whatever your question, we want to answer it—tell us!
HOW TO ASK A QUESTION
Can't find the right hire? You might be asking one person to do too much. Here's a better approach.
Tiff and Monica provide exclusive, step-by-step insight into creating hiring ads that both grasp your specific practice's strengths and needs, and attract those rockstar team members you've been looking for. Episode resources: Subscribe to The Dental A-Team podcast Schedule a Practice Assessment Leave us a review Transcript: The Dental A Team (00:00) Hello, Dental A Team listeners. We are back today. I think we used to call it Consultant Takeover and now it's just literally a podcast, but it's my consulting team. And gosh, I love these days. You guys get to hear me just brag about the consultant team and the Dental A Team in general. And I have one of my faves. All these women are some of my favorite human beings in life. we got to recently, we were able to spend some really good quality time together and really just. Honestly, it was just fun. Like it wasn't even one-on-one, like getting to know each other. It was just really getting to know each other by proxy of having fun together. And if no one here listening knows, fun is one of our core values. And that is one of the core values. I don't even know. You may know this, but it was one of the core values that Kiera and I have had literally since day one. That was one that has never changed. Fun has always been there. We did have it really high. used to be our number one core value. And then we realized like, it was setting a weird precedent. So it's not quite as high anymore, but fun is massive for us. And we really, truly believe if you're not having fun, what are you even doing? I think there's oftentimes in life that things are not necessarily quote unquote fun, but there should be some air of sprinkles on top that can make life a little bit more fun in the end. So Monica, you make podcasting fun, your way of thinking. I like following. Monica (00:59) Yeah. The Dental A Team (01:22) your thought process. really, it is really fun for me to follow the thought process, especially because I tend to think I'm pretty good at knowing where someone's going or what they're doing, what their intent is. Like I'm usually pretty good at keying into it, but sometimes you catch me and I'm like, dang, I love how your brain works. So I love it because you make me think a little bit harder, like a little... more intentional, I like the word intentional. You make me think a little more intentional than I do on my day to day. So thank you for being here today, Monica, and shedding your brilliance on the Dental A Team, listeners and team. How are you this morning? It's a Monday morning for us and we're just at it bright and early. How are you? Monica (02:09) I'm doing great, Tiff. Thank you. Thanks for inviting me. You know, I think one of the things that attracted me and I was really curious about the Dental A Team was that core value of having fun because, you know, we just kind of get a little serious and boring as we become adults. And I'm like, gosh, I don't ever remember thinking like, my gosh, this work should be fun. Sometimes it is fun, but should it be fun? And I was so curious about that. to be honest, I think I shared this. I struggled a little bit with adding the fun factor in, you know, when I think about work and in my day. And I'm so glad that I decided to join the Dental A Team because that is my core value. Every day that I wake up, I'm like, I'm going to make this so fun today. What's going to be fun about my day? And I think fun is just a mindset. Right? Because you can do hard things. You can have hard conversations. 
You can have a hard day and still look for the fun and still make it fun. ⁓ So thank you. Thanks for inviting me today. This is one of my, I guess, new hobbies, newfound passions is podcasting, unscripted podcasting, ⁓ going with it. Right. And just seeing what our brains kind of like come up with. So. The Dental A Team (03:31) Yep. Monica (03:35) just so you all know, this is unscripted. This is ⁓ just real time and I love it. I love the authenticity of it, right? Because this is where great ideas are born with no agenda. it's, you you and I have like this really great kind of cadence and engagement. One thought leads to another. And sometimes I have to stop myself because I can see myself going down and, you know, a rabbit hole and just, you know, and so we'll... We'll keep it short and sweet and impactful. The Dental A Team (04:07) Awesome. I love that. Thank you. And I'm glad that you, I'm actually glad that you mentioned that I do. I do think it's important for listeners to know, like this is literally off the cuff. Uh, 99.9 % of it, have a topic and we like brainstorm for a quick minute, but really we have no idea what's going to come out of these podcasts until it's done. So it is a really magical experience. And I like it that way because I think that, like you said, there's, there's more ideas that are born in that kind of a mindset and it does, it keeps it fun. Monica (04:12) Thank Yeah. Yeah. The Dental A Team (04:36) Monica, been actually, it's been really fun. From my seat, I get to see the evolution of consulting and the evolution of ⁓ your position at the, at the company. And it's, was, it was been really cool. And I think for the listeners to know the Monica that you see today has always been here, but she was more reserved. And I think that comes from, it's fair. It's fair. comes from just, like you said, the older we get, more. we lose that like the Santa Claus effect. Like we lose that magic and the sparkles and reality sets in and we get, my boyfriend likes to call himself a realist. He's a pessimist a lot of times, but he calls himself a realist, which is fair. But we do become this like realist mindset when we're just like factual and we're like checking off the lists and we're so diligent and we tend not to laugh as much as we wanted to before. for us fun just means that We enjoy what we're doing. We bring an element of fun to our consulting. So when we work with you guys, like we're having fun. We're enjoying what we're doing. We're laughing. We're making light of the situations that we can. And we're making massive changes with easy implementations that totally just change the game for you guys. And that to us is so much fun for us to sit back and see. We can make a millimeter tweak on something that feels so massive in your world. and then it all falls into place. And it's just really cool. think of like the implants, know, they're tweaking, tweaking, tweaking and torquing. And you can go a millimeter too far or be a millimeter too short or be like spot on. And that's what I think of like that, that implant torque. We're not making massive adjustments, but we're making massive impacts and implant changes someone's life, but it's minimal adjustments. Monica (06:25) Yeah. Yeah, I think ⁓ just to add to what you said, it's really important to take some time to see through the childlike eyes, right? With awe and wonder. ⁓ think it's also important not to confuse fun with playtime, right? Because work is serious stuff, what we do, right? 
We can have fun and get the job done and have those really impactful days ⁓ because it is a serious The Dental A Team (06:45) Yes. Yes. Monica (06:58) business, right? ⁓ And fun doesn't equal playtime. And I remember a client of mine saying, well, you know, it's all you guys sound to like fun. And I'm like, yeah, because work should be fun. What you're doing is amazing. You are helping people achieve their goals, their wellness goals, the smiles that they've wanted, maybe relieving someone out of pain and, you know, shame and everything that goes around, you know, dentistry. The Dental A Team (07:23) Yeah. Monica (07:28) And you're impacting, you know, your team's lives and your community's lives and your own life. And it should be fun. You should see the beauty of that, right? ⁓ So it's not all fun and games, guys. It is serious stuff. When we say fun, that means live life, you know, with joy and wonder and childlike lenses. So ⁓ if you, I think if you approach, you know, the day with that mindset, everything is lighter, everything is ⁓ easier, right? There's a place to things and there's just, you you're operating from a heart space versus your mental space, you know? The Dental A Team (08:08) I agree. Yeah, I agree. I agree. And I understand that everybody's going to have a core value of fun and that's okay, too. You don't have to. You truly don't have to. And you don't have to come in in the same mindset as we do. Just know that's how we operate. And when you work with us, that's what you're getting. You're getting that fun mindset of how can we make small changes that make massive impacts. And one of those spaces, I've actually watched you, make some pretty massive impacts with a client of yours. So this is Monica (08:17) Yeah. The Dental A Team (08:36) This is a good topic that came up for us. We get our topics from our list and I thought this was a great one for Monica actually because you have been working with a client recently. This is on hiring and building hiring ads and I know you've got at least one and you've got multiple that are hiring, but you've got one that you've worked like literally hand in hand as though you're part of their, you are part of their team, but if your boots on the ground, part of their team because of their capacity and the type of hiring that they're doing. We've done this with a lot of clients, especially when they're hiring like office managers, because the office manager would do the hiring, right? So when they don't, which is the situation you're in now where they don't technically have like a dedicated office manager ⁓ with enough space in their world to do the hiring, even down to writing the ads and let's face it, the dentist CEOs, it's not your forte, first of all, most of the time, and you don't have time for it either. So building that out and Monica, have... the reason that it's applicable is you have had fun doing this with them. And there's been challenges too, but I think you've made some massive strides with them in this aspect. And the first space that you started was really learning the team, learning the practice, and writing ads that attracted the right type of people. And now you guys are in the interview process, but Monica. From the lens of consultant working with this practice or other practices you've done as well, how do you advise them and how do you help craft those ads to speak to their needs? And everyone's so different, right? 
So how do you do that and what suggestions do you think people could take away from today to do that on their own as well? Monica (10:18) Yeah, that's a great question, Tiff. And you're right, I've had a fun time ⁓ kind of wrapping my arms around this one client that does need it. I think one really important factor is like, remember who you are. Why are you great? When we are in a space of lack, whether we're lacking appropriate team members or we're not, you know, doing what we want to do. We're not doing the dentistry that we want. We're not where we should be. ⁓ I think we go down this rabbit hole of like, and almost inertia kicks in, you know? So let's reframe that mindset. Why are you great? Why did you start this? You know, how do you measure success? So I always say great leaders at ask great questions. ⁓ Ask yourself those questions. Why am I great? What's different about me? Do I want to be different? And if so, what makes me different? What do I do differently than my colleagues in the area? And identify who you are. What's your persona? What do patients say about you? Go to your reviews, pull up your Google reviews, do a little kind of investigative work, a self-discovery. about your practice. What are your Google reviews? What are patients saying about you? What do you want to be known for? The Dental A Team (11:52) Yeah, I love that. actually, Simon Sinek, I listen him a lot and he speaks about your why, right? Finding your why. And he says often, if you're trying to figure out what is your special gift, like what is your talent? What is it that you're here? What is your purpose? He says, ask your friends. Ask your friends why they are friends with you. What is it about you that your friends... Monica (12:16) I love that. The Dental A Team (12:19) stick around for or that they lean on you for. And I think Monica, that makes me think of the Google reviews. Like ask your patients, like why do your patients come back? Why does your team who you have currently, why are they your team? What are they speaking about you? And I think that can go twofold, right? Cause you're gonna find out some things that you may not have, that might not be super fun. It might not feel really good to hear some of the things, but those are areas that fantastic. Let's pull that into a KPI. How can we improve that? But you're gonna hear so many things as to why people love you, why they love your practice, why you. so I love that self-discovery start. I think it's really, really massive. Monica (12:55) Yes. Yeah, you're absolutely right. You're gonna hear some things that you may not want to hear, but I would say approach this exercise with the mindset of gratitude. Be grateful for the feedback. Be grateful for the positive feedback and be grateful for the not so positive feedback. That's a little harder to swallow, you know? ⁓ Don't attach yourself to the outcome. Attach yourself to the process of, right? And so even when you don't have a good The Dental A Team (13:03) Yeah. Monica (13:29) ⁓ you know, less than favorable outcome or response or feedback. would say, thank you. Thank you for sharing that because this is how I'm going to get better. That's not how I want to be perceived. Right. Would you mind sharing more? Would you mind if I asked further questions? Right. Get curious and be grateful, grateful that people are willing to step out and be honest. Right. ⁓ it takes courage. It takes courage for that. so I always say don't. attach yourself to the outcome, but the process of. 
You can learn a lot about yourself in that process and your practice. I think Google reviews are really important because it's your patient's perception of their experience, not of you per se. And it may not be your reality, but it's their reality, right? So it allows us an opportunity to dig deeper and learn more, right? Opportunity, gratitude, what's your story? I would say start with those things, those three things, right? One of the other things that I like to do is I like to see what's out there. What are other ads that are posted in my area? What are the competitors looking for? What are they saying about their practice, right? And I put on my hat of I'm looking for a job and I am a rock star. ⁓ whatever it is, office manager, treatment coordinator. I'm a Rockstar treatment coordinator and I am looking for my next home. What ad calls my attention? Which one am I going to click on? What ads am I scrolling through? Right? Because if you want Rockstar employees, you have to have a Rockstar ad. Right? And so if your ad is ⁓ The Dental A Team (15:17) Mm-hmm. Mm-hmm. Monica (15:24) the same as the other ads, ⁓ it's gonna be about the money. So who are you? Why should a Rockstar treatment coordinator come work for you? Start with your story. What are the qualities that you're looking for? What are the qualities that you have as a practice ⁓ to offer this Rockstar candidate? So start there, who are you? The Dental A Team (15:49) I love that. Monica (15:54) Why should I come work for you versus come work for me, right? Flip the framework of how you're posting your ad and how you're crafting your ad. The Dental A Team (16:02) Yeah, yeah, I love that because it makes me think of, you know, back in the day when I was in office and writing ads, it was like, I'm putting up like, five, six, seven sentences that's like, this is what we're hiring for. I need a treatment coordinator. Like, this is what you're going to get paid and people apply and we'd get thousands of applications, you know, and it's just, it's different now because I think people are looking for a place they want to have fun. They want to enjoy their job. They want to really work for a place that they feel Monica (16:13) Yes. The Dental A Team (16:30) seen valued heard and they want to know what that place is before they walk in and so rather than the old model of This is what we're you know, these are all of the things we're looking for ⁓ As far as the job. I think the new model is this is who we are. This is our vision These are our core values We're looking for somebody who aligns with that and this is the job that they're gonna do and I think it's flipped so much in the last five years Realistically almost six years now, right? that we just have to think differently and we have to speak differently. Monica (17:01) Yeah, and social media has also changed the game, right? mean, people can get an insight of your culture and who you are just by following your page, your Instagram page or your Facebook page or going on your website. But I truly believe social media and like those quick clicks, Instagram, Facebook are the persona of your practice, right? And so you gotta make sure that that's aligned with The Dental A Team (17:06) Absolutely. Monica (17:29) with the ad that you're posting and the reality of a day-to-day in your practice, right? 
And so social media can be so impactful, like Google reviews, whatever social media you use, a lot of people have TikToks, they have other things that I don't participate in because it can get overwhelming, you know? But social media is powerful. mean, you use that as your platform, your patience. The Dental A Team (17:51) I can. Monica (17:57) You best believe that people are going and checking your reviews and your space, your social media space, because they want to know. They want to know, is this truly who they say they are? Because we have choices, right? And let's be honest, our phone is glued to our fingertips 24 seven, even if we don't want to, it is. And there's power in that, right? So those platforms give a visual. The Dental A Team (18:19) Yeah. Monica (18:27) ⁓ to anyone basically. I mean, it's a world of referral nowadays, right? So I think use that to your advantage. The Dental A Team (18:35) Mm-hmm. Yeah, I do. I do too. I think earlier, I don't remember if it was this one or a different recording that we did, but you were speaking about, I think it was this one, being authentic. We're off the cuff here. And so being authentic in your ad and being authentic in your social presence. So what are people on the outside seeing of you is really, really huge because if you're saying you're someone and it Monica (18:48) Yeah. The Dental A Team (19:08) doesn't line up for you, first and foremost, the powers that you believe in are not gonna send you the people. They're not gonna come if it's not in alignment with who you are. But also, those who do come are gonna see that really easily, very quickly, and honestly, the ads usually, if we were to write an ad, if Monica wrote an ad, just blanket, wrote an ad, said, hey, five practices, you guys all use this ad. It might work for one of them because it may align closely for one of them, but I think something you did, Monica, with this specific practice and the ones that you've worked with is you learned the practice. You actually, like you said, you put that hat on and you learned the demographics of the area and the demographics of other practices hiring, but you also learned that practice. You scoured their website, you talked to the team members, you asked them really hard questions, you scoured their reviews, you literally did your due diligence to learn who they are so that as you helped them create the actual ad, it was speaking to them. And that authenticity really shines through. So if your social media is portraying someone that you're not, you're gonna hate it, you're gonna feel really icky, it's not gonna translate, and it's gonna make hiring really, really hard. So I think, these pieces, this introspection, this really knowing, this asking the questions, the hard questions, the easy questions, whatever you wanna call them, asking those questions and learning who you want to hire. who you are and who you want to attract are gonna be massive pieces and really what highlights a good ad. Monica (20:45) Yeah, I agree. And you're absolutely right. I I did a little bit of ⁓ of digging more than I than I normally would because I really want them to have the person that they deserve. And listen, sometimes we're not where we want to be. Right. So your reality right now may not be your desired space. And dream a big dream. OK, like create that that your ad can can be your future self. The Dental A Team (20:53) Yeah. Monica (21:14) But you need the right person to create that dream with you. 
So maybe you're not where you want to be or how you vision your office, but you want to get there and you're attracting, you want to attract the team members that will help you get there to that space. allow yourself to dream a big dream, be your biggest fan. I always say that, be your biggest fan, right? Fan your own flame. The Dental A Team (21:20) Absolutely. Monica (21:42) It takes, there's nobody else to motivate you but you. Motivation is from within, right? So yes, we have mentors and coaches and listening to them, they help us kind of reignite that flame that we all have, our internal flame, but you've got to be able to fan that flame. mean, if it's dim, you got to give it some air and some oxygen, right? And what gives us oxygen? It's all the feel good stuff. What are my patients saying about me? Right. So one of the things that I did, cause it was, it was difficult for this client to kind of say like, gosh, what am I really great at? You know? And I'm like, well, let's go see what your patients are saying. And we pulled up some amazing reviews and here we thought like, nobody's asking for reviews. No one's giving us reviews. Well, guess what? There was some amazing reviews and I read them, you know, we, we went through the process of like, I went through the most recent ones. The Dental A Team (22:14) Mm-hmm. Monica (22:38) And I read this, not just one, but one that really sticks out. And I got to tell you, like, by the end of that, ⁓ reading that Google review, the doctor was sitting taller. eyes were like, there was sparkles in his eyes and, you know, his chest was out. I'm like, this is the kind of stuff that you need to be sharing on your social media stories. This is what people need to hear. He's like, wow, that's amazing. I said, The Dental A Team (22:49) No. Monica (23:06) print these every single day and read them to your team. Read them to yourself out loud. It feels good to know what others are seeing. We're our hardest critics sometimes. And we're not necessarily seeing the good stuff, because we're focused on all the other things that we haven't done and accomplished, but our patients see us through a different lens. And it feels great. The Dental A Team (23:21) Yeah. Mm-hmm. Yeah. Monica (23:34) to be reminded of why you are who you are, why you have this amazing practice. you know, there's just so many things that we overlook because we're so focused on the things that we didn't do. Focus on the things that you did do great. What did I accomplish this year? If you change one life, guess what? That's amazing. You know, if you focus on changing one life at a time, at the end of the year, it's a trickling effect. I mean, you're... The Dental A Team (23:57) Yeah, I agree. Monica (24:05) your bowl is going to be or your cup is going to be overflowing. A thousand percent. The Dental A Team (24:08) Mm-hmm. Yeah, I totally agree. I totally agree. love the, ⁓ I focus on making, if I can make one person smile today, that, you know, I just want to make one person smile today. So I love this. Thank you, Monica. I knew it being so fresh, you're still in the process of helping them hire for this position, but I know it's been really, really close to your heart and you've been very intentional on getting them the results that they need and they need this hire to push for the results. I... Monica (24:30) Yeah. The Dental A Team (24:38) I've enjoyed watching you work with them and I know they've enjoyed working with you and will continue to enjoy working with you. 
You'll be hands on with them here this week. So I'm super excited for this. And Monica, think some of the pieces I've pulled out, we want those, know, actual pieces for you guys. And I think do the digging, do your research, figure out why people are referring you. Why do people want to come to you? What makes you special and different? And cultivate your ad around that plus who it is that you want to hire and be super clear on what the position actually is. And front office reception is really hard to explain. So be super clear on what that position is, what will they actually be doing? What are your expectations so that somebody coming in can say, yes, I can do that. And Monica, thank you so much for your time. Thank you for everything you've poured into the clients that you have. But specifically today, we're talking about this one client and everything you poured into them. Monica (25:18) Yes. The Dental A Team (25:36) I know because I've chatted with them, I've talked with them, I know how much they appreciate it and how much they need your love and your support and your guidance. So thank you for everything you do for all of us every single day. Monica (25:49) Thanks, Tiff. Likewise, thanks for your leadership, for your invitation, for this space where we can brainstorm and share our wisdom and ideas and impact the world of dentistry. The Dental A Team (26:02) Yeah, in the greatest way as possible, right? I love it. That's our mission, everyone, just so you know. Thank you, Monica. And guys, I love podcasting. know that, Monica. She's a big podcast fan now. We're podcaster fan, which I appreciate and I love. So let us know what you think. Five star review below, but also Hello@TheDentalATeam.com. If you have further questions, if you need help, if you need guidance, or you're really not sure how to dig in and figure this out, please just... Monica (26:03) Hahaha The Dental A Team (26:29) Reach out, Hello@TheDentalATeam.com. We're here. A lot of those emails, most of those emails actually do get forwarded to the consultant team. So we are the ones that you're gonna be hearing from. Thanks so much and we'll catch you guys next time. Monica (26:42) Thanks everyone.
Unicorns Unite: The Freelancer Digital Media Virtual Assistant Community
We're in the middle of a trust recession. It's totally changing everything about buyer behavior and marketing in 2026. Selling feels slower and more personal. And maybe a little heavier than it used to. If you've been feeling that shift, you're not doing anything wrong.
In this episode, I'm sitting down with my friend Sheri Moise, and she offers a really refreshing (and honestly relieving) reframe that every freelancer and service provider needs right now. We talk about why so many old marketing tactics just aren't landing anymore, and what actually builds trust, authority, and momentum in a much more discerning market.
Sheri Moise is an Intuitive Business Astrologer and Strategist who helps entrepreneurs plan, launch, and lead using timing and data rather than guesswork. With over 25 years in corporate sales and marketing, Sheri brings a practical, grounded lens to astrology—focusing on audience readiness, market cycles, and decision-making. She's known for helping clients see what's coming next and respond strategically instead of reactively.
Listen to learn more about:
Why the trust recession is actually a wisdom evolution
How buyer behavior in 2026 is changing (and why slower isn't bad)
The shift from know-like-trust to trust-first marketing
Why guru-style marketing is collapsing
How freelancers can build authority without hype or pressure
Freelancers and service providers, it's time to stop running the same tired marketing playbook. This conversation shows what actually works with buyers in 2026.
Sponsored by The Digital Marketer's Workgroup
Already doing marketing work and ready for more clients and better referrals? Join a supportive, tight-knit community of freelancers where you'll get behind-the-scenes conversations, ongoing support, advanced training, and exclusive job leads. Apply here!
Links Mentioned in Show:
Grab Sheri's Success Planet Diagnostic for 50% off with code: Unicorn. You'll get a personalized PDF report that shows how your internal operating system works in business. It helps you understand your natural timing, decision-making rhythm, and pacing so you can plan and move forward without pressure or guesswork.
Connect with Sheri:
Instagram: @sherimoise
Facebook: Sheri Moise Astro Biz
Website: http://sherimoise.com/
Connect with
Hernosity and Pixicato must save the world from a plot so ridiculous and convoluted that it just might work! Lessons: Technology can be amazing when we learn to use it in healthy, sustainable ways. Subscribe, Support the show, and get our Yoto Cards! Want more kids podcasts for the whole family? Grown-ups, subscribe to Starglow+ here. Learn more about Starglow Media here. Follow Starglow on Instagram and YouTube Share questions with a grownup's help via email: hello@whatifworldpodcast.com or voicemail: 205-605-WHAT (9428) Eric and Karen O'Keeffe make What If World. Our producer is Miss Lynn. Character art by Ana Stretcu, episode art by Lynn Hickernell, podcast art by Jason O'Keefe, and theme song by Craig Martinson.
John and Chrissy share how they turned early inheritance and smart planning into a successful first home purchase in pricey California.
After renting for over 18 years, John and Chrissy navigated skyrocketing rents, family support, and strategic planning to buy a $700,000 home in San Luis Obispo. With only $5,000 in savings, an unexpected offer of early inheritance shifted their mindset from surviving rent hikes to buying a home. They used the How to Buy a Home system and their Unicorn team to align monthly affordability with realistic home options. From dealing with open house stress to choosing a planned unit development (PUD) for detached living without breaking the bank, their journey reflects persistence, planning, and prioritizing what matters most.
“You have no idea what you're capable of until you put it in front of you and crunch the numbers.” — Chrissy
Highlights:
How do you go from $5,000 in savings to owning a $700,000 home with confidence?
What options exist for buyers who want a detached home but can't afford traditional single-family prices?
How can early inheritance or family gifts be used responsibly without guilt or confusion?
What happens when you stop asking, “Can I afford this?” and start asking, “How do I make this work?”
Check out our EPISODE GUIDE for more information and interviews!
Connect with me to find a trusted realtor in your area or to answer your burning questions!
Subscribe to our YouTube Channel @HowToBuyaHome
Instagram @HowtoBuyAHomePodcast
TikTok @HowToBuyAHome
Visit our Resource Center to "Ask David" AND get your FREE Home Buying Starter Kit!
David Sidoni, the "How to Buy a Home Guy," is a seasoned real estate professional and consumer advocate with two decades of experience helping first-time homebuyers navigate the real estate market. His podcast, "How to Buy a Home," is a trusted resource for anyone looking to buy their first home. It offers expert advice, actionable tips, and inspiring stories from real first-time homebuyers. With a focus on making the home-buying process accessible and understandable, David breaks down complex topics into easy-to-follow steps, covering everything from budgeting and financing to finding the right home and making an offer. Subscribe for regular market updates, and leave a review to help us reach more people. Ready for an honest, informed home-buying experience? Viva la Unicorn Revolution - join us!
William Vanderbloemen discusses how professionals can find both success and satisfaction in their careers.
— YOU'LL LEARN —
1) The one habit that puts you ahead of 90% of people
2) How to learn what you don't know about yourself
3) The one skill to work on—regardless of your job
Subscribe or visit AwesomeAtYourJob.com/ep1122 for clickable versions of the links below.
— ABOUT WILLIAM —
William Vanderbloemen has been leading the Vanderbloemen Search Group for 15 years, where they are regularly retained to identify the best talent for teams, manage succession planning, and consult on all issues regarding teams. This year, Vanderbloemen will complete their 3,000th executive search. Prior to founding Vanderbloemen Search Group, William studied executive search under a mentor with 25+ years of executive search at the highest level. His learning taught him the very best corporate practices, including the search strategies used by the internationally known firm Russell Reynolds. Prior to that, William served as a Senior Pastor at one of the largest Presbyterian Churches in the United States.
• Book: Be the Unicorn: 12 Data-Driven Habits that Separate the Best Leaders from the Rest
• Book: Work How You Are Wired: 12 Data-Driven Steps to Finding a Job You Love
• Website: Vanderbloemen.com
— RESOURCES MENTIONED IN THE SHOW —
• Tool: reMarkable
• Book: Atomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones by James Clear
• Past episode: 971: Mastering The Three Keys to Getting Noticed with Jay Baer
• Past episode: 1066: How to Thrive When Your Resilience Runs Out with Dr. Tasha Eurich
— THANK YOU SPONSORS! —
• Monarch.com. Get 50% off your first year with the code AWESOME.
• Vanguard. Give your clients consistent results year in and year out with vanguard.com/AUDIO
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.