TLDR: It was Claude :-)

When I set out to compare ChatGPT, Claude, Gemini, Grok, and ChatPRD for writing Product Requirement Documents, I figured they'd all be roughly equivalent. Maybe some subtle variations in tone or structure, but nothing earth-shattering. They're all built on similar transformer architectures, trained on massive datasets, and marketed as capable of handling complex business writing.

What I discovered over 45 minutes of hands-on testing revealed not just which tools are better for PRD creation, but why they're better, and more importantly, how you should actually be using AI to accelerate your product work without sacrificing quality or strategic thinking.

If you're an early or mid-career PM in Silicon Valley, this matters to you. Because here's the uncomfortable truth: your peers are already using AI to write PRDs, analyze features, and generate documentation. The question isn't whether to use these tools. The question is whether you're using the right ones most effectively.

So let me walk you through exactly what I did, what I learned, and what you should do differently.

The Setup: A Real-World Test Case

Here's how I structured the experiment. As I said at the beginning of my recording, “We are back in the Fireside PM podcast and I did that review of the ChatGPT browser and people seemed to like it and then I asked, uh, in a poll, I think it was a LinkedIn poll maybe, what should my next PM product review be? And, people asked for ChatPRD.”

So I had my marching orders from the audience. But I wanted to make this more comprehensive than just testing ChatPRD in isolation. I opened up five tabs: ChatGPT, Claude, Gemini, Grok, and ChatPRD.

For the test case, I chose something realistic and relevant: an AI-powered tutor for high school students. Think Khanmigo or similar edtech platforms.
This gave me a concrete product scenario that's complex enough to stress-test these tools but straightforward enough that I could iterate quickly.

But here's the critical part that too many PMs get wrong when they start using AI for product work: I didn't just throw a single sentence at these tools and expect magic.

The “Back of the Napkin” Approach: Why You Still Need to Think

“I presume everybody agrees that you should have some formulated thinking before you dump it into the chatbot for your PRD,” I noted early in my experiment. “I suppose in the future maybe you could just do, like, a one-sentence prompt and come out with the perfect PRD because it would just know everything about you and your company in the context, but for now we're gonna do this more, a little old-school AI approach where we're gonna do some original human thinking.”

This is crucial. I see so many PMs, especially those newer to the field, treat AI like a magic oracle. They type in “Write me a PRD for a social feature” and then wonder why the output is generic, unfocused, and useless.

Your job as a PM isn't to become obsolete. It's to become more effective. And that means doing the strategic thinking work that AI cannot do for you.

So I started in Google Docs with what I call a “back of the napkin” PRD structure. Here's what I included:

Why: The strategic rationale. In this case: “Want to complement our existing edtech business with a personalized AI tutor, uh, want to maintain position industry, and grow through innovation. on mission for learners.”

Target User: Who are we building for? “High school students interested in improving their grades and fundamentals. Fundamental knowledge topics. Specifically science and math. Students who are not in the top ten percent, nor in the bottom ten percent.”

This is key—I got specific. Not just “students,” but students in the middle 80%. Not just “any subject,” but science and math.
This specificity is what separates useful AI output from garbage.

Problem to Solve: What's broken? “Students want better grades. Students are impatient. Students currently use AI just for finding the answers and less to, uh, understand concepts and practice using them.”

Key Elements: The feature set and approach.

Success Metrics: How we'd measure success.

Now, was this a perfectly polished PRD outline? Hell no. As you can see from my transcript, I was literally thinking out loud, making typos, restructuring on the fly. But that's exactly the point. I put in maybe 10-15 minutes of human strategic thinking. That's all it took to create a foundation that would dramatically improve what came out of the AI tools.

Round One: Generating the Full PRD

With my back-of-the-napkin outline ready, I copied it into each tool with a simple prompt asking them to expand it into a more complete PRD.

ChatGPT: The Reliable Generalist

ChatGPT gave me something that was... fine. Competent. Professional. But also deeply uninspiring.

The document it produced checked all the boxes. It had the sections you'd expect. The writing was clear. But when I read it, I couldn't shake the feeling that I was reading something that could have been written for literally any product in any company. It felt like “an average of everything out there,” as I noted in my evaluation.

Here's what ChatGPT did well: It understood the basic structure of a PRD. It generated appropriate sections. The grammar and formatting were clean. If you needed to hand something in by EOD and had literally no time for refinement, ChatGPT would save you from complete embarrassment.

But here's what it lacked: Depth. Nuance. Strategic thinking that felt connected to real product decisions. When it described the target user, it used phrases that could apply to any edtech product.
When it outlined success metrics, they were the obvious ones (engagement, retention, test scores) without any interesting thinking about leading indicators or proxy metrics.

The problem with generic output isn't that it's wrong, it's that it's invisible. When you're trying to get buy-in from leadership or alignment from engineering, you need your PRD to feel specific, considered, and connected to your company's actual strategy. ChatGPT's output felt like it was written by someone who'd read a lot of PRDs but never actually shipped a product.

One specific example: When I asked for success metrics, ChatGPT gave me “Student engagement rate, Time spent on platform, Test score improvement.” These aren't wrong, but they're lazy. They don't show any thinking about what specifically matters for an AI tutor versus any other educational product. Compare that to Claude's output, which got more specific about things like “concept mastery rate” and “question-to-understanding ratio.”

Actionable Insight: Use ChatGPT when you need fast, serviceable documentation that doesn't need to be exceptional. Think: internal updates, status reports, routine communications. Don't rely on it for strategic documents where differentiation matters. If you do use ChatGPT for important documents, treat its output as a starting point that needs significant human refinement to add strategic depth and company-specific context.

Gemini: Better Than Expected

Google's Gemini actually impressed me more than I anticipated. The structure was solid, and it had a nice balance of detail without being overwhelming.

What Gemini got right: The writing had a nice flow to it. The document felt organized and logical. It did a better job than ChatGPT at providing specific examples and thinking through edge cases. For instance, when describing the target user, it went beyond demographics to consider behavioral characteristics and motivations.

Gemini also showed some interesting strategic thinking.
It considered competitive positioning more thoughtfully than ChatGPT and proposed some differentiation angles that weren't in my original outline. Good AI tools should add insight, not just regurgitate your input with better formatting.

But here's where it fell short: the visual elements. When I asked for mockups, Gemini produced images that looked more like stock photos than actual product designs. They weren't terrible, but they weren't compelling either. They had that AI-generated sheen that makes it obvious they came from an image model rather than a designer's brain.

For a PRD that you're going to use internally with a team that already understands the context, Gemini's output would work well. The text quality is strong enough, and if you're in the Google ecosystem (Docs, Sheets, Meet, etc.), the integration is seamless. You can paste Gemini's output directly into Google Docs and continue iterating there.

But if you need to create something compelling enough to win over skeptics or secure budget, Gemini falls just short. It's good, but not great. It's the solid B+ student: reliably competent but rarely exceptional.

Actionable Insight: Gemini is a strong choice if you're working in the Google ecosystem and need good integration with Docs, Sheets, and other Google Workspace tools. The quality is sufficient for most internal documentation needs. It's particularly good if you're working with cross-functional partners who are already in Google Workspace. You can share and collaborate on AI-generated drafts without friction. But don't expect visual mockups that will wow anyone, and plan to add your own strategic polish for high-stakes documents.

Grok: Not Ready for Prime Time

Let's just say my expectations were low, and Grok still managed to underdeliver. The PRD felt thin, generic, and lacked the depth you need for real product work.

“I don't have high expectations for grok, unfortunately,” I said before testing it.
Spoiler alert: my low expectations were validated.

Actionable Insight: Skip Grok for product documentation work right now. Maybe it'll improve, but as of my testing, it's simply not competitive with the other options. It felt like 1-2 years behind the others.

ChatPRD: The Specialized Tool

Now this was interesting. ChatPRD is purpose-built for PRDs, using foundational models underneath but with specific tuning and structure for product documentation.

The result? The structure was logical, the depth was appropriate, and it included elements that showed understanding of what actually matters in a PRD. As I reflected: “Cause this one feels like, A human wrote this PRD.”

The interface guides you through the process more deliberately than just dumping text into a general chat interface. It asks clarifying questions. It structures the output more thoughtfully.

Actionable Insight: If you're a technical lead without a dedicated PM, or you're a PM who wants a more structured approach to using AI for PRDs, ChatPRD is worth the specialized focus. It's particularly good when you need something that feels authentic enough to share with stakeholders without heavy editing.

Claude: The Clear Winner

But the standout performer, and I'm ranking these, was Claude.

“I think we know that for now, I'm gonna say Claude did the best job,” I concluded after all the testing. Claude produced the most comprehensive, thoughtful, and strategically sound PRD. But what really set it apart were the concept mocks.

When I asked each tool to generate visual mockups of the product, Claude produced HTML prototypes that, while not fully functional, looked genuinely compelling.
They had thoughtful UI design, clear information architecture, and felt like something that could actually guide development.

“They were, like, closer to, like, what a Lovable would produce or something like that,” I noted, referring to the quality of low-fidelity prototypes that good designers create.

The text quality was also superior: more nuanced, better structured, and with more strategic depth. It felt like Claude understood not just what a PRD should contain, but why it should contain those elements.

Actionable Insight: For any PRD that matters, meaning anything you'll share with leadership, use to get buy-in, or guide actual product development, you might as well start with Claude. The quality difference is significant enough that it's worth using Claude even if you primarily use another tool for other tasks.

Final Rankings: The Definitive Hierarchy

After testing all five tools on multiple dimensions: initial PRD generation, visual mockups, and even crafting a pitch paragraph for a skeptical VP of Engineering, here's my final ranking:

* Claude - Best overall quality, most compelling mockups, strongest strategic thinking
* ChatPRD - Best for structured PRD creation, feels most “human”
* Gemini - Solid all-around performance, good Google integration
* ChatGPT - Reliable but generic, lacks differentiation
* Grok - Not competitive for this use case

“I'd probably say Claude, then chat PRD, then Gemini, then chat GPT, and then Grock,” I concluded.

The Deeper Lesson: Garbage In, Garbage Out (Still Applies)

But here's what matters more than which tool wins: the realization that hit me partway through this experiment.

“I think it really does come down to, like, you know, the quality of the prompt,” I observed. “So if our prompt were a little more detailed, all that were more thought-through, then I'm sure the output would have been better. But as you can see we didn't really put in brain trust prompting here.
Just a little bit of, kind of hand-wavy prompting, but a little better than just one or two sentences.”

And we still got pretty good results.

This is the meta-insight that should change how you approach AI tools in your product work: The quality of your input determines the quality of your output, but the baseline quality of the tool determines the ceiling of what's possible.

No amount of great prompting will make Grok produce Claude-level output. But even mediocre prompting with Claude will beat great prompting with lesser tools.

So the dual strategy is:

* Use the best tool available (currently Claude for PRDs)
* Invest in improving your prompting skills, grounded in as much original, insightful, company-aware, and context-aware human thinking as possible

Real-World Workflows: How to Actually Use This in Your Day-to-Day PM Work

Theory is great. Here's how to incorporate these insights into your actual product management workflows.

The Weekly Sprint Planning Workflow

Every PM I know spends hours each week preparing for sprint planning. You need to refine user stories, clarify acceptance criteria, anticipate engineering questions, and align with design and data science. AI can compress this work significantly.

Here's an example workflow:

Monday morning (30 minutes):

* Review upcoming priorities and open your rough notes/outline in Google Docs
* Open Claude and paste your outline with this prompt:

“I'm preparing for sprint planning. Based on these priorities [paste notes], generate detailed user stories with acceptance criteria. Format each as: User story, Business context, Technical considerations, Acceptance criteria, Dependencies, Open questions.”

Monday afternoon (20 minutes):

* Review Claude's output critically
* Identify gaps, unclear requirements, or missing context
* Follow up with targeted prompts:

“The user story about authentication is too vague. Break it down into separate stories for: social login, email/password, session management, and password reset.
For each, specify security requirements and edge cases.”

Tuesday morning (15 minutes):

* Generate mockups for any UI-heavy stories:

“Create an HTML mockup for the login flow showing: landing page, social login options, email/password form, error states, and success redirect.”

* Even if the HTML doesn't work perfectly, it gives your designers a starting point

Before sprint planning (10 minutes):

* Ask Claude to anticipate engineering questions:

“Review these user stories as if you're a senior engineer. What questions would you ask? What concerns would you raise about technical feasibility, dependencies, or edge cases?”

* This preparation makes you look thoughtful and helps the meeting run smoothly

Total time investment: ~75 minutes. Typical time saved: 3-4 hours compared to doing this manually.

The Stakeholder Alignment Workflow

Getting alignment from multiple stakeholders (product leadership, engineering, design, data science, legal, marketing) is one of the hardest parts of PM work. AI can help you think through different stakeholder perspectives and craft compelling communications for each.

Here's how:

Step 1: Map your stakeholders (10 minutes)

Create a quick table in a doc:

Stakeholder | Primary Concern | Decision Criteria | Likely Objections
VP Product | Strategic fit, ROI | Company OKRs, market opportunity | Resource allocation vs other priorities
VP Eng | Technical risk, capacity | Engineering capacity, tech debt | Complexity, unclear requirements
Design Lead | User experience | User research, design principles | Timeline doesn't allow proper design process
Legal | Compliance, risk | Regulatory requirements | Data privacy, user consent flows

Step 2: Generate stakeholder-specific communications (20 minutes)

For each key stakeholder, ask Claude:

“I need to pitch this product idea to [Stakeholder]. Based on this PRD, create a 1-page brief addressing their primary concern of [concern from your table].
Open with the specific value for them, address their likely objection of [objection], and close with a clear ask. Tone should be [professional/technical/strategic] based on their role.”

Then you'll have customized one-pagers for your pre-meetings with each stakeholder, dramatically increasing your alignment rate.

Step 3: Synthesize feedback (15 minutes)

After gathering stakeholder input, ask Claude to help you synthesize:

“I got the following feedback from stakeholders: [paste feedback]. Identify: (1) Common themes, (2) Conflicting requirements, (3) Legitimate concerns vs organizational politics, (4) Recommended compromises that might satisfy multiple parties.”

This pattern-matching across stakeholder feedback is something AI does really well and saves you hours of mental processing.

The Quarterly Planning Workflow

Quarterly or annual planning is where product strategy gets real. You need to synthesize market trends, customer feedback, technical capabilities, and business objectives into a coherent roadmap. AI can accelerate this dramatically.

Six weeks before planning:

* Start collecting input (customer interviews, market research, competitive analysis, engineering feedback)
* Don't wait until the last minute

Four weeks before planning:

Dump everything into Claude with this structure:

“I'm creating our Q2 roadmap. Context:

* Business objectives: [paste from leadership]
* Customer feedback themes: [paste synthesis]
* Technical capabilities/constraints: [paste from engineering]
* Competitive landscape: [paste analysis]
* Current product gaps: [paste from your analysis]

Generate 5 strategic themes that could anchor our Q2 roadmap.
For each theme:

* Strategic rationale (how it connects to business objectives)
* Key initiatives (2-3 major features/projects)
* Success metrics
* Resource requirements (rough estimate)
* Risks and mitigations
* Customer segments addressed”

This gives you a strategic framework to react to rather than starting from a blank page.

Three weeks before planning:

Iterate on the most promising themes:

“Deep dive on Theme 3. Generate:

* Detailed initiative breakdown
* Dependencies on platform/infrastructure
* Phasing options (MVP vs full build)
* Go-to-market considerations
* Data requirements
* Open questions requiring research”

Two weeks before planning:

Pressure-test your thinking:

“Play devil's advocate on this roadmap. What are the strongest arguments against each initiative? What am I likely missing? What failure modes should I plan for?”

This adversarial prompting forces you to strengthen weak points before your leadership reviews it.

One week before planning:

Generate your presentation:

“Create an executive presentation for this roadmap. Structure: (1) Market context and strategic imperative, (2) Q2 themes and initiatives, (3) Expected outcomes and metrics, (4) Resource requirements, (5) Key risks and mitigations, (6) Success criteria for decision. Make it compelling but data-driven. Tone: confident but not overselling.”

Then add your company-specific context, visual brand, and personal voice.

The Customer Research Workflow

AI can't replace talking to customers, but it can help you prepare better questions, analyze feedback more systematically, and identify patterns faster.

Before customer interviews:

“I'm interviewing customers about [topic]. Generate:

* 10 open-ended questions that avoid leading the witness
* 5 follow-up questions for each main question
* Common cognitive biases I should watch for
* A framework for categorizing responses”

This prep work helps you conduct better interviews.

After interviews:

“I conducted 15 customer interviews. Here are the key quotes: [paste anonymized quotes].
Identify:

* Recurring themes and patterns
* Surprising insights that contradict our assumptions
* Segments with different needs
* Implied needs customers didn't articulate directly
* Recommended next steps for validation”

AI is excellent at pattern-matching across qualitative data at scale.

The Crisis Management Workflow

Something broke. The site is down. Data was lost. A feature shipped with a critical bug. You need to move fast.

Immediate response (5 minutes):

“Critical incident. Details: [brief description]. Generate:

* Incident classification (Sev 1-4)
* Immediate stakeholders to notify
* Draft customer communication (honest, apologetic, specific about what happened and what we're doing)
* Draft internal communication for leadership
* Key questions to ask engineering during investigation”

Having these drafted in 5 minutes lets you focus on coordination and decision-making rather than wordsmithing.

Post-incident (30 minutes):

“Write a post-mortem based on this incident timeline: [paste timeline]. Include:

* What happened (technical details)
* Root cause analysis
* Impact quantification (users affected, revenue impact, time to resolution)
* What went well in our response
* What could have been better
* Specific action items with owners and deadlines
* Process changes to prevent recurrence

Tone: Blameless, focused on learning and improvement.”

This gives you a strong first draft to refine with your team.

Common Pitfalls: What Not to Do with AI in Product Management

Now let's talk about the mistakes I see PMs making with AI tools.

Pitfall #1: Treating AI Output as Final

The biggest mistake is copy-pasting AI output directly into your PRD, roadmap presentation, or stakeholder email without critical review.

The result? Documents that are grammatically perfect but strategically shallow. Presentations that sound impressive but don't hold up under questioning.
Emails that are professionally worded but miss the subtext of organizational politics.

The fix: Always ask yourself:

* Does this reflect my actual strategic thinking, or generic best practices?
* Would my CEO/engineering lead/biggest customer find this compelling and specific?
* Are there company-specific details, customer insights, or technical constraints that only I know?
* Does this sound like me, or like a robot?

Add those elements. That's where your value as a PM comes through.

Pitfall #2: Using AI as a Crutch Instead of a Tool

Some PMs use AI because they don't want to think deeply about the product. They're looking for AI to do the hard work of strategy, prioritization, and trade-off analysis.

This never works. AI can help you think more systematically, but it can't replace thinking.

If you find yourself using AI to avoid wrestling with hard questions (“Should we build X or Y?” “What's our actual competitive advantage?” “Why would customers switch from the incumbent?”), you're using it wrong.

The fix: Use AI to explore options, not to make decisions. Generate three alternatives, pressure-test each one, then use your judgment to decide. The AI can help you think through implications, but you're still the one choosing.

Pitfall #3: Not Iterating

Getting mediocre AI output and just accepting it is a waste of the technology's potential.

The PMs who get exceptional results from AI are the ones who iterate. They generate an initial response, identify what's weak or missing, and ask follow-up questions. They might go through 5-10 iterations on a key section of a PRD.

Each iteration is quick (30 seconds to type a follow-up prompt, 30 seconds to read the response), but the cumulative effect is dramatically better output.

The fix: Budget time for iteration. Don't try to generate a complete, polished PRD in one prompt.
Instead, generate a rough draft, then spend 30 minutes iterating on specific sections that matter most.

Pitfall #4: Ignoring the Political and Human Context

AI tools have no understanding of organizational politics, interpersonal relationships, or the specific humans you're working with.

They don't know that your VP of Engineering is burned out and skeptical of any new initiatives. They don't know that your CEO has a personal obsession with a specific competitor. They don't know that your lead designer is sensitive about not being included early enough in the process.

If you use AI-generated communications without layering in this human context, you'll create perfectly worded documents that land badly because they miss the subtext.

The fix: After generating AI content, explicitly ask yourself: “What human context am I missing? What relationships do I need to consider? What political dynamics are in play?” Then modify the AI output accordingly.

Pitfall #5: Over-Relying on a Single Tool

Different AI tools have different strengths. Claude is great for strategic depth, ChatPRD is great for structure, Gemini integrates well with Google Workspace.

If you only ever use one tool, you're missing opportunities to leverage different strengths for different tasks.

The fix: Keep 2-3 tools in your toolkit. Use Claude for important PRDs and strategic documents. Use Gemini for quick internal documentation that needs to integrate with Google Docs. Use ChatPRD when you want more guided structure. Match the tool to the task.

Pitfall #6: Not Fact-Checking AI Output

AI tools hallucinate. They make up statistics, misrepresent competitors, and confidently state things that aren't true.
If you include those hallucinations in a PRD that goes to leadership, you look incompetent.

The fix: Fact-check everything, especially:

* Statistics and market data
* Competitive feature claims
* Technical capabilities and limitations
* Regulatory and compliance requirements

If the AI cites a number or makes a factual claim, verify it independently before including it in your document.

The Meta-Skill: Prompt Engineering for PMs

Let's zoom out and talk about the underlying skill that makes all of this work: prompt engineering.

This is a real skill. The difference between a mediocre prompt and a great prompt can be a 10x difference in output quality. And unlike coding or design, where there's a steep learning curve, prompt engineering is something you can get good at quickly.

Principle 1: Provide Context Before Instructions

Bad prompt:

“Write a PRD for an AI tutor”

Good prompt:

“I'm a PM at an edtech company with 2M users, primarily high school students. We're exploring an AI tutor feature to complement our existing video content library and practice problems. Our main competitors are Khan Academy and Course Hero. Our differentiation is personalized learning paths based on student performance data.

Write a PRD for an AI tutor feature targeting students in the middle 80% academically who struggle with science and math.”

The second prompt gives Claude the context it needs to generate something specific and strategic rather than generic.

Principle 2: Specify Format and Constraints

Bad prompt:

“Generate success metrics”

Good prompt:

“Generate 5-7 success metrics for this feature.
Include a mix of:

* Leading indicators (early signals of success)
* Lagging indicators (definitive success measures)
* User behavior metrics
* Business impact metrics

For each metric, specify: name, definition, target value, measurement method, and why it matters.”

The structure you provide shapes the structure you get back.

Principle 3: Ask for Multiple Options

Bad prompt:

“What should our Q2 priorities be?”

Good prompt:

“Generate 3 different strategic approaches for Q2:

* Option A: Focus on user acquisition
* Option B: Focus on engagement and retention
* Option C: Focus on monetization

For each option, detail: key initiatives, expected outcomes, resource requirements, risks, and recommendation for or against.”

Asking for multiple options forces the AI (and forces you) to think through trade-offs systematically.

Principle 4: Specify Audience and Tone

Bad prompt:

“Summarize this PRD”

Good prompt:

“Create a 1-paragraph summary of this PRD for our skeptical VP of Engineering. Tone: Technical, concise, addresses engineering concerns upfront. Focus on: technical architecture, resource requirements, risks, and expected engineering effort. Avoid marketing language.”

The audience and tone specification ensures the output will actually work for your intended use.

Principle 5: Use Iterative Refinement

Don't try to get perfect output in one prompt. Instead:

First prompt: Generate rough draft
Second prompt: “This is too generic. Add specific examples from [our company context].”
Third prompt: “The technical section is weak. Expand with architecture details and dependencies.”
Fourth prompt: “Good. Now make it 30% more concise while keeping the key details.”

Each iteration improves the output incrementally.

Let me break down the prompting approach that worked in this experiment, because this is immediately actionable for your work tomorrow.

Strategy 1: The Structured Outline Approach

Don't go from zero to full PRD in one prompt.
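To see how mechanical the outline-to-prompt step really is, here's a minimal Python sketch of packaging a back-of-the-napkin outline into a context-rich expansion prompt. The `build_prd_prompt` helper and the field names are my own illustration, not something any of the five tools requires:

```python
# Illustrative only: package a rough PRD outline into an expansion prompt.
# The helper name and outline fields are hypothetical, not from the tools tested.

def build_prd_prompt(outline, sections):
    """Turn a dict of back-of-the-napkin notes into one context-rich prompt."""
    notes = "\n".join(f"{field}: {text}" for field, text in outline.items())
    return (
        "I'm a product manager. Expand the outline below into a complete PRD "
        f"with these sections: {', '.join(sections)}. Make it detailed enough "
        "for engineering to break into user stories, but concise enough for "
        "leadership to read in 15 minutes.\n\nOutline:\n" + notes
    )

outline = {
    "Why": "Complement our edtech business with a personalized AI tutor",
    "Target user": "High school students in the middle 80%, science and math",
    "Problem": "Students use AI to find answers, not to understand concepts",
}
prompt = build_prd_prompt(outline, ["Overview", "Target Users", "Success Metrics"])
print(prompt)
```

The code is trivial on purpose: all of the value lives in the outline's specificity, which only the human strategic-thinking step can supply.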
Instead:

* Start with strategic thinking - Spend 10-15 minutes outlining why you're building this, who it's for, and what problem it solves
* Get specific - Don't say “users,” say “high school students in the middle 80% of academic performance”
* Include constraints - Budget, timeline, technical limitations, competitive landscape
* Dump your outline into the AI - Now ask it to expand into a full PRD
* Iterate section by section - Don't try to perfect everything at once

This is exactly what I did in my experiment, and even with my somewhat sloppy outline, the results were dramatically better than they would have been with a single-sentence prompt.

Strategy 2: The Comparative Analysis Pattern

One technique I used that worked particularly well: asking each tool to do the same specific task and comparing results.

For example, I asked all five tools: “Please compose a one paragraph exact summary I can share over DM with a highly influential VP of engineering who is generally a skeptic but super smart.”

This forced each tool to synthesize the entire PRD into a compelling pitch while accounting for a specific, challenging audience. The variation in quality was revealing—and it gave me multiple options to choose from or blend together.

Actionable tip: When you need something critical (a pitch, an executive summary, a key decision framework), generate it with 2-3 different AI tools and take the best elements from each. This “ensemble approach” often produces better results than any single tool.

Strategy 3: The Iterative Refinement Loop

Don't treat the AI output as final.
Use it as a first draft that you then refine through conversation with the AI.

After getting the initial PRD, I could have asked follow-up questions like:

* “What's missing from this PRD?”
* “How would you strengthen the success metrics section?”
* “Generate 3 alternative approaches to the core feature set”

Each iteration improves the output and, more importantly, forces me to think more deeply about the product.

What This Means for Your Career

If you're an early or mid-career PM reading this, you might be thinking: “Great, so AI can write PRDs now. Am I becoming obsolete?”

Absolutely not. But your role is evolving, and understanding that evolution is critical.

The PMs who will thrive in the AI era are those who:

* Excel at strategic thinking - AI can generate options, but you need to know which options align with company strategy, customer needs, and technical feasibility
* Master the art of prompting - This is a genuine skill that separates mediocre AI users from exceptional ones
* Know when to use AI and when not to - Some aspects of product work benefit enormously from AI. Others (user interviews, stakeholder negotiation, cross-functional relationship building) require human judgment and empathy
* Can evaluate AI output critically - You need to spot the hallucinations, the generic fluff, and the strategic misalignments that AI inevitably produces

Think of AI tools as incredibly capable interns. They can produce impressive work quickly, but they need direction, oversight, and strategic guidance. Your job is to provide that guidance while leveraging their speed and breadth.

The Real-World Application: What to Do Monday Morning

Let's get tactical.
Here's exactly how to apply these insights to your actual product work.

For Your Next PRD:

* Block 30 minutes for strategic thinking - Write your back-of-the-napkin outline in Google Docs or your tool of choice
* Open Claude (or ChatPRD if you want more structure)
* Copy your outline with this prompt: “I'm a product manager at [company] working on [product area]. I need to create a comprehensive PRD based on this outline. Please expand this into a complete PRD with the following sections: [list your preferred sections]. Make it detailed enough for engineering to start breaking down into user stories, but concise enough for leadership to read in 15 minutes. [Paste your outline]”
* Review the output critically - Look for generic statements, missing details, or strategic misalignments
* Iterate on specific sections: “The success metrics section is too vague. Please provide 3-5 specific, measurable KPIs with target values and an explanation of why these metrics matter.”
* Generate supporting materials: “Create a visual mockup of the core user flow showing the key interaction points.”
* Synthesize the best elements - Don't just copy-paste the AI output. Use it as raw material that you shape into your final document

For Stakeholder Communication:

When you need to pitch something to leadership or engineering:

* Generate 3 versions of your pitch using different tools (Claude, ChatPRD, and one other)
* Compare them for:
  * Clarity and conciseness
  * Strategic framing
  * Compelling value proposition
  * Addressing likely objections
* Blend the best elements into your final version
* Add your personal voice - This is crucial. AI output often lacks personality and specific company context. Add that yourself.

For Feature Prioritization:

AI tools can help you think through trade-offs more systematically:

“I'm deciding between three features for our next release: [Feature A], [Feature B], and [Feature C].
For each feature, analyze: (1) Estimated engineering effort, (2) Expected user impact, (3) Strategic alignment with making our platform the go-to solution for [your market], (4) Risk factors. Then recommend a prioritization with rationale.”

This doesn't replace your judgment, but it forces you to think through each dimension systematically and often surfaces considerations you hadn't thought of.

The Uncomfortable Truth About AI and Product Management

Let me be direct about something that makes many PMs uncomfortable: AI will make some PM skills less valuable while making others more valuable.

Less valuable:

* Writing boilerplate documentation
* Creating standard frameworks and templates
* Generating routine status updates
* Synthesizing information from existing sources

More valuable:

* Strategic product vision and roadmapping
* Deep customer empathy and insight generation
* Cross-functional leadership and influence
* Critical evaluation of options and trade-offs
* Creative problem-solving for novel situations

If your PM role primarily involves the first category of tasks, you should be concerned. But if you're focused on the second category while leveraging AI for the first, you're going to be exponentially more effective than your peers who resist these tools.

The PMs I see succeeding aren't those who can write the best PRD manually. They're those who can write the best PRD with AI assistance in one-tenth the time, then use the saved time to talk to more customers, think more deeply about strategy, and build stronger cross-functional relationships.

Advanced Techniques: Beyond Basic PRD Generation

Once you've mastered the basics, here are some advanced applications I've found valuable.

Competitive Analysis at Scale

“Research our top 5 competitors in [market]. For each one, analyze: their core value proposition, key features, pricing strategy, target customer, and likely product roadmap based on recent releases and job postings.
Create a comparison matrix showing where we have advantages and gaps.”

Then use web search tools in Claude or Perplexity to fact-check and expand the analysis.

Scenario Planning

“We're considering three strategic directions for our product: [Direction A], [Direction B], [Direction C]. For each direction, map out: likely customer adoption curve, required technical investments, competitive positioning in 12 months, and potential pivots if the hypothesis proves wrong. Then identify the highest-risk assumptions we should test first for each direction.”

This kind of structured scenario thinking is exactly what AI excels at: generating multiple well-reasoned perspectives quickly.

User Story Generation

After your PRD is solid:

“Based on this PRD, generate a complete set of user stories following the format ‘As a [user type], I want to [action] so that [benefit].' Include acceptance criteria for each story. Organize them into epics by functional area.”

This can save your engineering team hours of grooming meetings.

The Tools Will Keep Evolving. Your Process Shouldn't.

Here's something important to remember: by the time you read this, the specific rankings might have shifted. Maybe ChatGPT-5 has leapfrogged Claude.
Maybe a new specialized tool has emerged.

But the core principles won't change:

* Do strategic thinking before touching AI
* Use the best tool available for your specific task
* Iterate and refine rather than accepting first outputs
* Blend AI capabilities with human judgment
* Focus your time on the uniquely human aspects of product management

The specific tools matter less than your process for using them effectively.

A Final Experiment: The Skeptical VP Test

I want to share one more insight from my testing that I think is particularly relevant for early and mid-career PMs.

Toward the end of my experiment, I gave each tool this prompt: “Please compose a one-paragraph exec summary I can share over DM with a highly influential VP of engineering who is generally a skeptic but super smart.”

This is such a realistic scenario. How many times have you needed to pitch an idea to a skeptical technical leader via Slack or email? Someone who's brilliant, who's seen a thousand product ideas fail, and who can spot b******t from a mile away?

The quality variation in the responses was fascinating. ChatGPT gave me something that felt generic and safe. Gemini was better but still a bit too enthusiastic. Grok was... well, Grok.

But Claude and ChatPRD both produced messages that felt authentic, technically credible, and appropriately confident without overselling. They acknowledged the engineering challenges while framing the opportunity compellingly.

The lesson: when the stakes are high and the audience is sophisticated, the quality of your AI tool matters even more. That skeptical VP can tell the difference between a carefully crafted message and AI-generated fluff. So can your CEO.
So can your biggest customers.

Use the best tools available, but more importantly, always add your own strategic thinking and authentic voice on top.

Questions to Consider: A Framework for Your Own Experiments

As I wrapped up my Loom, I posed some questions to the audience that I'll pose to you:

“Let me know in the comments, if you do your PRDs using AI differently, do you start with back of the envelope? Do you say, oh no, I just start with one sentence, and then I let the chatbot refine it with me? Or do you go way more detailed and then use the chatbot to kind of pressure test it?”

These aren't rhetorical questions. Your answer reveals your approach to AI-augmented product work, and different approaches work for different people and contexts.

For early-career PMs: I'd recommend starting with more detailed outlines. The discipline of thinking through your product strategy before touching AI will make you a stronger PM. You can always compress that process later as you get more experienced.

For mid-career PMs: Experiment with different approaches for different types of documents. Maybe you do detailed outlines for major feature PRDs but use more iterative AI-assisted refinement for smaller features or updates. Find what optimizes your personal productivity while maintaining quality.

For senior PMs and product leaders: Consider how AI changes what you should expect from your PM team. Should you be reviewing more AI-generated first drafts and spending more time on strategic guidance? Should you be training your team on effective AI usage? These are leadership questions worth grappling with.

The Path Forward: Continuous Experimentation

My experiment with these five AI tools took 45 minutes. But I'm not done experimenting.

The field of AI-assisted product management is evolving rapidly. New tools launch monthly. Existing tools get smarter weekly.
Prompting techniques that work today might be obsolete in three months.

Your job, if you want to stay at the forefront of product management, is to continuously experiment. Try new tools. Share what works with your peers. Build a personal knowledge base of effective prompts and workflows. And be generous with what you learn. The PM community gets stronger when we share insights rather than hoarding them.

That's why I created this Loom and why I'm writing this post. Not because I have all the answers, but because I'm figuring it out in real-time and want to share the journey.

A Personal Note on Coaching and Consulting

If this kind of practical advice resonates with you, I'm happy to work with you directly.

Through my PM coaching practice, I offer 1:1 executive, career, and product coaching for PMs and product leaders. We can dig into your specific challenges, whether that's leveling up your AI workflows, navigating a career transition, or developing your strategic product thinking.

I also work with companies (usually startups or incubation teams) on product strategy, helping teams figure out PMF for new explorations and improve their product management function.

The format is flexible. Some clients want ongoing coaching, others prefer project-based consulting, and some just want a strategic sounding board for a specific decision. Whatever works for you.

Reach out through tomleungcoaching.com if you're interested in working together.

OK. Enough pontificating. Let's ship greatness.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit firesidepm.substack.com
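If you eventually script the PRD workflow above instead of pasting prompts by hand, the expansion prompt is easy to templatize. Here is a minimal Python sketch; the company name, product area, sections, and outline text are placeholder values of my own, and actually sending the prompt to a model is left out:

```python
def build_prd_prompt(company: str, product_area: str,
                     sections: list[str], outline: str) -> str:
    """Assemble the PRD-expansion prompt from a back-of-the-napkin outline."""
    return (
        f"I'm a product manager at {company} working on {product_area}. "
        "I need to create a comprehensive PRD based on this outline. "
        "Please expand this into a complete PRD with the following sections: "
        f"{', '.join(sections)}. Make it detailed enough for engineering to "
        "start breaking down into user stories, but concise enough for "
        "leadership to read in 15 minutes.\n\n"
        f"{outline}"
    )

# Placeholder values for illustration only.
prompt = build_prd_prompt(
    company="Acme EdTech",
    product_area="an AI tutor for high school students",
    sections=["Why", "Target User", "Success Metrics", "Feature Set"],
    outline="Why: complement our existing edtech business with a personalized AI tutor.",
)
print(prompt)
```

Keeping the template in one place also makes the “ensemble approach” cheap: loop over several model endpoints with the same `prompt` and compare the outputs.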
TLDR: It was better than the microphones for this taping…and you can hear it. Mike and James give their review of the first Anime Revolution Toronto the previous weekend, and in particular the Japanese guests, with a few thoughts on the Toronto Game Expo, which was the same weekend, a longer conversation about the Toronto convention scene, and a throwaway thought on Amazon's AI dubbing attempt. Remind Mike to do a mic check next time. Mike Nicolas and James Austin. Check out our linktr.ee for more information: animeroundtable.com
In this laid-back, chillaxed episode of the podcast Alexei and Talal discuss planes and tanks, Croydon (aka The Cronx), Digital ID cards, Your Party shenanigans and whether being paid by the government to perform comedy in Saudi Arabia is ethical. TLDR: It's not. Be a comrade and support the show! Become a Patron and get access to the video version of the podcast, live episodes and more - patreon.com/AlexeiSaylePodcast Send your fan art, thoughts and questions to alexeisaylepodcast@gmail.com Please consider leaving us a review on Apple Podcasts or wherever you get your podcasts. Subscribe to Alexei's YouTube channel here and join him for his Bike Rides. The Alexei Sayle Podcast is produced and edited by Talal Karkouti Music by Tarboosh Records Photograph from the Andy Hollingworth Archive
Ready to refresh your approach to holistic living? In today's episode, host Megan Swan welcomes you to Season 7 and invites you to embrace evolution, not by overhauling everything at once, but with quiet, intentional shifts toward wellness as a way of life.

Key Points Discussed:

* Intentional Evolution: Why the most powerful changes often come from slowing down and tuning into what your mind, body, and soul truly need.
* Leadership & Wellness: How building the future as visionary women means putting well-being at the center of your journey.
* Mindset Shifts: Reframing progress from constant hustle to purposeful growth and sustainable self-care habits.
* Community & Support: An invitation to join Megan's Substack for exclusive insights, resources, and a supportive hub curated for future builders.
* Integration as Lifestyle: Strategies for making wellness a seamless part of daily life, so you show up vibrant, impactful, and ready to lead.

TLDR: It's a new season and new vibe: less is more, and slowing down is your secret superpower. Intentional wellness is the foundation for personal leadership and building a brighter future.

Thank you so much for tuning in! If you enjoyed this episode, share it with a friend who's also building the future. Screenshot your listening and tag @meganswanwellness on social media—we love seeing our community thrive!

Substack: https://meganswan.substack.com/

Connect with Megan Swan
http://www.instagram.com/meganswanwellness
http://www.linkedin.com/in/megan-swan-wellness
www.meganswanwellness.com

Keywords: intentional evolution, holistic wellness, women leadership, slow living, mindset shifts, building the future, visionary women, self-care, wellness integration, community, sustainable habits, lifestyle strategies, purposeful growth, entrepreneurial women, Substack community, well-being, leadership development
John Owens CFP®, EA, ECA, CPWA® and Ed Zolotarev CPA, MST, CFP® fill in for AJ and Shane on this milestone 150th episode of The Liquidity Event. FICO (Fair Isaac Corporation) is finally figuring out what to do about your outstanding Klarna loans. We discuss how “buy now, pay later” services might soon impact your credit score. TLDR: It probably isn't the best idea to finance your DoorDash order over 4 months. Plus, alternative investments are getting a lot of attention right now. Why are private equity firms suddenly sliding into your DMs? We examine the dry IPO market as one of the root causes. Then a Reddit question: how many kids should you have from a financial perspective? Finally, our biggest takeaways from the first half of 2025. Key Timestamps: (6:25) FICO's buy now, pay later integration rollout (9:30) Why financing experiences is financial quicksand (12:20) Ed's Affirm baby monitor financing experiment (15:56) Alternative investments moving down market (19:20) Why the IPO drought is forcing PE firms to get creative (24:10) Reddit question: How many kids should you have? (29:25) First half of 2025 takeaways
Olly runs through some of Barnier's choicest cabinet picks. TLDR: It's a cabinet chock-full of reactionary Catholics, the policy in New Caledonia is likely to remain repressive, Macronism's left wing wakes up with a hangover, and the NFP pursues censure. Send us questions and suggestions for our Friday interviews at flep24pod@gmail.com. Cover our newspaper expenses. How about you buy us one today? https://buymeacoffee.com/flep24 Want your book, magazine, or website advertised at the beginning or end of the show? Get in touch! Fighting Fund: https://buymeacoffee.com/flep24 Flep24's Twitter @flep24pod Marlon's Twitter @MarlonEttinger Olly's Twitter @reality_manager
TLDR: It takes really long, and in the current scenario it's forever, so you'd better be ready with another source of income! Did I tell you I am amongst the top 25 Indian Doctors' Blogs on Feedspot? It's always great to reach the people I want to help, and Feedspot is a big help. Read the blog post --- Send in a voice message: https://podcasters.spotify.com/pod/show/healthwealthbridge/message
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ideological Bayesians, published by Kevin Dorst on February 26, 2024 on LessWrong. TLDR: It's often said that Bayesian updating is unbiased and converges to the truth - and, therefore, that biases must emerge from non-Bayesian sources. That's not quite right. The convergence results require updating on your total evidence - but for agents at all like us, that's impossible - instead, we must selectively attend to certain questions, ignoring others. Yet correlations between what we see and what questions we ask - "ideological" Bayesian updating - can lead to predictable biases and polarization. Professor Polder is a polarizing figure. His fans praise him for his insight; his critics denounce him for his aggression. Ask his fans, and they'll supply you with a bunch of instances when he made an insightful comment during discussions. They'll admit that he's sometimes aggressive, but they can't remember too many cases - he certainly doesn't seem any more aggressive than the average professor. Ask his critics, and they'll supply you with a bunch of instances when he made an aggressive comment during discussions. They'll admit that he's sometimes insightful, but they can't remember too many cases - he certainly doesn't seem any more insightful than the average professor. This sort of polarization is, I assume, familiar. But let me tell you a secret: Professor Polder is, in fact, perfectly average - he has an unremarkably average number of both insightful and aggressive comments. So what's going on? His fans are better at noticing his insights, while his critics are better at noticing his aggression. As a result, their estimates are off: his fans think he's more insightful than he is, and his critics think he's more aggressive than he is. 
Each is correct about individual bits of the picture - when they notice aggression or insight, he is being aggressive or insightful. But none are correct about the overall picture. This source of polarization is also, I assume, familiar. It's widely appreciated that background beliefs and ideology - habits of mind, patterns of salience, and default forms of explanation - can lead to bias, disagreement, and polarization. In this broad sense of "ideology", we're familiar with the observation that real people - especially fans and critics - are often ideological.[1] But let me tell you another secret: Polder's fans and critics are all Bayesians. More carefully: they all maintain precise probability distributions over the relevant possibilities, and they always update their opinions by conditioning their priors on the (unambiguous) true answer to a partitional question. How is that possible? Don't Bayesians, in such contexts, update in unbiased[2] ways, always converge to the truth, and therefore avoid persistent disagreement? Not necessarily. The trick is that which question they update on is correlated with what they see - they have different patterns of salience. For example, when Polder makes a comment that is both insightful and aggressive, his fans are more likely to notice (just) the insight, while his critics are more likely to notice (just) the aggression. This can lead to predictable polarization. I'm going to give a model of how such correlations - between what you see, and what questions you ask about it - can lead otherwise rational Bayesians to diverge from both each other and the truth. Though simplified, I think it sheds light on how ideology might work. Limited-Attention Bayesians Standard Bayesian epistemology says you must update on your total evidence. That's nuts. To see just how infeasible that is, take a look at the following video. Consider the question: what happens to the exercise ball? I assume you noticed that the exercise ball disappeared.
Did you also notice that the Christmas tree gained lights, the bowl changed c...
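The limited-attention mechanism described in this episode can be made concrete with a toy simulation (my own illustration, not from the post, and all the probabilities are made-up round numbers): a perfectly average professor, fans who reliably notice insight but often miss aggression, and critics with the mirror-image salience pattern.

```python
import random

def simulate(n_comments=10_000, p_insight=0.3, p_aggression=0.3,
             p_notice_favored=0.9, p_notice_other=0.4, seed=42):
    """Tally what biased observers notice about an 'average' professor.

    Each comment is independently insightful and/or aggressive with equal
    probability. Observers only remember traits they happened to notice,
    and estimate the professor's rates from their own incomplete tallies.
    """
    rng = random.Random(seed)
    fan = {"insight": 0, "aggression": 0}
    critic = {"insight": 0, "aggression": 0}
    for _ in range(n_comments):
        insightful = rng.random() < p_insight
        aggressive = rng.random() < p_aggression
        # Fans reliably notice insight but often miss aggression...
        if insightful and rng.random() < p_notice_favored:
            fan["insight"] += 1
        if aggressive and rng.random() < p_notice_other:
            fan["aggression"] += 1
        # ...critics have the mirror-image pattern of salience.
        if insightful and rng.random() < p_notice_other:
            critic["insight"] += 1
        if aggressive and rng.random() < p_notice_favored:
            critic["aggression"] += 1
    rates = lambda tally: {k: v / n_comments for k, v in tally.items()}
    return rates(fan), rates(critic)

fan_est, critic_est = simulate()
```

Every individual observation here is veridical (nothing is "noticed" that didn't happen), yet because what gets noticed correlates with the observer's bias, fans end up undercounting aggression and critics undercounting insight, reproducing the polarization the episode describes.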
The quest to find the perfect search engine continues with Swisscows.com. TLDR: It sucks. Club goes over moves/roster, Nordecke elections, and gets to talk to Ravi head of Partnerships. Get stuck in! Bird: https://twitter.com/President_Birb https://ahernandezart.com/ Check links below: Supporter Supply https://www.supportersupply.co/ Code for free delivery: upper90boyz (that's boys with a Z) https://nordecke.com/ Podcasts are available on Spotify, Apple Podcast, and all podcast apps. Now on YouTube, with video, and the faces! Not seeing us somewhere? Email us Check us out on our Social Media Platforms and feel free to email us! We're totally literate and will 100% read anything you send, promise. Songs by Nick Tolford and Company https://ntac.bandcamp.com/track/boys-night-out SIGN UP TO BE PART OF THE NORDECKE! Here - https://nordecke.com/ Become part of the Discord family: https://discord.gg/crew96 Subscribe to our channel for more soccer content: -Email us: podcast@upper90club.com -Follow us on Twitter: https://twitter.com/Upper90ClubPod -Like us on Facebook: https://www.facebook.com/groups/upper90clubpod -Follow us on Instagram: https://www.instagram.com/upper90clubpod/ -Apple Music: https://podcasts.apple.com/us/podcast/upper-90-club/id1647214221 -Spotify: https://open.spotify.com/show/1xnYAtnQ8tThdn5JWX6c24 -Linktree: https://linktr.ee/upper90clubpod #VamosColumbus | #Crew96 | #Upper90Club https://sirkbook.com/ https://www.amazon.com/stores/Steve-Sirk/author/B0821YJYT8?ref=ap_rdr&store_ref=ap_rdr&isDramIntegrated=true&shoppingPortalEnabled=true
Today we offer you some first impressions of OVHcloud and how we're seriously considering moving our Light Pentest LITE training class to it! TLDR: It runs on vCenter, my first and only virtualization love!

* Unlimited VM "powered on" time and unlimited bandwidth
* Integration with PowerShell so you can run a single script to "heal" your environment to a gold image
* Easy integration with pfSense to be able to manage the firewall and internal/external IPs
* Price comparable to what we're paying now in Azure land
This episode contains: Ben joins Steven in person, and Devon phones it in this week. Your heart has ventricles and… dorsicles? No Ben; It's atriums. Steven reminds us that he knows the beginning of a Barenaked Ladies song. Devon chats about his first cruise! TLDR: It was… about as he expected. Was Devon on a cruise long enough to get scurvy? Are drink packages on cruise ships worth it? Everything's still wrong with Ben this week. Looks like one of Ben's bedroom walls will need to be replaced. FEMA declared the recent rainstorms in California an Avengers-level event. Ben is getting his septic tank pumped this week. Goodbye wooden deck. Ben found a lot of surprising things when crawling under his house. You wouldn't illegally download a house, would you? Steven needs a break from his vacations. Listen to Steven guest host the Plastic Plesiosaur podcast this month! Brain Matters: Orienteering may help fight cognitive decline. Orienteering is a new sport, kinda like extreme hiking. Devon is our resident sports advocate and expert. De-stress and unplug with Orienteering, and improve your cognitive fitness. The recent study about Orienteering is self-reported, so we're kinda skeptical. Ben loves exploring new routes in his town. Steven & Devon say "no thanks." Steven thought Ben was going to murder him when driving him the wrong way home. https://www.sciencedaily.com/releases/2023/01/230120154924.htm Charge it! Could gravity batteries really store excess renewable energy? Gravity batteries could theoretically be located in abandoned mines. Gravity batteries are one of the few ways to generate energy without steam! Gravity batteries are not feasible: maintenance costs outweigh the benefits. We chat about hydroenergy between two lakes. Why do all our tech advances go back to the Romans? Thanks, middle ages! Steven's kids just watched Ratatouille for the first time... is it for kids? Steven has a quick Rant-atouille. 
Ben invites Steven to come over and watch Everything, Everywhere, All at Once. https://www.techspot.com/news/97306-gravity-batteries-abandoned-mines-could-power-whole-planet.html Mid-pod Patreon only: Renee reminded Steven he's talking about Bridgerton. Devon recommends the second Downton Abbey film. Doesn't recommend the first. Devon lawyersplains Downton Abbey to Ben. He could go on for a half hour, easy. Ben guesses the footmen in Downton Abbey are the ninjas working for Shredder. Science Fiction / Big Question: We gush about The Last of Us, video game and show. Cordyceps is a real fungal thing that set up the premise of The Last of Us. We compare The Last of Us to a couple other zombie media. The Walking Dead refers to the people still left... just like The Last of Us. Starting The Last of Us with scientists in a 1968-era show is a good choice. Anna Torv is FANTASTIC in The Last of Us, AND Fringe. Watch Fringe. Wanna see more Pedro Pascal? We recommend Prospect. Did anybody see the FBI marriage subplot in Sonic 2 coming? Devon did. Big question this week: we are old men grumbling about streaming services. HBO Max has been doing Ben dirty, y'all. Babylon 5 going away and a price hike? The Remastered version of Babylon 5 has been incredible. Could HBO Max have waited a little after The Last of Us premier to raise price? Is it time to begin rotating streaming services? Should we go back to physical media? When we bought shows that weren't revoked? Who would be hurt most from rotating streaming services? Is it just time for us to grow up? Devon grew up 30 years ago. Git gud n00bs. Devon recommends Kevin Can F**k Himself on AMC. After-pod Patreon only: Devon tries to tell a joke. He gets there eventually. Joke Orienteering is our new sport: here's a map, let's find the punch line! We like Wednesday despite it being a Tim Burton show. Why did Tim Burton spend 25 years making movies that didn't speak to Ben? Wednesday gives Ben some Pushing Daisies vibes. 
Tim Burton was an Imagineer! That explains a lot, in a good way. Steven and Devon gush about The Last Kingdom. Very fast paced show. Slow book. Steven LOVES having maps in books, especially The Last Kingdom. The same characters are in both The Last Kingdom and Assassins Creed: Valhalla. Ben would love to take a nap every day. We say good night to Devon, and then Steven and Ben WE KEEP ON RECORDING! Steven tries to get Ben to name the people he hates.
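The gravity-battery skepticism earlier in this episode's notes is easy to sanity-check with E = mgh. A back-of-the-envelope sketch (the mass, shaft depth, and round-trip efficiency are made-up round numbers of mine, not figures from the article):

```python
G = 9.81  # m/s^2, standard gravity

def gravity_battery_kwh(mass_kg: float, drop_m: float,
                        efficiency: float = 0.85) -> float:
    """Recoverable energy from lowering a mass down a shaft: E = m * g * h."""
    joules = mass_kg * G * drop_m * efficiency
    return joules / 3.6e6  # 1 kWh = 3.6e6 J

# Hypothetical example: a 5,000-tonne weight in a 1,000 m mine shaft.
energy = gravity_battery_kwh(5_000_000, 1_000)
```

Even a 5,000-tonne mass dropped a full kilometre stores only on the order of 10 MWh, roughly what a single large wind turbine produces in a couple of hours at full power, which puts the hosts' doubts about maintenance costs versus benefit in perspective.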
The team turns their sights on Pixar's newest flick, "Turning Red". TLDR: It's not good. For more movie recommendations and commentary, go to our website, www.FoxfireFarmhouse.com. If you have any comments, questions, or suggestions for future episode topics, reach out to us on Facebook or Instagram or email us at podcast@foxfirefarmhouse.com. Cheers!
With the cost of living crisis becoming ever more apparent, The Bevan Foundation have just published a new report focussing on the problems people on low incomes face finding a home to rent. TLDR: It's not good, but to expand on that we are joined by two of the report's authors: Hugh Kocan: https://twitter.com/HughKocan Steff Evans: https://twitter.com/SteffHEvans You can read the report here: https://www.bevanfoundation.org/current-projects/preventing-homelessness-through-improving-the-local-housing-allowance/ and follow The Bevan Foundation here: https://twitter.com/BevanFoundation If you're enjoying the pod, please leave us a rating and review in your podcast app of choice and follow us for all the latest pods, videos, and live events: https://twitter.com/HiraethPod
In this episode of The SXS Guys Offroad Podcast, I'm joined via Zoom by Brent Gilliam from 212 Gloves to watch the spring launch of the Can-Am Offroad 2022 lineup expansion. TLDR: It was pretty short and without a lot of news... so we make up for it with some King of the Hammers talk, Polaris Pro R experiences, and more!
A longie but goodie just in time for the fall of US imperialist projects in Afghanistan...and autumn. TLDR: It's giving very much hating on the not rich who did not understand the assignment of shutting the fuck up and running politicians their propers for re-election. A tax on vibes. If "Taliban" means "student" in Pashto--the US is the teacher. Have you ever asked yourself why tax reform is such a contentious issue and has been for decades? Have you ever stopped to think that politicking, grandstanding and "fashion diplomacy" is part of moderate, centrist Democrats' pandering to a more leftist, "progressive" voter base but without alienating their prioritized white voter base too much (think Nancy Pelosi kneeling in the kente cloth sash in 2020)? There's a slavery-era explanation for that: the tobacco, rice, and indigo that our ancestors grew and produced were, in Alexander Hamilton's own words, goods "which must be capital objects in treaties of commerce w foreign nations" and precipitated the events that led to this country's independence in 1776 following the Revolutionary War. His more famous quote, though, "no taxation without representation," portends a more glaring contradiction: imagine being mad you have to pay taxes to enrich the British, whose royal companies, militaries, fleets and slavers sold and transported the Indigenous Africans you enslaved, and calling TAXATION slavery while actually enslaving people? "Tax the rich" is again policy sloganeering and is a liberal democratic/partisan rallying cry that can't happen in good faith in an oligarchy that siphons all its resources to an already massive defense budget. Just earmark our reparations, will ya?
Maybe the majority of folks who at least have some semblance of a revolutionary or radical politic aren't hating on rich people or Met Gala attendees or AOC herself, but instead recognizing that she and other politicians across the spectrum are sworn to serve a country that used the international slave trade between Britain and Spain to generate revenue to build a domestic manufacturing infrastructure to keep taxes lowered for their most invaluable taxable import--Black people--and whose Central Intelligence Agency created the Taliban to wrest power from the USSR's puppet pro-Soviet government during the Soviet-Afghan War of 1979-1989, as part of a plan to distort the public's view of the barbaric, warmongering, misogynist, primitive, backward United States, to continue competing for oil and other natural resources in Central Asia, including the global heroin trade, to finance the invasion of other countries, arm the genocide committed by their number one ally in the region (the Israeli Government; see Kamala Harris's comments on Israel), all while pretending to care about "women's rights" and immigration while whipping Haitian immigrants at the border. That said, shirking accountability for their complicities in systems of oppression to remain elected isn't above any of their paygrades--even the politicians you like! 
Eyes On Haiti Oligarchy: https://haitiantimes.com/2021/07/16/haitians-can-no-longer-hide-behind-the-caste-system-killing-our-country/ US mass deportations of Haitian Immigrants: https://www.azcentral.com/story/opinion/op-ed/elviadiaz/2021/09/20/border-patrol-using-whips-del-rio-round-up-haitian-immigrants/5789596001/ Recommended Reading: https://blackallianceforpeace.com/newsletter/afghanistannograveyard?fbclid=IwAR20EeWyNUk4MpwUubcXZ53usiqemg3Rl6u4ZkeSioLAO14DiY5A2E4KrRA https://blackallianceforpeace.com/afghanistan Tariq Ali: https://newleftreview.org/sidecar/posts/debacle-in-afghanistan http://www.taxhistory.org/thp/readings.nsf/ArtWeb/4AF487C90CA14FB985256E000057B5EB?OpenDocument Global Heroin Trade Links to US presence in Afghanistan: https://www.theguardian.com/news/2018/jan/09/how-the-heroin-trade-explains-the-us-uk-failure-in-afghanistan https://www.research.manchester.ac.uk/portal/files/84028515/FULL_TEXT.PDF AOC: https://www.dsausa.org/democratic-left/aoc/
This week's episode is a spoilercast discussion of Chainsaw Man creator Tatsuki Fujimoto's newest one-shot, Look Back. It's a manga about manga (don't worry, it's not cringeworthy), validation, grief, and just generally being alive. TLDR: It's pretty great. Content warning: We're a fairly NSFW podcast that makes frequent use of reclaimed homophobic slurs. This episode also includes discussions of both real and fictionalized acts of violence and mass murder.
Heyyyyy Heifer! Quarantine seems to be slowly ending and it’s been weirder than I thought it would be. Like, I knew I’d be socially rusty and would have to remember how to order food, etc. But I wasn’t expecting to have weird depression, or that my brain would short circuit and make it feel like things had always stayed open and that the last 1.5 years didn’t happen. So I figured I’d talk about that whiplash today! Enjoy this episode and get vaccinated! Moo, Elaine TIMESTAMPS 00:00 No masks, no problems? LOL. 01:23 The transition from 1.5 years of quarantine to pre-COVID routines is so weird to me. It feels like quarantine has gone by in the wink of an eye, like I want to forget all this ever happened. 06:10 For so long we’ve had external stressors affecting us that we may have never acknowledged or processed. 08:09 With things opening again, and us getting to return to our old habits/routines/haunts, there’s this fear that it will get taken away again. 12:01 Some people are going to adjust faster than others (or that’s how it may seem!). 13:15 I’ve had another wave of depression and I think part of it is the overwhelming idea of getting to “come back” to life even though I’ve continued to live life? Like, quarantine wasn’t a pause. We kept going. How do I merge the two? 17:36 TLDR: It’s really okay if your feelings about quarantine ending are: scared, happy, worried, depressed, anxious, elated, overwhelmed, etc. 19:11 My weird outro where today’s scenario is that you get a kickass MEET CUTE OMG HEART EYES. *** If you enjoy this podcast, please consider giving it a review! It also is incredibly helpful for me to hear what is resonating with you, so feel free to DM me! You can find the show notes for this episode at www.anguseyetea.com. Follow me on Insta and Twitter @AngusEyeTea. Email: anguseyetea@gmail.com Want additional content including a secret blog? Check out my Patreon at www.patreon.com/anguseyetea! I am not a health professional. 
I am simply someone who was diagnosed with bipolar disorder. Please talk to your friends, family, teachers, doctor, trusted human, etc. if you need help. I also have a resources page on my website that can direct you to different hotlines, therapy websites, and more at https://anguseyetea.com/resources/
Rob and Chris learn about Viking magic and shamanism. TLDR: It's time to strut.
What do you know about cybersecurity in healthcare? You've probably heard news reports about the surge in cybersecurity incidents across the healthcare industry throughout 2020. The reality is that developing and maintaining a robust cybersecurity and risk management program is a challenge for any organization. For healthcare companies in particular, it's difficult to stay up to date with a rapidly evolving cybersecurity compliance landscape. Our guest on the pod today saw these problems firsthand when he was CISO at the health IT company Voxiva and he decided to do something about it. Grant Elliott is CEO and Founder of Ostendio. Grant tells the story of how he realized there was a market opportunity in healthcare security and compliance, and how he made the leap from the steady ride of the corporate C-suite to the extremely uncertain world of tech entrepreneurship. Grant gives his advice for founders (“Just don't do it”) and shares his secret for successfully scaling a technology company (TLDR: It's the title to this episode.) Grant teaches courses on business and entrepreneurship at the Pratt Institute, he is the Founder and Former President of the Healthcare Cloud Coalition, and in his spare time he mentors and advises entrepreneurs. You can check out all the healthcare compliance and risk management tools Grant and his team are building at Ostendio.com and you can follow them on Twitter @Ostendio.
Rob and Chris are still learning about philosophy. TLDR: It's a mechy situation.
We helped create a fun episode about a drink that has helped the Master of Some Team (Daren Lake and Phil Cross) compete faster and better. TLDR - It doesn’t have EPO in it, it tastes great and it’s 100% legal! Watch the full episode HERE. READ/Download the whole episode PDF Here. --- On this episode, we break down the components of the ultimate performance enhancing drink, WAR JUICE! We were so excited to share this with you because we think it has great beneficial impacts on endurance performance. There are only three ingredients in War Juice. Each ingredient has its own health benefits that we dive into, but combined together, these foods create a Godly drink that gets your body ready for a solid race! Pretty please with a tart cherry on top, hit the subscribe or like button on whatever app you use to listen to us so that you are alerted and updated on our latest episodes. You can find us on Acast, Apple Podcasts, Spotify and Stitcher. There is so mushroom for doper stuff we can create for you, but we need your support, so please rate us, leave us a comment, and share this out to all your fave people! Let us know what we can do better! If you like the beets and music you hear in this episode, it was created mostly by Daren and you can check it out here. LINKS Suggested episode to listen to next: Race Week Nutrition. All the amazing puns above will link you to further information about these ingredients. Happy listening and learning! PODCAST PRODUCTION BY POD PASTE See acast.com/privacy for privacy and opt-out information.
In this episode Chris Thoreau joins me to explore the idea - Can you actually make a living growing microgreens? Is it actually possible or just an unrealistic dream? If it is possible what does that look like? How big does the business need to be? How much do you need to be selling? Would you need employees? TLDR: It is possible, with the right combination of luck, skills and hard work. Some people will find it’s a fit for them and their lifestyle, but they’ll need to be committed. Learn how to start a Profitable Microgreens Business https://microgreens.teachable.com/ Increase farm efficiency with the Paperpot Transplanter and Other Small Farm Equipment at https://www.paperpot.co/ Follow Diego on IG https://instagram.com/diegofooter Follow PaperpotCo on IG https://instagram.com/paperpot Podcasts by Diego Footer: Microgreens: https://apple.co/2m1QXmW Vegetable Farming: https://apple.co/2lCuv3m Livestock Farming: https://apple.co/2m75EVG Large Scale Farming: https://apple.co/2kxj39i Small Farm Tools https://www.paperpot.co/
In this episode Chris Thoreau joins me to explore the idea - Can you actually make a living growing microgreens? Is it actually possible or just an unrealistic dream? If it is possible what does that look like? How big does the business need to be? How much do you need to be selling? Would you need employees? TLDR: It is possible, with the right combination of luck, skills and hard work. Some people will find it’s a fit for them and their lifestyle, but they’ll need to be committed. Chris's online course Growing Your Profitable Microgreens Business https://microgreens.teachable.com/courses Follow Diego on IG https://instagram.com/diegofooter Small Farm Tools and Microgreens Supplies https://www.paperpot.co/ Support my content while you shop at Amazon: https://amzn.to/32FYCqW
Rob and Chris wrap up their discussion of Plato, the Republic, and god. TLDR: It's lonely up here.
Brett Kavanaugh's hearing was on track... then off... then on... and now maybe off track again? We break down what's happening between Kavanaugh and Christine Blasey Ford, the woman who has accused him of sexual assault in the 80s. TLDR: It's a mess for everybody. Then, in our second segment, we talk about the midterm races shaping up to be real nail-biters. Will Texas go blue for Beto? Charlie's all "nah." Can Steve change his mind?
Hurray. The guys are back...in ancient China...again. As always, they're in a constant state of battle to stay happy. Then this rich kid's tutors die and he seeks revenge, only to become just another shitty king. The guys also discuss the logistics of the four-horse-pull-apart death and more Confucius bullshit. TLDR: It's the same goddamn time every WEEK!
Rob and Chris are back in Egypt learning about what it's like to live in a desert with no rain. They also talk about dental dams and how to have a proper crab fight. TLDR: It's sooooo dry.
This week the guys learn about some new cities they can't pronounce and the crazy war freaks that ruled them. The guys decide to write a children's book. The guy's ice swords break in battle and they sell their kids for a new 4K TV! TLDR: It gets political?
In this “bonus” episode, Rob and Chris take a break from learning and instead get distracted by their favorite topic of discussion: food. Learn everything you didn't want to know about wraps, fried chicken, burritos, chicken fingers and more on this very special episode of The Dunce Caps. TLDR: It's all hot dogs!
Synopsis: In this bonus episode, Alex, Aleen, and Tempest discuss the abysmal Jem and the Holograms live action movie. TLDR: It stinks. Duration: 50:08. Present: Alex, Aleen, Tempest. Episode Links […]
Rob and Chris learn about what early colonial life was like and what led to the French and Indian War. A moving sermon of Fireball and New England clam chowder leaves us quite full and sleepy. A young surveyor by the name of George Washington gets punk'd by the French, but will he have the last laugh? TLDR: It's a Colonial Life.