TLDR: It was Claude :-)

When I set out to compare ChatGPT, Claude, Gemini, Grok, and ChatPRD for writing Product Requirement Documents, I figured they'd all be roughly equivalent. Maybe some subtle variations in tone or structure, but nothing earth-shattering. They're all built on similar transformer architectures, trained on massive datasets, and marketed as capable of handling complex business writing.

What I discovered over 45 minutes of hands-on testing revealed not just which tools are better for PRD creation, but why they're better, and more importantly, how you should actually be using AI to accelerate your product work without sacrificing quality or strategic thinking.

If you're an early or mid-career PM in Silicon Valley, this matters to you. Because here's the uncomfortable truth: your peers are already using AI to write PRDs, analyze features, and generate documentation. The question isn't whether to use these tools. The question is whether you're using the right ones most effectively.

So let me walk you through exactly what I did, what I learned, and what you should do differently.

The Setup: A Real-World Test Case

Here's how I structured the experiment. As I said at the beginning of my recording, "We are back in the Fireside PM podcast and I did that review of the ChatGPT browser and people seemed to like it and then I asked, uh, in a poll, I think it was a LinkedIn poll maybe, what should my next PM product review be? And, people asked for ChatPRD."

So I had my marching orders from the audience. But I wanted to make this more comprehensive than just testing ChatPRD in isolation. I opened up five tabs: ChatGPT, Claude, Gemini, Grok, and ChatPRD.

For the test case, I chose something realistic and relevant: an AI-powered tutor for high school students. Think Khanmigo or similar edtech platforms.
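If you ever want to rerun this kind of comparison without juggling five browser tabs, the fan-out is easy to script. Here's a minimal sketch, not anything from the episode: the outline keys, prompt wording, and the `generate` callables are all stand-ins you'd replace with real SDK calls for each tool.

```python
# Sketch: fan one back-of-the-napkin outline out to several AI tools.
# "tools" maps a tool name to ANY callable that takes a prompt string
# and returns text -- the callables here are placeholders, not real APIs.

def build_prd_prompt(outline: dict) -> str:
    """Turn a napkin outline into a single PRD-expansion prompt."""
    sections = "\n".join(f"{key}: {value}" for key, value in outline.items())
    return (
        "Expand the following outline into a complete PRD. "
        "Keep it specific to the context given.\n\n" + sections
    )

def run_comparison(outline: dict, tools: dict) -> dict:
    """Send the same prompt to every tool; return {tool_name: output}."""
    prompt = build_prd_prompt(outline)
    return {name: generate(prompt) for name, generate in tools.items()}

if __name__ == "__main__":
    outline = {
        "Why": "Complement our edtech business with a personalized AI tutor",
        "Target user": "High school students in the middle 80%, science and math",
        "Problem": "Students use AI to find answers, not to understand concepts",
    }
    # Stub "tool" for illustration; wire in real API clients yourself.
    tools = {"stub": lambda p: p[:60] + "..."}
    for name, output in run_comparison(outline, tools).items():
        print(name, "->", output)
```

The point of the helper is the same as the manual experiment: one shared, structured prompt, many tools, side-by-side outputs.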
The AI tutor scenario gave me a concrete product situation that's complex enough to stress-test these tools but straightforward enough that I could iterate quickly.

But here's the critical part that too many PMs get wrong when they start using AI for product work: I didn't just throw a single sentence at these tools and expect magic.

The "Back of the Napkin" Approach: Why You Still Need to Think

"I presume everybody agrees that you should have some formulated thinking before you dump it into the chatbot for your PRD," I noted early in my experiment. "I suppose in the future maybe you could just do, like, a one-sentence prompt and come out with the perfect PRD because it would just know everything about you and your company in the context, but for now we're gonna do this more, a little old-school AI approach where we're gonna do some original human thinking."

This is crucial. I see so many PMs, especially those newer to the field, treat AI like a magic oracle. They type in "Write me a PRD for a social feature" and then wonder why the output is generic, unfocused, and useless.

Your job as a PM isn't to become obsolete. It's to become more effective. And that means doing the strategic thinking work that AI cannot do for you.

So I started in Google Docs with what I call a "back of the napkin" PRD structure. Here's what I included:

* Why: The strategic rationale. In this case: "Want to complement our existing edtech business with a personalized AI tutor, uh, want to maintain position industry, and grow through innovation. on mission for learners."
* Target User: Who are we building for? "High school students interested in improving their grades and fundamentals. Fundamental knowledge topics. Specifically science and math. Students who are not in the top ten percent, nor in the bottom ten percent."

This is key: I got specific. Not just "students," but students in the middle 80%. Not just "any subject," but science and math. This specificity is what separates useful AI output from garbage.

* Problem to Solve: What's broken? "Students want better grades. Students are impatient. Students currently use AI just for finding the answers and less to, uh, understand concepts and practice using them."
* Key Elements: The feature set and approach.
* Success Metrics: How we'd measure success.

Now, was this a perfectly polished PRD outline? Hell no. As you can see from my transcript, I was literally thinking out loud, making typos, restructuring on the fly. But that's exactly the point. I put in maybe 10-15 minutes of human strategic thinking. That's all it took to create a foundation that would dramatically improve what came out of the AI tools.

Round One: Generating the Full PRD

With my back-of-the-napkin outline ready, I copied it into each tool with a simple prompt asking them to expand it into a more complete PRD.

ChatGPT: The Reliable Generalist

ChatGPT gave me something that was... fine. Competent. Professional. But also deeply uninspiring.

The document it produced checked all the boxes. It had the sections you'd expect. The writing was clear. But when I read it, I couldn't shake the feeling that I was reading something that could have been written for literally any product in any company. It felt like "an average of everything out there," as I noted in my evaluation.

Here's what ChatGPT did well: it understood the basic structure of a PRD. It generated appropriate sections. The grammar and formatting were clean. If you needed to hand something in by EOD and had literally no time for refinement, ChatGPT would save you from complete embarrassment.

But here's what it lacked: depth. Nuance. Strategic thinking that felt connected to real product decisions. When it described the target user, it used phrases that could apply to any edtech product.
When it outlined success metrics, they were the obvious ones (engagement, retention, test scores) without any interesting thinking about leading indicators or proxy metrics.

The problem with generic output isn't that it's wrong; it's that it's invisible. When you're trying to get buy-in from leadership or alignment from engineering, you need your PRD to feel specific, considered, and connected to your company's actual strategy. ChatGPT's output felt like it was written by someone who'd read a lot of PRDs but never actually shipped a product.

One specific example: when I asked for success metrics, ChatGPT gave me "Student engagement rate, Time spent on platform, Test score improvement." These aren't wrong, but they're lazy. They don't show any thinking about what specifically matters for an AI tutor versus any other educational product. Compare that to Claude's output, which got more specific about things like "concept mastery rate" and "question-to-understanding ratio."

Actionable Insight: Use ChatGPT when you need fast, serviceable documentation that doesn't need to be exceptional. Think: internal updates, status reports, routine communications. Don't rely on it for strategic documents where differentiation matters. If you do use ChatGPT for important documents, treat its output as a starting point that needs significant human refinement to add strategic depth and company-specific context.

Gemini: Better Than Expected

Google's Gemini actually impressed me more than I anticipated. The structure was solid, and it had a nice balance of detail without being overwhelming.

What Gemini got right: the writing had a nice flow to it. The document felt organized and logical. It did a better job than ChatGPT at providing specific examples and thinking through edge cases. For instance, when describing the target user, it went beyond demographics to consider behavioral characteristics and motivations.

Gemini also showed some interesting strategic thinking. It considered competitive positioning more thoughtfully than ChatGPT and proposed some differentiation angles that weren't in my original outline. Good AI tools should add insight, not just regurgitate your input with better formatting.

But here's where it fell short: the visual elements. When I asked for mockups, Gemini produced images that looked more like stock photos than actual product designs. They weren't terrible, but they weren't compelling either. They had that AI-generated sheen that makes it obvious they came from an image model rather than a designer's brain.

For a PRD that you're going to use internally with a team that already understands the context, Gemini's output would work well. The text quality is strong enough, and if you're in the Google ecosystem (Docs, Sheets, Meet, etc.), the integration is seamless. You can paste Gemini's output directly into Google Docs and continue iterating there.

But if you need to create something compelling enough to win over skeptics or secure budget, Gemini falls just short. It's good, but not great. It's the solid B+ student: reliably competent but rarely exceptional.

Actionable Insight: Gemini is a strong choice if you're working in the Google ecosystem and need good integration with Docs, Sheets, and other Google Workspace tools. The quality is sufficient for most internal documentation needs. It's particularly good if you're working with cross-functional partners who are already in Google Workspace: you can share and collaborate on AI-generated drafts without friction. But don't expect visual mockups that will wow anyone, and plan to add your own strategic polish for high-stakes documents.

Grok: Not Ready for Prime Time

Let's just say my expectations were low, and Grok still managed to underdeliver. The PRD felt thin, generic, and lacked the depth you need for real product work.

"I don't have high expectations for grok, unfortunately," I said before testing it. Spoiler alert: my low expectations were validated.

Actionable Insight: Skip Grok for product documentation work right now. Maybe it'll improve, but as of my testing, it's simply not competitive with the other options. It felt like it was 1-2 years behind the others.

ChatPRD: The Specialized Tool

Now this was interesting. ChatPRD is purpose-built for PRDs, using foundation models underneath but with specific tuning and structure for product documentation.

The result? The structure was logical, the depth was appropriate, and it included elements that showed understanding of what actually matters in a PRD. As I reflected: "Cause this one feels like, A human wrote this PRD."

The interface guides you through the process more deliberately than just dumping text into a general chat interface. It asks clarifying questions. It structures the output more thoughtfully.

Actionable Insight: If you're a technical lead without a dedicated PM, or you're a PM who wants a more structured approach to using AI for PRDs, ChatPRD is worth the specialized focus. It's particularly good when you need something that feels authentic enough to share with stakeholders without heavy editing.

Claude: The Clear Winner

But the standout performer (yes, I'm ranking these) was Claude.

"I think we know that for now, I'm gonna say Claude did the best job," I concluded after all the testing. Claude produced the most comprehensive, thoughtful, and strategically sound PRD. But what really set it apart were the concept mocks.

When I asked each tool to generate visual mockups of the product, Claude produced HTML prototypes that, while not fully functional, looked genuinely compelling.
They had thoughtful UI design, clear information architecture, and felt like something that could actually guide development.

"They were, like, closer to, like, what a Lovable would produce or something like that," I noted, referring to the quality of low-fidelity prototypes that good designers create.

The text quality was also superior: more nuanced, better structured, and with more strategic depth. It felt like Claude understood not just what a PRD should contain, but why it should contain those elements.

Actionable Insight: For any PRD that matters, meaning anything you'll share with leadership, use to get buy-in, or guide actual product development, you might as well start with Claude. The quality difference is significant enough that it's worth using Claude even if you primarily use another tool for other tasks.

Final Rankings: The Definitive Hierarchy

After testing all five tools on multiple dimensions (initial PRD generation, visual mockups, and even crafting a pitch paragraph for a skeptical VP of Engineering), here's my final ranking:

* Claude - Best overall quality, most compelling mockups, strongest strategic thinking
* ChatPRD - Best for structured PRD creation, feels most "human"
* Gemini - Solid all-around performance, good Google integration
* ChatGPT - Reliable but generic, lacks differentiation
* Grok - Not competitive for this use case

"I'd probably say Claude, then chat PRD, then Gemini, then chat GPT, and then Grock," I concluded.

The Deeper Lesson: Garbage In, Garbage Out (Still Applies)

But here's what matters more than which tool wins: the realization that hit me partway through this experiment.

"I think it really does come down to, like, you know, the quality of the prompt," I observed. "So if our prompt were a little more detailed, all that were more thought-through, then I'm sure the output would have been better. But as you can see we didn't really put in brain trust prompting here. Just a little bit of, kind of hand-wavy prompting, but a little better than just one or two sentences."

And we still got pretty good results.

This is the meta-insight that should change how you approach AI tools in your product work: the quality of your input determines the quality of your output, but the baseline quality of the tool determines the ceiling of what's possible.

No amount of great prompting will make Grok produce Claude-level output. But even mediocre prompting with Claude will beat great prompting with lesser tools.

So the dual strategy is:

* Use the best tool available (currently Claude for PRDs)
* Invest in improving your prompting skills, ideally grounding your prompts in as much original, insightful, company-aware, and context-aware human thinking as possible

Real-World Workflows: How to Actually Use This in Your Day-to-Day PM Work

Theory is great. Here's how to incorporate these insights into your actual product management workflows.

The Weekly Sprint Planning Workflow

Every PM I know spends hours each week preparing for sprint planning. You need to refine user stories, clarify acceptance criteria, anticipate engineering questions, and align with design and data science. AI can compress this work significantly.

Here's an example workflow:

Monday morning (30 minutes):

* Review upcoming priorities and open your rough notes/outline in Google Docs
* Open Claude and paste your outline with this prompt: "I'm preparing for sprint planning. Based on these priorities [paste notes], generate detailed user stories with acceptance criteria. Format each as: User story, Business context, Technical considerations, Acceptance criteria, Dependencies, Open questions."

Monday afternoon (20 minutes):

* Review Claude's output critically
* Identify gaps, unclear requirements, or missing context
* Follow up with targeted prompts: "The user story about authentication is too vague. Break it down into separate stories for: social login, email/password, session management, and password reset. For each, specify security requirements and edge cases."

Tuesday morning (15 minutes):

* Generate mockups for any UI-heavy stories: "Create an HTML mockup for the login flow showing: landing page, social login options, email/password form, error states, and success redirect."
* Even if the HTML doesn't work perfectly, it gives your designers a starting point

Before sprint planning (10 minutes):

* Ask Claude to anticipate engineering questions: "Review these user stories as if you're a senior engineer. What questions would you ask? What concerns would you raise about technical feasibility, dependencies, or edge cases?"
* This preparation makes you look thoughtful and helps the meeting run smoothly

Total time investment: ~75 minutes. Typical time saved: 3-4 hours compared to doing this manually.

The Stakeholder Alignment Workflow

Getting alignment from multiple stakeholders (product leadership, engineering, design, data science, legal, marketing) is one of the hardest parts of PM work. AI can help you think through different stakeholder perspectives and craft compelling communications for each.

Here's how:

Step 1: Map your stakeholders (10 minutes)

Create a quick table in a doc:

Stakeholder | Primary Concern | Decision Criteria | Likely Objections
VP Product | Strategic fit, ROI | Company OKRs, market opportunity | Resource allocation vs other priorities
VP Eng | Technical risk, capacity | Engineering capacity, tech debt | Complexity, unclear requirements
Design Lead | User experience | User research, design principles | Timeline doesn't allow proper design process
Legal | Compliance, risk | Regulatory requirements | Data privacy, user consent flows

Step 2: Generate stakeholder-specific communications (20 minutes)

For each key stakeholder, ask Claude: "I need to pitch this product idea to [Stakeholder]. Based on this PRD, create a 1-page brief addressing their primary concern of [concern from your table].
Open with the specific value for them, address their likely objection of [objection], and close with a clear ask. Tone should be [professional/technical/strategic] based on their role."

Then you'll have customized one-pagers for your pre-meetings with each stakeholder, dramatically increasing your alignment rate.

Step 3: Synthesize feedback (15 minutes)

After gathering stakeholder input, ask Claude to help you synthesize: "I got the following feedback from stakeholders: [paste feedback]. Identify: (1) Common themes, (2) Conflicting requirements, (3) Legitimate concerns vs organizational politics, (4) Recommended compromises that might satisfy multiple parties."

This pattern-matching across stakeholder feedback is something AI does really well and saves you hours of mental processing.

The Quarterly Planning Workflow

Quarterly or annual planning is where product strategy gets real. You need to synthesize market trends, customer feedback, technical capabilities, and business objectives into a coherent roadmap. AI can accelerate this dramatically.

Six weeks before planning:

* Start collecting input (customer interviews, market research, competitive analysis, engineering feedback)
* Don't wait until the last minute

Four weeks before planning:

Dump everything into Claude with this structure: "I'm creating our Q2 roadmap. Context:

* Business objectives: [paste from leadership]
* Customer feedback themes: [paste synthesis]
* Technical capabilities/constraints: [paste from engineering]
* Competitive landscape: [paste analysis]
* Current product gaps: [paste from your analysis]

Generate 5 strategic themes that could anchor our Q2 roadmap. For each theme:

* Strategic rationale (how it connects to business objectives)
* Key initiatives (2-3 major features/projects)
* Success metrics
* Resource requirements (rough estimate)
* Risks and mitigations
* Customer segments addressed"

This gives you a strategic framework to react to rather than starting from a blank page.

Three weeks before planning:

Iterate on the most promising themes: "Deep dive on Theme 3. Generate:

* Detailed initiative breakdown
* Dependencies on platform/infrastructure
* Phasing options (MVP vs full build)
* Go-to-market considerations
* Data requirements
* Open questions requiring research"

Two weeks before planning:

Pressure-test your thinking: "Play devil's advocate on this roadmap. What are the strongest arguments against each initiative? What am I likely missing? What failure modes should I plan for?"

This adversarial prompting forces you to strengthen weak points before your leadership reviews it.

One week before planning:

Generate your presentation: "Create an executive presentation for this roadmap. Structure: (1) Market context and strategic imperative, (2) Q2 themes and initiatives, (3) Expected outcomes and metrics, (4) Resource requirements, (5) Key risks and mitigations, (6) Success criteria for decision. Make it compelling but data-driven. Tone: confident but not overselling."

Then add your company-specific context, visual brand, and personal voice.

The Customer Research Workflow

AI can't replace talking to customers, but it can help you prepare better questions, analyze feedback more systematically, and identify patterns faster.

Before customer interviews: "I'm interviewing customers about [topic]. Generate:

* 10 open-ended questions that avoid leading the witness
* 5 follow-up questions for each main question
* Common cognitive biases I should watch for
* A framework for categorizing responses"

This prep work helps you conduct better interviews.

After interviews: "I conducted 15 customer interviews. Here are the key quotes: [paste anonymized quotes]. Identify:

* Recurring themes and patterns
* Surprising insights that contradict our assumptions
* Segments with different needs
* Implied needs customers didn't articulate directly
* Recommended next steps for validation"

AI is excellent at pattern-matching across qualitative data at scale.

The Crisis Management Workflow

Something broke. The site is down. Data was lost. A feature shipped with a critical bug. You need to move fast.

Immediate response (5 minutes): "Critical incident. Details: [brief description]. Generate:

* Incident classification (Sev 1-4)
* Immediate stakeholders to notify
* Draft customer communication (honest, apologetic, specific about what happened and what we're doing)
* Draft internal communication for leadership
* Key questions to ask engineering during investigation"

Having these drafted in 5 minutes lets you focus on coordination and decision-making rather than wordsmithing.

Post-incident (30 minutes): "Write a post-mortem based on this incident timeline: [paste timeline]. Include:

* What happened (technical details)
* Root cause analysis
* Impact quantification (users affected, revenue impact, time to resolution)
* What went well in our response
* What could have been better
* Specific action items with owners and deadlines
* Process changes to prevent recurrence

Tone: Blameless, focused on learning and improvement."

This gives you a strong first draft to refine with your team.

Common Pitfalls: What Not to Do with AI in Product Management

Now let's talk about the mistakes I see PMs making with AI tools.

Pitfall #1: Treating AI Output as Final

The biggest mistake is copy-pasting AI output directly into your PRD, roadmap presentation, or stakeholder email without critical review.

The result? Documents that are grammatically perfect but strategically shallow. Presentations that sound impressive but don't hold up under questioning.
Emails that are professionally worded but miss the subtext of organizational politics.

The fix: Always ask yourself:

* Does this reflect my actual strategic thinking, or generic best practices?
* Would my CEO/engineering lead/biggest customer find this compelling and specific?
* Are there company-specific details, customer insights, or technical constraints that only I know?
* Does this sound like me, or like a robot?

Add those elements. That's where your value as a PM comes through.

Pitfall #2: Using AI as a Crutch Instead of a Tool

Some PMs use AI because they don't want to think deeply about the product. They're looking for AI to do the hard work of strategy, prioritization, and trade-off analysis.

This never works. AI can help you think more systematically, but it can't replace thinking.

If you find yourself using AI to avoid wrestling with hard questions ("Should we build X or Y?" "What's our actual competitive advantage?" "Why would customers switch from the incumbent?"), you're using it wrong.

The fix: Use AI to explore options, not to make decisions. Generate three alternatives, pressure-test each one, then use your judgment to decide. The AI can help you think through implications, but you're still the one choosing.

Pitfall #3: Not Iterating

Getting mediocre AI output and just accepting it is a waste of the technology's potential.

The PMs who get exceptional results from AI are the ones who iterate. They generate an initial response, identify what's weak or missing, and ask follow-up questions. They might go through 5-10 iterations on a key section of a PRD.

Each iteration is quick (30 seconds to type a follow-up prompt, 30 seconds to read the response), but the cumulative effect is dramatically better output.

The fix: Budget time for iteration. Don't try to generate a complete, polished PRD in one prompt. Instead, generate a rough draft, then spend 30 minutes iterating on the specific sections that matter most.

Pitfall #4: Ignoring the Political and Human Context

AI tools have no understanding of organizational politics, interpersonal relationships, or the specific humans you're working with.

They don't know that your VP of Engineering is burned out and skeptical of any new initiatives. They don't know that your CEO has a personal obsession with a specific competitor. They don't know that your lead designer is sensitive about not being included early enough in the process.

If you use AI-generated communications without layering in this human context, you'll create perfectly worded documents that land badly because they miss the subtext.

The fix: After generating AI content, explicitly ask yourself: "What human context am I missing? What relationships do I need to consider? What political dynamics are in play?" Then modify the AI output accordingly.

Pitfall #5: Over-Relying on a Single Tool

Different AI tools have different strengths. Claude is great for strategic depth, ChatPRD is great for structure, Gemini integrates well with Google Workspace.

If you only ever use one tool, you're missing opportunities to leverage different strengths for different tasks.

The fix: Keep 2-3 tools in your toolkit. Use Claude for important PRDs and strategic documents. Use Gemini for quick internal documentation that needs to integrate with Google Docs. Use ChatPRD when you want more guided structure. Match the tool to the task.

Pitfall #6: Not Fact-Checking AI Output

AI tools hallucinate. They make up statistics, misrepresent competitors, and confidently state things that aren't true.
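Since fabricated statistics are the most common failure, even a crude scan for numeric claims in a draft can tell you where to point your skepticism. A minimal sketch, and only a heuristic of my own, not a real fact-checker: it flags sentences containing numbers for manual verification, nothing more.

```python
import re

# Sketch: flag sentences in AI-generated text that contain numeric or
# percentage claims, so a human remembers to verify them before the
# draft goes to leadership. Purely a "where to look" heuristic.

CLAIM_PATTERN = re.compile(r"\d[\d,.]*\s*(%|percent|million|billion|x)?", re.I)

def flag_numeric_claims(text: str) -> list[str]:
    """Return sentences containing numbers that deserve a manual check."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if CLAIM_PATTERN.search(s)]

if __name__ == "__main__":
    draft = (
        "The tutoring market is growing fast. Competitors raised $120 million "
        "last year. Students love personalized learning. Retention improved 34%."
    )
    for claim in flag_numeric_claims(draft):
        print("VERIFY:", claim)
```

A scan like this catches the "$120 million" and "34%" style claims; qualitative hallucinations (competitor features, regulatory requirements) still need a human read.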
If you include those hallucinations in a PRD that goes to leadership, you look incompetent.

The fix: Fact-check everything, especially:

* Statistics and market data
* Competitive feature claims
* Technical capabilities and limitations
* Regulatory and compliance requirements

If the AI cites a number or makes a factual claim, verify it independently before including it in your document.

The Meta-Skill: Prompt Engineering for PMs

Let's zoom out and talk about the underlying skill that makes all of this work: prompt engineering.

This is a real skill. The difference between a mediocre prompt and a great prompt can be a 10x difference in output quality. And unlike coding or design, where there's a steep learning curve, prompt engineering is something you can get good at quickly.

Principle 1: Provide Context Before Instructions

Bad prompt: "Write a PRD for an AI tutor"

Good prompt: "I'm a PM at an edtech company with 2M users, primarily high school students. We're exploring an AI tutor feature to complement our existing video content library and practice problems. Our main competitors are Khan Academy and Course Hero. Our differentiation is personalized learning paths based on student performance data. Write a PRD for an AI tutor feature targeting students in the middle 80% academically who struggle with science and math."

The second prompt gives Claude the context it needs to generate something specific and strategic rather than generic.

Principle 2: Specify Format and Constraints

Bad prompt: "Generate success metrics"

Good prompt: "Generate 5-7 success metrics for this feature.
Include a mix of:

* Leading indicators (early signals of success)
* Lagging indicators (definitive success measures)
* User behavior metrics
* Business impact metrics

For each metric, specify: name, definition, target value, measurement method, and why it matters."

The structure you provide shapes the structure you get back.

Principle 3: Ask for Multiple Options

Bad prompt: "What should our Q2 priorities be?"

Good prompt: "Generate 3 different strategic approaches for Q2:

* Option A: Focus on user acquisition
* Option B: Focus on engagement and retention
* Option C: Focus on monetization

For each option, detail: key initiatives, expected outcomes, resource requirements, risks, and recommendation for or against."

Asking for multiple options forces the AI (and forces you) to think through trade-offs systematically.

Principle 4: Specify Audience and Tone

Bad prompt: "Summarize this PRD"

Good prompt: "Create a 1-paragraph summary of this PRD for our skeptical VP of Engineering. Tone: Technical, concise, addresses engineering concerns upfront. Focus on: technical architecture, resource requirements, risks, and expected engineering effort. Avoid marketing language."

The audience and tone specification ensures the output will actually work for your intended use.

Principle 5: Use Iterative Refinement

Don't try to get perfect output in one prompt. Instead:

* First prompt: Generate a rough draft
* Second prompt: "This is too generic. Add specific examples from [our company context]."
* Third prompt: "The technical section is weak. Expand with architecture details and dependencies."
* Fourth prompt: "Good. Now make it 30% more concise while keeping the key details."

Each iteration improves the output incrementally.

Let me break down the prompting approach that worked in this experiment, because this is immediately actionable for your work tomorrow.

Strategy 1: The Structured Outline Approach

Don't go from zero to full PRD in one prompt. Instead:

* Start with strategic thinking - Spend 10-15 minutes outlining why you're building this, who it's for, and what problem it solves
* Get specific - Don't say "users," say "high school students in the middle 80% of academic performance"
* Include constraints - Budget, timeline, technical limitations, competitive landscape
* Dump your outline into the AI - Now ask it to expand into a full PRD
* Iterate section by section - Don't try to perfect everything at once

This is exactly what I did in my experiment, and even with my somewhat sloppy outline, the results were dramatically better than they would have been with a single-sentence prompt.

Strategy 2: The Comparative Analysis Pattern

One technique I used that worked particularly well: asking each tool to do the same specific task and comparing results.

For example, I asked all five tools: "Please compose a one paragraph exact summary I can share over DM with a highly influential VP of engineering who is generally a skeptic but super smart."

This forced each tool to synthesize the entire PRD into a compelling pitch while accounting for a specific, challenging audience. The variation in quality was revealing, and it gave me multiple options to choose from or blend together.

Actionable tip: When you need something critical (a pitch, an executive summary, a key decision framework), generate it with 2-3 different AI tools and take the best elements from each. This "ensemble approach" often produces better results than any single tool.

Strategy 3: The Iterative Refinement Loop

Don't treat the AI output as final.
Use it as a first draft that you then refine through conversation with the AI.After getting the initial PRD, I could have asked follow-up questions like:* “What's missing from this PRD?”* “How would you strengthen the success metrics section?”* “Generate 3 alternative approaches to the core feature set”Each iteration improves the output and, more importantly, forces me to think more deeply about the product.What This Means for Your CareerIf you're an early or mid-career PM reading this, you might be thinking: “Great, so AI can write PRDs now. Am I becoming obsolete?”Absolutely not. But your role is evolving, and understanding that evolution is critical.The PMs who will thrive in the AI era are those who:* Excel at strategic thinking - AI can generate options, but you need to know which options align with company strategy, customer needs, and technical feasibility* Master the art of prompting - This is a genuine skill that separates mediocre AI users from exceptional ones* Know when to use AI and when not to - Some aspects of product work benefit enormously from AI. Others (user interviews, stakeholder negotiation, cross-functional relationship building) require human judgment and empathy* Can evaluate AI output critically - You need to spot the hallucinations, the generic fluff, and the strategic misalignments that AI inevitably producesThink of AI tools as incredibly capable interns. They can produce impressive work quickly, but they need direction, oversight, and strategic guidance. Your job is to provide that guidance while leveraging their speed and breadth.The Real-World Application: What to Do Monday MorningLet's get tactical. 
Here's exactly how to apply these insights to your actual product work:

For Your Next PRD:

* Block 30 minutes for strategic thinking - Write your back-of-the-napkin outline in Google Docs or your tool of choice
* Open Claude (or ChatPRD if you want more structure)
* Copy your outline with this prompt: “I'm a product manager at [company] working on [product area]. I need to create a comprehensive PRD based on this outline. Please expand this into a complete PRD with the following sections: [list your preferred sections]. Make it detailed enough for engineering to start breaking down into user stories, but concise enough for leadership to read in 15 minutes. [Paste your outline]”
* Review the output critically - Look for generic statements, missing details, or strategic misalignments
* Iterate on specific sections: “The success metrics section is too vague. Please provide 3-5 specific, measurable KPIs with target values and explanation of why these metrics matter.”
* Generate supporting materials: “Create a visual mockup of the core user flow showing the key interaction points.”
* Synthesize the best elements - Don't just copy-paste the AI output. Use it as raw material that you shape into your final document

For Stakeholder Communication:

When you need to pitch something to leadership or engineering:

* Generate 3 versions of your pitch using different tools (Claude, ChatPRD, and one other)
* Compare them for:
  * Clarity and conciseness
  * Strategic framing
  * Compelling value proposition
  * Addressing likely objections
* Blend the best elements into your final version
* Add your personal voice - This is crucial. AI output often lacks personality and specific company context. Add that yourself.

For Feature Prioritization:

AI tools can help you think through trade-offs more systematically:

“I'm deciding between three features for our next release: [Feature A], [Feature B], and [Feature C]. 
For each feature, analyze: (1) Estimated engineering effort, (2) Expected user impact, (3) Strategic alignment with making our platform the go-to solution for [your market], (4) Risk factors. Then recommend a prioritization with rationale.”

This doesn't replace your judgment, but it forces you to think through each dimension systematically and often surfaces considerations you hadn't thought of.

The Uncomfortable Truth About AI and Product Management

Let me be direct about something that makes many PMs uncomfortable: AI will make some PM skills less valuable while making others more valuable.

Less valuable:

* Writing boilerplate documentation
* Creating standard frameworks and templates
* Generating routine status updates
* Synthesizing information from existing sources

More valuable:

* Strategic product vision and roadmapping
* Deep customer empathy and insight generation
* Cross-functional leadership and influence
* Critical evaluation of options and trade-offs
* Creative problem-solving for novel situations

If your PM role primarily involves the first category of tasks, you should be concerned. But if you're focused on the second category while leveraging AI for the first, you're going to be exponentially more effective than your peers who resist these tools.

The PMs I see succeeding aren't those who can write the best PRD manually. They're those who can write the best PRD with AI assistance in one-tenth the time, then use the saved time to talk to more customers, think more deeply about strategy, and build stronger cross-functional relationships.

Advanced Techniques: Beyond Basic PRD Generation

Once you've mastered the basics, here are some advanced applications I've found valuable:

Competitive Analysis at Scale

“Research our top 5 competitors in [market]. For each one, analyze: their core value proposition, key features, pricing strategy, target customer, and likely product roadmap based on recent releases and job postings. 
Create a comparison matrix showing where we have advantages and gaps.”

Then use web search tools in Claude or Perplexity to fact-check and expand the analysis.

Scenario Planning

“We're considering three strategic directions for our product: [Direction A], [Direction B], [Direction C]. For each direction, map out: likely customer adoption curve, required technical investments, competitive positioning in 12 months, and potential pivots if the hypothesis proves wrong. Then identify the highest-risk assumptions we should test first for each direction.”

This kind of structured scenario thinking is exactly what AI excels at: generating multiple well-reasoned perspectives quickly.

User Story Generation

After your PRD is solid:

“Based on this PRD, generate a complete set of user stories following the format ‘As a [user type], I want to [action] so that [benefit].' Include acceptance criteria for each story. Organize them into epics by functional area.”

This can save your engineering team hours of grooming meetings.

The Tools Will Keep Evolving. Your Process Shouldn't

Here's something important to remember: by the time you read this, the specific rankings might have shifted. Maybe ChatGPT-5 has leapfrogged Claude. 
Maybe a new specialized tool has emerged.

But the core principles won't change:

* Do strategic thinking before touching AI
* Use the best tool available for your specific task
* Iterate and refine rather than accepting first outputs
* Blend AI capabilities with human judgment
* Focus your time on the uniquely human aspects of product management

The specific tools matter less than your process for using them effectively.

A Final Experiment: The Skeptical VP Test

I want to share one more insight from my testing that I think is particularly relevant for early and mid-career PMs.

Toward the end of my experiment, I gave each tool this prompt: “Please compose a one paragraph exact summary I can share over DM with a highly influential VP of engineering who is generally a skeptic but super smart.”

This is such a realistic scenario. How many times have you needed to pitch an idea to a skeptical technical leader via Slack or email? Someone who's brilliant, who's seen a thousand product ideas fail, and who can spot b******t from a mile away?

The quality variation in the responses was fascinating. ChatGPT gave me something that felt generic and safe. Gemini was better but still a bit too enthusiastic. Grok was... well, Grok.

But Claude and ChatPRD both produced messages that felt authentic, technically credible, and appropriately confident without overselling. They acknowledged the engineering challenges while framing the opportunity compellingly.

The lesson: When the stakes are high and the audience is sophisticated, the quality of your AI tool matters even more. That skeptical VP can tell the difference between a carefully crafted message and AI-generated fluff. So can your CEO. 
So can your biggest customers.

Use the best tools available, but more importantly, always add your own strategic thinking and authentic voice on top.

Questions to Consider: A Framework for Your Own Experiments

As I wrapped up my Loom, I posed some questions to the audience that I'll pose to you:

“Let me know in the comments, if you do your PRDs using AI differently, do you start with back of the envelope? Do you say, oh no, I just start with one sentence, and then I let the chatbot refine it with me? Or do you go way more detailed and then use the chatbot to kind of pressure test it?”

These aren't rhetorical questions. Your answer reveals your approach to AI-augmented product work, and different approaches work for different people and contexts.

For early-career PMs: I'd recommend starting with more detailed outlines. The discipline of thinking through your product strategy before touching AI will make you a stronger PM. You can always compress that process later as you get more experienced.

For mid-career PMs: Experiment with different approaches for different types of documents. Maybe you do detailed outlines for major feature PRDs but use more iterative AI-assisted refinement for smaller features or updates. Find what optimizes your personal productivity while maintaining quality.

For senior PMs and product leaders: Consider how AI changes what you should expect from your PM team. Should you be reviewing more AI-generated first drafts and spending more time on strategic guidance? Should you be training your team on effective AI usage? These are leadership questions worth grappling with.

The Path Forward: Continuous Experimentation

My experiment with these five AI tools took 45 minutes. But I'm not done experimenting.

The field of AI-assisted product management is evolving rapidly. New tools launch monthly. Existing tools get smarter weekly. 
Prompting techniques that work today might be obsolete in three months.

Your job, if you want to stay at the forefront of product management, is to continuously experiment. Try new tools. Share what works with your peers. Build a personal knowledge base of effective prompts and workflows. And be generous with what you learn. The PM community gets stronger when we share insights rather than hoarding them.

That's why I created this Loom and why I'm writing this post. Not because I have all the answers, but because I'm figuring it out in real-time and want to share the journey.

A Personal Note on Coaching and Consulting

If this kind of practical advice resonates with you, I'm happy to work with you directly.

Through my PM coaching practice, I offer 1:1 executive, career, and product coaching for PMs and product leaders. We can dig into your specific challenges, whether that's leveling up your AI workflows, navigating a career transition, or developing your strategic product thinking.

I also work with companies (usually startups or incubation teams) on product strategy, helping teams figure out PMF for new explorations and improving their product management function.

The format is flexible. Some clients want ongoing coaching, others prefer project-based consulting, and some just want a strategic sounding board for a specific decision. Whatever works for you.

Reach out through tomleungcoaching.com if you're interested in working together.

OK. Enough pontificating. Let's ship greatness. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit firesidepm.substack.com
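One practical way to make the PRD prompt from the workflow above reusable is to parameterize it once and fill it in per product. Here's a minimal Python sketch; the company name, product area, sections, and outline text are purely illustrative assumptions, not from my experiment:

```python
# A reusable version of the PRD prompt template described above.
# All example values (company, sections, outline) are hypothetical.

PRD_PROMPT = (
    "I'm a product manager at {company} working on {product_area}. "
    "I need to create a comprehensive PRD based on this outline. "
    "Please expand this into a complete PRD with the following sections: "
    "{sections}. Make it detailed enough for engineering to start breaking "
    "down into user stories, but concise enough for leadership to read in "
    "15 minutes.\n\n{outline}"
)

def build_prd_prompt(company: str, product_area: str,
                     sections: list[str], outline: str) -> str:
    """Fill the template with your strategic outline before pasting it into a chat tool."""
    return PRD_PROMPT.format(
        company=company,
        product_area=product_area,
        sections=", ".join(sections),
        outline=outline,
    )

# Example usage with illustrative values:
prompt = build_prd_prompt(
    company="Acme EdTech",
    product_area="AI tutoring",
    sections=["Problem", "Goals", "Success Metrics", "Scope", "Risks"],
    outline="AI tutor for high school students in the middle 80% of academic performance...",
)
print(prompt)
```

The point isn't the code itself; it's that the strategic thinking (the outline) stays yours, and only the boilerplate framing is templated, so you can paste the same structure into Claude, ChatPRD, or any other tool for the ensemble comparison.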
What separates great franchise founders from the ones who hold brands back? In this unfiltered episode, Erik Van Horn and Bobby Brennan break down:

- The traits elite founders all share
- The biggest red flags Erik sees behind the scenes
- Why "franchisee-first" founders always win
- How to succeed even if your brand doesn't have a strong founder
- The mindset shift founders MUST embrace ("It's not my fault, but it's my problem")
- Why honesty, transparency, and problem-solving beat charisma and hype every time
- How the best founders build trust with franchisees
- What happens when ego gets in the way of growth

Whether you're a franchisee evaluating leadership, a founder trying to improve, or part of a growing system, this episode is PACKED with insider truths you won't hear anywhere else.

Timestamps:
00:00 – People are people: why founders and franchisees both make mistakes
00:47 – Why this episode exists: what Bobby has learned from 20+ founders
02:15 – "Great founder vs. disaster founder" — why it matters to franchisees
03:51 – Early red flags when evaluating a founder
04:17 – The power of a clear vision + the "bridge" analogy
05:16 – The problem with visionary founders hiring clones
05:44 – Why founders need charisma AND scars
06:37 – Hard times create seasoned leaders
07:33 – Behind closed doors: the questions Front Street gets to ask
09:49 – Franchisee-focused founders vs. bottom-line founders
11:37 – The integrity test: Item 7, Item 19, and honesty
13:25 – Ego, feedback, and what founders must get right
16:07 – The rough-around-the-edges founder who became a rocket ship
18:21 – Why selling "the hard truth" builds stronger franchisees
19:50 – The problem-solving mindset: "It's not my fault, but it's my problem"
21:40 – What GREAT founders do every day
25:30 – Who gets credit for success? 
(Spoiler: not the franchisor)
27:05 – How the best founders deal with unfair criticism
30:46 – How founders build trust
34:47 – Trust killers: legal letters, pointing to the FA, hiding problems
36:48 – Do the best founders invest in coaching and mastermind groups?
41:53 – How two top founders grew in totally different ways
45:49 – Why franchise brands must lean into AI NOW
46:37 – The final takeaway: quantify the leader YOU want to follow

Connect with Erik Van Horn:
The U.S. government shutdown has officially started. What does this mean for the economy, markets, and your investments? Lance Roberts covers the immediate fallout from the shutdown on federal spending, workers, and services; how markets have historically reacted during shutdowns, and what to expect this time; and the risk to GDP, delayed economic data releases, and consumer confidence. This shutdown is more than politics: it's a real test for the economy and financial markets. Stay informed and prepared. [*NOTE: YouTube sustained severe streaming issues during this morning's live feed. This is a separate recording of the show.]
Looking for daily inspiration? Get a quote from the top leaders in the industry in your inbox every morning.

What's the one premier event that brings the global attractions industry together? IAAPA Expo 2025, happening in Orlando, Florida, from November 17th through 21st. From breakthrough technology to world-class networking and immersive education, IAAPA Expo 2025 is where you find possible. And, just for our audience, you'll save $10 when you register at IAAPA.org/IAAPAExpo and use promo code EXPOAPROSTEN. Don't miss it — we won't!

Running a modern trampoline or adventure park isn't as simple as “put trampolines in a warehouse and open the doors” anymore. Operators juggle guest expectations, evolving tech stacks, labor realities, and the need to turn first-time visitors into loyal fans. In this conversation, Matt and Josh surface practical solutions with a live panel—Phillip Howell (Best American Trampolines), Greg Spittle (ROLLER), and Brandon Willey (Intelliplay)—covering design, data, kiosks vs. people, post-visit marketing, gamification, and AI. In this episode, Phillip, Greg, and Brandon share how the trampoline park model has matured and what tech-enabled moves will define the next five years.

From Warehouses to Polished, Parent-Friendly Parks

“We were going into warehouses… 10 to 15,000 square feet of actual trampolines… no party rooms, no decoration on the wall.”

Early parks were bare-bones. Today, Phillip emphasizes warm, inviting environments: clean sightlines, framed netting, wrinkle-free pads, murals, and real seating and TVs for parents. The aesthetic isn't vanity - it sets the perceived cleanliness and quality bar the moment guests walk in.

Match Online Promises with Onsite Reality

“That upfront experience needs to match the experience when I walk through the door.”

Brandon flags a common miss: aspirational websites and social feeds that don't reflect the actual facility. Greg adds that outdated online checkout flows lose guests before they arrive. 
Align visuals and copy with the real experience, and make the digital path to purchase smooth.

Before–During–After: Design the Whole Journey

“There's a bit of technology in every piece of that journey.”

Before the visit: modern web and frictionless online booking. During the visit: clear wayfinding, staffed self-service kiosks (never kiosks alone), and trained team members who intercept stress and upsell thoughtfully. After the visit: structured follow-ups—survey, intercept negative feedback before it hits Google, and segmented re-engagement.

Kiosks Need Humans

“You can't just leave the kiosks out there and expect success.”

Automation works best with people in the loop. The winning model pairs one well-trained team member with multiple kiosks to guide choices, protect the experience, and enable upsells… without leaving a 16-year-old “on an island.”

Own the Post-Visit Moment (and the Data)

“Trampoline parks have a massive advantage. You have mandatory waivers… it's marketing data.”

Use waivers to power segmentation: birthday clubs (30–45 days out), membership offers, and interest-based campaigns. Greg notes birthday bookings often happen ~3 weeks in advance, so time your messages. Automate when possible, but always deliver genuine value in every send.

Wearables & Gamification Drive Repeat Visits

“After the bands were in place, repeat visitation went up to 78%.”

Intelliplay's wristbands track activity, show session status (green to red), reduce PA “time's up” moments, and fuel leaderboards. With demographic data and in-park behavior, operators can create attraction-specific events (e.g., dodgeball nights) and reward systems that keep families coming back.

Clean Lines = Clean Minds

“You see a wrinkled pad and it looks dirty.”

Optics shape reviews. Details like pad tension, framed netting, and tidy sightlines communicate safety and care, and prevent “dirty” perceptions that damage ratings even when facilities are spotless. 
AI Now & Next: Practical, Not Hype

“AI is still in its infancy… but options matter.”

Today: load SOPs into a private assistant for staff training and guest FAQs; use AI for campaign ideation and drafting. Tomorrow: agentic AI will act on your data, building and running segmented campaigns, surfacing decisions from noise, and personalizing in-park and post-visit experiences. Humans stay central; AI reduces drudgery.

Operator Priorities That Don't Change

“What's driving my revenue, costs, and guest experience?”

Greg's three pillars: revenue engines (birthday parties remain foundational; memberships rising), costs (especially labor forecasting by day/week/season), and guest experience (measure, intercept, and improve). Brandon adds: audit your attraction mix and secret shop your own venue regularly, end to end.

The Park of the Near Future

“Immersive, gamified, personalized.”

Expect lighting tied to activity, unified scoring across attractions, persistent profiles, and app-based rewards that feel like arcade redemption—physical prizes today, digital skins tomorrow. Most of all: keep experimenting; iterate quickly, learn, and evolve.

What tech or tactics have moved the needle most in your venue: kiosks, leaderboards, birthday automation, staff training tools, or something else? Share your ideas and questions in the YouTube comments or on social media.

This podcast wouldn't be possible without the incredible work of our faaaaaantastic team: Scheduling and correspondence by Kristen Karaliunas

To connect with AttractionPros:
AttractionPros.com
AttractionPros@gmail.com
AttractionPros on Facebook
AttractionPros on LinkedIn
AttractionPros on Instagram
AttractionPros on Twitter (X)
AICON will look at how artificial intelligence is reshaping key industries and society, now and in the future.

AICON is back for its seventh year, reaffirming its position as the largest and most influential AI conference in Ireland. As AI is increasingly adopted in all aspects of life, the two-day conference will draw together technologists, business leaders, and researchers to show how getting to grips with AI can help businesses achieve further success and growth. AICON Belfast 2025 will return on Thursday 2 and Friday 3 October 2025, with events taking place across two of the city's most iconic venues.

Day one, hosted at Titanic Belfast, will set the stage with high-profile keynotes, panel discussions, and networking opportunities showcasing how AI is reshaping industries and society. Dr. Stephen Spinelli Jr., President of Babson College, will deliver the keynote address, highlighting how entrepreneurship education can restore stability in a turbulent world. He will explore how artificial intelligence can amplify Entrepreneurial Thought & Action, driving solutions at scale.

The conference will once again feature its signature 'twin-track' format, with two programmes running in parallel: AI Now and AI Next.

AI Now will explore the latest advancements in artificial intelligence, with a strong focus on practical takeaways. Sessions will provide actionable insights and best practices for adopting AI in a safe, transparent and accountable way. A highlight of the AI Now track is the keynote panel AI Across the Globe - Transforming Industry with AI, featuring Ruth McGuiness (Kainos), Lyndsay Shields (Danske Bank UK), and Rachel Bland (NHBC), exploring how artificial intelligence is reshaping industries worldwide.

AI Next will look to the future of artificial intelligence, offering visionary perspectives on its potential to transform both society and the economy. 
This track will explore the opportunities and challenges that lie ahead as AI continues to evolve. Among the highlights is a fireside chat, Reimagining Business: AI, Leadership, and the Future of Organisational Transformation, featuring Gareth Workman, Kainos, and Dr Stephen McKeown, Allstate. Drawing on McKeown's experience driving digital transformation at Allstate, the session will examine how AI is reshaping business models, workforce dynamics, and leadership.

Speaking ahead of the conference, Gareth Workman, Chief AI Officer at Kainos, said: "AI is no longer optional - it's becoming the foundation of how value is created and how society expects to engage. At Kainos, we believe the opportunity lies not just in adopting AI, but in shaping an AI-native future that is trusted, responsible, and human-centric.

"Conferences like AICON are vital because they bring together industry, academia, and government to share lessons, confront challenges, and unlock new opportunities. By working together, we can ensure AI delivers real value - transforming what matters most for businesses, people, and society."

On Friday 3 October, the conference will move to W5 Belfast. Hosted by the AI Collaboration Centre (AICC), this day will offer a deeper dive into academic breakthroughs, cross-sector collaboration, and emerging technologies driving progress across healthcare, fintech, manufacturing, and beyond.

Michaela Black, Principal Investigator at AICC, added: "AICON 2025 is about looking beyond the hype to see where artificial intelligence is truly taking us. For Northern Ireland, it's an opportunity to show how a small region can punch above its weight on the global stage, not just adopting AI, but shaping how it's used responsibly and for the benefit of society.

"The Artificial Intelligence Collaboration Centre exists to bridge industry, academia, research and government, and AICON is where those worlds come together to imagine the future and start building it today. 
We're especially proud to be hosting Day 2 of the event on 3rd October at W5 Be...
In this episode of More or Less, Brit Morin and Dave Morin sit down with legendary VC Tony Conrad (True Ventures) for a candid conversation on the state of venture capital, the real impact of AI, the death of social media, and why non-attribution and EQ matter more than ever.

Chapters:
01:30 – The Power (and Rarity) of Non-Attribution in Venture
04:00 – Building Culture After the Dot-Com Crash
06:00 – Why We Don't Do This for the Money
08:30 – EQ vs IQ: The Human Side of Venture Capital
11:00 – The Wildfire Story: When VCs Show Up as Humans
16:50 – The Best GPs & Firms: Inspiration from Legacy and Newcomers
22:30 – Space Tech Is Real Venture Capital (Not Just AI)
28:10 – Venture Is a Contact Sport: Lessons from Ron Conway
30:40 – AI Hype: Is There Any Money Left for Startups?
40:00 – Manipulation, Social Media, and the Rise of AEO
49:30 – Building Brands in the Age of AI (and Why We Paused Consumer)
55:00 – The Dangers of AI Hype and Overfunding
57:00 – Why Contrarian Investing Can Wait—Focus on AI Now
1:04:00 – Pop Culture Corner: Taylor Swift, Branding, and First Concerts

We're also on:
X: https://twitter.com/moreorlesspod
Instagram: https://instagram.com/moreorless
Spotify: https://podcasters.spotify.com/pod/show/moreorlesspod

Connect with us here:
1) Sam Lessin: https://x.com/lessin
2) Dave Morin: https://x.com/davemorin
3) Jessica Lessin: https://x.com/Jessicalessin
4) Brit Morin: https://x.com/brit
What will be the key skills and competencies for success in the coming AI era? What strategies will help you stay relevant in the job market? And how will artificial intelligence affect how we perceive the meaning of work, and of life in general?

Dear friends, from time to time I take one of my talks and make it freely available. I've decided to publish my opening talk from this year's AI-NOW conference. Given how badly the AI train is leaving the Czech Republic behind, and given how unprepared our country is for the AI transformation (unlike, say, my beloved Singapore), I see real value in doing this kind of outreach... If the AI train leaves without us, I don't think we will ever have a chance to catch it again... Let me put it another way: the AI transformation is absolutely crucial for the future competitiveness of the Czech Republic, and of each of us as individuals.

In the talk I summarize, among other things, the most important developments in the AI world this year, what awaits us in the coming months, the main risks associated with AI, and above all how we, as individuals and organizations, can prepare for the whole change. I'd also be very glad if you not only watch the talk but help me spread its message. Thank you very much. ❤️

LINKS:
- Recording of the full AI-NOW 2024 conference here (20% off with code PETR20): https://www.edumame.cz/p/ainow-2024
Adam Russell, head of AI research at USC's Information Sciences Institute, has an engrossing discussion with Adam Clayton Powell III, who guest hosts the episode, about the development of AI -- and what Russell terms "AI Now, AI Next, AI in the Wild."
Read the full transcript here. What is "apocaloptimism"? Is there a middle ground between apocalypticism and optimism? What are the various camps in the AI safety and ethics debates? What's the difference between "working on AI safety" and "building safe AIs"? Can our social and technological coordination problems be solved only by AI? What is "qualintative" research? What are some social science concepts that can aid in the development of safe and ethical AI? What should we do with things that don't fall neatly into our categories? How might we benefit by shifting our focus from individual intelligence to collective intelligence? What is cognitive diversity? What are "AI Now", "AI Next", and "AI in the Wild"?Adam Russell is the Director of the AI Division at the University of Southern California's Information Sciences Institute (ISI). Prior to ISI, Adam was the Chief Scientist at the University of Maryland's Applied Research Laboratory for Intelligence and Security, or ARLIS, and was an adjunct professor at the University of Maryland's Department of Psychology. He was the Principal Investigator for standing up the INFER (Integrated Forecasting and Estimates of Risk) forecasting platform. Adam's almost 20-year career in applied research and national security has included serving as a Program Manager at the Intelligence Advanced Research Projects Activity (IARPA), then as a Program Manager at the Defense Advanced Research Projects Agency (DARPA) (where he was known as the DARPAnthropologist) and in May 2022 was appointed as the Acting Deputy Director to help stand up the Advanced Research Projects Agency for Health (ARPA-H). Adam has a BA in cultural anthropology from Duke University and a D.Phil. in social anthropology from Oxford University, where he was a Rhodes Scholar. He has also represented the United States in rugby at the international level, having played for the US national men's rugby team (the Eagles). 
Staff
Spencer Greenberg — Host / Director
Josh Castle — Producer
Ryan Kessler — Audio Engineer
Uri Bram — Factotum
WeAmplify — Transcriptionists

Music
Broke for Free
Josh Woodward
Lee Rosevere
Quiet Music for Tiny Robots
wowamusic
zapsplat.com

Affiliates
Clearer Thinking
GuidedTrack
Mind Ease
Positly
UpLift
UNESCO calls on Mexico to develop an AI strategy | Now, Invex's neobank, launches an easy-access credit card | AIMX and INAI sign a collaboration agreement | Spain's prosecutor's office investigates Meta over the use of data to train its AI | So said Carlos Marmolejo, CEO of Finsus | Still up in the air whether X will be fined by the EU | Grupo Elektra is one of the innovation stories | Eric Moguel, Data, Digital & Information Technology Director at Novartis, gives us the IT Masters Insight
It seems like the loudest voices in AI often fall into one of two groups. There are the boomers – the techno-optimists – who think that AI is going to bring us into an era of untold prosperity. And then there are the doomers, who think there's a good chance AI is going to lead to the end of humanity as we know it.

While these two camps are, in many ways, completely at odds with one another, they do share one thing in common: they both buy into the hype of artificial intelligence.

But when you dig deeper into these systems, it becomes apparent that both of these visions – the utopian one and the doomy one – are based on some pretty tenuous assumptions.

Kate Crawford has been trying to understand how AI systems are built for more than a decade. She's the co-founder of the AI Now Institute, a leading AI researcher at Microsoft, and the author of Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.

Crawford was studying AI long before this most recent hype cycle. So I wanted to have her on the show to explain how AI really works. Because even though it can seem like magic, AI actually requires huge amounts of data, cheap labour and energy in order to function. So even if AI doesn't lead to utopia, or take over the world, it is transforming the planet – by depleting its natural resources, exploiting workers, and sucking up our personal data. And that's something we need to be paying attention to. 
Mentioned:
“ELIZA—A Computer Program For the Study of Natural Language Communication Between Man And Machine” by Joseph Weizenbaum
“Microsoft, OpenAI plan $100 billion data-center project, media report says,” Reuters
“Meta ‘discussed buying publisher Simon & Schuster to train AI'” by Ella Creamer
“Google pauses Gemini AI image generation of people after racial ‘inaccuracies'” by Kelvin Chan and Matt O'Brien
“OpenAI and Apple announce partnership,” OpenAI
Fairwork
“New Oxford Report Sheds Light on Labour Malpractices in the Remote Work and AI Booms” by Fairwork
“The Work of Copyright Law in the Age of Generative AI” by Kate Crawford and Jason Schultz
“Generative AI's environmental costs are soaring – and mostly secret” by Kate Crawford
“Artificial intelligence guzzles billions of liters of water” by Manuel G. Pascual
“S.3732 – Artificial Intelligence Environmental Impacts Act of 2024”
“Assessment of lithium criticality in the global energy transition and addressing policy gaps in transportation” by Peter Greim, A. A. Solomon, and Christian Breyer
“Calculating Empires” by Kate Crawford and Vladan Joler

Further Reading:
“Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence” by Kate Crawford
“Excavating AI” by Kate Crawford and Trevor Paglen
“Understanding the work of dataset creators” from Knowing Machines
“Should We Treat Data as Labor? Moving beyond ‘Free'” by I. Arrieta-Ibarra et al.
S3E33 Self-publishers: Are You Still Creating Characters the Old Way? Upgrade with AI Now!

Description: In today's episode, Cindy and Tammie dive into the transformative role of AI in character development for authors. They discuss how AI can enhance the creative process by offering diverse character perspectives, helping writers overcome writer's block, and refining character depth with psychological insights. Whether you're writing your first novel or looking to add depth to your characters in an ongoing series, this episode will equip you with practical tools and insights on integrating AI effectively into your character creation process.

Links:
ToolsADay: https://toolsaday.com/writing/character-generator
WriterHand AI Character Generator: https://writerhand.com/tools/story-character-generator
NovelAI: https://novelai.net/
Welcome to today's episode of AI Lawyer Talking Tech! In a world where artificial intelligence is rapidly transforming various industries, the legal sector finds itself at a critical juncture. From groundbreaking legislation and landmark lawsuits to the integration of AI in legal practice, the intersection of law and technology has never been more dynamic. Today, we'll explore the latest developments shaping this evolving landscape, including the EU's AI Act, Tennessee's ELVIS Act, and the growing importance of data privacy in the age of AI. We'll also delve into the challenges and opportunities faced by legal professionals as they navigate this uncharted territory, from conducting fundamental rights impact assessments to leveraging AI for enhanced efficiency and client service. So, join us as we unravel the complexities of AI and law, and discover how this powerful technology is redefining the future of the legal industry.
How to Become an Immigration Lawyer | 02 Apr 2024 | Tech Edvocate
Lawmatics' New Custom Dashboards Let Your Law Firm Track and Visualize The Data That Matters To You | 02 Apr 2024 | LawSites
What Makes A Good In-House Lawyer Great? | 02 Apr 2024 | Above The Law
Artificial Intelligence - Who | 02 Apr 2024 | The Silicon Review
Alan Raul, Founder of Sidley Austin's Privacy and Cybersecurity Law Practice Elected FPF's New Board President | 02 Apr 2024 | Future of Privacy Forum
Exploring the Intersection of Generative AI and Cybersecurity at ILTA EVOLVE – Ken Jones and Josh Smith | 02 Apr 2024 | 3 Geeks and a Law Blog
The Anti-Innovation Supreme Court: Major Questions, Delegation, Chevron and More by Jack Michael Beermann :: SSRN | 02 Apr 2024 | OTHERWISE
AIPPI UK Event Report: Roundup of 2023's Patent Cases | 02 Apr 2024 | The IPKat
Can Self-Represented Litigants Access Justice? NSRLP's New Intake Report | 02 Apr 2024 | Slaw
How to bridge the gap between the IT and legal staffs to better combat insider risk | 01 Apr 2024 | SC Magazine US
Understanding The Increased Complexity Of The Data Privacy Landscape | 02 Apr 2024 | Forbes.com
Why DOJ's Antitrust Case Against Apple Falls Flat | 02 Apr 2024 | American Enterprise Institute
The Law vs AI: Now the legal battles are starting to intensify | 02 Apr 2024 | RedShark News
Google Agrees to Delete Users' ‘Incognito' Browsing Data in Lawsuit Settlement | 02 Apr 2024 | Time
Gregory Ziegler – Attorney Making a Powerful Impact on Engineering and Architecture Law | 01 Apr 2024 | FinanceDigest.com
The Legal 500 EMEA 2024 Recognizes Cooley | 01 Apr 2024 | Cooley
Alive and Kicking: Washington State's My Health My Data Act Goes into Effect Today | 01 Apr 2024 | EPIC – Electronic Privacy Information Center
EU Data Act (part 8): smart contracts | 02 Apr 2024 | Hogan Lovells
Divergent Paths on Regulating Artificial Intelligence | 01 Apr 2024 | Littler
Yoshikawa Interviewed on Tennessee's New AI Law, ELVIS Act | 02 Apr 2024 | Adams & Reese LLP
Colorado Close to First-in-the-Nation Neuro-Privacy Law Designed to Protect Biological and Neural Data | 02 Apr 2024 | Benesch
Artificial intelligence in the insurance sector: fundamental right impact assessments | 01 Apr 2024 | Hogan Lovells
First-of-its-Kind AI Law Addresses Deep Fakes and Voice Clones | 01 Apr 2024 | Holland & Knight
REMINDER: Washington's “My Health My Data” Act Now In Effect | 01 Apr 2024 | Benesch
NTIA issues report on recommended federal government actions to promote accountable AI | 01 Apr 2024 | Hogan Lovells
Heather Dewey-Hagborg, American artist and bio-hacker best known for the project Stranger Visions. Ana Brígida for The New York Times. Dr. Heather Dewey-Hagborg is a transdisciplinary artist and educator who is interested in art as research and critical practice. Her controversial biopolitical art practice includes the project Stranger Visions, in which she created portrait sculptures from analyses of genetic material (such as hair, cigarette butts, or chewed-up gum) collected in public places. Heather has shown work internationally at events and venues including the World Economic Forum, the Daejeon Biennale, the Shenzhen Urbanism and Architecture Biennale, the Van Abbemuseum, Transmediale and MoMA PS1. Her work is held in the public collections of the Centre Pompidou, the Victoria and Albert Museum, the Wellcome Collection, and the New York Historical Society, among others, and has been widely discussed in the media, from the New York Times and the BBC to Artforum and Wired. Heather has a PhD in Electronic Arts from Rensselaer Polytechnic Institute. She is a visiting assistant professor of Interactive Media at NYU Abu Dhabi, an artist fellow at AI Now, an Artist-in-Residence at the Exploratorium, and an affiliate of Data & Society. Hybrid (Trailer) from Heather Dewey-Hagborg on Vimeo. Installation view, Heather Dewey-Hagborg, Hybrid: an Interspecies Opera. Courtesy of the artist and Fridman Gallery. Still from Heather Dewey-Hagborg, Hybrid: an Interspecies Opera. Courtesy of the artist and Fridman Gallery.
Getting Ahead on Generative AI: Ep. 40 of Red Sky Fuel for Thought Podcast What You'll Learn in This Episode:· How marketers and PR professionals can use generative AI to make our lives easier· Where we should not use generative AI from a legal or ethical perspective· How to strike the balance between being better with AI and being better than AI Now that the dust is settling on the AI maelstrom that's raged for the past few months, our September episode looks at what we've learned about generative AI in particular: the good, the bad and the uncertain. Host Lara Graulich examines how artificial intelligence, or AI, has become a buzzword that elicits many emotions: wonder, excitement, confusion and anxiety, among others. As she says, “One thing is certain: This technology is here to stay, and it's important for us to understand it as marketing and public relations professionals.” To help you make out the full picture of generative AI today, we've divided this episode into two parts. First, Umbar Shakir, a partner and client director at Gate One, gives us a whip-smart introduction to generative AI, what it's capable of and what its limitations are. In part two, we dig into the specific implications that generative AI has in the PR and marketing space. For this roundtable, we're chatting with Rachael Sansom, CEO of Havas Red U.K., and Myrna Van Pelt, head of technology and business for Havas Red Australia. The episode begins with Umbar (pronounced “Amber”), who differentiates traditional AI from generative AI. Traditional AI, she says, is the ability of machines to mimic human intelligence to perform tasks and automate workflows. This is AI as we've known it; it's what's been around for decades, and it's something technology consultants have been implementing for clients for a long time. However, when large language models began arriving over the past five years or so, generative AI stole the spotlight. 
With generative AI, trillions of bits of crowdsourced data can be used to synthesize new data. Does this new capability represent a threat to human creativity or to job security? No, says Umbar: “As marketers, your whole value add to customers is differentiation and personalization. Even though generative AI can generate content for us, you need the human brain to give the differentiation. And then you need the human heart and emotion. In all the marketing campaigns I've been involved in, an emotive response is really important to memorability. That comes from heart, and a lot of our emotional intelligence comes from our values, beliefs and moral judgments. At the moment, you can't mathematically program that in. What we need to remember is that we've built this tool, and we can interact with it; it might be faster than us, and it might be able to process more data than we can at any point in time, but it doesn't replace our humanity.” Instead, AI can create space for those of us in this industry to get back to our craft and to doing some of the things that drew us here in the first place — to creating human connection, for example — rather than the monotony of data analysis or transcription. Plus, with generative AI, we're going to get richer insights much more quickly than we would on our own. When it comes to humans' job security, Umbar says, “I've got a slightly provocative view on things. When people worry that generative AI will cause people to lose jobs, I say there are some jobs out there that humans should never have been doing. We have taken really tedious work and turned it into careers for people. We've normalized tedium. How do we unshackle ourselves from some of that tedium? How do we then free up capacity to solve for bigger and better problems for society? 
How do you use this technology to replace what humans have been doing that fundamentally doesn't tap into our humanity or our values or our creativity?” Umbar's segment ends with her answering these questions, before Lara then welcomes Rachael and Myrna to the podcast. She first asks them what excites them most about generative AI and the capabilities it brings to our clients and which tools they've most enjoyed using. “Gen AI cannot create ideas, but what it can do is take great ideas, by humans, and push them faster and further and help iterate them more brilliantly,” says Rachael. In marketing and communications, Myrna says AI also has a distinct role to play in helping us in the area of rapid decision making. “As humans, we have finite ability to scan volumes of information,” she says. “However, AI does this at a fraction of the time. So, for example, when it comes to understanding audience preferences, or demographic nuances, AI can help sort through this massive volume of content, identifying patterns and trends, anticipating future scenarios, and then categorizing the data. We then have an absolute smorgasbord of useful pre-categorized content we can use to inform campaigns, particularly so in industries where a rapid pivot of a campaign might make the difference between success and failure — particularly so in political campaigns.” Among Myrna's go-to AI tools, she highlights Brandwatch, which provides media monitoring and competitor tracking; TLDR, which summarizes high-tech articles; and DeepL Translate, which can accurately translate content in dozens of different languages. Next, they talk about the inherent risks of using AI, including where we should and shouldn't use it from an ethical and legal perspective — e.g., is a press release fair game? Thank you to each of our guests for weighing in on the transformative power of AI. 
We hope you'll give “Red Sky Fuel for Thought” a listen, and subscribe to the show on iTunes, Spotify or your favorite podcasting app. Don't forget to rate and review to help more people find us!
Also mentioned on this episode:· ChatGPT· Brandwatch· TLDR· DeepL Translate
Follow Red Havas for a daily dose of comms news:· Twitter· Facebook· Instagram· LinkedIn
Subscribe: Don't forget to subscribe to the show using your favorite podcasting app.· iTunes· Spotify
What did you love? What would you like to hear about next? Remember to rate and review today's show; we'd love to hear from you!
A Phony Crisis Averted Now a Celebration of Compromise and Bipartisanship | The Next War of Drones and AI Now in Ukraine | Exporting the Technology of Repression and Australia's Trial of the Century backgroundbriefing.org/donate twitter.com/ianmastersmedia facebook.com/ianmastersmedia
Today on the Ether we have Atari_buzzk1LL hosting Fetch.ai spaces AI Now with Fetch.ai community developer Crypto.AI. Recorded on June 2nd 2023. Make sure to check out the two newest tracks from Finn and the RAC FM gang over at ImaginetheSmell.org! The majority of the music at the end of these spaces can be found streaming over on Spotify, and the rest of the streaming platforms. Check out Project Survival, Virus Diaries, and Plan B wherever you get your music. Thank you to everyone in the community who supports TerraSpaces.
What additional rules/regulations do we need in place for AI NOW? --- Send in a voice message: https://podcasters.spotify.com/pod/show/ancienttexan/message Support this podcast: https://podcasters.spotify.com/pod/show/ancienttexan/support
Show Notes Timnit Gebru is an artificial intelligence researcher. Timnit advocates for fair and just use of the technology we use every day. A former employee of Google, Timnit consistently calls in and calls out a Big Tech industry that leverages power, capital, and bias in favor of, well, itself and its wallet. From language to surveillance, Timnit knows the potential harms of artificial intelligence know no bounds. In a time when we're at war, today's episode calls into question for whom we are fighting, whose wars are worthy of discussion, and what harms are so deeply ingrained within our consciousness that we ignore our own civilian casualties. As the world witnesses the 16th month of a war in Ethiopia, Timnit's journey reminds us of the refugee, the warrior, and the heroes we often dismiss and deem unworthy of home. This conversation was recorded on Jan 27, 2022. Learn more about this topic: https://www.ruhabenjamin.com/ (Ruha Benjamin) https://www.dukeupress.edu/dark-matters (Simone Browne, Dark Matters: On the Surveillance of Blackness) https://www.netflix.com/title/81328723 (Coded Bias) https://pacscenter.stanford.edu/person/tawana-petty/#:~:text=She%20is%20the%20National%20Organizing,and%20shared%20by%20government%20and (Tawana Petty) https://www.politico.com/news/2021/06/02/senate-democrats-google-racial-equity-491605 (Support regulations to safeguard) https://www.wired.com/story/facebook-ford-fall-from-grace/ (Mar Hicks' op-ed for Wired (tech historian)) Who to follow?
https://www.ajl.org/ (Algorithmic Justice League) https://datasociety.com/ (Data & Society) https://d4bl.org/ (Data for Black Lives) https://ainowinstitute.org/ (AI Now) https://www.dair-institute.org/ (DAIR) Other things we mention: https://contentauthenticity.org/ (contentauthenticity.org) https://www.britannica.com/topic/Fairness-Doctrine (The Fairness Doctrine) https://www.washingtonpost.com/outlook/2021/02/04/fairness-doctrine-wont-solve-our-problems-it-can-foster-needed-debate/ (Fairness Doctrine Washington Post article) Host: https://www.instagram.com/dario.studio/ (Dario Calmese)
Show Notes Buffer Overflow: Skeletor v. Dr. Doom Episode 203 Google v. Oracle, Office 365 Outages, and VMware Cloud Part Trois Hosts Ned Bellavance https://www.linkedin.com/in/ned-bellavance-ba68a52 @Ned1313 Chris Hayner, Delivery Manager https://www.linkedin.com/in/chrismhayner Kimberly DeFilippi, Project Manager, Business Analyst https://www.linkedin.com/in/kimberly-defilippi-77b3986/ Brenda Heisler, ISG Operations https://www.linkedin.com/in/brenda-heisler-b5431989/ Longer Topics Google v. Oracle ends. No one wins. Supreme Court Decision: https://www.supremecourt.gov/opinions/20pdf/18-956_d18f.pdf The EFF is crowing about the victory Lightning Round Facebook takes pole position in the ‘race to lose the most user data’ Office 365 has its latest outage in a month Website I don’t understand in talks to acquire website I don’t understand VMware launches VMware Cloud The age-old lesson to double-check your sources comes to haunt AI. Now we’ll never know how babby is formed Music Credits Intro: Jason Shaw - Tech Talk Outro: Jason Shaw – Feels Good 2 B
Listener Survey In COVID-related AI news, Andy and Dave discuss research that uses NLP to predict mutations in a virus that would allow it to avoid detection by antibodies. In regular AI news, the US Food and Drug Administration publishes an Action Plan for AI and ML, with more to follow. The White House launches the National AI Initiative Office, which will work with the private sector and academia on AI initiatives. The AI Now institute has launched an effort for “A New AI Lexicon,” in which it invites contributors to provide perspectives and narratives for describing new vocabulary that adequately reflects demands and concerns related to AI technology. And the Federal Reserve is asking for comments about the use of AI/ML in banking, as it considers increasing oversight of the technologies. In research, Michal Kosinski at Stanford University publishes in Nature Reports how facial recognition technology can identify a person’s political orientation (to 72% accuracy); Andy and Dave spend some extra time discussing the challenges and implications behind such applications of facial recognition technology. Researchers at Columbia University demonstrate the ability of an AI observer to “visualize the future plans” of an actor, solely through visual information. The report of the week comes from CNAS on AI and International Stability: Risks and Confidence-Building Measures. The book of the week examines How Humans Judge Machines. And finally, a YouTube documentary from Noclip examines how machine learning plays out in Microsoft’s Flight Simulator. Click here to visit our website and explore the links mentioned in the episode.
In COVID-related AI news, another concerning report, this time in Nature Medicine, found “serious concerns” with 20,000 studies on AI systems in clinical trials, with many reporting only the best-case scenarios; in response, an international consortium has developed CONSORT-AI, reporting guidelines for clinical trials involving AI. In Nature, an open dataset provides a collection and overview of governmental interventions in response to COVID-19. In regular AI news, the DoD wraps up its 2020 AI Symposium. And the White House nominates USMC Maj. Gen. Groen to lead the JAIC. The latest report from the NIST shows that facial recognition technology still struggles to identify people of color. Portland, Oregon passes the toughest ban on facial recognition technology in the US. And The Guardian uses GPT-3 to generate some hype. In research, OpenAI demonstrates the ability to apply transformer-based language models to the task of automated theorem proving. Research from Berkeley, Columbia, and Chicago proposes a new test to measure a text model’s multitask accuracy, with 16,000 multiple choice questions across 57 task areas. A report from AI Now takes a look at regulating biometrics, which includes tech such as facial recognition. And the 37th International Conference on Machine Learning makes its proceedings available online. Click here to visit our website and explore the links mentioned in the episode.
The Social Dilemma is a 2020 American docudrama. The dilemma: never before have a handful of tech designers had such control over the way billions of us think, act, and live our lives. Discover what’s hiding on the other side of your screen. We tweet, we like, and we share, but what are the consequences of our growing dependence on social media? This documentary-drama hybrid reveals how social media is reprogramming civilization, with tech experts sounding the alarm on their own creations. The Social Dilemma features the voices of technologists, researchers and activists working to align technology with the interests of humanity. The film explores the rise of social media and the damage it has caused to society, focusing on its exploitation of its users for financial gain through surveillance capitalism and data mining, how its design is meant to nurture an addiction, its use in politics, its impact on mental health (including the mental health of adolescents and rising teen suicide rates), and its role in spreading conspiracy theories such as Pizzagate and aiding groups such as flat-earthers. The film features interviews with former Google design ethicist and Center for Humane Technology co-founder Tristan Harris, his fellow Center for Humane Technology co-founder Aza Raskin, Asana co-founder and Facebook's like-button co-creator Justin Rosenstein, Harvard University professor Shoshana Zuboff, former Pinterest president Tim Kendall, AI Now director of policy research Rashida Richardson, Yonder director of research Renee DiResta, Stanford University Addiction Medicine Fellowship program director Anna Lembke, and virtual reality pioneer Jaron Lanier. The interviews are cut together with dramatizations starring actors Skyler Gisondo, Kara Hayward, and Vincent Kartheiser, which tell the story of a teenager's social media addiction.
In this episode we speak with Kate Crawford, co-founder of the AI Now Institute and a professor who has spent the last decade studying the political implications of data systems, machine learning and artificial intelligence. We discuss the anatomy of AI systems and the full ecosystem of human and material resources behind an Amazon Echo, the need to develop an understanding of the exponential accumulation of power under platform capitalism, the use of AI systems in predictive policing and other controversial areas, and Kate’s parallel experience as an electronic musician. This episode ends rather abruptly as we got lost in conversation and Kate had to run, so forgive us for the atypical ending!
Relevant Kate links:
AI Now Institute: https://ainowinstitute.org/
Anatomy of AI: https://anatomyof.ai/
Links we raised:
Stance Features of Youtube Celebrities by Katri Mustonen: https://jyx.jyu.fi/bitstream/handle/123456789/56988/1/URN%3ANBN%3Afi%3Ajyu-201802011411.pdf
The Sunday Times’ tech correspondent Danny Fortson brings on Rashida Richardson, head of policy research at AI Now, to talk about tech’s pang of conscience about facial recognition technology (3:40), predictive policing (5:20), the problem with the technology (8:15), how pervasive it is (11:30), the laws (13:40), the visceral effect of this technology (18:00), how AI is seeping into law enforcement (20:25), the data problem (25:20), whether this moment will lead to a crackdown (27:05), if a ban is realistic (29:25), and the race to the bottom (33:45). Support this show http://supporter.acast.com/dannyinthevalley. See acast.com/privacy for privacy and opt-out information.
A solid data strategy can prevent your company from running aground and turning a huge opportunity into a horrible mess. Dan Wu is our guest on this episode of the Georgian Impact Podcast. Dan is a superstar commentator in the privacy and data governance space. He's leveraging his Ph.D. in Sociology and Social Policy and his law degree to help protect people and their data. Dan believes that the best way to do that is through data strategies formed by cross-functional teams that include input from governance, analytics, marketing and product departments. You'll hear about: What we can learn from the botched launch of the Apple Credit Card Why every company needs a data strategy How regulation, like the Algorithmic Transparency Act, could add protections for consumers and accountability for business Offensive vs. defensive data strategy – HBR Article Where responsibility for inaction leading to data breaches should lie Data risks businesses face, including biased algorithms, sharing data with the wrong people, 3rd party data breaches, insider incidents, and technical mistakes Why data ethics need to go beyond what's strictly legal in order to establish and maintain trust. AI Now's 2019 report that touches on ethical inequality risk factors in AI Who is Dan Wu? Dan Wu is the Privacy Counsel & Legal Engineer at Immuta, a leading automated data governance platform for analytics. He writes about purposeful data strategy on TechCrunch and LinkedIn. He holds a J.D. & Ph.D. from Harvard University.
The Sunday Times’ tech correspondent Danny Fortson brings on Meredith Whittaker, co-founder of AI Now and organiser of the Google walk-out, to talk about how she arrived at the search giant 13 years ago (3:40), delving into tech’s effects on society (4:30), becoming a critic (6:15), and then a labour organiser (8:40), the debate on Silicon Valley working with the Pentagon (11:30), AI bias (14:50), sentencing algorithms (17:00), the Google walk-out (19:45), retaliation (22:30), the dangers of government co-opting Big Tech in the coronavirus response (25:25), how AI can reinforce societal divides (32:30), and the plight of “essential” workers (34:15). See acast.com/privacy for privacy and opt-out information.
In the first part of our program we cover current events. We begin with the opening of impeachment proceedings against President Donald Trump by the US House of Representatives on Wednesday, only the third impeachment in US history. We continue with the elections in the United Kingdom and the clear victory of Boris Johnson and his Conservative Party. Next, we discuss the AI Now Institute's call for better regulation of emotion-recognition technology. Finally, we look at the results of a study by British researchers suggesting that a new food label, which indicates how much exercise is needed to burn off the calories a food contains, could have benefits. In our Trending in Germany segment, we discuss Germany's federal public prosecutor's office, which is close to officially accusing the Russian government of ordering the murder of a Georgian citizen in Berlin in August, a step that is certain to have serious diplomatic consequences. We also discuss the proposal that summer school holidays begin at the same time in all German federal states. This would have advantages for schools, but traffic experts warn of congestion and the tourism association fears economic losses. - US House of Representatives votes for impeachment proceedings against President Trump - Landslide victory for Boris Johnson in the UK elections - AI Now calls for legal regulation of emotion-recognition technology - New food labels to indicate how much exercise is needed to burn off the calories they contain - Suspected perpetrator of the Tiergarten murder may himself be in danger - Germany argues over the timing of summer holidays
In the first part of the program, we review international news. We begin with the approval of the impeachment of President Donald Trump by the US House of Representatives on Wednesday, only the third time this has happened in US history. We continue with the UK elections and the decisive victory of Boris Johnson and his party. We discuss the AI Now research institute's call to regulate emotion-detection technology, and we review the results of a study carried out by British researchers on the possible advantages of exercise-equivalent food labeling. Today, in our Trending in Spain section, we talk about statistics. One is very positive: we discuss how the holidays boost the employment rate in Spain. The second, however, is quite the opposite. I would even say the figures are alarming: Spain's birth rate in 2018 was the lowest in the last 20 years! - The US House of Representatives approves Trump's impeachment - Boris Johnson wins a landslide victory in the UK elections - AI Now calls for laws restricting emotion-detection technologies - Exercise-equivalent food labeling works, according to a group of British researchers - The long-awaited December holiday weekend - Spain recorded its lowest number of births in 20 years in 2018
Many companies in the facial-recognition business claim to offer not only a tool for discovering a person's identity but also a tool for discovering what the person being photographed is feeling, through the analysis of facial micro-expressions. Giants such as Amazon, IBM and Microsoft, alongside firms specializing in the field, operate in the emotion-recognition market, which turns over $20 billion a year. But is such recognition even possible? In the second episode of our series on facial-recognition technologies, we examine whether pedophiles, criminals and even academic researchers can be identified by analyzing their facial features, and why sadness is easy to detect while astonishment is hard. Enjoy the episode. Yuval Dror.
Links:
Where emotions come from: https://aeon.co/essays/human-culture-and-cognition-evolved-through-the-emotions
Vaught's Practical Character Reader: https://publicdomainreview.org/collections/vaughts-practical-character-reader-1902/
The AI Now Institute report: https://ainowinstitute.org/AI_Now_2018_Report.pdf
The racist history of facial recognition: https://www.nytimes.com/2019/07/10/opinion/facial-recognition-race.html
Show homepage: email mailing list | iTunes | our Android app | RSS Link | Facebook | Twitter
This week, a conversation about privacy, ethics, and organizing in the world of technology.Who benefits from the lack of diversity in the tech industry? Does artificial intelligence reflect the biases of those who create it? How can we push for regulation and transparency? These are some of the questions discussed by our guests, Meredith Whittaker, co-founder of AI Now at NYU and the founder of Google’s Open Research Institute; and Kade Crockford, Director of the ACLU Massachusetts’ Technology and Liberty Program. They appeared at the Sydney Goldstein Theater in San Francisco on June 7, 2019.
Paul Allen, the co-founder of Microsoft, has died at age 65; the AI Now 2018 Symposium gets underway; and in Brazil, the 20th edition of Futurecom takes place. www.amenazaroboto.com
This week: what happened while we were gone, real problems with AI, spying vacuums, and a suicidal robot. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week. The stories this week: Elon is worried about killer robots The real problems with AI Roomba, the home mapping vacuum cleaner Other stories we bring up: AI and enormous data Why AI is not colour blind Google's collaboration with Carnegie Mellon University paper AI Now initiative Cathy O'Neil's book Weapons of Math Destruction Do algorithms make better decisions? Roomba data will be sold to the highest bidder How to Use iRobot Roomba 980 Robot Vacuum Our robot of the week: A Knightscope security robot You can subscribe to this podcast on iTunes, Soundcloud, Stitcher, Libsyn or wherever you get your podcasts. You can follow us online on Flipboard, Twitter, or sbi.sydney.edu.au. Send us your news ideas to sbi@sydney.edu.au For more episodes of The Future, This Week see our playlists
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
This Week in Machine Learning & AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week's show covers the White House’s AI Now workshop, tuning your AI BS meter, research on predatory robots, an AI that writes Python code, plus acquisitions, financing, technology updates and a bunch more. Show notes for this episode can be found at https://twimlai.com/8.