POPULARITY
ML engineering demand remains high with a 3.2-to-1 job-to-candidate ratio, but entry-level hiring is collapsing as AI automates routine programming and data tasks. Career longevity requires shifting from model training to production operations, deep domain expertise, and mastering AI-augmented workflows before standard implementation becomes a commodity.

Links
Notes and resources at ocdevel.com/mlg/mla-30
Try a walking desk - stay healthy & sharp while you learn & code
Generate a podcast - use my voice to listen to any AI generated content you want

Market Data and Displacement
ML engineering demand rose 89% in early 2025. Median salary is $187,500, with senior roles reaching $550,000. There are 3.2 open jobs for every qualified candidate. AI-exposed roles for workers aged 22 to 25 declined 13 to 16%, while workers over 30 saw 6 to 12% growth. Professional service job openings dropped 20% year-over-year by January 2025. Microsoft cut 15,000 roles, targeting software engineers, and 30% of its code is now AI-generated. Salesforce reduced support headcount from 9,000 to 5,000 after AI handled 30 to 50% of its workload.

Sector Comparisons
Creative: Chinese illustrator jobs fell 70% in one year. AI increased output from 1 to 40 scenes per day, crashing commission rates by 90%.
Trades: US construction lacks 1.7 million workers. Licensing takes 5 years, and the career fatality risk is 1 in 200. High suicide rates (56 per 100,000) and emerging robotics like the $5,900 Unitree R1 indicate a 10 to 15 year window before automation.
Orchestration: Prompt engineering roles paying $375,000 became nearly obsolete in 24 months. Claude Code solves 72% of GitHub issues in under eight minutes.

Technical Specialization Priorities
Model Ops: Move from training to deployment using vLLM or TensorRT. Set up drift detection and monitoring via MLflow or Weights & Biases (a minimal sketch follows these notes).
Evaluation: Use DeepEval or RAGAS to test for hallucinations, PII leaks, and adversarial robustness.
Agentic Workflows: Build multi-step systems with LangGraph or CrewAI. Include human-in-the-loop checkpoints and observability.
Optimization: Focus on quantization and distillation for on-device, air-gapped deployment.
Domain Expertise: 57.7% of ML postings prefer specialists in healthcare, finance, or climate over generalists.

Industry Perspectives
Accelerationists (Amodei, Altman): Predict major disruption within 1 to 5 years.
Skeptics (LeCun, Marcus): Argue LLMs lack causal reasoning, extending the adoption timeline to 10 to 15 years.
Pragmatists (Andrew Ng): Argue that as code gets cheap, the bottleneck shifts from implementation to specification.
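The sketch referenced under Model Ops above: one low-effort way to get drift detection and monitoring going is to log a distribution-shift statistic on a schedule with MLflow. This is a minimal illustration rather than a prescribed pipeline; the feature arrays, run name, and the 0.2 alert threshold are assumptions, while the MLflow calls themselves are standard.

```python
# Illustrative drift check: population stability index (PSI) on one feature, logged to MLflow.
import numpy as np
import mlflow

def psi(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference sample and a live sample of one feature."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Clip to avoid log(0) when a bin is empty.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

reference_scores = np.random.normal(0.0, 1.0, 10_000)  # stand-in for training-time feature values
live_scores = np.random.normal(0.3, 1.1, 2_000)        # stand-in for this week's production values

with mlflow.start_run(run_name="weekly-drift-check"):
    value = psi(reference_scores, live_scores)
    mlflow.log_metric("psi_feature_score", value)
    mlflow.log_metric("drift_alert", float(value > 0.2))  # 0.2 is a common rule-of-thumb threshold
```

The same pattern works with Weights & Biases by swapping the logging calls; the point is that the drift statistic, not the tool, is the deliverable.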
If you're a VP of Sales watching your revenue team paste customer data into ChatGPT, you don't have an adoption problem - you have a governance crisis. Your best people are uploading signed NDAs to Claude and feeding pipeline data into Perplexity because 70% of their day is admin drag, and AI is the only thing fast enough to keep them above water. Financial services tried to ban AI. It failed spectacularly. So they built governance frameworks that let teams move faster and sleep at night. Dr. Angela Murphy - known as Payments Elsa - reveals the "Amnesty and Orchestration" playbook she architected for banks navigating the GENESIS Executive Order. She's a PhD strategist, 2024 PayTech Women Emerging Trendsetter, and advisor to financial institutions on AI governance and ethical AI mandates. You'll learn the three-step governance audit every Revenue Leader should run this quarter - before Legal does. Angela shares real stories of teams using ChatGPT for payment disputes and compliance workflows, creating massive liability. She reveals the conversation framework to surface Shadow AI without triggering panic, the three policies you can implement in 30 days, and why explainability isn't compliance theater - it's revenue protection. This isn't a "fire your team and replace them with bots" episode. Angela proves ethical AI can surface hidden revenue channels, identify products to sunset, and reveal sales cycle biases costing you deals. The regulatory hammer is coming. Financial services just got hit first. Will you architect governance now, or audit the damage later? Download the Executive Guide to Shadow AI at theaihat.com/shadow-ai. Subscribe to AI for Revenue Leaders: The AI Hat Podcast and stop being a Pilot Purgatory statistic. CHAPTERS 00:00 Ethical AI = Revenue Growth: Find Gaps, Biases & New Channels 01:24 Show Intro & Theme Song: Welcome to The AI Hat Podcast 02:56 The Shadow AI Compliance Time Bomb (Real-World Examples) 03:43 Meet Dr. Angela Murphy (Payments Elsa) + Why Banks Try to Ban AI 07:41 Shadow AI in the Back Office: Spreadsheets, PII, and Manual Ops Risks 11:04 Why Revenue Leaders Should Watch FinTech: Payments Rails & Stablecoins 13:04 Genesis Executive Order Explained: “Suggestulation” and What's Coming 16:24 From Fear to Frameworks: Finding Low-Hanging AI Wins with Guardrails 19:24 Resource Break: Executive Guide to Shadow AI 20:33 Orchestration 101: Tool Inventory, Training, and Policy from Existing Governance 23:33 Explainable AI: Decisions You Can Defend (Underwriting Example) 27:51 Ethics, Bias & Revenue Outcomes: Avoid Lawsuits and Unlock Better Decisions 31:19 Biggest Misconceptions: You Can't Ban AI—and Education Isn't Optional 37:30 Monday Morning Action Plan: Start the AI Policy, Audit Tools, Target Pain Points 40:46 Where to Find Angela + Final Wrap and Next Steps Show Notes & Full Transcript: https://theaihat.com/why-your-sales-teams-shadow-ai-is-a-lawsuit-waiting-to-happen-a-fintech-cros-governance-playbook/ Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode of Future Fuzz, Vince Quinn sits down with Mike Rotondo, Founder of RITC Cybersecurity, to unpack the growing cybersecurity risks facing modern marketing teams. From phishing scams and business email compromise to AI vulnerabilities and data leakage, Mike explains why marketers are prime targets for cybercriminals—and why being “in the cloud” doesn't automatically mean you're secure.

The conversation dives into how cybercriminals operate like full-scale corporations, why user training is the single most important defense, and how simple mistakes—like shared logins or unsecured home routers—can expose entire organizations. Mike also explores emerging threats like “quishing” (QR code phishing), AI exploitation, and the hidden risks of feeding sensitive data into large AI tools. If you're managing customer data, email lists, or AI-powered marketing tools, this episode is a must-listen.

Guest Bio
Mike Rotondo is the Founder of RITC Cybersecurity, a consulting firm focused exclusively on cybersecurity strategy, compliance, and risk mitigation. RITC provides services including penetration testing, security framework analysis, SOC 2 audit preparation, HIPAA and PCI compliance consulting, and virtual CISO (vCISO) services. Rather than hands-on IT implementation, Mike and his team specialize in advisory, governance, and security architecture—helping organizations build secure systems from the inside out. With decades of experience in cybersecurity dating back to the 1990s, Mike works with organizations to prevent breaches, reduce liability, and strengthen internal defenses against evolving cyber threats.

Takeaways
Being in the cloud does not mean you're secure.
Most breaches start with users—not firewalls.
Cybercriminals operate like corporations, with R&D and strategy teams.
Phishing and business email compromise (BEC) are still the top threats.
Shared logins and admin access for everyday users create major vulnerabilities.
Remote work requires secured routers, patched systems, and enforced device standards.
“Quishing” (QR code phishing) is an emerging attack vector.
AI tools can create data leakage risks if policies aren't in place.
Personally identifiable information (PII) exposure can financially destroy small companies.
Cybersecurity training is the most effective prevention strategy.

Chapters
00:00 Introduction to Mike Rotondo
00:28 What RITC Cybersecurity Does
01:31 Why Businesses Are More Vulnerable Than They Think
03:01 How Cybercriminals Actually Operate
04:10 Real-World Impact of Phishing Attacks
06:30 Building Strong Cyber Defenses
07:57 Remote Work Security Risks
09:42 QR Code Phishing (“Quishing”)
10:45 Why Cybersecurity Feels Overwhelming
11:05 The Importance of Employee Training
12:26 AI's Role in Cybersecurity Threats
14:53 AI Server Vulnerabilities
15:15 How Marketers Should Approach AI Security
17:08 Data Leakage and PII Risks
18:31 The Financial Fallout of a Breach
19:08 The Ciphered Reality Podcast

LinkedIn
Follow Mike on LinkedIn
Follow Vince on LinkedIn
In this video David speaks to Peter Bailey (SVP and GM of Cisco's Security business). AI agents are moving fast inside enterprises, and CISOs are hitting the brakes for one reason: the attack surface is expanding at machine speed. In this interview, we break down how agentic AI changes security, why MCP servers and agent tool access create new risks, and what a zero trust approach looks like when the “user” is a non-deterministic agent. We cover real-world problems like shadow MCP servers, agents touching sensitive systems and PII, and why traditional perimeter controls and firewalls are not enough when traffic is encrypted and actions happen too quickly downstream. You'll also hear what Cisco is doing across the AI lifecycle: AI Defense for model scanning, provenance and guardrails, plus new protections focused on agent identity, dynamic authorization, behavior monitoring, and revocation. On the networking side, we discuss how SD-WAN and secure access (SASE) can add visibility and policy control for AI usage, including prioritizing latency-sensitive AI traffic while still enforcing security. If you're a security engineer, network engineer, or CISO trying to move from AI hype to safe deployment, this video gives you a practical mental model and the controls to start building now. Big thank you to @Cisco for sponsoring this video and for sponsoring my trip to Cisco Live Amsterdam. // Peter Bailey's SOCIALS // LinkedIn: / peterhbailey Guest Bio: https://newsroom.cisco.com/c/r/newsro... // David's SOCIAL // Discord: discord.com/invite/usKSyzb Twitter: www.twitter.com/davidbombal Instagram: www.instagram.com/davidbombal LinkedIn: www.linkedin.com/in/davidbombal Facebook: www.facebook.com/davidbombal.co TikTok: tiktok.com/@davidbombal YouTube: / @davidbombal Spotify: open.spotify.com/show/3f6k6gE... SoundCloud: / davidbombal Apple Podcast: podcasts.apple.com/us/podcast... // MY STUFF // https://www.amazon.com/shop/davidbombal // SPONSORS // Interested in sponsoring my videos? Reach out to my team here: sponsors@davidbombal.com // MENU // 0:00 - Coming Up 0:30 - Introduction 01:15 - CISOs Problems with AI 02:35 - Real Issues with AI Agents 04:29 - Growth of the Attack Surface 05:34 - Concern of Poisoned AI and MCP 08:09 - What is the Kill-chain 10:16 - AI with Built-in Security 11:56 - Best Practises for AI Security 14:08 - Cisco Innovations for AI 16:48 - Cisco's Red Team for own AI 18:27 - Secure AI in Public Places 20:09 - Should You get into Cyber Security 21:26 - Advice To Your Younger Self 22:29 - Outro Please note that links listed may be affiliate links and provide me with a small percentage/kickback should you use them to purchase any of the items listed or recommended. Thank you for supporting me and this channel! Disclaimer: This video is for educational purposes only. #cisco #ciscoemea #ciscolive
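The zero-trust framing above (agent identity, dynamic authorization, behavior monitoring, revocation) can be made concrete as a policy check that runs before every tool call an agent makes. The sketch below is conceptual only, not Cisco's implementation; the agent registry, scopes, and revocation set are invented for illustration.

```python
# Conceptual sketch of dynamic authorization for an AI agent's tool calls.
# Not a product API: the identity fields, scopes, and revocation set are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class AgentIdentity:
    agent_id: str
    scopes: set[str]          # tools/data this agent may touch
    expires_at: datetime      # short-lived credential

REVOKED: set[str] = set()     # agents whose access has been pulled

def authorize(agent: AgentIdentity, tool: str, audit_log: list[str]) -> bool:
    """Allow a tool call only if the agent is unexpired, unrevoked, and scoped for that tool."""
    now = datetime.utcnow()
    allowed = (
        agent.agent_id not in REVOKED
        and now < agent.expires_at
        and tool in agent.scopes
    )
    audit_log.append(f"{now.isoformat()} agent={agent.agent_id} tool={tool} allowed={allowed}")
    return allowed

# Usage: deny by default, log every decision, revoke on anomalous behavior.
log: list[str] = []
agent = AgentIdentity("invoice-bot-7", {"crm.read"}, datetime.utcnow() + timedelta(minutes=15))
print(authorize(agent, "crm.read", log))         # True
print(authorize(agent, "payments.write", log))   # False: out of scope
REVOKED.add("invoice-bot-7")
print(authorize(agent, "crm.read", log))         # False: revoked
```

The design choice worth noting is that the decision happens per action, not per session, which is what makes revocation and behavior monitoring meaningful when the "user" is a fast-moving, non-deterministic agent.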
In this episode, we're joined by Maryam Ashoori, VP of Product and Engineering at IBM's Watsonx platform. With a background that includes 2 master's degrees in AI, a PhD in Systems Design Engineering, and named on over 30 patents at IBM, she's been on the bleeding edge for over a decade. Currently leading the charge on Agentic AI and AI Governance at IBM, Maryam is a bridge between the theoretical frontier of AI and the messy reality of enterprise deployment. In this episode, Maryam: Tells why AI has been stuck in pilot purgatory for longer than expected, and what you need to do today for a successful enterprise deployment Calls shenanigans on the “biggest, best model” crowd, and why often a smaller, more focused tool is the right choice Explains how to build an agnostic architecture that can handle the realities of an AI world where models advance faster than anybody can keep up Links LinkedIn: https://www.linkedin.com/in/mashoori/ IBM: https://www.ibm.com/us-en Resources Reinventing SaaS: Zuora's AI Transformation | Karthik Chakkarapani and Shakir Karim (Zuora): https://www.youtube.com/watch?v=gHVxnLikMpQ Linear's Secret to Building Powerful AI Products | Nan Yu, Head of Product (Linear): https://www.youtube.com/watch?v=27rGB-6XQJg Chapters 00:00 Intro 02:18 From ChatGPT hype to enterprise reality: use cases, ROI, and the rise of agents 06:11 Security, accountability & governance: who's responsible when agents go wrong? 10:37 Risk-based rollout: use-case scoping, Risk Atlas, and guardrails like PII detection 17:10 Observability for agentic workflows 18:21 Why compute optimization matters 22:58 Designing for model agility: abstraction layers, routing, and picking the right model 27:23 Conclusion Follow LaunchPod on YouTube We have a new YouTube page! Watch full episodes of our interviews with PM leaders and subscribe! What does LogRocket do? LogRocket's Galileo AI watches user sessions for you and surfaces the technical and usability issues holding back your web and mobile apps. Understand where your users are struggling by trying it for free at LogRocket.com.Special Guest: Maryam Ashoori.
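Maryam's point about agnostic architectures (abstraction layers, routing, and picking a right-sized model rather than the biggest one) can be sketched as a thin interface that application code depends on, with routing decided behind it. This is an illustrative pattern, not watsonx's API; the provider classes, model behavior, and routing rule are placeholders.

```python
# Illustrative model-agnostic abstraction layer; providers and routing logic are placeholders.
from typing import Protocol

class TextModel(Protocol):
    def generate(self, prompt: str) -> str: ...

class SmallLocalModel:
    def generate(self, prompt: str) -> str:
        return f"[small-model answer to: {prompt[:40]}...]"

class LargeHostedModel:
    def generate(self, prompt: str) -> str:
        return f"[frontier-model answer to: {prompt[:40]}...]"

class Router:
    """Application code depends only on this class; swapping models touches nothing else."""
    def __init__(self, small: TextModel, large: TextModel):
        self.small, self.large = small, large

    def generate(self, prompt: str, needs_deep_reasoning: bool = False) -> str:
        # Placeholder routing rule: prefer the smaller, cheaper model unless flagged.
        model = self.large if needs_deep_reasoning else self.small
        return model.generate(prompt)

llm = Router(SmallLocalModel(), LargeHostedModel())
print(llm.generate("Summarize this support ticket."))
print(llm.generate("Draft a migration plan for our billing system.", needs_deep_reasoning=True))
```

Because models advance faster than any team can re-platform, the abstraction boundary is the part that should stay stable while the classes behind it get swapped out.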
Federal Tech Podcast: Listen and learn how successful companies get federal contracts
Connect to John Gilroy on LinkedIn https://www.linkedin.com/in/john-gilroy/ Want to listen to other episodes? www.Federaltechpodcast.com Cybersecurity is a rapidly evolving field, where every effective defense technique is quickly noticed and adapted to by malicious actors. The real question is how fast each side of this ongoing cat-and-mouse game can respond. Consider web applications as an example. Over the decade-long migration to the cloud, federal users moved to web-based applications protected by Web Application Firewalls (WAFs). As that method matured, malicious observers noted that the Application Programming Interface (API) allowed these software programs to communicate and exchange data. Voila, another attack vector was born. During today's interview, Joe Henry from Akamai Technologies notes that 80% of their customers report API attacks. Henry details a term worth knowing: Broken Object Level Authorization. In this attack, an application fails to check whether a user is authorized to access a specific data object; the object ID is manipulated, and the malicious actor gets access. Akamai's API Security performs behavioral analysis beyond WAFs, flags PII exposure, and supports a zero-trust posture. Software developers talk about a "shift left"; we apply that idea to the Akamai approach. They have a worldwide network of Points of Presence (POPs) and data centers where they can observe attacks as they develop, and the network provides fail-open resilience backed by a 100% SLA. Akamai publishes a quarterly State of the Internet Report. If you would like to stay ahead of the next wave of attacks, consider subscribing or visiting their website to stay informed about the latest trends.
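To make Broken Object Level Authorization concrete, here is a minimal sketch (not Akamai's tooling; the record store, IDs, and user names are invented). The vulnerable handler trusts whatever object ID the caller supplies, while the fixed handler also checks ownership of the specific object.

```python
# Toy illustration of Broken Object Level Authorization (BOLA); all data is fabricated.
INVOICES = {
    101: {"owner": "alice", "amount": 1200},
    102: {"owner": "bob", "amount": 87},
}

def get_invoice_vulnerable(invoice_id: int) -> dict:
    # BOLA: any authenticated caller can fetch any invoice just by changing the ID.
    return INVOICES[invoice_id]

def get_invoice(invoice_id: int, requesting_user: str) -> dict:
    # Fix: authorization is checked at the object level, not only at login.
    invoice = INVOICES.get(invoice_id)
    if invoice is None or invoice["owner"] != requesting_user:
        raise PermissionError("not authorized for this object")
    return invoice

print(get_invoice_vulnerable(102))   # "alice" could read Bob's invoice this way
print(get_invoice(101, "alice"))     # allowed: alice owns invoice 101
# get_invoice(102, "alice")          # would raise PermissionError
```

Behavioral API security products look for the tell-tale pattern of this attack at scale, such as one credential enumerating many object IDs, which is exactly the signal a WAF rule set tends to miss.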
In this episode of Resilient Cyber, I sit down with VP, Product Marketing and Strategy for Protegrity, James Rice. We will be discussing how traditional approaches to security aren't solving the AI security challenge, the importance of data-centric approaches for secure AI implementation and addressing issues such as AI data leakage.

James and I dove into a lot of great topics, including:

Why traditional perimeter-based and infrastructure-centric security models are failing in the era of AI, and why organizations need to fundamentally rethink their approach to securing AI workloads.
The concept of data-centric security — protecting the data itself rather than the systems surrounding it — and why this shift is critical as data flows across cloud platforms, AI models, and agentic workflows.
The growing risk of AI data leakage and how sensitive information (PII, PHI, PCI, intellectual property) can inadvertently be exposed through AI training data, model outputs, prompt injection, and RAG pipelines.
Why many organizations find themselves stuck in an "AI circularity" — wanting to leverage AI but unable to do so because of the complexity of securing critical business data throughout the AI lifecycle.
The importance of embedding security controls inline within the AI pipeline — from data ingestion and model training to orchestration and output — rather than bolting security on after the fact.
How data protection techniques such as tokenization, anonymization, dynamic masking, and format-preserving encryption can enable organizations to use realistic, context-rich data for AI while maintaining compliance and reducing risk (a toy sketch of these ideas follows this entry).
The challenge of securing agentic AI workflows, where autonomous agents continuously interact with enterprise data, making traditional access control models insufficient.
How organizations can balance the need for AI innovation and data utility with regulatory compliance requirements across frameworks like GDPR, HIPAA, PCI DSS, and emerging AI-specific regulations.
James's perspective on how security, risk, and compliance functions need to evolve to keep pace with the rapid productionization of AI across the enterprise.
The role of semantic guardrails in governing AI inputs and outputs, ensuring that protection is applied contextually based on how data is being used — not just where it resides.

About the Guest
James Rice is VP of Product Marketing and Strategy at Protegrity, a global leader in data-centric security. He brings over 20 years of experience in security, risk, and compliance, having provided solution engineering, value engineering, and implementation services to Fortune 1000 organizations across industries. Prior to Protegrity, James held leadership roles at Pathlock (formerly Greenlight Technologies), Accenture, and PricewaterhouseCoopers.

About Protegrity
Protegrity is a data-centric security platform that protects sensitive data across hybrid, multi-cloud, and AI environments. Their approach embeds security directly into the data itself — enabling enterprises to unlock insights, accelerate innovation, and meet global compliance with confidence. Protegrity's solutions include data discovery and classification, tokenization, anonymization, dynamic masking, and semantic guardrails for AI and analytics workflows.

Learn more at protegrity.com
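The toy sketch referenced in the topic list above: a minimal illustration of tokenization (swap a sensitive value for a reversible, format-preserving surrogate held in a vault) and dynamic masking (reveal only what a given role needs). This is not Protegrity's API; the vault, format, and roles are invented for the example.

```python
# Toy data-centric protection: tokenization + dynamic masking. All data and roles are invented.
import secrets

VAULT: dict[str, str] = {}  # token -> original value; in practice, a hardened token vault

def tokenize_ssn(ssn: str) -> str:
    """Replace an SSN with a format-preserving surrogate so downstream code still 'fits'."""
    token = f"{secrets.randbelow(900) + 100}-{secrets.randbelow(90) + 10}-{secrets.randbelow(9000) + 1000}"
    VAULT[token] = ssn
    return token

def detokenize(token: str) -> str:
    return VAULT[token]

def mask_for_role(ssn: str, role: str) -> str:
    """Dynamic masking: analysts see only the last four digits; auditors see everything."""
    if role == "auditor":
        return ssn
    return "***-**-" + ssn[-4:]

record = {"name": "Jane Doe", "ssn": "123-45-6789"}
record["ssn"] = tokenize_ssn(record["ssn"])  # now safe to hand to an AI/analytics pipeline
print(record)
print(mask_for_role(detokenize(record["ssn"]), role="analyst"))  # ***-**-6789
```

The point of the pattern is that protection travels with the data itself: the AI pipeline only ever sees surrogates, and re-identification is a separate, auditable step.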
From Palantir and Two Sigma to building Goodfire into the poster-child for actionable mechanistic interpretability, Mark Bissell (Member of Technical Staff) and Myra Deng (Head of Product) are trying to turn “peeking inside the model” into a repeatable production workflow by shipping APIs, landing real enterprise deployments, and now scaling the bet with a recent $150M Series B funding round at a $1.25B valuation.

In this episode, we go far beyond the usual “SAEs are cool” take. We talk about Goodfire's core bet: that the AI lifecycle is still fundamentally broken because the only reliable control we have is data and we post-train, RLHF, and fine-tune by “slurping supervision through a straw,” hoping the model picks up the right behaviors while quietly absorbing the wrong ones. Goodfire's answer is to build a bi-directional interface between humans and models: read what's happening inside, edit it surgically, and eventually use interpretability during training so customization isn't just brute-force guesswork.

Mark and Myra walk through what that looks like when you stop treating interpretability like a lab demo and start treating it like infrastructure: lightweight probes that add near-zero latency, token-level safety filters that can run at inference time, and interpretability workflows that survive messy constraints (multilingual inputs, synthetic→real transfer, regulated domains, no access to sensitive data). We also get a live window into what “frontier-scale interp” means operationally (i.e. steering a trillion-parameter model in real time by targeting internal features) plus why the same tooling generalizes cleanly from language models to genomics, medical imaging, and “pixel-space” world models.

We discuss:
* Myra + Mark's path: Palantir (health systems, forward-deployed engineering) → Goodfire early team; Two Sigma → Head of Product, translating frontier interpretability research into a platform and real-world deployments
* What “interpretability” actually means in practice: not just post-hoc poking, but a broader “science of deep learning” approach across the full AI lifecycle (data curation → post-training → internal representations → model design)
* Why post-training is the first big wedge: “surgical edits” for unintended behaviors like reward hacking, sycophancy, and noise learned during customization, plus the dream of targeted unlearning and bias removal without wrecking capabilities
* SAEs vs probes in the real world: why SAE feature spaces sometimes underperform classifiers trained on raw activations for downstream detection tasks (hallucination, harmful intent, PII), and what that implies about “clean concept spaces”
* Rakuten in production: deploying interpretability-based token-level PII detection at inference time to prevent routing private data to downstream providers, plus the gnarly constraints: no training on real customer PII, synthetic→real transfer, English + Japanese, and tokenization quirks
* Why interp can be operationally cheaper than LLM-judge guardrails: probes are lightweight, low-latency, and don't require hosting a second large model in the loop
* Real-time steering at frontier scale: a demo of steering Kimi K2 (~1T params) live and finding features via SAE pipelines, auto-labeling via LLMs, and toggling a “Gen-Z slang” feature across multiple layers without breaking tool use
* Hallucinations as an internal signal: the case that models have latent uncertainty / “user-pleasing” circuitry you can detect and potentially mitigate more directly than black-box methods
* Steering vs prompting: the emerging view that activation steering and in-context learning are more closely connected than people think, including work mapping between the two (even for jailbreak-style behaviors)
* Interpretability for science: using the same tooling across domains (genomics, medical imaging, materials) to debug spurious correlations and extract new knowledge, up to and including early biomarker discovery work with major partners
* World models + “pixel-space” interpretability: why vision/video models make concepts easier to see, how that accelerates the feedback loop, and why robotics/world-model partners are especially interesting design partners
* The north star: moving from “data in, weights out” to intentional model design where experts can impart goals and constraints directly, not just via reward signals and brute-force post-training

Goodfire AI
* Website: https://goodfire.ai
* LinkedIn: https://www.linkedin.com/company/goodfire-ai/
* X: https://x.com/GoodfireAI

Myra Deng
* Website: https://myradeng.com/
* LinkedIn: https://www.linkedin.com/in/myra-deng/
* X: https://x.com/myra_deng

Mark Bissell
* LinkedIn: https://www.linkedin.com/in/mark-bissell/
* X: https://x.com/MarkMBissell

Full Video Episode

Timestamps
00:00:00 Introduction
00:00:05 Introduction to the Latent Space Podcast and Guests from Goodfire
00:00:29 What is Goodfire? Mission and Focus on Interpretability
00:01:01 Goodfire's Practical Approach to Interpretability
00:01:37 Goodfire's Series B Fundraise Announcement
00:02:04 Backgrounds of Mark and Myra from Goodfire
00:02:51 Team Structure and Roles at Goodfire
00:05:13 What is Interpretability? Definitions and Techniques
00:07:29 Post-training vs. Pre-training Interpretability Applications
00:08:51 Using Interpretability to Remove Unwanted Behaviors
00:10:09 Grokking, Double Descent, and Generalization in Models
00:12:06 Subliminal Learning and Hidden Biases in Models
00:14:07 How Goodfire Chooses Research Directions and Projects
00:16:04 Limitations of SAEs and Probes in Interpretability
00:18:14 Rakuten Case Study: Production Deployment of Interpretability
00:21:12 Efficiency Benefits of Interpretability Techniques
00:21:26 Live Demo: Real-Time Steering in a Trillion Parameter Model
00:25:15 How Steering Features are Identified and Labeled
00:26:51 Detecting and Mitigating Hallucinations Using Interpretability
00:31:20 Equivalence of Activation Steering and Prompting
00:34:06 Comparing Steering with Fine-Tuning and LoRA Techniques
00:36:04 Model Design and the Future of Intentional AI Development
00:38:09 Getting Started in Mechinterp: Resources, Programs, and Open Problems
00:40:51 Industry Applications and the Rise of Mechinterp in Practice
00:41:39 Interpretability for Code Models and Real-World Usage
00:43:07 Making Steering Useful for More Than Stylistic Edits
00:46:17 Applying Interpretability to Healthcare and Scientific Discovery
00:49:15 Why Interpretability is Crucial in High-Stakes Domains like Healthcare
00:52:03 Call for Design Partners Across Domains
00:54:18 Interest in World Models and Visual Interpretability
00:57:22 Sci-Fi Inspiration: Ted Chiang and Interpretability
01:00:14 Interpretability, Safety, and Alignment Perspectives
01:04:27 Weak-to-Strong Generalization and Future Alignment Challenges
01:05:38 Final Thoughts and Hiring/Collaboration Opportunities at Goodfire

Transcript

Shawn Wang [00:00:05]: So welcome to the Latent Space pod.
We're back in the studio with our special MechInterp co-host, Vibhu. Welcome. Mochi, Mochi's special co-host. And Mochi, the mechanistic interpretability doggo. We have with us Mark and Myra from Goodfire. Welcome. Thanks for having us on. Maybe we can sort of introduce Goodfire and then introduce you guys. How do you introduce Goodfire today?
Myra Deng [00:00:29]: Yeah, it's a great question. So Goodfire, we like to say, is an AI research lab that focuses on using interpretability to understand, learn from, and design AI models. And we really believe that interpretability will unlock the new generation, next frontier of safe and powerful AI models. That's our description right now, and I'm excited to dive more into the work we're doing to make that happen.
Shawn Wang [00:00:55]: Yeah. And there's always like the official description. Is there an understatement? Is there an unofficial one that sort of resonates more with a different audience?
Mark Bissell [00:01:01]: Well, being an AI research lab that's focused on interpretability, there's obviously a lot of people have a lot that they think about when they think of interpretability. And I think we have a pretty broad definition of what that means and the types of places that can be applied. And in particular, applying it in production scenarios, in high stakes industries, and really taking it sort of from the research world into the real world. Which, you know. It's a new field, so that hasn't been done all that much. And we're excited about actually seeing that sort of put into practice.
Shawn Wang [00:01:37]: Yeah, I would say it wasn't too long ago that Anthropic was like still putting out like toy models of superposition and that kind of stuff. And I wouldn't have pegged it to be this far along. When you and I talked at NeurIPS, you were talking a little bit about your production use cases and your customers. And then not to bury the lead, today we're also announcing the fundraise, your Series B. $150 million. $150 million at a 1.25B valuation. Congrats, Unicorn.
Mark Bissell [00:02:02]: Thank you. Yeah, no, things move fast.
Shawn Wang [00:02:04]: We were talking to you in December and already some big updates since then. Let's dive, I guess, into a bit of your backgrounds as well. Mark, you were at Palantir working on health stuff, which is really interesting because the Goodfire has some interesting like health use cases. I don't know how related they are in practice.
Mark Bissell [00:02:22]: Yeah, not super related, but I don't know. It was helpful context to know what it's like. Just to work. Just to work with health systems and generally in that domain. Yeah.
Shawn Wang [00:02:32]: And Myra, you were at Two Sigma, which actually I was also at Two Sigma back in the day. Wow, nice.
Myra Deng [00:02:37]: Did we overlap at all?
Shawn Wang [00:02:38]: No, this is when I was briefly a software engineer before I became a sort of developer relations person. And now you're head of product. What are your sort of respective roles, just to introduce people to like what all gets done in Goodfire?
Mark Bissell [00:02:51]: Yeah, prior to Goodfire, I was at Palantir for about three years as a forward deployed engineer, now a hot term. Wasn't always that way. And as a technical lead on the health care team and at Goodfire, I'm a member of the technical staff. And honestly, that I think is about as specific as like as as I could describe myself because I've worked on a range of things.
And, you know, it's it's a fun time to be at a team that's still reasonably small. I think when I joined one of the first like ten employees, now we're above 40, but still, it looks like there's always a mix of research and engineering and product and all of the above. That needs to get done. And I think everyone across the team is, you know, pretty, pretty switch hitter in the roles they do. So I think you've seen some of the stuff that I worked on related to image models, which was sort of like a research demo. More recently, I've been working on our scientific discovery team with some of our life sciences partners, but then also building out our core platform for more of like flexing some of the kind of MLE and developer skills as well.
Shawn Wang [00:03:53]: Very generalist. And you also had like a very like a founding engineer type role.
Myra Deng [00:03:58]: Yeah, yeah.
Shawn Wang [00:03:59]: So I also started as I still am a member of technical staff, did a wide range of things from the very beginning, including like finding our office space and all of this, which is we both we both visited when you had that open house thing. It was really nice.
Myra Deng [00:04:13]: Thank you. Thank you. Yeah. Plug to come visit our office.
Shawn Wang [00:04:15]: It looked like it was like 200 people. It has room for 200 people. But you guys are like 10.
Myra Deng [00:04:22]: For a while, it was very empty. But yeah, like like Mark, I spend. A lot of my time as as head of product, I think product is a bit of a weird role these days, but a lot of it is thinking about how do we take our frontier research and really apply it to the most important real world problems and how does that then translate into a platform that's repeatable or a product and working across, you know, the engineering and research teams to make that happen and also communicating to the world? Like, what is interpretability? What is it used for? What is it good for? Why is it so important? All of these things are part of my day-to-day as well.
Shawn Wang [00:05:01]: I love like what is things because that's a very crisp like starting point for people like coming to a field. They all do a fun thing. Vibhu, why don't you want to try tackling what is interpretability and then they can correct us.
Vibhu Sapra [00:05:13]: Okay, great. So I think like one, just to kick off, it's a very interesting role to be head of product, right? Because you guys, at least as a lab, you're more of an applied interp lab, right? Which is pretty different than just normal interp, like a lot of background research. But yeah. You guys actually ship an API to try these things. You have Ember, you have products around it, which not many do. Okay. What is interp? So basically you're trying to have an understanding of what's going on in model, like in the model, in the internal. So different approaches to do that. You can do probing, SAEs, transcoders, all this stuff. But basically you have an, you have a hypothesis. You have something that you want to learn about what's happening in a model internals. And then you're trying to solve that from there. You can do stuff like you can, you know, you can do activation mapping. You can try to do steering. There's a lot of stuff that you can do, but the key question is, you know, from input to output, we want to have a better understanding of what's happening and, you know, how can we, how can we adjust what's happening on the model internals? How'd I do?
Mark Bissell [00:06:12]: That was really good. I think that was great.
I think it's also a, it's kind of a minefield of a, if you ask 50 people who quote unquote work in interp, like what is interpretability, you'll probably get 50 different answers. And. Yeah. To some extent also like where, where Goodfire sits in the space. I think that we're an AI research company above all else. And interpretability is a, is a set of methods that we think are really useful and worth kind of specializing in, in order to accomplish the goals we want to accomplish. But I think we also sort of see some of the goals as even more broader as, as almost like the science of deep learning and just taking a not black box approach to kind of any part of the like AI development life cycle, whether that. That means using interp for like data curation while you're training your model or for understanding what happened during post-training or for the, you know, understanding activations and sort of internal representations, what is in there semantically. And then a lot of sort of exciting updates that were, you know, are sort of also part of the, the fundraise around bringing interpretability to training, which I don't think has been done all that much before. A lot of this stuff is sort of post-hoc poking at models as opposed to. To actually using this to intentionally design them.
Shawn Wang [00:07:29]: Is this post-training or pre-training or is that not a useful.
Myra Deng [00:07:33]: Currently focused on post-training, but there's no reason the techniques wouldn't also work in pre-training.
Shawn Wang [00:07:38]: Yeah. It seems like it would be more active, applicable post-training because basically I'm thinking like rollouts or like, you know, having different variations of a model that you can tweak with the, with your steering. Yeah.
Myra Deng [00:07:50]: And I think in a lot of the news that you've seen in, in, on like Twitter or whatever, you've seen a lot of unintended. Side effects come out of post-training processes, you know, overly sycophantic models or models that exhibit strange reward hacking behavior. I think these are like extreme examples. There's also, you know, very, uh, mundane, more mundane, like enterprise use cases where, you know, they try to customize or post-train a model to do something and it learns some noise or it doesn't appropriately learn the target task. And a big question that we've always had is like, how do you use your understanding of what the model knows and what it's doing to actually guide the learning process?
Shawn Wang [00:08:26]: Yeah, I mean, uh, you know, just to anchor this for people, uh, one of the biggest controversies of last year was 4o GlazeGate. I've never heard of GlazeGate. I didn't know that was what it was called. The other one, they called it that on the blog post and I was like, well, how did OpenAI call it? Like officially use that term. And I'm like, that's funny, but like, yeah, I guess it's the pitch that if they had worked with Goodfire, they would have avoided it. Like, you know what I'm saying?
Myra Deng [00:08:51]: I think so. Yeah. Yeah.
Mark Bissell [00:08:53]: I think that's certainly one of the use cases. I think. Yeah. Yeah. I think the reason why post-training is a place where this makes a lot of sense is a lot of what we're talking about is surgical edits. You know, you want to be able to have expert feedback, very surgically change how your model is doing, whether that is, you know, removing a certain behavior that it has.
So, you know, one of the things that we've been looking at or is, is another like common area where you would want to make a somewhat surgical edit is some of the models that have say political bias. Like you look at Qwen or, um, R1 and they have sort of like this CCP bias.
Shawn Wang [00:09:27]: Is there a CCP vector?
Mark Bissell [00:09:29]: Well, there's, there are certainly internal, yeah. Parts of the representation space where you can sort of see where that lives. Yeah. Um, and you want to kind of, you know, extract that piece out.
Shawn Wang [00:09:40]: Well, I always say, you know, whenever you find a vector, a fun exercise is just like, make it very negative to see what the opposite of CCP is.
Mark Bissell [00:09:47]: The super America, bald eagles flying everywhere. But yeah. So in general, like lots of post-training tasks where you'd want to be able to, to do that. Whether it's unlearning a certain behavior or, you know, some of the other kind of cases where this comes up is, are you familiar with like the, the grokking behavior? I mean, I know the machine learning term of grokking.
Shawn Wang [00:10:09]: Yeah.
Mark Bissell [00:10:09]: Sort of this like double descent idea of, of having a model that is able to learn a generalizing, a generalizing solution, as opposed to even if memorization of some task would suffice, you want it to learn the more general way of doing a thing. And so, you know, another. A way that you can think about having surgical access to a model's internals would be learn from this data, but learn in the right way. If there are many possible, you know, ways to, to do that. Can interp solve the double descent problem?
Shawn Wang [00:10:41]: Depends, I guess, on how you. Okay. So I, I, I viewed that double descent as a problem because then you're like, well, if the loss curves level out, then you're done, but maybe you're not done. Right. Right. But like, if you actually can interpret what is a generalizing or what you're doing. What is, what is still changing, even though the loss is not changing, then maybe you, you can actually not view it as a double descent problem. And actually you're just sort of translating the space in which you view loss and like, and then you have a smooth curve. Yeah.
Mark Bissell [00:11:11]: I think that's certainly like the domain of, of problems that we're, that we're looking to get.
Shawn Wang [00:11:15]: Yeah. To me, like double descent is like the biggest thing to like ML research where like, if you believe in scaling, then you don't need, you need to know where to scale. And. But if you believe in double descent, then you don't, you don't believe in anything where like anything levels off, like.
Vibhu Sapra [00:11:30]: I mean, also tangentially there's like, okay, when you talk about the China vector, right. There's the subliminal learning work. It was from the Anthropic fellows program where basically you can have hidden biases in a model. And as you distill down or, you know, as you train on distilled data, those biases always show up, even if like you explicitly try to not train on them. So, you know, it's just like another use case of. Okay. If we can interpret what's happening in post-training, you know, can we clear some of this? Can we even determine what's there? Because yeah, it's just like some worrying research that's out there that shows, you know, we really don't know what's going on.
Mark Bissell [00:12:06]: That is. Yeah. I think that's the biggest sentiment that we're sort of hoping to tackle.
Nobody knows what's going on. Right. Like subliminal learning is just an insane concept when you think about it. Right. Train a model on not even the logits, literally the output text of a bunch of random numbers. And now your model loves owls. And you see behaviors like that, that are just, they defy, they defy intuition. And, and there are mathematical explanations that you can get into, but. I mean.
Shawn Wang [00:12:34]: It feels so early days. Objectively, there are a sequence of numbers that are more owl-like than others. There, there should be.
Mark Bissell [00:12:40]: According to, according to certain models. Right. It's interesting. I think it only applies to models that were initialized from the same starting Z. Usually, yes.
Shawn Wang [00:12:49]: But I mean, I think that's a, that's a cheat code because there's not enough compute. But like if you believe in like platonic representation, like probably it will transfer across different models as well. Oh, you think so?
Mark Bissell [00:13:00]: I think of it more as a statistical artifact of models initialized from the same seed sort of. There's something that is like path dependent from that seed that might cause certain overlaps in the latent space and then sort of doing this distillation. Yeah. Like it pushes it towards having certain other tendencies.
Vibhu Sapra [00:13:24]: Got it. I think there's like a bunch of these open-ended questions, right? Like you can't train in new stuff during the RL phase, right? RL only reorganizes weights and you can only do stuff that's somewhat there in your base model. You're not learning new stuff. You're just reordering chains and stuff. But okay. My broader question is when you guys work at an interp lab, how do you decide what to work on and what's kind of the thought process? Right. Because we can ramble for hours. Okay. I want to know this. I want to know that. But like, how do you concretely like, you know, what's the workflow? Okay. There's like approaches towards solving a problem, right? I can try prompting. I can look at chain of thought. I can train probes, SAEs. But how do you determine, you know, like, okay, is this going anywhere? Like, do we have set stuff? Just, you know, if you can help me with all that. Yeah.
Myra Deng [00:14:07]: It's a really good question. I feel like we've always at the very beginning of the company thought about like, let's go and try to learn what isn't working in machine learning today. Whether that's talking to customers or talking to researchers at other labs, trying to understand both where the frontier is going and where things are really not falling apart today. And then developing a perspective on how we can push the frontier using interpretability methods. And so, you know, even our chief scientist, Tom, spends a lot of time talking to customers and trying to understand what real world problems are and then taking that back and trying to apply the current state of the art to those problems and then seeing where they fall down basically. And then using those failures or those shortcomings to understand what hills to climb when it comes to interpretability research. So like on the fundamental side, for instance, when we have done some work applying SAEs and probes, we've encountered, you know, some shortcomings in SAEs that we found a little bit surprising. And so have gone back to the drawing board and done work on that. And then, you know, we've done some work on better foundational interpreter models.
And a lot of our team's research is focused on what is the next evolution beyond SAEs, for instance. And then when it comes to like control and design of models, you know, we tried steering with our first API and realized that it still fell short of black box techniques like prompting or fine tuning. And so went back to the drawing board and we're like, how do we make that not the case and how do we improve it beyond that? And one of our researchers, Ekdeep, who just joined is actually Ekdeep and Atticus are like steering experts and have spent a lot of time trying to figure out like, what is the research that enables us to actually do this in a much more powerful, robust way? So yeah, the answer is like, look at real world problems, try to translate that into a research agenda and then like hill climb on both of those at the same time.
Shawn Wang [00:16:04]: Yeah. Mark has the steering CLI demo queued up, which we're going to go into in a sec. But I always want to double click on when you drop hints, like we found some problems with SAEs. Okay. What are they? You know, and then we can go into the demo. Yeah.
Myra Deng [00:16:19]: I mean, I'm curious if you have more thoughts here as well, because you've done it in the healthcare domain. But I think like, for instance, when we do things like trying to detect behaviors within models that are harmful or like behaviors that a user might not want to have in their model. So hallucinations, for instance, harmful intent, PII, all of these things. We first tried using SAE probes for a lot of these tasks. So taking the feature activation space from SAEs and then training classifiers on top of that, and then seeing how well we can detect the properties that we might want to detect in model behavior. And we've seen in many cases that probes just trained on raw activations seem to perform better than SAE probes, which is a bit surprising if you think that SAEs are actually also capturing the concepts that you would want to capture cleanly and more surgically. And so that is an interesting observation. I don't think that is like, I'm not down on SAEs at all. I think there are many, many things they're useful for, but we have definitely run into cases where I think the concept space described by SAEs is not as clean and accurate as we would expect it to be for actual like real world downstream performance metrics.
Mark Bissell [00:17:34]: Fair enough. Yeah. It's the blessing and the curse of unsupervised methods where you get to peek into the AI's mind. But sometimes you wish that you saw other things when you walked inside there. Although in the PII instance, I think, wasn't it an SAE-based approach that actually did prove to be the most generalizable?
Myra Deng [00:17:53]: It did work well in the case that we published with Rakuten. And I think a lot of the reasons it worked well was because we had a noisier data set. And so actually the blessing of unsupervised learning is that we actually got to get more meaningful, generalizable signal from SAEs when the data was noisy. But in other cases where we've had like good data sets, it hasn't been the case.
Shawn Wang [00:18:14]: And just because you named Rakuten and I don't know if we'll get it another chance, like what is the overall, like what is Rakuten's usage or production usage?
Yeah.
Myra Deng [00:18:25]: So they are using us to essentially guardrail and inference time monitor their language model usage and their agent usage to detect things like PII so that they don't route private user information.
Myra Deng [00:18:41]: And so that's, you know, going through all of their user queries every day. And that's something that we deployed with them a few months ago. And now we are actually exploring very early partnerships, not just with Rakuten, but with other people around how we can help with potentially training and customization use cases as well. Yeah.
Shawn Wang [00:19:03]: And for those who don't know, like it's Rakuten is like, I think number one or number two e-commerce store in Japan. Yes. Yeah.
Mark Bissell [00:19:10]: And I think that use case actually highlights a lot of like what it looks like to deploy things in practice that you don't always think about when you're doing sort of research tasks. So when you think about some of the stuff that came up there that's more complex than your idealized version of a problem, they were encountering things like synthetic to real transfer of methods. So they couldn't train probes, classifiers, things like that on actual customer data of PII. So what they had to do is use synthetic data sets. And then hope that that transfer is out of domain to real data sets. And so we can evaluate performance on the real data sets, but not train on customer PII. So that right off the bat is like a big challenge. You have multilingual requirements. So this needed to work for both English and Japanese text. Japanese text has all sorts of quirks, including tokenization behaviors that caused lots of bugs that caused us to be pulling our hair out. And then also a lot of tasks you'll see. You might make simplifying assumptions if you're sort of treating it as like the easiest version of the problem to just sort of get like general results where maybe you say you're classifying a sentence to say, does this contain PII? But the need that Rakuten had was token level classification so that you could precisely scrub out the PII. So as we learned more about the problem, you're sort of speaking about what that looks like in practice. Yeah. A lot of assumptions end up breaking. And that was just one instance where you. A problem that seems simple right off the bat ends up being more complex as you keep diving into it.
Vibhu Sapra [00:20:41]: Excellent. One of the things that's also interesting with Interp is a lot of these methods are very efficient, right? So where you're just looking at a model's internals itself compared to a separate like guardrail, LLM as a judge, a separate model. One, you have to host it. Two, there's like a whole latency. So if you use like a big model, you have a second call. Some of the work around like self detection of hallucination, it's also deployed for efficiency, right? So if you have someone like Rakuten doing it in production live, you know, that's just another thing people should consider.
Mark Bissell [00:21:12]: Yeah. And something like a probe is super lightweight. Yeah. It's no extra latency really. Excellent.
Shawn Wang [00:21:17]: You have the steering demos lined up. So we were just kind of see what you got. I don't, I don't actually know if this is like the latest, latest or like alpha thing.
Mark Bissell [00:21:26]: No, this is a pretty hacky demo from a presentation that someone else on the team recently gave. So this will give a sense for, for technology. So you can see the steering in action.
Honestly, I think the biggest thing that this highlights is that as we've been growing as a company and taking on kind of more and more ambitious versions of interpretability related problems, a lot of that comes to scaling up in various different forms. And so here you're going to see steering on a 1 trillion parameter model. This is Kimi K2. And so it's sort of fun that in addition to the research challenges, there are engineering challenges that we're now tackling. Cause for any of this to be sort of useful in production, you need to be thinking about what it looks like when you're using these methods on frontier models as opposed to sort of like toy kind of model organisms. So yeah, this was thrown together hastily, pretty fragile behind the scenes, but I think it's quite a fun demo. So screen sharing is on. So I've got two terminal sessions pulled up here. On the left is a forked version that we have of the Kimi CLI that we've got running to point at our custom hosted Kimi model. And then on the right is a set up that will allow us to steer on certain concepts. So I should be able to chat with Kimi over here. Tell it hello. This is running locally. So the CLI is running locally, but the Kimi server is running back to the office. Well, hopefully should be, um, that's too much to run on that Mac. Yeah. I think it's, uh, it takes a full, like, H100 node. I think it's like, you can. You can run it on eight GPUs, eight H100s. So, so yeah, Kimi's running. We can ask it a prompt. It's got a forked version of our, uh, of the SGLang code base that we've been working on. So I'm going to tell it, Hey, this SGLang code base is slow. I think there's a bug. Can you try to figure it out? There's a big code base, so it'll, it'll spend some time doing this. And then on the right here, I'm going to initialize in real time. Some steering. Let's see here.
Mark Bissell [00:23:33]: searching for any. Bugs. Feature ID 43205.
Shawn Wang [00:23:38]: Yeah.
Mark Bissell [00:23:38]: 20, 30, 40. So let me, uh, this is basically a feature that we found that inside Kimi seems to cause it to speak in Gen Z slang. And so on the left, it's still sort of thinking normally it might take, I don't know, 15 seconds for this to kick in, but then we're going to start hopefully seeing him do this code base is massive for real. So we're going to start. We're going to start seeing Kimi transition as the steering kicks in from normal Kimi to Gen Z Kimi and both in its chain of thought and its actual outputs.
Mark Bissell [00:24:19]: And interestingly, you can see, you know, it's still able to call tools, uh, and stuff. It's um, it's purely sort of it's it's demeanor. And there are other features that we found for interesting things like concision. So that's more of a practical one. You can make it more concise. Um, the types of programs, uh, programming languages that uses, but yeah, as we're seeing it come in. Pretty good. Outputs.
Shawn Wang [00:24:43]: Scheduler code is actually wild.
Vibhu Sapra [00:24:46]: Yo, this code is actually insane, bro.
Vibhu Sapra [00:24:53]: What's the process of training an SAE on this, or, you know, how do you label features? I know you guys put out a pretty cool blog post about, um, finding this like autonomous interp. Um, something. Something about how agents for interp is different than like coding agents. I don't know while this is spewing up, but how, how do we find feature 43, two Oh five.
Yeah.
Mark Bissell [00:25:15]: So in this case, um, we, our platform that we've been building out for a long time now supports all the sort of classic out of the box interp techniques that you might want to have like SAE training, probing things of that kind, I'd say the techniques for like vanilla SAEs are pretty well established now where. You take your model that you're interpreting, run a whole bunch of data through it, gather activations, and then yeah, pretty straightforward pipeline to train an SAE. There are a lot of different varieties. There's top-k SAEs, batch top-k SAEs, um, normal ReLU SAEs. And then once you have your sparse features to your point, assigning labels to them to actually understand that this is a gen Z feature, that's actually where a lot of the kind of magic happens. Yeah. And the most basic standard technique is look at all of your input data set examples that cause this feature to fire most highly. And then you can usually pick out a pattern. So for this feature, If I've run a diverse enough data set through my model feature 43, two Oh five. Probably tends to fire on all the tokens that sounds like gen Z slang. You know, that's the, that's the time of year to be like, Oh, I'm in this, I'm in this Um, and, um, so, you know, you could have a human go through all 43,000 concepts and
Vibhu Sapra [00:26:34]: And I've got to ask the basic question, you know, can we get examples where it hallucinates, pass it through, see what feature activates for hallucinations? Can I just, you know, turn hallucination down?
Myra Deng [00:26:51]: Oh, wow. You really predicted a project we're already working on right now, which is detecting hallucinations using interpretability techniques. And this is interesting because hallucinations is something that's very hard to detect. And it's like a kind of a hairy problem and something that black box methods really struggle with. Whereas like Gen Z, you could always train a simple classifier to detect that. Hallucinations is harder. But we've seen that models internally have some... Awareness of like uncertainty or some sort of like user pleasing behavior that leads to hallucinatory behavior. And so, yeah, we have a project that's trying to detect that accurately. And then also working on mitigating the hallucinatory behavior in the model itself as well.
Shawn Wang [00:27:39]: Yeah, I would say most people are still at the level of like, oh, I would just turn temperature to zero and that turns off hallucination. And I'm like, well, that's a fundamental misunderstanding of how this works. Yeah.
Mark Bissell [00:27:51]: Although, so part of what I like about that question is you, there are SAE based approaches that might like help you get at that. But oftentimes the beauty of SAEs and like we said, the curse is that they're unsupervised. So when you have a behavior that you deliberately would like to remove, and that's more of like a supervised task, often it is better to use something like probes and specifically target the thing that you're interested in reducing as opposed to sort of like hoping that when you fragment the latent space, one of the vectors that pops out.
Vibhu Sapra [00:28:20]: And as much as we're training an autoencoder to be sparse, we're not like for sure certain that, you know, we will get something that just correlates to hallucination. You'll probably split that up into 20 other things and who knows what they'll be.
Mark Bissell [00:28:36]: Of course. Right. Yeah.
So there are known problems like feature splitting and feature absorption. And then there are the off-target effects, right? Ideally you'd want to be very precise: if you reduce the hallucination feature, maybe suddenly your model can't write creatively anymore, and you don't want that; you want to stop it from hallucinating facts and figures while keeping everything else.
Shawn Wang [00:28:55]: Good. So Vibhu has a paper to recommend there that we'll put in the show notes. But yeah, since your demo is done, are there any other things you want to highlight or any other interesting features you want to show?
Mark Bissell [00:29:07]: I don't think so. Like I said, this is a pretty small snippet. The main point here that I think is exciting is that there's not a whole lot of interp being applied to models at quite this scale. Anthropic certainly has some research, and other teams as well, but it's nice to see these techniques being put into practice. Not that long ago, the idea of real-time steering of a trillion-parameter model would have sounded...
Shawn Wang [00:29:33]: Yeah. The fact that it's real time: you started the thing and then you edited the steering vector.
Vibhu Sapra [00:29:38]: I think it's an interesting one. TBD what the actual production use case is for the real-time editing; that's the fun part of the demo, right? You can kind of see how this could be served behind an API: you only have so many knobs, and you can tweak them a bit more. And I don't know how it plays in. People haven't done that much on how this works with or without prompting, or how it works with fine-tuning. There's a whole wave of hype around continual learning, so there's just so much to see. Is this another parameter that we mostly leave at a default and don't use? I don't know. Maybe someone here wants to put out a guide on how to use this with prompting, and when to do what.
Mark Bissell [00:30:18]: Oh, well, I have a paper recommendation I think you would love, from Ekdeep on our team, who is an amazing researcher; I can't say enough good things about him. He has a paper, along with some others from the team and elsewhere, that goes into the essential equivalence of activation steering and in-context learning. He thinks of everything in a cognitive-neuroscience, Bayesian framework, but basically you can precisely show how prompting, in-context learning, and steering exhibit similar behaviors, and even get quantitative about the magnitude of steering you would need to induce a certain amount of behavior, similar to certain prompting, even for things like jailbreaks. It's a really cool paper. Are you saying steering is less powerful than prompting? More like you can almost write a formula that tells you how to convert between the two of them.
Myra Deng [00:31:20]: So they're formally equivalent, actually, in the limit. Right.
Mark Bissell [00:31:24]: So one case study of this is jailbreaks. Have you seen the stuff where you can do many-shot jailbreaking? You flood the context with examples of the behavior.
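A minimal sketch of the supervised-probe approach Mark recommends above, applied to a labeled behavior such as hallucination. The activations, labels, layer choice, and dimensions are synthetic placeholders; the point is only that the probe is trained against labels for the exact behavior you care about, rather than hoping an unsupervised SAE happens to isolate it.

```python
# Minimal linear-probe sketch: supervised detection of a labeled behavior
# (e.g. "this answer was hallucinated") from cached activations. Data here is
# random and the labels are hypothetical; a real run needs held-out evaluation.
import torch
import torch.nn as nn

d_model = 1024
acts = torch.randn(2000, d_model)              # activations at one layer, one row per example
labels = torch.randint(0, 2, (2000,)).float()  # 1 = hallucinated, 0 = grounded (hypothetical)

probe = nn.Linear(d_model, 1)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(probe(acts).squeeze(-1), labels)
    loss.backward()
    opt.step()

# The probe scores new activations at inference time; its weight vector can also
# serve as a targeted direction to dampen, which is the "specifically target the
# thing you're interested in reducing" idea.
scores = torch.sigmoid(probe(acts[:5]).squeeze(-1))
```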
And Anthropic put out that paper.
Shawn Wang [00:31:38]: A lot of people were like, yeah, we've been doing this, guys.
Mark Bissell [00:31:40]: Yeah, and what's in this in-context learning and activation steering equivalence paper is that you can predict the number of examples you'll need to put in the context in order to jailbreak the model, by doing steering experiments and using this equivalence mapping. That's cool. That's really cool. It's very neat. Yeah.
Shawn Wang [00:32:02]: I was going to say, I can back-rationalize that this makes sense, because what context does is basically update the KV cache, and then every next-token inference is still conditioned on the sum of everything that came before, plus all the context to date. And you could, I guess, theoretically replace that with your steering. The only problem is that steering is typically on one layer, maybe three layers like you did, so it's not exactly equivalent.
Mark Bissell [00:32:33]: Right, right. You need to get precise about how you define steering and how you're modeling the setup. But yeah, I've got the paper pulled up here. The title is "Belief Dynamics Reveal the Dual Nature of In-Context Learning and Activation Steering." Eric Bigelow and Daniel Wurgaft, who are doing fellowships at Goodfire, are on it, and Ekdeep is the final author there.
Myra Deng [00:32:59]: I think, actually, to your question of what the production use case of steering is: imagine you think one level beyond steering as it is today. Imagine you could adapt your model to be an expert legal reasoner in almost real time, very quickly and efficiently, using human feedback or using your semantic understanding of what the model knows and where that behavior lives. While it's not clear what the product is at the end of the day, it's clearly very valuable. Thinking about what the next interface for model customization and adaptation looks like is a really interesting problem for us. We've heard from a lot of people who are interested in fine-tuning and RL for open-weight models in production, and people are using things like Tinker or open-source libraries to do that, but it's still very difficult to get models fine-tuned and RL'd to do exactly what you want unless you're an expert at model training. So that's something we're looking into.
Shawn Wang [00:34:06]: Yeah. Tinker from Thinking Machines famously uses rank-one LoRA. Is that basically the same as steering? What's the comparison there?
Mark Bissell [00:34:19]: Well, in that case you are still applying updates to the parameters, right?
Shawn Wang [00:34:25]: Yeah. You're not touching the base model, you're touching an adapter. Kind of, yeah.
Mark Bissell [00:34:30]: Right. But I guess it still is more in parameter space. Maybe it's like: are you modifying the pipes, or are you modifying the water flowing through the pipes, to get what you're after? That's maybe one way to put it.
Mark Bissell [00:34:44]: I like that analogy.
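To make the "pipes versus water" distinction concrete, here is a tiny sketch contrasting a rank-one LoRA update (a change to the weights) with activation steering (a change to the activations at inference time). Shapes and values are illustrative, and this is not how Tinker or Goodfire implement either technique.

```python
# Rank-one LoRA edits the "pipes" (parameters); steering edits the "water"
# (activations). Both are shown on a single linear layer for illustration.
import torch

d_in, d_out = 1024, 1024
W = torch.randn(d_out, d_in)           # frozen base weight matrix
x = torch.randn(d_in)                  # an activation entering this layer

# Rank-one LoRA: learn vectors A (d_in) and B (d_out); the effective weight
# becomes W + B A^T, a change in parameter space once merged.
A = 0.01 * torch.randn(d_in)
B = 0.01 * torch.randn(d_out)
y_lora = (W + torch.outer(B, A)) @ x

# Activation steering: leave W untouched and add a concept direction to the
# output activations at inference time; remove it and the model is unchanged.
steer_dir = torch.randn(d_out)
steer_dir = steer_dir / steer_dir.norm()
y_steered = W @ x + 4.0 * steer_dir    # 4.0 is an arbitrary steering strength
```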
That's my mental map of it, at least, but it gets at this idea of model design and intentional design, which is something we're very focused on. I hope we look back at how we're currently training and post-training models and just think what a primitive way of doing it that was. There's no intentionality, really, in...
Shawn Wang [00:35:06]: It's just data, right? The only thing in control is what data we feed in.
Mark Bissell [00:35:11]: So Dan from Goodfire likes to use this analogy: he has a couple of young kids, and he asks, what if I could only teach my kids how to be good people by giving them cookies, or giving them a slap on the wrist when they do something wrong, without telling them why it was wrong or what they should have done differently? Just figure it out. Right. Exactly. So that's RL. And it's sample-inefficient; what do they say, it's like slurping feedback, slurping supervision. So you'd like to get to the point where experts can give feedback to their models that gets internalized, and steering is an inference-time way of getting at that idea. But ideally you're moving to a world where it is much more intentional design, in perpetuity, for these models.
Vibhu Sapra [00:36:04]: Okay. This is one of the questions we asked Emmanuel from Anthropic on the podcast a few months ago. Basically the question was: you're at a research lab that trains foundation models, and you're on an interp team. How does it tie back? Do ideas come from the pre-training team? Do they go back? For those interested, you can watch that episode. There wasn't too much of a connection there yet, but it's still something they want to push for down the line.
Mark Bissell [00:36:33]: It can be useful for all of the above. There are certainly post-hoc use cases where it doesn't need to touch that.
Vibhu Sapra [00:36:39]: I think the other thing a lot of people forget is that this stuff isn't too computationally expensive, right? I would say that if you're interested in getting into research, mech interp is one of the most approachable fields. A lot of this, train an SAE, train a probe, the budget for it is modest, and there's already a lot done. There's a lot of open-source work; you guys have done some too.
Shawn Wang [00:37:04]: There are notebooks from the Gemini team, from Neel Nanda: this is how you do it, just step through the notebook.
Vibhu Sapra [00:37:09]: Even if you're not that technical with any of this, you can still make progress; you can look at different activations. But if you do want to get into training this stuff, correct me if I'm wrong, it's in the thousands of dollars; it's not that high-scale. And the same with applying it, doing it for post-training: all of this is fairly cheap compared to "I want to get into model training but I don't have compute for pre-training." So it's a very nice field to get into. And there are a lot of open questions. Some of them have to do with "I want a product, I want to solve this," but there's also just a lot of open-ended stuff that people could work on.
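As one concrete example of the low-cost entry points being described here, the sketch below uses the open-source TransformerLens library on a small model. The model choice, layer, and prompt are arbitrary; the cached activations are the raw material for the probes and SAEs discussed above.

```python
# Minimal "getting started" sketch with TransformerLens: run a small open model,
# cache its activations, and pull out the residual stream at one layer. This is
# laptop-scale, the kind of starting point the notebooks mentioned above walk through.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
logits, cache = model.run_with_cache("The scheduler in this codebase is slow because")

resid = cache["blocks.6.hook_resid_post"]  # residual stream after block 6
print(resid.shape)                         # (batch, seq_len, d_model)

# Attention patterns are cached too, e.g. cache["blocks.6.attn.hook_pattern"];
# from here you can train probes or SAEs on these activations.
```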
That's interesting. Right. I don't know if you guys have any calls for open questions or open work, things you'd take open collaboration on or would just like to see solved. For people listening who want to get into mech interp, because people always talk about it, what are the things they should check out? And of course, join you guys as well; I'm sure you're hiring.
Myra Deng [00:38:09]: There's a paper, I think from Lee Sharkey, "Open Problems in Mechanistic Interpretability," which I recommend everyone who's interested in the field read. It's a really comprehensive overview of what experts in the field think are the most important problems to be solved. I also think, to your point, it's been really inspiring to see a lot of young people getting interested in interpretability, and actually not just young people, also scientists who have been experts in physics or biology for many years transitioning into interp, because the barrier to entry is in some ways low and there's a lot of information out there and ways to get started. There's this anecdote of professors at universities saying that all of a sudden every incoming PhD student wants to study interpretability, which was not the case a few years ago. It just goes to show how exciting the field is, how fast it's moving, and how quick it is to get started.
Mark Bissell [00:39:10]: And it's also just a very welcoming community. There's an open-source mech interp Slack where people are always posting questions, and folks in the space are responsive if you ask things on various forums. But yeah, the Open Problems paper is a really good one.
Myra Deng [00:39:28]: For other people who want to get started, MATS is a great program. What's the acronym for? Machine Learning and Alignment Theory Scholars? It's like the...
Vibhu Sapra [00:39:40]: Normally summer-internship style.
Myra Deng [00:39:42]: Yeah, but they've been doing it year-round now. A lot of our full-time staff have come through or gone through that program, and it's great for anyone who is transitioning into interpretability. There are a couple of other fellows programs; we do one, as does Anthropic. Those are great places to get started if anyone is interested.
Mark Bissell [00:40:03]: Also, interp has been seen as a research field for a very long time, but I think engineers are sorely wanted for interpretability as well, especially at Goodfire, but elsewhere too, as it scales up.
Shawn Wang [00:40:18]: I should mention that Lee actually works with you guys, right? In the London office. And I'm adding our first-ever mech interp track at AI Europe, because I see these industry applications now emerging, and I'm pretty excited to help push that along. Yeah, I was looking forward to that. It'll effectively be the first industry mech interp conference. I'm so glad you added that. It's still a little bit of a bet; it's not that widespread, but I can definitely see this is the time to really get into it. We want to be early on things.
Mark Bissell [00:40:51]: For sure. And I think the field understands this, right?
So at ICML, I think the title of the mech interp workshop this year was "Actionable Interpretability," and there was a lot of discussion around bringing it to various domains. Everyone's adding "pragmatic," "actionable," whatever.
Shawn Wang [00:41:10]: It's like, okay, well, we weren't actionable before, I guess. I don't know.
Vibhu Sapra [00:41:13]: And just being in Europe, you see the interp room at the old-school conferences. I think they had a very tiny room until they got lucky and it got doubled. But there's definitely a lot of interest, a lot of niche research, so you see a lot of research coming out of universities and students. We covered a paper last week: two unknown authors, not many citations, but you can do a lot of meaningful work there.
Shawn Wang [00:41:39]: Yeah. I think people haven't really mentioned this yet: interp for code. I think it's an abnormally important area, and we haven't mentioned it yet. The conspiracy theory from two years ago, when the first SAE work came out of Anthropic, was that they would just use SAEs to turn the bad-code vector down and then turn up the good code. And isn't that the dream? But, I guess, why is it funny? If it were realistic, it would not be funny; it would be, no, actually, we should do this. It's funny because we feel there are some limitations to what steering can do. And a lot of the public image of steering is the Gen Z stuff: you can make it really love the Golden Gate Bridge, or you can make it speak like Gen Z. To make it a legal reasoner seems like a huge stretch, and I don't know if it will get there this way.
Myra Deng [00:42:36]: I will say we are announcing something very soon that I won't speak too much about. But this is what we've run into again and again: we don't want to be in a world where steering is only useful for stylistic things. That's definitely not what we're aiming for. But the types of interventions you need to get to things like legal reasoning are much more sophisticated and require breakthroughs in learning algorithms. And that's...
Shawn Wang [00:43:07]: And is this an emergent property of scale as well?
Myra Deng [00:43:10]: I think so, yeah. I mean, scale definitely helps. Scale allows you to learn a lot of information and reduce noise across large amounts of data. But we also think there are ways to do things much more effectively, even at scale: actually learning exactly what you want from the data, and not learning things exhibited in the data that you don't want. So we're not anti-scale, but we're also realizing that scale on its own is not going to get us to the type of AI development we want as these models get more powerful and get deployed in all these mission-critical contexts. The current life cycle of training, deploying, and evaluating models is, to us, deeply broken and has opportunities to improve. So, more to come on that very soon.
Mark Bissell [00:44:02]: And I think that's a use case, basically, or maybe just a proof point that these concepts do exist.
Like, if you can manipulate them in precisely the right way, you can get the ideal combination of them that you desire. Steering is maybe the most coarse-grained peek at what that looks like, but I think it's evocative of what you could do if you had total surgical control over every concept, every parameter. Yeah, exactly.
Myra Deng [00:44:30]: There were, like, bad-code features. I've got it pulled up.
Vibhu Sapra [00:44:33]: Yeah. Just coincidentally, as you guys were talking.
Shawn Wang [00:44:35]: This is, like, exactly it.
Vibhu Sapra [00:44:38]: There's specifically a code-error feature that activates, and they show it's not typo detection: it's typos in code, not typical typos. You can see it clearly activates where there's something wrong in code. And they have malicious code, code error, a whole bunch of broken-down fine-grained sub-features. Yeah.
Shawn Wang [00:45:02]: Yeah. So the rough intuition for me, why I talked about post-training, was that you just have a few different rollouts with all these things turned off and on, and then that's synthetic data you can post-train on.
Vibhu Sapra [00:45:13]: And I think we make it sound easier than it is, just saying it; they do the real hard work.
Myra Deng [00:45:19]: I mean, you guys have the right idea. Exactly. Yeah. We replicated a lot of these features in our Llama models as well. I remember there was, like...
Vibhu Sapra [00:45:26]: And a lot of this stuff is open, right? You guys opened yours, DeepMind has opened a lot of SAEs on Gemma, and even Anthropic has opened a lot of this. There are a lot of resources we can probably share for people who want to get involved.
Shawn Wang [00:45:41]: Yeah. And special shout-out to Neuronpedia as well: an amazing piece of work for visualizing those things.
Myra Deng [00:45:49]: Yeah, exactly.
Shawn Wang [00:45:50]: I wanted to pivot a little onto the healthcare side, because I think that's a big use case for you guys and we haven't really talked about it yet. This is a bit of a crossover for me, because we're starting up a separate science pod for AI for science, just because it's such a huge investment category and I'm less qualified to cover it; we actually have bio PhDs to cover that, which is great. But I need to recap your work, maybe on the Evo 2 stuff, and then build forward.
Mark Bissell [00:46:17]: Yeah, for sure. And maybe to frame up the conversation: I think another interesting lens on interpretability in general is that a lot of the techniques we've described are ways to solve the AI-human interface problem, and bidirectional communication is the goal there. What we've been talking about, with intentional design of models, steering, and also more advanced techniques, is having humans impart our desires and control into and over models. The reverse direction is also very interesting, especially as you get to superhuman models, whether that's narrow superintelligence, like these scientific models that work on genomics data, medical imaging, things like that, or, down the line, superintelligence of other forms as well.
What knowledge can the AIs teach us? That's the other direction. Some of our life-science work to date has been getting at exactly that question. Some of it does look like debugging these various life-sciences models: understanding whether they're actually performing well on tasks, or whether they're picking up on spurious correlations. For instance, with genomics models you'd like to know whether they're focusing on the biologically relevant things you care about, or whether they're using some simpler correlate, like the ancestry of the person they're looking at. But then also, in the instances where they are superhuman, maybe they understand elements of the human genome that we don't have names for, discoveries they've made that we don't know about; surfacing that is a big goal. And we're already seeing it: we're partnered with organizations like Mayo Clinic, a leading research health system in the United States, Arc Institute, as well as a startup called Prima Mente, which focuses on neurodegenerative disease. In our partnership with them, we've taken foundation models they've been training and applied our interpretability techniques to find novel biomarkers for Alzheimer's disease. I think this is just the tip of the iceberg, but that's a flavor of some of the things we're working on.
Shawn Wang [00:48:36]: Yeah, I think that's really fantastic. Obviously, we did the Chan Zuckerberg pod last year as well, and there's a plethora of these models coming out because there's so much potential and research. It's very interesting how it's basically the same as language models, just with a different underlying dataset. It's the same exact techniques; there's no change, basically.
Mark Bissell [00:48:59]: Yeah. And even in other domains, right? In robotics, I know a lot of the companies just use Gemma as the backbone and then make it into a VLA that takes actions. It's transformers all the way down.
Vibhu Sapra [00:49:15]: Like, we have MedGemma now, right? Even this week there was MedGemma 1.5, and they're training it on this stuff: 3D scans, medical domain knowledge, all of that. So there's a push from both sides. But one of the things about mech interp is that you're a little more cautious in some domains, healthcare mainly being one: guardrails, understanding, we're more risk-averse to something going wrong there. So even just from a basic-understanding standpoint, if we're trusting these systems to make claims, we want to know why and what's going on.
Myra Deng [00:49:51]: Yeah, I think there's totally a kind of deployment bottleneck to actually using foundation models for real patient usage and things like that. Say you're using a model for rare-disease prediction: you probably want some explanation as to why your model predicted a certain outcome, and an interpretable explanation at that. So that's definitely a use case. But I also think that being able to extract scientific information that no human knows, to accelerate drug discovery and disease treatment, is a really, really big unlock for science, for scientific discovery. And you've seen a lot of startups say that they're going to accelerate scientific discovery.
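As a toy illustration of the "spurious correlate" check Mark describes above, here is a minimal probe that tests whether a shortcut attribute (ancestry group, in his example) is trivially decodable from a genomics model's embeddings. The data, labels, and dimensions are synthetic placeholders, not any partner's pipeline.

```python
# Toy check for a spurious correlate: if a simple probe can read ancestry group
# straight out of the embeddings, the model may be leaning on that shortcut
# rather than the biology you care about. Everything here is synthetic.
import torch
import torch.nn as nn

n, d_model, n_groups = 500, 512, 5
emb = torch.randn(n, d_model)                # stand-in for genomics-model embeddings
ancestry = torch.randint(0, n_groups, (n,))  # stand-in metadata labels

probe = nn.Linear(d_model, n_groups)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(300):
    opt.zero_grad()
    loss = loss_fn(probe(emb), ancestry)
    loss.backward()
    opt.step()

acc = (probe(emb).argmax(dim=-1) == ancestry).float().mean()
print(f"ancestry probe accuracy: {acc:.2f}")
# Use a held-out split in practice; far-above-chance accuracy is a flag, not
# proof, that the model relies on the shortcut.
```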
And I feel like we actually are doing that through our interp techniques, almost by accident. I think we got reached out to very, very early on by these healthcare institutions, and none of us had healthcare backgrounds.
Shawn Wang [00:50:49]: How did they even hear of you? A podcast?
Myra Deng [00:50:51]: Oh, okay. Yeah, a podcast.
Vibhu Sapra [00:50:53]: Okay, well, now's that time, you know.
Myra Deng [00:50:55]: Everyone can call us.
Shawn Wang [00:50:56]: Podcasts are the most important thing. Everyone should listen to podcasts.
Myra Deng [00:50:59]: Yeah, they reached out. They were like, we have these really smart models that we've trained, and we want to know what they're doing. And we were really early at that time, like three months old, and it was a few of us. We were like, oh my God, we've never used these models; let's figure it out. But it's also great proof that interp techniques scale pretty well across domains. We didn't really have to learn too much about...
Shawn Wang [00:51:21]: Interp is a machine learning technique, and machine learning skills transfer everywhere, right? It's just a general insight. Probably to finance too, I think, which would be fun given our history. I don't know if you have anything to say there.
Mark Bissell [00:51:34]: Yeah, well, just across the sciences: we've also done work on materials science. It really runs the gamut.
Vibhu Sapra [00:51:40]: Awesome. And for those who should reach out: you're obviously experts in this, but is there a call-out for people you're looking to partner with, design partners, people to use your stuff beyond the general developer who wants to plug and play steering? On the research side, are there ideal design partners, customers, that kind of thing?
Myra Deng [00:52:03]: Yeah, I can talk about the non-life-sciences side, and then I'm curious to hear from you on the life-sciences side. We're looking for design partners across many domains: language, anyone who's customizing language models or trying to push the frontier of code or reasoning models, is really interesting to us. We're also interested in the frontier of modeling. There are a lot of models that work in what we call pixel space, so if you're doing world models, video models, or even robotics, where there isn't a very clean natural-language interface to interact with, we think interp can really help, and we're looking for a few partners in that space.
Shawn Wang [00:52:43]: Just because you mentioned the keyword...
Traditional IT security is predictable, but AI is not. In an era where AI learns, evolves, and operates on data-centric logic, the standard playbooks for network and infrastructure security are no longer enough. Enter ISACA's Advanced in Artificial Intelligence Security Management (AISM), a framework designed to bridge the gap between traditional security and the unique risks of the AI era. In this episode, we explore the shift from application logic to data-centric AI security. We dive into the complexities of "Poisoning" attacks, prompt injections, and the critical importance of human-in-the-loop governance. Whether you're a CISSP, CISM, or an aspiring AI security leader, this is your guide to mastering the integration of AI into your enterprise strategy.
Muhammad Danish, lead author and cybersecurity researcher at the University of New Mexico, discusses his team's work on "Private Links, Public Leaks: Consequences of Frictionless User Experience on the Security and Privacy Posture of SMS-Delivered URLs". This paper examines how the push for frictionless user experiences has led many services to rely on SMS-delivered, single-click URLs, an inherently insecure channel that can be intercepted or leaked. Analyzing more than 322,000 unique URLs from 33 million messages, the researchers found widespread security failures, including exposed PII across 701 endpoints at 177 services due to weak, token-based authentication that treats possession of a link as sufficient authorization. The study also identified low-entropy tokens enabling mass URL enumeration and data-overfetching issues, though disclosures prompted 18 services to fix flaws, improving privacy protections for at least 120 million users. The research can be found here: Private Links, Public Leaks: Consequences of Frictionless User Experience on the Security and Privacy Posture of SMS-Delivered URLs Learn more about your ad choices. Visit megaphone.fm/adchoices
Recorded live at Cloud Connections, the Cloud Communications Alliance event in Delray Beach, Doug Green, Publisher of Technology Reseller News, spoke with Bill Placke, Co-Founder & President, Americas at SecurePII, about one of the most pressing challenges facing AI-driven communications today: how to scale AI while complying with global data privacy regulations—and how that challenge can become a competitive advantage. Placke explains that SecurePII was formed to address a growing structural problem in AI adoption. While organizations are eager to deploy AI and train large language models, regulatory uncertainty around personally identifiable information (PII) has stalled progress. Citing industry research showing that more than 60 percent of AI initiatives have been paused due to data privacy concerns, Placke argues that governance policies alone are not enough. Instead, SecurePII takes an architectural approach. At the core of SecurePII's solution is data minimization at the point of ingestion. The company's technology prevents sensitive information—such as credit card numbers, names, addresses, or social security numbers—from ever entering enterprise systems. SecurePII's existing PCI-focused offering already removes cardholder data from call flows, keeping organizations out of PCI scope entirely. The same approach is now being extended to broader categories of PII, enabling AI systems to operate and train on clean data streams that are free from regulated information. Placke emphasizes that this upstream architectural design fundamentally changes the compliance equation. Regulators and plaintiff attorneys, he notes, care about outcomes—not intent. If sensitive data never enters the system, compliance scope, audit costs, breach exposure, and regulatory risk are dramatically reduced. “Downstream controls don't scale with AI—architecture does,” Placke says, positioning data minimization as a foundation for both trust and growth. The discussion also highlights the role of consent and customer trust in an AI-enabled world. Rather than asking customers to consent to broad data use, SecurePII enables enterprises to clearly state that sensitive information is neither seen nor stored, while still allowing AI to learn from outcomes and sentiment. This approach removes what Placke calls the “creepy factor” associated with AI and personal data, while aligning with emerging frameworks such as the EU AI Act and long-standing NIST guidance. For MSPs, UCaaS providers, and channel partners, Placke frames compliance not as a cost center but as a revenue opportunity. By embedding privacy-preserving architectures into voice, AI, and communications solutions, service providers can differentiate themselves as trusted advisors—helping customers deploy AI safely, reduce regulatory exposure, and accelerate adoption. To learn more about SecurePII and its privacy-first AI architecture, visit https://www.securepii.cloud/.
Corey Zumar is a Product Manager at Databricks, working on MLflow and LLM evaluation, tracing, and lifecycle tooling for generative AI. Jules Damji is a Lead Developer Advocate at Databricks, working on Spark, lakehouse technologies, and developer education across the data and AI community. Danny Chiao is an Engineering Leader at Databricks, working on data and AI observability, quality, and production-grade governance for ML and agent systems.
MLflow Leading Open Source // MLOps Podcast #356 with Databricks' Corey Zumar, Jules Damji, and Danny Chiao
Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
Shoutout to Databricks for powering this MLOps Podcast episode.
// Abstract
MLflow isn't just for data scientists anymore, and pretending it is is holding teams back. Corey Zumar, Jules Damji, and Danny Chiao break down how MLflow is being rebuilt for GenAI, agents, and real production systems where evals are messy, memory is risky, and governance actually matters. The takeaway: if your AI stack treats agents like fancy chatbots or splits ML and software tooling, you're already behind.
// Bio
Corey Zumar: Corey has been working as a Software Engineer at Databricks for the last 4 years and has been an active contributor to and maintainer of MLflow since its first release.
Jules Damji: Jules is a developer advocate at Databricks Inc., an MLflow and Apache Spark™ contributor, and Learning Spark, 2nd Edition coauthor. He is a hands-on developer with over 25 years of experience. He has worked at leading companies, such as Sun Microsystems, Netscape, @Home, Opsware/LoudCloud, VeriSign, ProQuest, Hortonworks, Anyscale, and Databricks, building large-scale distributed systems. He holds a B.Sc. and M.Sc. in computer science (from Oregon State University and Cal State, Chico, respectively) and an MA in political advocacy and communication (from Johns Hopkins University).
Danny Chiao: Danny is an engineering lead at Databricks, leading efforts around data observability (quality, data classification). Previously, Danny led efforts at Tecton (+ Feast, an open source feature store) and Google to build ML infrastructure and large-scale ML-powered features. Danny holds a Bachelor's Degree in Computer Science from MIT.
// Related Links
Website: https://mlflow.org/
https://www.databricks.com/
~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Corey on LinkedIn: /corey-zumar/
Connect with Jules on LinkedIn: /dmatrix/
Connect with Danny on LinkedIn: /danny-chiao/
Timestamps:
[00:00] MLflow Open Source Focus
[00:49] MLflow Agents in Production
[00:00] AI UX Design Patterns
[12:19] Context Management in Chat
[19:24] Human Feedback in MLflow
[24:37] Prompt Entropy and Optimization
[30:55] Evolving MLFlow Personas
[36:27] Persona Expansion vs Separation
[47:27] Product Ecosystem Design
[54:03] PII vs Business Sensitivity
[57:51] Wrap up
Right now, while you're listening to this, one of your sales reps is pasting customer data into ChatGPT. Your marketing manager just uploaded strategic memos to Claude. And your IT department has no idea. Welcome to Shadow AI. Research shows that while only 40% of companies have official enterprise AI licenses, 90% of employees are using AI tools through personal accounts. Your team isn't trying to sabotage you - they're trying to survive the "Admin Drag" we discussed in Episode 01. The question isn't whether they're using AI. They are. The question is: Are you going to hide from it, or are you going to lead it? In this episode, Mike Allton tackles the biggest obstacle standing between you and effective AI implementation: governance. He breaks down why the typical corporate response - the "ban hammer" - doesn't just fail, it makes the problem worse by driving AI usage completely underground where you have zero visibility. You'll discover: Why 90% of employees are "Shadow Users" leveraging personal AI accounts to hit their goals The three scenarios currently creating data leaks in most revenue organizations (PII in free tools, unreleased product details in public galleries, customer records in unsecured platforms) Why your team isn't being malicious - they're being desperate (and what that tells you about your tech stack) The "Traffic Light System": A simple 3-tier framework (Green/Yellow/Red) that employees actually remember How to launch a 30-Day Amnesty Program that brings Shadow AI into the light without creating a witch hunt The 7-day implementation roadmap to go from "Shadow Risk" to "Sanctioned Speed" This isn't about punishing innovation. It's about governing it. Because when you sanction Shadow AI, you don't just reduce risk - you unlock velocity. Mike walks you through the exact email templates, survey questions, and policy frameworks you need to turn your biggest security vulnerability into a competitive advantage. Featured Framework: The 30-Day Amnesty Program - discover what tools your team is actually using (and why your official stack is failing them) Featured Resource: The Shadow AI Governance Launchpad - includes the Traffic Light cheat sheet, amnesty email script, and AI Use Policy template ready for Legal review Download here: https://theaihat.com/the-executive-guide-to-shadow-ai-from-security-risk-to-competitive-advantage/ Next Episode: We move from governance to implementation. You'll learn how to hire your first Digital Crew member - a Sales Prep Agent that researches prospects 15 minutes before every call and delivers a Battle Card to your rep's inbox. If you're a VP of Sales, CRO, or RevOps Director who needs to secure your team while enabling them to move faster, this is your playbook. Episode Timestamps 00:00 Introduction to Shadow AI 00:26 The Reality of Unapproved AI Usage 02:21 The Risks of Shadow AI 03:55 The Ineffectiveness of Banning AI 07:21 Sanctioning AI for Safety and Efficiency 08:35 Implementing the Traffic Light System 10:25 Rolling Out the AI Amnesty Program 13:08 Final Thoughts and Next Steps 14:52 Conclusion and Wrap-Up Learn more about your ad choices. Visit megaphone.fm/adchoices
Click here to sign up for a new platform that helps law firms use subscription billing. To stay up to date with Practi, subscribe to our newsletter at practi.ai/hello. On June 17, 2025, I presented live at LegalGeek in Chicago on the topic of integrating. Here are the top 5 takeaways:
* AI is Rapidly Transforming Legal Practice. Artificial intelligence is accelerating changes in law firms, from automating routine tasks to enabling new business models. The adoption of generative AI has made it possible to handle complex, unstructured data and deliver legal services faster and more efficiently than ever before.
* The Billable Hour is Obsolete. The traditional billable hour model is under pressure. As AI automates more legal work, clients increasingly value output and results over time spent. The billable hour could disappear within five years, replaced by value-based and alternative fee structures, like subscriptions.
* Subscription and Alternative Fee Models Offer Major Advantages. Subscription-based and alternative fee arrangements provide pricing transparency, encourage client engagement, and align incentives for efficiency. These models help lawyers focus on long-term client relationships and accessibility, rather than maximizing short-term profits.
* AI Enhances Client Service and Access to Justice. By leveraging AI tools, lawyers can serve more clients at lower costs, helping to close the access to justice gap. Subscription models make legal help more affordable and encourage clients to seek advice proactively, preventing problems before they escalate.
* Cultural Change is Essential for the Future of Law. Embracing technology and new business models requires a cultural shift within the legal profession. This includes rethinking mentorship, collaboration, and how value is measured. Firms that adapt will reduce burnout, improve teamwork, and better meet evolving client needs.
__________________________
Sign up for Paxton, my all-in-one AI legal assistant, helping me with legal research, analysis, drafting, and enhancing existing legal work product. I've partnered with Pii to make it easy for you to purchase the hardware I use in my law firm: (1) Studio Setup; (2) Midrange Setup; (3) Highrange Setup. Get Connected with SixFifty, a business and employment legal document automation tool. Sign up for Gavel, an automation platform for law firms. Visit Law Subscribed to subscribe to the weekly newsletter to listen from your web browser. Prefer monthly updates? Sign up for the Law Subscribed Monthly Digest on LinkedIn. Check out Mathew Kerbis' law firm Subscription Attorney LLC. Want to use the subscription model for your law firm? Click here to sign up for a new platform that helps law firms use subscription billing. Get full access to Law Subscribed at www.lawsubscribed.com/subscribe
In this bonus episode of Banking on Fraudology, powered by Safeguard, Hailey Windham talks with Ben Graf, a self-taught AI expert in the neobank space. Ben embodies the spirit of curiosity and courage driving the next wave of fraud-fighting transformation. The conversation dives into what it really looks like to learn AI from the ground up, emphasizing that the future of fraud prevention isn't about replacing people, but empowering them through technology.
Key Takeaways: AI, Innovation, and Fraud-Fighting Empowerment
Using AI to Learn AI: Ben explains how he used varying LLM chats (like ChatGPT, Claude, and Gemini) as a coach or mentor, experimenting for hours to understand their capabilities, consistency, and how to effectively prompt them. This approach helped him translate technical language and practices (like data analysis, SQL, and JavaScript) into actionable knowledge for his team, breaking down communication barriers. The hardest part was knowing where to start, but the key was realizing that "something is better than nothing" and compounding knowledge quickly breaks down barriers.
Practical AI Applications for Eliminating Busy Work: AI should be used to make teams more efficient and help professionals focus strategically. Automating Document Verification: AI can use OCR to pull data, flag inconsistencies, and serve up summaries for identity, business, and income documents, which are often the most time-consuming parts of a review. Data Retrieval and System Silos: AI can help team members write their own SQL queries to retrieve data from data warehouses, dramatically reducing requests to the data team. Product and Feature Proposals: AI tools can mock up full dashboard concepts and even provide code snippets to give engineers a visual and break down communication barriers between fraud and technical teams.
The Power of Empowerment and Buy-In: Leadership should create a culture where fraud fighters are empowered to explore and innovate. The magic of time savings lies in filling the time freed from "busy work" (like false positives) with new, high-impact tasks, whether that's cost savings in fraud loss or better customer retention. Teams are advised to keep proprietary or PII information out of the loop and find safe spaces to explore, remembering that everyone is still figuring out what AI can do.
Get in the mood of being grateful for the fraud-fighting community, and be reminded of how strong the fraud-fighting community truly is.
About Hailey Windham: As a 2023 CU Rockstar Recipient, Hailey Windham, CFCS (Certified Financial Crimes Specialist) demonstrated unbounding passion for educating her community, organization and credit union membership on scams in the market and best practices to avoid them. She has implemented several programs within her previous organizations that aim at holistically learning about how to prevent and detect fraud targeted at membership and employees. Windham's initiatives to build strong relationships and partnerships throughout the credit union community and industry experts have led to countless success stories. Her applied knowledge of payments system programs combined with her experience in fraud investigations offers practical concepts that are transferable, no matter the organization's size. Connect with Hailey on LinkedIn: https://www.linkedin.com/in/hailey-windham/
Click here to sign up for a new platform that helps law firms use subscription billing. Here are the top 5 takeaways from this episode:
* Innovative Criminal Defense Internship/Externship Program: Adam Rossen's law firm runs a highly competitive, curriculum-based internship program for high school, undergraduate, and law students. The program is designed to provide real-world legal experience, pairing students with attorneys and involving them in substantive casework. There is a push to rebrand it as an externship to help students receive academic credit.
* Subscription and Fixed Fee Models in Criminal Defense: Rossen's firm utilizes fixed fees and is exploring subscription models for ongoing client services, especially for probation-related matters. The discussion highlights the benefits of offering tiered subscription packages (e.g., access to legal resources, help with forms, full representation) to better serve clients and create predictable revenue.
* Client Retention and Value Packaging: Rossen's firm has historically provided certain post-case services (like probation motions) for free to nurture long-term client relationships and encourage positive reviews. However, this has become burdensome for staff, leading to a reevaluation of what should be included for free versus what should be packaged and charged for in a subscription or a la carte model.
* Data-Driven Pricing and Service Decisions: There is an emphasis on analyzing firm data (e.g., rates of probation violations, service usage) to inform pricing and package design for subscription offerings. This approach ensures that the firm's offerings are both sustainable and aligned with client needs.
* Leveraging Technology and AI in Legal Practice: Adam Rossen is co-owner of Meet Gabby, a voice AI company that is being used to automate intake, calls, and other administrative tasks in his firm. The adoption of AI tools (like Paxton and Gabby) is seen as a way to scale operations, improve efficiency, and maintain high service quality without significantly increasing staff.
__________________________
Learn more about Rossen Law Firm and Meet Gabby. Sign up for Paxton, my all-in-one AI legal assistant, helping me with legal research, analysis, drafting, and enhancing existing legal work product. Here's a link to purchase lifetime access to the recordings of My Shingle's AI Teach-In if you couldn't make it live. I've partnered with Pii to make it easy for you to purchase the hardware I use in my law firm: (1) Studio Setup; (2) Midrange Setup; (3) Highrange Setup. Get Connected with SixFifty, a business and employment legal document automation tool. Sign up for Gavel, an automation platform for law firms. Visit Law Subscribed to subscribe to the weekly newsletter to listen from your web browser. Prefer monthly updates? Sign up for the Law Subscribed Monthly Digest on LinkedIn. Check out Mathew Kerbis' law firm Subscription Attorney LLC. Want to use the subscription model for your law firm? Click here to sign up for a new platform that helps law firms use subscription billing. Get full access to Law Subscribed at www.lawsubscribed.com/subscribe
Global investment in AI across financial services is projected to grow from USD 38.36 billion in 2024 to USD 190.33 billion by 2030, according to a 2024 market forecast by Markets and Markets. At the same time, UK regulators report that AI adoption is already widespread across the sector. A joint 2024 survey by the Bank of England and the Financial Conduct Authority found that 75 percent of UK financial services firms are already deploying AI, with a further 10 percent planning adoption within the next three years. As AI adoption accelerates, search visibility in finance is no longer dictated by traditional rankings alone. AI overviews, gen AI assistants and zero-click results now sit between customers and brand websites, reshaping how trust, authority and compliance are interpreted online. In this environment, SEO is no longer just a growth channel. It has become a frontline control mechanism for accuracy, regulatory alignment and brand credibility. To address this shift, AccuraCast has published its definitive SEO Guide for Financial Services, outlining the structural, technical and governance frameworks required for finance brands to remain visible and compliant in an AI-first discovery landscape. The insights below come from Lourenço Caliento Gonçalves, SEO Consultant at AccuraCast, who works directly with banks, insurers and fintech firms navigating this changing search environment.
1. SEO in an AI Summary World
AI Overviews and assistants now sit between users and brand sites, especially on "what/how/which account/card/loan" queries in finance. Studies on financial keywords show AI modules cite only a small set of domains per answer, so visibility is increasingly about being one of the few trusted citations rather than "position 3 vs 5". Practical shifts for finance SEO: 1. Move from chasing every keyword to owning topic clusters where you can be the definitive, expert, frequently-updated source. 2. Design pages that both: Feed AI (clear entities, schema, citations, expert authorship) and Still convert in a zero-click world (compelling USP, tools, calculators, comparison tables that go beyond the AI summary).
2. SEO's Role in Accuracy and Compliance
Because finance is considered a YMYL (your money, your life) category, search systems and AI models heavily weigh accuracy, disclosures and regulatory alignment. Regulators like the SEC, FCA, CFTC, BaFin, ESMA, EIOPA and EBA set rules for product communication, risk disclosure and data/privacy that directly affect how content can be written and tracked. SEO becomes a compliance ally by: Embedding governance into content workflows: versioning, review logs, jurisdiction tagging, "last updated" labels, and mandated disclaimers on all money pages. Hard-coding technical safeguards: secure-by-default (HTTPS, HSTS), cookie and tracking consent, correct handling of PII, and robust legal/Ts & Cs/privacy internal linking so crawlers and users always see compliant context.
3. SEO Challenges When Adding AI and Automation
Banks, insurers and fintechs are accelerating AI and agent use across content, but surveys show the main friction points are compliance overhead, skills gaps and governance. SEO-specific pain points typically include: Drift from brand and regulatory language: AI can introduce unapproved promises, omit mandatory risk language or hallucinate product conditions, creating both compliance and ranking risk on YMYL topics.
Inconsistent E-E-A-T: At scale, content may lack real experts, citations and author bios, weakening trust signals for both search and AI engines that now cross-check authority more strictly for finance queries. Fragmented workflows: Legal/compliance reviews are often still manual and periodic, while AI can publish or update faster than teams can approve, which creates a backlog or the risk of rogue content going live. Mitigations that work: Guardrailed generation: Fix templates with "non-editable" compliance blocks per product/region; restrict RAG systems ...
Click here to sign up for a new platform that helps law firms use subscription billing. Here are the top 5 takeaways from this episode:
* AI and Blockchain Will Transform Legal Practice and Evidence: The intersection of AI and blockchain is poised to revolutionize how legal professionals verify authenticity and ownership of digital evidence. Blockchain's immutability and transparency can provide cryptographic proof of creation and ownership, which will be crucial as AI-generated content becomes indistinguishable from reality.
* The Billable Hour Model Is Obsolete, Subscription Models Are the Future: The traditional billable hour is a relatively recent invention and is increasingly misaligned with client needs and lawyer well-being. Subscription-based legal services offer pricing certainty, better client relationships, and align incentives for efficiency, making them a more sustainable and client-friendly business model.
* AI Will Reshape Legal Training and the Profession's Structure: As AI automates more legal work, the traditional law firm pyramid (with many associates learning under partners) will erode. There's a pressing need to rethink how new lawyers are trained, with externships and hands-on, AI-powered experiences becoming more important than the old apprenticeship model.
* Legal Market Opportunity Is Vast, but Access Hinges on Affordability and Innovation: A huge portion of the legal market remains untapped due to cost and complexity. AI and new business models (like subscriptions) can unlock this latent demand, but lawyers must adapt to serve clients who expect affordable, accessible, and tech-enabled services.
* Human Value in Law Will Shift Toward Creativity, Art, and Personalization: As routine legal tasks become automated, the unique value lawyers provide will center on creativity, personal connection, and brand differentiation, much like in the arts. Lawyers who embrace technology and focus on what only humans can do will thrive in the coming era of legal abundance.
__________________________
Learn more about Nessler & Associates and Integrated Cognition. Sign up for Paxton, my all-in-one AI legal assistant, helping me with legal research, analysis, drafting, and enhancing existing legal work product. Here's a link to purchase lifetime access to the recordings of My Shingle's AI Teach-In if you couldn't make it live. I've partnered with Pii to make it easy for you to purchase the hardware I use in my law firm: (1) Studio Setup; (2) Midrange Setup; (3) Highrange Setup. Get Connected with SixFifty, a business and employment legal document automation tool. Sign up for Gavel, an automation platform for law firms. Visit Law Subscribed to subscribe to the weekly newsletter to listen from your web browser. Prefer monthly updates? Sign up for the Law Subscribed Monthly Digest on LinkedIn. Check out Mathew Kerbis' law firm Subscription Attorney LLC. Want to use the subscription model for your law firm? Click here to sign up for a new platform that helps law firms use subscription billing. Get full access to Law Subscribed at www.lawsubscribed.com/subscribe
Click here to sign up for a new platform that helps law firms use subscription billing. On June 17, 2025, I presented live at LegalGeek in Chicago on the topic of integrating. Here are the top 5 takeaways:
* The Billable Hour Is Obsolete. The adoption of AI tools in legal practice is making the traditional billable hour model increasingly untenable. AI enables lawyers to deliver work faster and more efficiently, aligning incentives with client value rather than time spent. Subscription and value-based pricing models are more viable and attractive for both lawyers and clients.
* Purpose-Built, Legal-Specific AI Tools Are Essential. Not all AI is created equal. General-purpose tools like ChatGPT are not reliable for legal research or fact-finding. Instead, legal professionals should use purpose-built, legal-specific AI tools (like Paxton) that leverage retrieval augmented generation (RAG) and are trained on legal data. These tools provide more accurate, reliable, and secure results.
* AI Enables Access to the Latent Legal Market. A vast portion of the legal market remains underserved due to high costs and lack of pricing transparency. AI-powered efficiencies and alternative pricing models (like subscriptions and per-page pricing) open up legal services to a much larger market, making legal help more accessible and affordable for individuals and small businesses.
* Effective Use of AI Requires New Skills and Mindsets. Lawyers must learn to interact with AI as they would with a smart, entry-level assistant: providing context, iterating, and verifying results. Prompt engineering, semantic search, and understanding the limitations and strengths of different AI tools are now essential skills for modern legal professionals.
* Adoption of AI Is Now an Ethical Imperative. With the efficiency and accuracy gains AI provides, not using these tools may be seen as failing to meet ethical obligations to clients. The legal profession is expected to adopt technology that improves client service, transparency, and value. Failing to do so could be considered exploitative or even unethical under professional rules.
__________________________
Sign up for Paxton, my all-in-one AI legal assistant, helping me with legal research, analysis, drafting, and enhancing existing legal work product. Here's a link to purchase lifetime access to the recordings of My Shingle's AI Teach-In if you couldn't make it live. I've partnered with Pii to make it easy for you to purchase the hardware I use in my law firm: (1) Studio Setup; (2) Midrange Setup; (3) Highrange Setup. Get Connected with SixFifty, a business and employment legal document automation tool. Sign up for Gavel, an automation platform for law firms. Visit Law Subscribed to subscribe to the weekly newsletter to listen from your web browser. Prefer monthly updates? Sign up for the Law Subscribed Monthly Digest on LinkedIn. Check out Mathew Kerbis' law firm Subscription Attorney LLC. Want to use the subscription model for your law firm? Click here to sign up for a new platform that helps law firms use subscription billing. Get full access to Law Subscribed at www.lawsubscribed.com/subscribe
Yeah, agentic browsers can do your work for you.
Click here to sign up for a new platform that helps law firms use subscription billing.
Here are the top 5 takeaways from this episode:
* Integrating Wellness and Law: Shannon Villalba combines her background in the arts, holistic wellness, and law to create a unique legal practice. She uses tools like meditation, energy work, and her Heartsong Chara Framework to help clients understand legal concepts and build their businesses in a more balanced, holistic way.
* Moving Away from the Billable Hour: Both Shannon and Mathew advocate for flat fee and subscription-based legal services. This model provides clients with predictable, transparent pricing, reduces stress for both clients and attorneys, and encourages more open communication and collaboration.
* Leveraging Technology for Efficiency: Shannon runs a virtual law firm and uses a lean tech stack (including MyCase, ClickUp, Zapier, Google Suite, AI tools like ChatGPT and Perplexity, Canva, and more) to streamline operations, improve client service, and stay competitive. She encourages experimentation and play with new tech to discover what works best.
* Empowering Clients and Building Relationships: The subscription model allows attorneys to become true partners and guides for their clients, rather than just service providers. This approach fosters deeper relationships, more comprehensive issue spotting, and empowers clients through education and ongoing support.
* Women and Underrepresented Attorneys as Innovators: The flexibility of virtual, tech-enabled, and alternative fee law practices is especially attractive to women and underrepresented attorneys. It allows for better work-life balance, the ability to serve clients authentically, and the freedom to innovate outside the constraints of traditional law firm models.
__________________________
Learn more about Heartsong Legal.
Sign up for Paxton, my all-in-one AI legal assistant, helping me with legal research, analysis, drafting, and enhancing existing legal work product.
Here's a link to purchase lifetime access to the recordings of My Shingle's AI Teach-In if you couldn't make it live.
I've partnered with Pii to make it easy for you to purchase the hardware I use in my law firm: (1) Studio Setup; (2) Midrange Setup; (3) Highrange Setup.
Get Connected with SixFifty, a business and employment legal document automation tool.
Sign up for Gavel, an automation platform for law firms.
Check out my other show, the Law for Kids Podcast.
Visit Law Subscribed to subscribe to the weekly newsletter to listen from your web browser.
Prefer monthly updates? Sign up for the Law Subscribed Monthly Digest on LinkedIn.
Want to use the subscription model for your law firm? Sign up for the Subscription Seminar waitlist at subscriptionseminar.com.
Check out Mathew Kerbis' law firm Subscription Attorney LLC.
Get full access to Law Subscribed at www.lawsubscribed.com/subscribe
THE Sales Japan Series by Dale Carnegie Training Tokyo, Japan
Why "top-down" selling backfires in Japan's big companies — and what to do instead.

Is meeting the President in Japan a guaranteed win?
No — unless the President is also the owner (the classic wan-man shachō), your "coup" meeting rarely converts directly. In listed enterprises and large corporates, executive authority is diffused by consensus-driven processes. Even after a warm conversation and a visible "yes," the purchase decision typically moves into a bottom-up vetting cycle that your initial sponsor doesn't personally shepherd. In contrast, smaller firms or founder-led groups may decide quickly, much like private U.S. SMEs or European Mittelstand. The trap is assuming a Western "economic buyer" model maps 1:1 to Japan's governance norms post-Abenomics (2013–2020) and as of 2025. Treat the Presidential meeting as a door-opener, not a done deal.
Do now: Reframe the "Prez" as an access node; design your plan for everything that happens after the elevator ride down.

What actually happens after the big meeting?
The President typically delegates "look into this" to a direct report, and your proposal enters an internal review pipeline. A junior staffer performs due diligence, then a section head reviews and either quietly stops the process or passes it up. If momentum builds, the division head circulates a ringi-sho (稟議書) with attached materials for cross-functional stamps (hanko). Each division repeats its own research — Finance, HR, Operations — before any re-contact with you. Compared with U.S. enterprise sales where a single VP can overrule, Japan's system prioritises organisational risk-sharing and face-saving. Expect additional nemawashi (root-binding) conversations you won't see. Every change to scope, pricing, or timing restarts the paper trail.
Do now: Ask early who will run due diligence, which divisions must stamp, and what the ringi packet must include.

Why do direct reports sometimes ignore an explicit instruction?
Because "check this out" isn't "make this happen" — the President's role usually ends at referral, not enforcement. In large firms (think Toyota-scale keiretsu or Rakuten-class digital groups), middle management owns process integrity. A public "order" in front of you may still be interpreted as permission to evaluate, not a mandate to buy. In the U.S., sellers might push back on "we'll think about it"; in Japan, they really do need to think — collectively. That's not stonewalling; it's governance. The deal can die silently at any stage if the section head sees mis-fit, poor timing (e.g., fiscal year planning in March), or brand risk. Your best lever is equipping mid-levels with a de-risked, spec-tight story that they can defend internally.
Do now: Translate the top-level promise into mid-level proof: ROI math, references in Japan, security/PII notes, and implementation flow.

How long does the ringi cycle take, and what slows it down?
Longer than Western sellers expect — and it resets with every material change. The ringi-sho builds consensus by circulating for stamps across affected divisions. Each unit repeats checks (vendor risk, budget fit, labour impact under Japan's 2023 work-style reforms, data residency for APAC, etc.). If you tweak scope or price, a fresh ringi often triggers. For comparison, an American SaaS deal might hit Legal once; in Japan, Legal, Information Systems, and HR may all run independent passes. Multi-site rollouts (retail, manufacturing) compound complexity versus single-site pilots.
Sellers who rush or "pressure close" risk face loss among reviewers — a reputational cost that kills not just this deal but your next.
Do now: Time-box your asks, pre-bundle likely objections, and avoid last-minute scope surprises that force a re-circulation.

How should you re-engineer your enterprise sales motion for Japan?
Build a two-track play: executive alignment for vision + operator enablement for approvals. Track A (C-suite): anchor on strategy, external credibility (Japan references, security attestations), and clear business impact by quarter. Track B (middle-down): deliver a ringi-ready pack — problem framing, options matrix, risk mitigations, rollout plan, KPI table (adoption, uptime targets, ROI), and case miniatures from sectors like automotive, retail, and banking. Compared with Europe (works councils) or the U.S. (deal desk), Japan's reviewer set is broader, so your artefacts must be modular and stamp-friendly. Pro tip: craft a Japanese one-pager that a 25-year-old staffer can champion without fear.
Do now: Produce a bilingual ringi kit: exec summary, cost sheet, security appendix, phased pilot plan, and internal FAQ.

What if the buyer is a founder-led or SME "one-man President"?
Move fast — wan-man shachō environments can green-light on the spot, but still respect downstream implementers. Owner-operators (common in construction, logistics, specialised manufacturers) align closer to U.S. founder-CEO norms: if they decide, it happens. However, success still hinges on managers who must live with the tool or training. Win speed without burning adoption by pre-agreeing a post-signature cadence: kickoff, hands-on enablement, check-ins. Contrast: in multinationals and listed firms, assume consensus first, speed second. Use segmented pipelines and forecasting models for each archetype to avoid "phantom commits" based on executive enthusiasm alone.
Do now: Qualify leadership style early; if it's founder-led, offer rapid pilot + success plan; if it's listed, budget for consensus cycles.

Quick internal checklist for a ringi-ready packet
Executive one-pager (JP/EN) with outcome metrics and timeline
Options matrix (do nothing vs. competitor vs. your solution)
Security & compliance appendix (data flows, access, audit)
Costing & ROI sheet (12–36 months, with sensitivity)
Implementation playbook (roles, training, support SLAs)
Reference mini-cases from Japan/APAC peers
Do now: Attach this checklist to every enterprise proposal in Japan.

Conclusion: Stop "selling the Prez"; start enabling the process
In Japan's large corporates, the President opens a door; the organisation makes the decision. Treat the executive meeting as your starting pistol, not the finish line. Win by equipping mid-levels to say "yes" safely, designing for ringi cadence, and pacing your asks. In founder-led firms, move decisively — with respect for the managers who must land the change. That's how you convert enthusiasm into signed, implemented value in Japan, as of 2025.

FAQs
Is aggressive closing effective in Japan? No. Pushy tactics create face risk for reviewers and can stall the ringi process; equip, don't pressure.
Do all Japanese companies work this way? No. Founder-led SMEs can decide top-down; listed and multinational firms lean consensus-first.
What documents speed approval? A bilingual, ringi-ready packet: exec summary, ROI, security, rollout, and references.

Next steps for leaders/executives
Map the approval path (divisions, stamps, timelines).
Build a standard ringi pack and local references.
Train your team on Japan-specific cadence and language.
Segment forecasts by "founder-led" vs. "listed corporate."

Author credentials
Dr. Greg Story, Ph.D. in Japanese Decision-Making, is President of Dale Carnegie Tokyo Training and Adjunct Professor at Griffith University. He is a two-time winner of the Dale Carnegie "One Carnegie Award" (2018, 2021) and recipient of the Griffith University Business School Outstanding Alumnus Award (2012). As a Dale Carnegie Master Trainer, Greg is certified to deliver globally across all leadership, communication, sales, and presentation programs, including Leadership Training for Results. He has written several books, including three best-sellers — Japan Business Mastery, Japan Sales Mastery, and Japan Presentations Mastery — along with Japan Leadership Mastery and How to Stop Wasting Money on Training. His works have been translated into Japanese, including Za Eigyō (ザ営業), Purezen no Tatsujin (プレゼンの達人), Torēningu de Okane o Muda ni Suru no wa Yamemashō (トレーニングでお金を無駄にするのはやめましょう), and Gendaiban "Hito o Ugokasu" Rīdā (現代版「人を動かす」リーダー). Greg also publishes daily business insights on LinkedIn, Facebook, and Twitter, and hosts six weekly podcasts. On YouTube, he produces The Cutting Edge Japan Business Show, Japan Business Mastery, and Japan's Top Business Interviews, which are widely followed by executives seeking success strategies in Japan.
In this powerful bonus episode of Banking on Fraudology, powered by Safeguard, Hailey Windham sits down with Andrea Vallentine, Senior Vice President of Fraud and Risk at Old Glory Bank. The conversation dives deep into Andrea's genuinely excited perspective that "this might actually be the best time to be fighting fraud". We explore the rising momentum of collaboration and shared learning that is unifying the industry against fraudsters.
Key Takeaways: Collaboration, AI, and Empathy in Fraud Prevention
The Power of Collaboration: Andrea highlights the exciting activities and investments from groups like Fraud Fight Club, Operation Shamrock, and House of Fraud. The focus is shifting from selling products to learning, educating, and collaborating.
The AI Perspective: The industry is moving past fear, recognizing that AI has been around for a long time. The recent AI explosion has gotten people more open to listening, realizing they are already using smart technologies in areas like transaction scoring (e.g., Falcon) and link analysis.
A Shift to Purpose: Collaboration is increasing because the industry now recognizes the emotional devastation and human impact of fraud. The focus has moved beyond competing to a shared mission of working together "against the fraudsters".
Tips for Smaller Teams (Leveraging AI): Andrea recommends that smaller teams use AI (like ChatGPT) to draft summaries, create templates, and refine procedures. This allows teams to find holes in their processes and generate new ideas without using sensitive PII.
The Human Side of Design: Empathy is shaping the next generation of fraud design. Using machine learning to identify patterns of customer friction, and giving real-life stories to team members, helps move them out of "robot mode" and focus on the customer experience.
Andrea's Final Thought: AI is not scary, and we have been using it forever in things like marketing suggestions on Amazon. She encourages everyone to get involved and leverage the wealth of existing, shared resources instead of recreating materials.
This is a must-listen for executives, investigators, and all fraud professionals who are serious about strengthening prevention efforts and building a fraud-fighting community driven by empathy and innovation.
Links:
Connect to Andrea on LinkedIn
Learn more about the Safeguard AI deep dive retreat happening in May: SafeguardEvent.com
About Hailey Windham: As a 2023 CU Rockstar Recipient, Hailey Windham, CFCS (Certified Financial Crimes Specialist) demonstrated unbounding passion for educating her community, organization and credit union membership on scams in the market and best practices to avoid them. She has implemented several programs within her previous organizations that aim at holistically learning about how to prevent and detect fraud targeted at membership and employees. Windham's initiatives to build strong relationships and partnerships throughout the credit union community and industry experts have led to countless success stories. Her applied knowledge of payments system programs combined with her experience in fraud investigations offers practical concepts that are...
Click here to sign up for a new platform that helps law firms use subscription billing.
Here are the top 5 takeaways from this episode:
1. Constant Adaptation and Simplification Are Key to Law Firm Success. Both Mathew and Lauren emphasized the importance of regularly reassessing and adapting their practice areas, pricing, and service offerings. Lauren pivoted away from tax debt resolution to focus on estate planning and prenups, while Mathew simplified his pricing structure and eliminated underused features and add-ons.
2. Data-Driven Decisions Improve Offerings and Client Experience. They both use a mix of analytics, client feedback, and “gut data” from years of experience to refine their services. This includes tracking which offerings clients actually use, which content gets the most engagement, and adjusting accordingly for better retention and satisfaction.
3. Streamlined Onboarding and Intentional Friction Save Time. Mathew shared how he reworked his onboarding process using Google Workspace, Calendly, Stripe, and Google Forms to introduce just enough friction. This helps filter out unqualified leads and ensures new clients are a good fit, saving time for both the lawyer and the client.
4. Community and Content Platforms Matter. Lauren's move from MailChimp to Substack for her newsletter and podcast was inspired by the platform's community features and ease of use. Both hosts discussed the value of memorable branding, vanity URLs, and focusing content on topics that resonate most with their audience (like costs, outsourcing, AI, and SOPs).
5. Embrace AI and Technology, but Stay Client-Focused. Both are exploring ways to use AI and automation to improve efficiency and client service, such as creating SOPs, using AI prompts, and building tools for solo practitioners. However, they stress that technology should serve the client's needs and not overwhelm them with complexity.
Bonus: The most popular content topics for their audiences are costs, outsourcing, AI, finances, and standard operating procedures—indicating a strong interest in practical, efficiency-focused advice for running a modern law firm.
__________________________
Learn more about A Different Practice.
Sign up for Paxton, my all-in-one AI legal assistant, helping me with legal research, analysis, drafting, and enhancing existing legal work product.
Here's a link to purchase lifetime access to the recordings of My Shingle's AI Teach-In if you couldn't make it live.
I've partnered with Pii to make it easy for you to purchase the hardware I use in my law firm: (1) Studio Setup; (2) Midrange Setup; (3) Highrange Setup.
Get Connected with SixFifty, a business and employment legal document automation tool.
Sign up for Gavel, an automation platform for law firms.
Check out my other show, the Law for Kids Podcast.
Visit Law Subscribed to subscribe to the weekly newsletter to listen from your web browser.
Prefer monthly updates? Sign up for the Law Subscribed Monthly Digest on LinkedIn.
Want to use the subscription model for your law firm? Sign up for the Subscription Seminar waitlist at subscriptionseminar.com.
Check out Mathew Kerbis' law firm Subscription Attorney LLC.
Get full access to Law Subscribed at www.lawsubscribed.com/subscribe
In this episode of the Power Producers Podcast, host David Carothers sits down with Jeff Harris, the CEO and co-founder of Appulate, widely considered the original Insurtech (founded in 2005). Jeff shares Appulate's journey from solving the "abysmal" problem of supplemental form generation to becoming an all-in-one AI solution for agencies. They discuss the critical gap between insurance and technology, how AI is reshaping the industry, and why Appulate is taking a conservative and reliable approach to its implementation. The conversation also covers the dangers of agents using public AI tools with client PII and how technology is the "great equalizer" for small agencies.
Key Highlights:
The Evolution of the Oldest Insurtech: Jeff Harris details Appulate's nearly 20-year history, which began by solving the pain of manual supplemental forms. Today, their Producer Connect platform serves as a "bolt-on" to an agency's AMS, acting as a broad marketing platform that saves time on data entry, obtains loss runs, and integrates with thousands of carrier portals to eliminate redundant work.
AI's Role: A Conservative and Reliable Approach: While AI is changing the industry, Jeff emphasizes that it must be reliable and consistent. He compares it to Tesla's autonomous driving—it had to be perfected before users could trust it. Appulate currently uses AI where it excels, such as parsing data from loss runs and deck pages, but avoids areas where the industry (like carrier portals) isn't ready for full AI integration, which could cause more problems than it solves.
The Danger of "Lazy" AI Implementation: David and Jeff discuss the significant E&O and cyber risk of "fundamentally lazy" agents uploading policies with Personally Identifiable Information (PII) into public ChatGPT. Jeff stresses the importance of using secure, vendor-provided AI solutions rather than unvetted public tools, highlighting that AI is already being effectively used in areas like fraud detection.
AI as the "Great Equalizer" for Agencies: Jeff explains that AI is a "once-in-a-generation opportunity" for small and mid-sized agencies to compete with the 100-pound gorillas. By automating manual, time-consuming tasks, AI reduces burnout and turnover, helps retain younger tech-savvy talent, and allows smaller agencies to achieve the same level of output and efficiency as their largest competitors without massive investments in headcount.
Connect with:
David Carothers LinkedIn
Jeff Harris LinkedIn
Kyle Houck LinkedIn
Visit Websites:
Power Producer Base Camp
Appulate
Killing Commercial
Crushing Content
Power Producers Podcast
Policytee
The Dirty 130
The Extra 2 Minutes
Click here to sign up for a new platform that helps law firms use subscription billing.
Here are the top 5 takeaways from my conversation with Nancy Fox:
* Strategic Networking is Essential: Nancy emphasizes that building relationships with the right people is the foundation of business development, especially for professionals like lawyers and accountants. Networking should be targeted and strategic, not just about meeting as many people as possible.
* AI as an Enhancement, Not a Replacement: Both Nancy and Mathew agree that AI is a powerful tool for enhancing professional work, not replacing expertise. AI can save time, provide strategic insights, and help with tasks like business planning and niche analysis, but it cannot substitute for real-world experience and judgment.
* The Importance of Specialization and Niche: Specialization is evolving. While AI enables professionals to be more generalist, true differentiation comes from having a clear niche—whether that's a specific industry, demographic, or service. Being specific in your value proposition and target market is key.
* Productizing and Recurring Revenue Models: Nancy discusses the value of productizing services and adopting recurring revenue models (like subscriptions or memberships) for professionals who want to scale without building large teams. This approach allows for more predictable income and leverages expertise in a repeatable way.
* Embracing Failure and Adaptability: Nancy shares that a willingness to experiment, take risks, and even fail is crucial for growth. She stresses the importance of being willing to try new things, learn from failures, and pivot when something isn't working—qualities that are especially important for entrepreneurs and innovators.
__________________________
Learn more about Nancy Fox's networking group Wyze Rainmakers.
Sign up for Paxton, my all-in-one AI legal assistant, helping me with legal research, analysis, drafting, and enhancing existing legal work product.
Here's a link to purchase lifetime access to the recordings of My Shingle's AI Teach-In if you couldn't make it live.
I've partnered with Pii to make it easy for you to purchase the hardware I use in my law firm: (1) Studio Setup; (2) Midrange Setup; (3) Highrange Setup.
Get Connected with SixFifty, a business and employment legal document automation tool.
Sign up for Gavel, an automation platform for law firms.
Check out my other show, the Law for Kids Podcast.
Visit Law Subscribed to subscribe to the weekly newsletter to listen from your web browser.
Prefer monthly updates? Sign up for the Law Subscribed Monthly Digest on LinkedIn.
Want to use the subscription model for your law firm? Sign up for the Subscription Seminar waitlist at subscriptionseminar.com.
Check out Mathew Kerbis' law firm Subscription Attorney LLC.
Get full access to Law Subscribed at www.lawsubscribed.com/subscribe
This conversation features insights from Oliver Eden, senior business unit director at Jabil, and Travis Webb, chief scientific officer at PII. Our guests continue their discussion on how autoinjectors and smart technologies can be integrated into clinical trials in a way that isn't problematic or burdensome for patients, particularly patients who may not be tech savvy. They discuss how focusing on patient experience can increase engagement and compliance in clinical trials.
This episode is presented in partnership with PII.
Host
Lori Ellis, Head of Insights, BioSpace
Guests
Oliver Eden, Senior Business Unit Director, Jabil
Travis Webb, Chief Scientific Officer, PII
Disclaimer: The views expressed in this discussion by guests are their own and do not represent those of their organizations.
On May 14, 2025, I presented live on the topic of subscription-based legal services to Shaun Jardine's Value Based Pricing Colony. Here are the top 5 takeaways:
1. Start Small with Subscription Models: Law firms interested in moving to a subscription-based pricing model should begin with a pilot program in one practice area, targeting loyal clients and documenting results to build internal support.
2. Value Over Time Tracking: The shift from billable hours to subscription or value-based pricing requires a mindset change—focus on the value delivered, not the time spent. This benefits both clients (predictability) and lawyers (better relationships, less stress).
3. Tiered Service Packages and Scope Management: Successful subscription models often use tiered packages (e.g., bronze, silver, gold) with clear boundaries. It's important to positively reinforce clients who need more and move them up tiers, rather than penalizing them.
4. Not All Clients or Practice Areas Are Equal: Subscription models may not fit every client or practice area, but with thoughtful segmentation and pricing, firms can attract ideal clients and avoid unprofitable work. It's okay—and often necessary—to turn away difficult or low-value clients.
5. Market Opportunity and Innovation: There is a huge, underserved “latent legal market” of people and businesses who need legal help but can't afford traditional hourly rates. Subscription and alternative pricing models, supported by technology and automation, can unlock this market and drive innovation in legal services.
__________________________
Learn more about Shaun Jardine's Value Based Pricing Colony.
Sign up for Paxton, my all-in-one AI legal assistant, helping me with legal research, analysis, drafting, and enhancing existing legal work product.
Here's a link to purchase lifetime access to the recordings of My Shingle's AI Teach-In if you couldn't make it live.
I've partnered with Pii to make it easy for you to purchase the hardware I use in my law firm: (1) Studio Setup; (2) Midrange Setup; (3) Highrange Setup.
Get Connected with SixFifty, a business and employment legal document automation tool.
Sign up for Gavel, an automation platform for law firms.
Check out my other show, the Law for Kids Podcast.
Visit Law Subscribed to subscribe to the weekly newsletter to listen from your web browser.
Prefer monthly updates? Sign up for the Law Subscribed Monthly Digest on LinkedIn.
Want to use the subscription model for your law firm? Sign up for the Subscription Seminar waitlist at subscriptionseminar.com.
Check out Mathew Kerbis' law firm Subscription Attorney LLC.
Get full access to Law Subscribed at www.lawsubscribed.com/subscribe
In this episode of Poised for Exit, we sit down with Steve Enzler, Owner and CEO of Tangible Values, a professional services firm supporting the accounting community.
Steve shares eye-opening insights into cybersecurity risks that most small businesses and even accounting firms don't realize they're exposed to. From new IRS and FTC requirements to real-world data breach examples, this conversation sheds light on what every business owner needs to know about protecting client information.
You'll learn about the Written Information Security Plan (WISP) now required for accounting firms, what “personally identifiable information” (PII) really means, and the practical steps any business can take to protect themselves and their clients.
Steve also explains how WispBuilder.com, one of Tangible Values' latest innovations, helps firms easily comply with cybersecurity mandates and avoid devastating fines and data breaches.
This is part one of a two-part discussion. Don't miss part two in a few weeks, where Steve dives deeper into real cases and evolving threats.
Learn more about Tangible Values here
Resources mentioned:
WispBuilder.com: Build and maintain your written information security plan.
IRS Publication 4557: Safeguarding Taxpayer Data: A Guide for Your Business.
New York Attorney General Settlement Announcement: Press release referenced in this episode.
Connect with Julie Keyes, Keyestrategies LLC
Founder, Consultant, Author, Podcaster and Instructor
S&P Futures are edging higher this morning as investors digest a busy slate of corporate earnings and await the start of the Federal Reserve's two-day policy meeting. President Trump continues his Asia tour, leaving Japan for South Korea where trade talks are set to take center stage. In corporate news, PayPal teams up with OpenAI to enable in-app purchases through ChatGPT, while new spinoffs from Honeywell and DuPont are set to join the S&P 500 next week. Tesla's EU sales slipped in September even as overall car sales in the region rose. We'll break down all the key movers including BMRN, CARR, UNH, and UPS trading higher after earnings, and AWI, PII, and WM under pressure. Plus, a look ahead to tonight's big reports from Visa and Mondelez, and tomorrow's heavyweights—Boeing, Caterpillar, Verizon, and C
Here are the top 5 takeaways from this episode with Austin Brittenham of 2nd Chair:
* AI is Transforming Legal Workflows and the Billable Hour: AI tools like 2nd Chair are changing how legal work is performed, reducing the need for billable hours and enabling lawyers to work more efficiently. This shift challenges the traditional billable hour model and encourages new business models in law.
* Access to Justice and the Latent Legal Market: There is a huge unmet demand for legal services—most people with legal needs never consult a lawyer. AI-powered tools and new pricing models can help lawyers serve this “latent legal market,” expanding access to justice and creating new revenue opportunities.
* Differentiation in Legal AI: Engineering and User Experience: 2nd Chair differentiates itself from competitors by building custom solutions for parsing legal documents and providing AI with citations, making it more reliable and useful for lawyers. Transparent, simple subscription pricing is also a key part of their appeal.
* The Evolving Role of Lawyers: AI is automating much of the routine, document-heavy work that has dominated legal practice since the rise of the billable hour. This is pushing lawyers back toward their traditional roles as counselors, advisors, and advocates, focusing on higher-value tasks that require human judgment.
* Changing Consumer Expectations and Legal Service Delivery: Clients are increasingly using AI tools themselves before consulting lawyers, which changes their expectations for speed, cost, and value. Lawyers need to adapt by leveraging AI to deliver faster, more cost-effective, and higher-quality services, while maintaining the human accountability clients still demand.
__________________________
Learn more about 2nd Chair.
Sign up for Paxton, my all-in-one AI legal assistant, helping me with legal research, analysis, drafting, and enhancing existing legal work product.
Here's a link to purchase lifetime access to the recordings of My Shingle's AI Teach-In if you couldn't make it live.
I've partnered with Pii to make it easy for you to purchase the hardware I use in my law firm: (1) Studio Setup; (2) Midrange Setup; (3) Highrange Setup.
Get Connected with SixFifty, a business and employment legal document automation tool.
Sign up for Gavel, an automation platform for law firms.
Check out my other show, the Law for Kids Podcast.
Visit Law Subscribed to subscribe to the weekly newsletter to listen from your web browser.
Prefer monthly updates? Sign up for the Law Subscribed Monthly Digest on LinkedIn.
Want to use the subscription model for your law firm? Sign up for the Subscription Seminar waitlist at subscriptionseminar.com.
Check out Mathew Kerbis' law firm Subscription Attorney LLC.
Get full access to Law Subscribed at www.lawsubscribed.com/subscribe
On May 20, 2025, I presented live on the topic of AI and Alternative Fee Arrangements at the American Bar Association's AI and Virtual Law Summit. Here are the top 5 takeaways:
* AI is Transforming Legal Practice Efficiency: The adoption of AI, especially generative AI, is revolutionizing how legal work is done. Lawyers who move away from the billable hour and embrace efficiency—using AI to complete tasks faster—can actually increase profitability, as less time spent on tasks means more money under alternative fee arrangements.
* The Subscription Model is a Profitable Alternative to Billable Hours: Moving to a subscription or flat-fee model provides predictable revenue for lawyers and cost transparency for clients. This model incentivizes efficiency, reduces burnout, and fosters better client relationships, as lawyers are no longer penalized for working quickly.
* The Latent Legal Market is a Huge Opportunity: A significant portion of legal needs in the U.S. (up to 90%) go unmet by lawyers, representing a massive, underserved market. Alternative fee structures and AI-powered efficiency can help lawyers tap into this “blue ocean” of potential clients who need affordable, predictable legal services.
* Using AI Ethically and Effectively is Critical: Lawyers must use AI tools correctly—choosing the right tool for the right task, understanding the importance of retrieval-augmented generation (RAG) for fact-based work, and being aware of data privacy and compliance issues. AI is a powerful assistant, but not a source of truth on its own.
* Legal Practice is Evolving—Adapt or Be Left Behind: The legal industry is shifting toward technology-driven, client-centered models. Lawyers who embrace AI, alternative fee arrangements, and productized services will be better positioned for the future. The billable hour may eventually be seen as outdated or even unethical, so now is the time to adapt.
__________________________
Here's a link to the slide deck that goes with the presentation.
Sign up for Paxton, my all-in-one AI legal assistant, helping me with legal research, analysis, drafting, and enhancing existing legal work product.
Here's a link to purchase lifetime access to the recordings of My Shingle's AI Teach-In if you couldn't make it live.
I've partnered with Pii to make it easy for you to purchase the hardware I use in my law firm: (1) Studio Setup; (2) Midrange Setup; (3) Highrange Setup.
Get Connected with SixFifty, a business and employment legal document automation tool.
Sign up for Gavel, an automation platform for law firms.
Check out my other show, the Law for Kids Podcast.
Visit Law Subscribed to subscribe to the weekly newsletter to listen from your web browser.
Prefer monthly updates? Sign up for the Law Subscribed Monthly Digest on LinkedIn.
Want to use the subscription model for your law firm? Sign up for the Subscription Seminar waitlist at subscriptionseminar.com.
Check out Mathew Kerbis' law firm Subscription Attorney LLC.
Get full access to Law Subscribed at www.lawsubscribed.com/subscribe
- Apple Announces 3nm M5 Processor - No In-Box Wall Wart for New MacBook Pro in EU and UK - Apple Announces iPad Pro Powered by M5 - Apple Outs Black Magic Keyboard for iPad Air - M5 Apple Vision Pro Up for Order - M2 Vision Pro Not Eligible for Trade-In - Apple Announces Accessories for Vision Pro - Apple Vision Pro App Hits iPad with iPadOS 26.1 - NBA 2K26 Arcade Edition Hits Apple Arcade Today - US Mint Previews $1 California Innovation Coin Featuring Steve Jobs - The FBI says sites are spoofing the FBI. Plus - a medical imaging company loses patient PII with no compensation. It's all on Checklist No. 444 - Find it today at checklist.libsyn.com - Catch Ken on Mastodon - @macosken@mastodon.social - Send Ken an email: info@macosken.com - Chat with us on Patreon for as little as $1 a month. Support the show at Patreon.com/macosken
- Joz Teases New Laptop in Cryptic Twitter Post - Omdia: Q3 Global Growth for Smartphones up 3%, Strongest Q3 Growth Ever for iPhone - IDC: Q3 Global Growth for Smartphones up 2.6%, Strongest Q3 Growth Ever for iPhone - Tata Subsidiary Buys Its Way Further Into Apple's Business - Apple Seeds Third blankOS 26.1 Betas to Public Testers - Public Testers Get New AirPods Firmware Betas - Cue Talks State of Apple TV: The Subscription Service - Apple TV Outs Trailer for “The Family Plan 2” - Third Season of “Loot” Starts on Apple TV - The FBI says sites are spoofing the FBI. Plus - a medical imaging company loses patient PII with no compensation. It's all on Checklist No. 444 - Find it today at checklist.libsyn.com - Catch Ken on Mastodon - @macosken@mastodon.social - Send Ken an email: info@macosken.com - Chat with us on Patreon for as little as $1 a month. Support the show at Patreon.com/macosken
- Cook Says iPhone Air to Hit China Next Week - iPhone Air Ships in China on Wednesday 22 October - Carriers Get Special eSIM Permission for iPhone Air - Apple Seeds Third Betas of blankOS 26.1 to Developers - AppleInsider Lists New Features Found in iOS 26.1 Developer Beta - Apple Drops the + from Apple TV+ - F1: The Movie Hits Apple TV: The Streaming Service on 12 December - Apple Store in Carlsbad, CA Makes Temporary Move for Renovations - Apple Ends Sound Service Programs for iPhone 12 Models and Original AirPods Pro - AirPods Pro 3 Make TIME's Best Inventions of 2025 - Sponsored by CleanMyMac - Now with Cloud Cleanup. Try 7 days free and use code MACOSKEN20 for 20% off at clnmy.com/MACOSKEN - The FBI says sites are spoofing the FBI. Plus - a medical imaging company loses patient PII with no compensation. It's all on Checklist No. 444 - Find it today at checklist.libsyn.com - Catch Ken on Mastodon - @macosken@mastodon.social - Send Ken an email: info@macosken.com - Chat with us on Patreon for as little as $1 a month. Support the show at Patreon.com/macosken
- Bloomberg's Gurman Expects at Least Two Updated Apple Products This Week - AT&T Website May Confirm M5-Powered iPad Pro - Ming-Chi Kuo: Hinge May Cost Less Than Expected on iPhone Foldable - Sites See blankOS 26.0.2 in Visitor Logs - Report: Apple Close to Buying Computer Vision Startup Prompt AI - SUNY Professors Sue Apple for Using Their Writing to Train A.I. - Apple Original Films and Chernin to Develop “Five Secrets” Feature - “Knife Edge: Chasing Michelin Stars” Hits Apple TV+ - Apple TV+ Crashes Into “The Last Frontier” - Apple Pulls “Clips” from App Store, Plans No More Updates - Sponsored by CleanMyMac - Now with Cloud Cleanup. Try 7 days free and use code MACOSKEN20 for 20% off at clnmy.com/MACOSKEN - The FBI says sites are spoofing the FBI. Plus - a medical imaging company loses patient PII with no compensation. It's all on Checklist No. 444 - Find it today at checklist.libsyn.com - Catch Ken on Mastodon - @macosken@mastodon.social - Send Ken an email: info@macosken.com - Chat with us on Patreon for as little as $1 a month. Support the show at Patreon.com/macosken
Here are the top 5 takeaways from this episode with Sateesh Nori of Just-Tech, LLC:
* AI and Technology Are Transforming Legal Services: The legal profession is undergoing a major shift as AI tools like RAG bots (Retrieval Augmented Generation) and platforms such as Paxton are making legal information and services more accessible, efficient, and affordable. These technologies can help lawyers serve more clients at scale and reduce overhead.
* The Billable Hour Model Is Outdated and Restrictive: The traditional billable hour model limits access to justice, incentivizes inefficiency, and perpetuates a pyramid scheme within law firms. Alternative fee arrangements, especially subscription models, empower lawyers to focus on value and client outcomes rather than time spent.
* Access to Justice Remains a Critical Challenge: A vast majority of Americans' legal needs go unmet each year due to high costs and systemic barriers. Technology and new business models can help bridge this gap, allowing lawyers to serve the “latent legal market” and provide affordable legal help to more people.
* Legal Education and Professional Culture Need Reform: Law schools and the broader legal culture are slow to adapt to technological change and alternative business models. There's a need for legal education to teach technology, business skills, and new ways of delivering legal services, rather than focusing solely on traditional paths.
* Actionable Legal Information vs. Legal Advice: The line between legal information and legal advice is blurry and often protectionist. AI tools can provide actionable legal information at scale, but regulatory frameworks need to evolve to allow innovation while protecting consumers. Lawyers should embrace these tools to remain competitive and relevant.
__________________________
Learn more about Just-Tech, LLC.
Sign up for Paxton, my all-in-one AI legal assistant, helping me with legal research, analysis, drafting, and enhancing existing legal work product.
Here's a link to purchase lifetime access to the recordings of My Shingle's AI Teach-In if you couldn't make it live.
I've partnered with Pii to make it easy for you to purchase the hardware I use in my law firm: (1) Studio Setup; (2) Midrange Setup; (3) Highrange Setup.
Get Connected with SixFifty, a business and employment legal document automation tool.
Sign up for Gavel, an automation platform for law firms.
Check out my other show, the Law for Kids Podcast.
Visit Law Subscribed to subscribe to the weekly newsletter to listen from your web browser.
Prefer monthly updates? Sign up for the Law Subscribed Monthly Digest on LinkedIn.
Want to use the subscription model for your law firm? Sign up for the Subscription Seminar waitlist at subscriptionseminar.com.
Check out Mathew Kerbis' law firm Subscription Attorney LLC.
Get full access to Law Subscribed at www.lawsubscribed.com/subscribe
Nick Constantino and Brian Jungles dive into the surprising resurgence of direct mail marketing. From data-driven targeting to fraud-free impressions, they unpack why this “unsexy” channel is outperforming digital in today's AI-saturated landscape. Learn how tactile media is reclaiming its place in full-funnel strategies and why marketers should rethink their approach to brand and lead generation.
✅ Key Takeaways:
• Direct mail offers 100% deliverability and high-value targeting using PII and layered data.
• Digital ad fraud is rampant—up to 50% of traffic can be fake or wasted.
• Direct mail impressions are tactile, memorable, and often live in homes for weeks.
• Integrated campaigns (mail + digital + CTV + retargeting) outperform siloed efforts.
• Unique offers and strong creative are essential—don't reuse billboard/web ads.
• Measurement tools like QR codes, call tracking, and A/B testing are now standard.
• Success requires repetition—one-off mailers don't work.
Download Perplexity Comet: AI-native Browser; Web Adoption and Security Talk with Favour Obasi-Ike | Get exclusive SEO newsletters in your inbox.
This episode covers the worldwide release of Perplexity AI's free "Comet" web browser, which occurred this past Thursday. We expressed excitement over this development, highlighting Comet's functionality as an AI-powered browser that can import Google Chrome extensions and act as a personal assistant, shopping, and email agent. The conversation extensively examines the implications of Comet's introduction on the browser market share, particularly in relation to the dominance of Google Chrome, and explores how this new tool affects Search Engine Optimization (SEO) strategies and content visibility for businesses. Finally, a significant portion of the discussion addresses crucial concerns regarding user privacy and data security when utilizing these advanced AI tools, emphasizing the need for caution and strategic use.
Next Steps for Digital Marketing + SEO Services:
>> Need SEO Services? Book a Complimentary SEO Discovery Call with Favour Obasi-Ike
>> Need more information? Visit our Work and PLAY Entertainment website to learn about our digital marketing services.
>> Visit our Official website for the best digital marketing, SEO, and AI strategies today!
FAQs about this episode
1. What is the Perplexity AI Comet Browser?
Comet is an AI web browser released by Perplexity AI. Comet essentially integrates Perplexity AI capabilities into a browser format. The concept involves having an AI web browser, similar to using Google Chrome but with AI integration.
2. When was the Comet browser released, and to whom?
The free Comet browser was recently made available to everyone worldwide. It was announced on a Thursday. However, Comet was initially released to people who had Perplexity Max in July. This three-month period (July to October) allowed Perplexity to keep it exclusive within their beta program or exclusive community before releasing it universally.
3. How can I download the Comet browser, and what platforms is it available on?
You can download the Comet browser by visiting perplexity.ai/comment. It is available for both Mac and Windows.
4. What are the key features and capabilities of the Comet browser?
The Comet browser offers several features that distinguish it from traditional browsers:
• Extension Import: You can import your Google Chrome extensions into the Comet AI browser.
• Agentic Capabilities: It is described as a personal assistant that helps with many things. It can:
  ◦ Autonomously control browser actions, such as closing tabs and opening pages.
  ◦ Fill out forms.
  ◦ Control Google Drive.
  ◦ Shop for you.
  ◦ Send out emails, leveraging a feature called "background assistant".
• Current Focus: It is currently heavily focused on the web, though a mobile app is anticipated, similar to the existing Google Chrome app and Perplexity app.
5. Why did Perplexity AI release the Comet browser?
Perplexity is doing this to gain market share and compete with major rivals, particularly Google. The current browser market is heavily dominated by Google Chrome, which holds about 72% of the market share (specifically cited as 71.77% to 71.86% recently).
6. How is Perplexity AI related to Microsoft and other platforms?
Perplexity is closely associated with Microsoft and Bing. The platforms are interconnected, as LinkedIn is also owned by Microsoft. It is noted that Microsoft is also involved with Copilot and is "somewhere in the mix" of OpenAI/ChatGPT content, further connecting it to Comet.
7. What are the major concerns regarding security and privacy with agentic AI browsers?
The primary concerns revolve around security, privacy, and user adoption. Since the Comet browser can autonomously control browser actions, access Google Drive, and fill out forms, there are questions about how much security is provided.
• Data Compromise: One critical concern is that if a company's chosen AI platform (like Comet) lacks necessary security measures, a client could be exposed to a hack, potentially compromising years of hard work.
• Lack of Regulation: There is a belief that there is not enough regulation surrounding privacy in the AI space, often favoring convenience and productivity over individual privacy.
8. How will AI search browsers impact SEO and business visibility?
AI search models are changing how businesses achieve visibility:
• Beyond Top 10: AI models are no longer just scanning the top 10 search pages; they are scanning anywhere between 10 to 40 links or sources. Businesses should aim to be in this "Top 40 listing".
• Platform Diversity: Visibility is achieved when a brand is interconnected across various platforms, including LinkedIn, YouTube, Google, Pinterest, the website, blogs, videos, audios, and podcasts.
• LinkedIn Importance: If Perplexity uses LinkedIn as one of its information sources, having a complete and active LinkedIn profile is significant for search results.
• Contextual Content: Content needs to be contextually relevant, moving beyond just typing basic search phrases like "best restaurant near me".
• SEO Relevance: SEO remains important; even if AI models like ChatGPT handle e-commerce orders, they are still pulling information from sources with high domain authority, which is based on SEO principles.
9. What are the best practices for leveraging AI tools like Comet?
Users should adopt a strategic approach when using these new AI tools:
• Strategy and Learning: Use AI to strategize, discover different angles, and find solutions to problems you haven't considered. Ask AI how to improve upon an idea or find what is missing from your strategy.
• Strategy vs. Dependence: Use AI as a tool to improve yourself and learn, but do not depend on it.
• Privacy Protection: Exercise caution regarding privacy. Do not give out personal identifying information (PII) such as your specific address, phone number, or names of family members. Ask general questions instead of highly specific personal ones.
• Prompt Awareness: Be aware that all prompts written into ChatGPT are typically indexed into Google unless you change your settings.
Digital Marketing SEO Resources:
>> Join our exclusive SEO Marketing community
>> Read SEO Articles
>> Need SEO Services? Book a Complimentary SEO Discovery Call with Favour Obasi-Ike
>> Subscribe to the We Don't PLAY Podcast
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
“Our approach is simple: remove the PII from the data stream, and you don't have to worry about compliance,” said Bill Placke, President, Americas at SecurePII. At WebexOne in San Diego, Doug Green, Publisher of Technology Reseller News, spoke with Jason Thals, COO of BroadSource, and Placke of SecurePII about their finalist recognition in Cisco's Dynamic Duo competition. The joint solution, built on Cisco Webex Contact Center, is designed to unlock AI's potential by enabling enterprises to leverage large language models without exposing sensitive personal data. SecurePII's flagship product, SecureCall, was purpose-built for Webex (and also available on Genesys) to deliver PCI compliance while removing personally identifiable information from voice interactions. This enables organizations to deploy AI and agentic automation confidently, without the regulatory risk tied to data privacy laws across the U.S., GDPR, and beyond. Thals emphasized BroadSource's role in delivering services that complement CCaaS and UCaaS platforms globally, while Placke framed the opportunity for Cisco partners: “This is a super easy bolt-on, available in the Webex App Hub. Customers can be up and running in 30 minutes and compliant.” The collaboration, already proven with a government-regulated client in Australia, is industry-agnostic and scalable from small deployments to 50,000+ users. For Cisco resellers, it represents a powerful, sticky service that integrates seamlessly into channel models while helping enterprises stay compliant as they modernize customer engagement. Learn more at BroadSource and SecurePII.
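SecureCall's internals aren't public, so as a purely illustrative companion to Placke's "remove the PII from the data stream" idea, here is a minimal sketch of stream-side redaction applied before any AI or analytics layer sees the text. Everything in it (the pattern list, function names, and placeholder format) is an assumption for illustration only, not SecurePII's or Cisco's implementation, and a production redactor would rely on far stronger detection than regexes.

```python
import re

# Illustrative-only patterns; a real redactor would add NER models,
# Luhn checks for card numbers, locale-specific formats, and audit logging.
PII_PATTERNS = {
    "card_number": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything that looks like PII with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

def safe_for_ai(transcript_chunk: str) -> str:
    # Only the redacted text leaves the compliance boundary, so the
    # downstream LLM or analytics pipeline never holds raw PII.
    return redact(transcript_chunk)

if __name__ == "__main__":
    sample = "Caller: my card is 4111 1111 1111 1111 and my email is jane@example.com"
    print(safe_for_ai(sample))
    # -> Caller: my card is [CARD_NUMBER REDACTED] and my email is [EMAIL REDACTED]
```

The design choice this illustrates is the one Placke describes: if the sensitive values never enter the stream, the AI layer has nothing to leak and the compliance question largely disappears.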
In this episode, we talk about practical guardrails for LLMs with data scientist Nicholas Brathwaite. We focus on how to stop PII leaks, retrieve data, and evaluate safety with real limits. We weigh managed solutions like AWS Bedrock against open-source approaches and discuss when to skip LLMs altogether.
• Why guardrails matter for PII, secrets, and access control
• Where to place controls across prompt, training, and output
• Prompt injection, jailbreaks, and adversarial handling
• RAG design with vector DB separation and permissions (see the sketch below)
• Evaluation methods, risk scoring, and cost trade-offs
• AWS Bedrock guardrails vs open-source customization
• Domain-adapted safety models and policy matching
• When deterministic systems beat LLM complexity
This episode is part of our "AI in Practice” series, where we invite guests to talk about the reality of their work in AI. From hands-on development to scientific research, be sure to check out other episodes under this heading in our listings.
Related research: Building trustworthy AI: Guardrail technologies and strategies (N. Brathwaite)
Nic's GitHub
What did you think? Let us know.
Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:
LinkedIn - Episode summaries, shares of cited articles, and more.
YouTube - Was it something that we said? Good. Share your favorite quotes.
Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
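One topic from the list above, RAG design with vector DB separation and permissions, is concrete enough to sketch. The example below is a generic illustration under assumed names (the Chunk structure, group tags, and function are invented; they are not from the episode, AWS Bedrock, or any specific vector database): retrieved chunks are filtered by the caller's entitlements before prompt assembly, so the model never receives text the user isn't allowed to see.

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    text: str
    allowed_groups: set = field(default_factory=set)  # who may see this chunk
    score: float = 0.0                                 # similarity score from the vector DB

def permission_filtered_context(candidates: list[Chunk], user_groups: set, k: int = 4) -> str:
    """Keep only chunks the user is entitled to, then take the top-k by score.

    Filtering happens before prompt assembly, so restricted text never
    reaches the LLM for an unauthorized user - the model can't leak what
    it never received.
    """
    visible = [c for c in candidates if c.allowed_groups & user_groups]
    visible.sort(key=lambda c: c.score, reverse=True)
    return "\n\n".join(c.text for c in visible[:k])

if __name__ == "__main__":
    results = [
        Chunk("Q3 revenue summary...", {"finance"}, 0.91),
        Chunk("Public product FAQ...", {"everyone"}, 0.82),
        Chunk("Employee salary bands...", {"hr"}, 0.80),
    ]
    # A support agent in the "everyone" group sees only the public chunk.
    print(permission_filtered_context(results, user_groups={"everyone"}))
```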
CISA gives federal agencies 24 hours to patch a critical Cisco firewall bug. Researchers uncover the first known malicious MCP server used in a supply chain attack. The New York SIM card threat may have been overblown. Microsoft tags a new variant of the XCSSET macOS malware. An exposed auto insurance claims database puts PII at risk. Amazon will pay $2.5 billion to settle dark pattern allegations. Researchers uncover North Korea's hybrid playbook of cybercrime and insider threats. An old Hikvision security camera vulnerability rears its ugly head. Dan Trujillo from the Air Force Research Laboratory's Space Vehicles Directorate joins Maria Varmazis, host of T-Minus Space Daily, to discuss how his team is securing satellites and space systems from cyber threats. DOGE delivers dysfunction, disarray, and disappointment.
Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.
CyberWire Guest
Dan Trujillo from the Air Force Research Laboratory's Space Vehicles Directorate joins Maria Varmazis, host of T-Minus Space Daily, to discuss how his team is securing satellites and space systems from cyber threats. He also shares advice for breaking into the fast-growing field of space cybersecurity.
Selected Reading
Federal agencies given one day to patch exploited Cisco firewall bugs (The Record)
First malicious MCP Server discovered, stealing data from AI-Powered email systems (Beyond Machines)
Secret Service faces backlash over SIM farm bust as experts challenge threat claims (Metacurity)
Microsoft warns of new XCSSET macOS malware variant targeting Xcode devs (Bleeping Computer)
Microsoft cuts off cloud services to Israeli military unit after report of storing Palestinians' phone calls (CNBC)
Auto Insurance Platform Exposed Over 5 Million Records Including Documents Containing PII (Website Planet)
Amazon pays $2.5 billion to settle Prime memberships lawsuit (Bleeping Computer)
DeceptiveDevelopment: From primitive crypto theft to sophisticated AI-based deception (We Live Security)
Critical 8 years old Hikvision Camera flaw actively exploited again (Beyond Machines)
The Story of DOGE, as Told by Federal Workers (WIRED)
Share your feedback. What do you think about CyberWire Daily? Please take a few minutes to share your thoughts with us by completing our brief listener survey. Thank you for helping us continue to improve our show.
Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info.
The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc.
Learn more about your ad choices. Visit megaphone.fm/adchoices
Security leaders from CyberArk, Fortra, and Sysdig share actionable strategies for securely implementing generative AI and reveal real-world insights on data protection and agent management.
Topics Include:
Panel explores practical security approaches for GenAI from prototype to production
Three-phase framework discussed: planning, pre-production, and production security considerations
Security must be built-in from start - data foundation is critical
Understanding data location, usage, transformation, and regulatory requirements is essential
Fortra's security conglomerate approach integrates with AWS native tools and partners
Machine data initially easier for compliance - no PII or HIPAA concerns
Identity paradigm shift: agents can dynamically take human and non-human roles
97% of organizations using AI tools lack identity and access policies
Security responsibility increases as you move up the customization stack
OWASP Top 10 for GenAI addresses prompt injection and data poisoning
Rigorous model testing including adversarial attacks before deployment is crucial
Sysdig spent 6-9 months stress testing their agent before production release
Tension exists between moving fast and implementing proper security controls
Different security approaches needed based on data sensitivity and model usage
Zero-standing privilege and intent-based policies critical for agent management (see the sketch below)
Multi-agent systems create "Internet of Agents" with exponentially multiplying risks
Discovery challenge: finding where GenAI is running across enterprise environments
API security and gateway protection becoming critical with acceptable latency
Top customer need: translating written AI policies into actionable controls
Threat modeling should focus on impact rather than just vulnerability severity
Participants:
Prashant Tyagi - Go-To-Market Identity Security Technology Strategy Lead, CyberArk
Mike Reed – Field CISO, Cloud Security & AI, Fortra
Zaher Hulays – Vice President Strategic Partnerships, Sysdig
Matthew Girdharry - WW Leader for Observability & Security Partnerships, Amazon Web Services
Further Links:
CyberArk: Website – LinkedIn – AWS Marketplace
Fortra: Website – LinkedIn – AWS Marketplace
Sysdig: Website – LinkedIn – AWS Marketplace
See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
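The zero-standing-privilege and intent-based-policy point above lends itself to a small sketch. The code below is a hypothetical illustration (the policy fields and names are invented, not drawn from CyberArk, Fortra, Sysdig, or AWS): an agent's tool calls are denied by default and allowed only when they match an explicit, narrowly scoped policy entry, so the agent carries no standing permissions between calls.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    agent: str       # which agent the rule applies to
    tool: str        # tool the agent wants to invoke
    action: str      # verb, e.g. "read" or "send"
    resource: str    # resource prefix the action is limited to

# Illustrative policy table: nothing is permitted unless listed here.
POLICIES = [
    Policy("support-agent", "ticketing", "read", "tickets/"),
    Policy("support-agent", "email", "send", "customers/"),
]

def authorize(agent: str, tool: str, action: str, resource: str) -> bool:
    """Deny by default; allow only an exact agent/tool/action match within scope."""
    return any(
        p.agent == agent and p.tool == tool and p.action == action
        and resource.startswith(p.resource)
        for p in POLICIES
    )

if __name__ == "__main__":
    print(authorize("support-agent", "ticketing", "read", "tickets/1042"))    # True
    print(authorize("support-agent", "ticketing", "delete", "tickets/1042"))  # False - action not in policy
    print(authorize("support-agent", "email", "send", "internal/payroll"))    # False - resource out of scope
```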
In the leadership and communications segment, Lack of board access: The No. 1 factor for CISO dissatisfaction, Pressure on CISOs to stay silent about security incidents growing, The Secret to Building a High-Performing Team, and more! Jackie McGuire sits down with Chuck Randolph, SVP of Strategic Intelligence & Security at 360 Privacy, for a gripping conversation about the evolution of executive protection in the digital age. With over 30 years of experience, Chuck shares how targeted violence has shifted from physical threats to online ideation—and why it now starts with a click. From PII abuse to unregulated data brokers, generative AI manipulation, and real-world convergence of cyber and physical risks—this is a must-watch for CISOs, CSOs, CEOs, and anyone navigating modern threat landscapes. Hear real-world examples, including shocking stories of doxxing, AI-fueled radicalization, and the hidden dangers of digital exhaust. Whether you're in cyber, physical security, or executive leadership, this interview lays out the urgent need for converged risk strategies, narrative control, and a new approach to duty of care in a remote-first world. Learn what every security leader needs to do now to protect key personnel, prevent exploitation, and build a unified, proactive risk posture. This segment is sponsored by 360 Privacy. Learn how to integrate privacy and protective intelligence to get ahead of the next threat vector at https://securityweekly.com/360privacybh! In this exclusive Black Hat 2025 interview, CyberRisk TV host Matt Alderman sits down with Tom Pore, AVP of Sales Engineering at Pentera, to dive into the rapidly evolving world of AI-driven cyberattacks. What's happening? Attackers are already using AI and LLMs to launch thousands of attacks per second—targeting modern web apps, exploiting PII, and bypassing traditional testing methods. Tom explains how automated AI payload generation, context-aware red teaming, and language/system-aware attack modeling are reshaping the security landscape. The twist? Pentera flips the script by empowering security teams to think like an attacker—using continuous, AI-powered penetration testing to uncover hidden risks before threat actors do. This includes finding hardcoded credentials, leveraging leaked identities, and pivoting across systems just like real adversaries. To learn more about Pentera's proactive Ransomware testing please visit: https://securityweekly.com/penterabh Visit https://www.securityweekly.com/bsw for all the latest episodes! Show Notes: https://securityweekly.com/bsw-413
NumberEight converts mobile sensor data into contextual audience segments without capturing PII, addressing the fundamental breakdown of cookie-based targeting as media consumption fragments across podcasts, gaming, and connected TV. What began as a thesis project for contextual SoundCloud recommendations has evolved into a B2B data platform serving podcast platforms, media sales houses, and agencies. In this episode of Category Visionaries, we sat down with Abhishek Sen to unpack how NumberEight navigates the complex adtech ecosystem and the tactical GTM strategies that drive their expansion across multiple customer segments simultaneously. Topics Discussed: How NumberEight evolved from a Netherlands thesis project (contextual SoundCloud recommendations) to solving adtech's identity crisis Technical architecture: converting mobile sensor data to contextual audience segments without PII collection Multi-segment GTM approach across podcast platforms (AdSwizz, Triton), media sales houses, and agencies Why the company targets podcasting and gaming simultaneously despite different data density challenges Conference strategy: 45+ targeted meetings per event while completely avoiding booths Building category credibility through IAB Tech Lab standards work and white paper contributions The breakdown of cookie-based targeting as consumption fragments beyond web browsers GTM Lessons For B2B Founders: Execute systematic conference preparation to maximize deal flow: Sen books 45+ targeted meetings across 4-day conferences like Cannes Lions through advance relationship mapping and mutual connection identification. The tactical framework: pre-research each prospect's annual priorities, identify shared connections for warm introductions, and plan specific value propositions for each conversation. Execute daily follow-up during the conference to prevent pipeline degradation. Sen's insight: "Prep is incredibly important... we evaluate okay, Brett, head of monetization at ABC Company. Who does Brett know that I know? What is the actual proposition we want to discuss?" Avoid booth competition when capital-constrained: NumberEight deliberately avoids exhibition booths at major conferences, recognizing the futility of competing against Amazon's "entire city mockups" and Google's massive displays. Instead, they focus on authentic relationship building through targeted meetings and dinner sponsorships. The strategic principle: startups should leverage their authenticity advantage rather than attempting to out-spend established players in awareness channels where they're fundamentally disadvantaged. Maintain strict messaging separation between investor and customer tracks: Sen emphasizes the critical disconnect between vision-focused investor pitches and problem-focused customer conversations. His customer insight: "You tell any customer you're going to revolutionize... they're like 'man, you make me money, I'll be your friend.'" The implementation: develop completely separate messaging frameworks where investor decks emphasize market transformation while customer presentations focus exclusively on measurable business impact and revenue generation. Build category authority through standards body participation: NumberEight invests significant engineering resources in IAB Tech Lab white papers and industry standards development without direct revenue impact. This work establishes credibility when defining new data categories in established industries. 
Sen's co-founder leads technical working groups on identity-less targeting standards. The strategic value: "If you're trying to change the game, you have to be seen as someone giving back to the ecosystem and that helps drive your credibility." Time market entry around regulatory and consumption pattern shifts: NumberEight's positioning leverages two simultaneous disruptions: privacy regulation breakdown of cookie-based targeting and consumption fragmentation beyond web browsers. Sen identifies the core market inefficiency: "Consumption has moved beyond the web... but the data companies, in terms of how data is actually collected, hasn't changed. There's a mismatch." Founders should identify regulatory or technological shifts that create incumbent solution inadequacy and time market entry accordingly. Focus on vertical-specific events over broad industry conferences: NumberEight exclusively attends podcasting-focused (specific platforms), gaming-focused, or adtech-specific conferences rather than generalist marketing events. Sen explains: "We don't attend any conferences that are generalistic... The ones we attend are very focused on either podcasting or gaming or adtech focused ones. That's where we get the most bang for buck." This concentration strategy yields higher prospect quality and more productive pipeline development than broad industry networking. // Sponsors: Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co // Don't Miss: New Podcast Series — How I Hire Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role. Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM
Patrick (Tracer Labs) breaks down Trust ID, a consent + identity layer that replaces cookie pop-ups with a portable, user-owned identity (and embedded wallet). We dig into how Tracer helps brands unify siloed data without storing PII, verify real humans amid AI traffic, and enable one-click privacy that travels site-to-site.
Timestamps
[00:00] AI = most traffic; attribution is broken
[00:01] Intro — Patrick, Tracer Labs & Trust ID
[00:02] Patrick's crypto origin story & prior ventures
[00:05] The problem: siloed brand data + compliance burden
[00:06] What Trust ID does: consent + identity + embedded wallet
[00:07] One-click wedge: spin up wallet, tokenize consent, no more cookies
[00:09] Brands get real humans, no PII; users keep privacy & control
[00:12] GDPR/CCPA costs; why a new US standard is needed
[00:15] AI search & bot traffic: restoring pre-intent signal
[00:18] Federated identity, modular plug-in, keep existing auth
[00:19] Agentic "child IDs" w/ wallets & rule sets (Q1 roadmap)
[00:20] KYC/KYB as commoditized credentials that travel with you
[00:22] Live MVP; replacing legacy consent managers; early clients
[00:24] Who's adopting: cards, casinos, banks, travel; multi-brand SSO
[00:25] Unifying loyalty & rewards across properties
[00:26] Founder advice: talk to customers on day one
[00:31] Digital identity misconceptions; why this time is different
[00:33] Abstraction for users; less friction, fewer decisions
[00:36] Vision: 0.5–1B users; cut spam; programmatic commerce
[00:38] The ask: hiring devs; enterprise intros; $15M seed open
Connect
https://www.tracerlabs.com/
https://www.linkedin.com/company/tracerlabs/
https://www.linkedin.com/in/patrickmoynihan1/
Disclaimer
Nothing mentioned in this podcast is investment advice; please do your own research. Finally, it would mean a lot if you can leave a review of this podcast on Apple Podcasts or Spotify and share this podcast with a friend.
Be a guest on the podcast or contact us - https://www.web3pod.xyz/
Nonprofits, your "10 blue links" era is over. In this episode, Avinash Kaushik (Human Made Machine; Occam's Razor) breaks down Answer Engine Optimization—why LLMs now decide who gets seen, why third-party chatter outweighs your own site, and what to do about it. We get tactical: build AI-resistant content (genuine novelty + depth), go multimodal (text, video, audio), and stamp everything with real attribution so bots can't regurgitate you into sludge. We also cover measurement that isn't delusional—group your AEO referrals, expect fewer visits but higher intent, and stop worshiping last-click and vanity metrics. Avinash updates the 10/90 rule for the AI age (invest in people, plus "synthetic interns"), and torpedoes linear funnels in favor of See-Think-Do-Care anchored in intent. If you want a blunt, practical playbook for staying visible—and actually converting—when answers beat searches, this is it.
About Avinash
Avinash Kaushik is a leading voice in marketing analytics—the author of Web Analytics: An Hour a Day and Web Analytics 2.0, publisher of the Marketing Analytics Intersect newsletter, and longtime writer of the Occam's Razor blog. He leads strategy at Human Made Machine, advises Tapestry on brand strategy and marketing transformation, and previously served as Google's Digital Marketing Evangelist. Uniquely, he donates 100% of his book royalties and paid newsletter revenue to charity (civil rights, early childhood education, UN OCHA; previously Smile Train and Doctors Without Borders). He also co-founded Market Motive.
Resource Links
Avinash Kaushik — Occam's Razor (site/home)
Marketing Analytics Intersect (newsletter sign-up)
AEO series starter: "AI Age Marketing: Bye SEO, Hello AEO!" (Occam's Razor)
See-Think-Do-Care (framework explainer, Occam's Razor)
Books: Web Analytics: An Hour a Day | Web Analytics 2.0 (author pages)
Human Made Machine (creative pre-testing): Home | About | Products (humanmademachine.com)
Tapestry (Coach, Kate Spade) (company site)
Tools mentioned (AEO measurement): Trakkr (AI visibility / prompts / sentiment) | Evertune (AI Brand Index & monitoring, evertune.ai)
GA4 how-tos (for your AEO channel + attribution): Custom Channel Groups (create an "AEO" channel) | Attribution Paths report (multi-touch view) (Google Help)
Nonprofit vetting (Avinash's donation diligence): Charity Navigator (ratings)
Google for Nonprofits — Gemini & NotebookLM (AI access): announcement / overview | Workspace AI for nonprofits blog
Example NGO Avinash supports: EMERGENCY (Italy)
Transcript
Avinash Kaushik: [00:00:00] So traffic's gonna go down. So if you're a business, you're a nonprofit, how do you deal with the fact that you're gonna lose a lot of traffic that you get from a search engine? Today, not all of humanity has moved to the answer-engine world; only about two or three percent of people are doing it, but it's growing very rapidly. And so the art of answer engine optimization is making sure that we are building for these LLMs and not getting stuck with only solving for Google with the old SEO techniques. Some of them still work, but you need to learn a lot of new stuff, because on average, organic traffic will drop between 16 and 64 percent and paid search traffic will drop between 5 and 30 percent. And that is a huge challenge.
And the reason you should start with AEO now... George Weiner: [00:01:00] This week's guest, Avinash Kaushik, is an absolute hero of mine because of his amazing work in the field of web analytics and, also, more importantly I'd say, education. Avinash Kaushik, digital marketing evangelist at Google for Google Analytics. He spent 16 years there. He basically was in the room where it happened, when the underlying ability to understand what's going on on our websites was created. More importantly, I think for me, you know, he joined us on episode 45 back in 2016, and he still is, I believe, on the cutting edge of what's about to happen with AEO and the death of SEO. I wanna unpack that 'cause we kind of fly through terms [00:02:00] before we get into this podcast interview: AEO, answer engine optimization. It's this world of saying, alright, how do we create content that can't just be regurgitated by bots, wholesale taken. And it's a big shift from SEO, search engine optimization, this classic work of creating content for Google to give us 10 blue links for people to click on. That behavior is changing. And when we go through a period of change, I always wanna look at primary sources, the people that are likely to know the most and do the most. And he operates in the for-profit world, but make no mistake, he cares deeply about nonprofits. His expertise has frankly been tested, proven and reproven. So I pay attention when he says things like SEO is going away and AEO is here to stay. So I give you Avinash Kaushik. I'm beyond excited that he has come back. He was on our 45th episode and now we are well over our 450th episode. So who knows what'll happen next time we talk to him. [00:03:00] This week on the podcast, we have Avinash Kaushik. He is currently the chief strategy officer at Human Made Machine, but actually a returning guest after many, many years, and I know him because he basically introduced me to Google Analytics, wrote the literal book on it, and also helped, by the way, no big deal, literally birth Google Analytics for everyone during his time at Google. I could spend the entire podcast talking about the amazing amounts that you have contributed to marketing and analytics, but I'd rather just real quick: how are you doing, and how would you describe your role right now? Avinash Kaushik: Oh, thank you. I'm very excited to be back, and looking forward to the discussion today. I do several things concurrently, of course. I am an author and I write this weekly newsletter on marketing and analytics. I am the Chief Strategy Officer at Human Made Machine, a company [00:04:00] that obsesses about helping brands win before they spend by doing creative pretesting. And then I also do consulting at Tapestry, which owns Coach and Kate Spade. And my work focuses on brand strategy and marketing transformation globally. George Weiner: Amazing. And of course, Occam's Razor, the blog, which is incredible. I happen to be a subscriber. You know, I often think of you in the nonprofit landscape, even though you operate across many different brands, because personally, you also actually donate all of your proceeds from your books, from your blog, from your subscription. You are donating all of that, because that's just who you are and what you do. So I also look at you as, like, team nonprofit. Avinash Kaushik: You're very kind. No, no, yeah.
All the proceeds from both of my books and now my newsletter, the premium newsletter, it's about $200,000 a year, donated to nonprofits; a hundred [00:05:00] percent of the revenue is donated to nonprofits. And for me, the work is then figuring out which ones. And so I research nonprofits and I look up their Charity Navigator ratings, and I follow up with the people and I check in on the work. So while I don't work at a nonprofit, as a customer of nonprofits, if you will, I keep very close tabs on the amazing work that these charities do around the world. So I feel very close to the people that you work with very closely. George Weiner: So recently I got an all-caps subject line from you, talking about this new acronym that was coming to destroy the world, I think is what you know as AEO. Can you help us understand what answer engine optimization is? Avinash Kaushik: Yes, of course. Of course. We all are very excited about AI. Obviously you would have to live in some backwaters not to be excited about it. And we know [00:06:00] that, at the very edge, lots of people are using large language models, ChatGPT, Claude, Gemini, et cetera, et cetera, in the world. And increasingly over the last year, what you have begun to notice is that instead of using a traditional search engine like Google, or using the old Google interface with the 10 blue links, et cetera, people are beginning to use these LLMs. They just go to ChatGPT to get the answer that they want. And the one big difference in this behavior is: I actually have, on September 8th, a keynote here in New York, and I have to be in Shanghai the next day. That is physically impossible because of the time it takes to travel. But that's my thing. So today, if I wanted to figure out what is the fastest way, on September 8th, I can leave New York and get to Shanghai, I would go to Google Flights. I would put in the destinations. It will come back with a crap load of data. Then I poke and prod and sort and filter, and I have to figure out which flight is right for this need I have. [00:07:00] So that is the old search engine world: I'm doing all the work, hunting and pecking, drilling down, visiting websites, et cetera, et cetera. Instead, actually, what I did is I went to ChatGPT, 'cause I have a Plus account, I'm a paying member of ChatGPT, and I said to ChatGPT: I have to do a keynote between four and five o'clock on September 8th in New York and I have to be in Shanghai as fast as I possibly can be after my keynote; can you find me the best flight? And I just typed in those two sentences. It came back and said, this Korean airline flight is the best one for you. You will not get to your destination on time unless you take a private jet flight for $300,000. Here is your best option: you're gonna get to Shanghai on September 10th at 10 o'clock in the morning if you follow these steps. And so what happened there? I didn't have to hunt and peck and dig and go to 15 websites to find the answer I wanted. The engine found the [00:08:00] answer I wanted at the end and did all the work for me. That shift you are seeing, from searching, clicking, clicking, clicking to just having somebody get you the final answer, is what I call the underlying change in consumer behavior that makes answer engines so exciting.
Obviously, it creates a challenge for us, because what happened between those two things, George, is I didn't have to visit many websites. So traffic is going down, obviously, and these interfaces at the moment don't have paid search links. For now. They will come, they will come, but they don't at the moment. So traffic's gonna go down. So if you're a business, you're a nonprofit, how do you deal with the fact that you're gonna lose a lot of traffic that you get from a search engine? Today, not all of humanity has moved to the answer-engine world; only about two or three percent of people are doing it, but it's growing very rapidly. And so the art of answer engine optimization [00:09:00] is making sure that we are building for these LLMs and not getting stuck with only solving for Google with the old SEO techniques. Some of them still work, but you need to learn a lot of new stuff, because on average, organic traffic will drop between 16 and 64 percent and paid search traffic will drop between 5 and 30 percent. And that is a huge challenge. And the reason you should start with AEO now... George Weiner: That, you know, is a window large enough to drive a metaphorical data bus through. And I think, talk to your data doctor, results may vary. You are absolutely right. We have been seeing this with our nonprofit clients, with our own traffic: that yes, basically staying even is the new growth. Yeah. But I want to sort of talk about the secondary implications of an AI that has ripped and gripped [00:10:00] my website's content, then added whatever other flavors of my brand and information out there, and is then advising somebody or talking about my brand. Can you maybe unwrap that a little bit more? What are the secondary impacts of, frankly, an AI answering "what is the best international aid organization I should donate to?" Which, as you just said, you do. Avinash Kaushik: Exactly. No, no, this is such a wonderful question. It gets to the crux. What used to influence Google (by the way, Google also has an answer engine called Gemini, so when I say Google, I'm referring to the current Google that most people use, with four paid links and 10 SEO links; I don't want anybody saying Google is not getting into the answer engine business, it is). So Google is very much influenced by content, George, that you create. I call it 1P content, [00:11:00] first-party content: your website, your mobile app, your YouTube channel, your Facebook page, and so on. And it sprinkles on some amount of third-party content. Some websites might have reviews about you, like Yelp; some websites might have PR releases about you; light third-party content. Between search engines and answer engines, answer engines seem to overvalue third-party content. My 1P content, my website, my mobile app, my YouTube channel, everything, is actually going down in influence, while on Google it's pretty high. So there, you do SEO, you're good: good ranking, traffic. But these LLMs are using many, many, many, literally tens of thousands more sources.
To understand who you are, who you are as a nonprofit. And it's [00:12:00] using everybody's videos, everybody's Reddit posts, everybody's Facebook things, and tens of thousands more people who write blogs and all kinds of stuff, in order to understand who you are as a nonprofit, what services you offer, how good you are, where you're falling short, all those negative reviews or positive reviews. It's all creating influence. Third-party influence has gone through the roof, 1P has come down, which is why it has become very, very important for us to build a new content strategy, to figure out how we can influence these LLMs about who we are. Because the scary thing is, at this early stage in answer engines, someone else is telling the LLMs who you are, instead of you. And that feels a little scary. It feels scary as a brand. It feels very scary to me as chief strategy officer at Human Made Machine. It feels scary for HMM. It feels scary for Coach. [00:13:00] It's scary for everybody, which is why you really urgently need to get a handle on your content strategy. George Weiner: Yeah, I mean, what you just described, if it doesn't give you, like, anxiety, just stop right now and replay what we just did. And those are the second-order effects. And you know, one of my concerns, you mentioned it early on, is that with traditional SEO, we've been playing the 10-blue-links game for so long, and I'm worried because of the changes right now: roughly 20% of searches have an AI Overview, and that number's not gonna go down. You're mentioning third-party stuff; all of Instagram back to 2020 just quietly got tossed into the soup of your AI brand footprint, as we call it. Talk to me about this: there's a nonprofit listening to this right now, and then probably, if they're smart, other organizations. What is coming in the next year? They're sitting down to write the same style of, you know, [00:14:00] AI-, SEO-optimized content, right? They have their content calendar. If you were sitting in the room with them, what are you telling that classic content strategy team right now that's about to embark on 2026? Avinash Kaushik: Yes. So actually I published this newsletter just last night, and this is like the fourth in my AEO series; the newsletter talks about how to create your content portfolio strategy. Because in the past we were like, we've got product pages, you know, the equivalent of our product pages, we've got some charitable stories on our website, and so on and so forth. And that's good. That's basic. You need to do the basics. The interesting thing is you need to do so much more, both on first party. So for example, one of the first things to appreciate is LLMs, or answer engines, are far more influenced by multimodal content. So what does that mean? Text plus [00:15:00] video plus audio. Video and audio were also helpful in Google (and remember, when I say Google, I'm referring to the old linky-linking Google, not Gemini), but now video has a ton more influence. So if you're creating a content strategy for next year, you should say... actually, lemme do one at a time. Text: you have to figure out more types of things. Authoritative Q&As. Very educational, deep content around your charity's efforts. Lots of text. Third: any seasonality, trends and patterns that happen in your charity that make a difference?
I support a school in Nepal, and during the winter they have very different kinds of needs than they do during the summer. And so I bumped into this because I was searching about something seasonality-related. This particular school for Tibetan children in Nepal popped up, and it's that content they wrote around winter and winter struggles and coats and all this stuff. [00:16:00] It popped up in the answer engine and I'm like, okay, I research a bit more, they have good stories about it, and I'm supporting them. Q&A: very, very important. Testimonials: very, very important. Interviews: very, very important, super duper important, with both the givers and the recipients; supporters of your nonprofit, but also the recipients. Very few nonprofits actually interview the people who support them. George Weiner: Like, why not go to donors and be like, hey, why did you support us? What were the two things that moved you from aware to care? Avinash Kaushik: Like, for me, I support EMERGENCY, which is an Italian nonprofit like Médecins Sans Frontières, and I would go on their website and speak fiercely about why I absolutely love the work they do. Content, yeah. So first is text, then video. You gotta figure out how to use video a lot more, and most nonprofits are not agile in being able to use video. And the third [00:17:00] thing that I think will be a little bit of a struggle is to figure out how to use audio, 'cause audio also plays a very influential role. So as you are planning your content calendar for the next year, have the word multimodal. I'm sorry, it's profoundly unsexy, but put multimodal at the top; underneath it, say text, then say video, then audio, and start to fill those holes in. And if those people need ideas and examples of how to use audio, they should just call you, George. You are the king of podcasting and you can absolutely give them better advice than I could around how nonprofits could use audio. But the one big thing you have to think about is multimodality for next year. George Weiner: That, you know, is incredibly powerful. Underlying that, there's this nuance that I really want to make sure that we understand, which is the fact that the type of content is uniquely different. It's not, like, there's a hunger organization listening right now, it's not "10 facts about hunger during the winter." The [00:18:00] days of being able to be an information resource that would then bring people in and then bring them down your, you know, your path: it's game over, if not now, soon. Absolutely. So how are you creating things that AI can't create? And that's why "according to whom" is what I like to think about. Like, you're gonna say something, you're gonna write something; according to whom? Is it the CEO? Is it the stakeholder? Is it the donor? And if you can put an attribution there, suddenly the AI can't just lift and shift it. It has to take that as a block and be like, no, it was attributed here, this is the organization. Is that about right? Or, like, first-party data, right? Avinash Kaushik: I'll add one more. I'll give a proper definition. So, first: I made 11 recommendations last night in the newsletter. The very first one is focus on creating AI-resistant content. So what does that mean? AI-resistant means any one of us from nonprofits could [00:19:00] open ChatGPT, type in a few queries, and ChatGPT can write our next nonprofit newsletter.
It could write the next page for our donations. It could create the damn donation page, right? Remember, AI can create way more content than you can; but if you can use AI to create content, 67 million other nonprofits are doing the same thing. So what you have to do is figure out how to build AI-resistant content, and my definition is very simple, George. What is AI resistance? It's content of genuine novelty. So to tie back to the attribution you just recommended, George: your nonprofit's CEO has a unique voice, a unique experience. The AI hasn't learned what makes your CEO tick, or your frontline staff solving problems. You are a person who went and gave a speech at the United Nations on behalf of your nonprofit. Whatever you are [00:20:00] doing is very special, and what you have to figure out is how to get out of the AI slop. You have to get out of all the things that AI can automatically type. Figure out if your content meets this very simple standard, genuine novelty and depth, 'cause it's the one thing AI isn't good at. That's how you rank higher. And not only will it rank you, but, to make another point you made, George, it's gonna just lift it, put it out there, and attribute credit to you. Boom. But if you're not genuine novelty and depth, a thousand other nonprofits are using AI to generate text and video. George Weiner: Could you just quit whatever you're doing and start a school instead? I seriously can't say it enough that your point about AI slop is terrifying me, because I see it. We've built an AI tool, and the subtle lesson here is: think about how quickly this AI was able to output that newsletter, that generic old-school blog post. And if this tool can do it, which, [00:21:00] by the way, is built on your local data set (we have the RAG), don't pause for a second without realizing that if this AI can make it, some other AI is going to be able to reproduce it. So how are you bringing the human back into this? And it's a style of writing and a style of strategic thinking that, please, just start a school, and help every single college kid leaving school who just GPT'd their way through a degree and didn't freaking get it. Avinash Kaushik: So it's very, very important to make sure content is of genuine novelty and depth, because it cannot be replicated by the AI. And by the way, George, it sounds really high-minded, but honestly, to use your point, if you're a CEO of a nonprofit, you are in it for something that speaks to you. You're in it because, I mean, nonprofit is not your path to becoming the next Bill Gates; you're doing it because you just have this thing. Whoa, spoiler alert. No, I'm sorry. [00:22:00] Maybe, maybe that is. I didn't mean any negative emotion there, but, no, I love it. It's like a sense of passion you are bringing. There's something that speaks to you. Just put that on paper, put that on video, put that on audio, because that is what makes you unique. And the collection of those stories of genuine depth and novelty will make your nonprofit unique and stand out when people are looking for answers. George Weiner: So I have to point to the next elephant in the room here, which is measurement. Yes. Yes. Right now, somebody is talking about Human Made Machine, someone's talking about Whole Whale, someone's talking about your nonprofit, having a discussion in an answer engine somewhere. Yes. And I have no idea.
How do I go about understanding measurement in this new game? Avinash Kaushik: I have two recommendations. For nonprofits, I would recommend a tool called Trakkr.ai, T-R-A-K-K-R [00:23:00] dot ai, and it has a free version, that's why I'm recommending it. Many of these tools are paid tools, but with Trakkr.ai, it allows you to identify your website, URL, et cetera, et cetera, and it'll give you some really wonderful and fantastic, helpful reports. Trakkr helps you understand prompt tracking, which is: what are other people writing about you when they're seeking you? Think of this, George, as your old webmaster tools: what keywords are people using to search? Except you can get the prompts that people are using, to get a more robust understanding. It also monitors your brand's visibility: how often are you showing up and how often is your competitor showing up, et cetera, et cetera. And then it does that across multiple engines. So you can say, oh, I'm actually pretty strong in OpenAI for some reason, and I'm not that strong in Gemini. Or, you know what, I have like the highest rating in Claude, but I don't have it in OpenAI. And this begins to help you understand where your current content strategy is working and where it is not [00:24:00] working. So that's your brand visibility. And the third thing that you get from Trakkr is active sentiment tracking. This is the scary part, because remember, you and I were both worried about what other people are saying about us. So this is very helpful: we can go out and see what it is, what is the sentiment around our nonprofit that is coming across in these LLMs. So Trakkr.ai, it has a free and a paid version, and I would recommend using it for these three purposes. If you have funding to invest in a tool, then there's a tool called Evertune, E-V-E-R-T-U-N-E. Evertune is a paid tool. It's extremely sophisticated and robust, and they do brand monitoring, site audits, content strategy, consumer preference reports, an AI Brand Index; just the depth and breadth of metrics that they provide is quite extensive. But it is a paid tool. It does cost money. It's not actually crazy expensive, but I have worked with them before, so full disclosure. [00:25:00] And having evaluated lots of different tools, I have sort of settled on those two. If it's an enterprise-type client I'm working with, then I'll use Evertune; if I am working with a nonprofit or some of my personal stuff, I'll use Trakkr.ai, because it's good enough for a person that is smaller in size and revenue, et cetera. So those two tools. So we have new metrics coming from these tools; they help us understand the kind of things we used webmaster tools for in the past. Then the other thing you will want to track very, very closely, using Google Analytics or some other tool on your website: you are able to currently track your organic traffic, and if you're taking advantage of paid ads through a grant program on Google, which provides free paid search credits to nonprofits, then you're tracking your paid search traffic. Continue to track that; track trends, patterns over time. But now, in your referrals report, [00:26:00] you're gonna begin to see OpenAI. You're gonna begin to see these new answer engines.
And while you don't know the keywords that are sending this traffic and so on and so forth, it is important to keep track of the traffic for two important reasons. One, you want to know how highly to prioritize AEO. That's one reason. But the other reason I found, George, is it is so freaking hard to rank in an answer engine. When people do come to websites from an answer engine, for the businesses I work with, that is a very high-intent person. They tend to be very, very valuable, because they gave the answer engine a very complex question to answer, and the answer engine said: you are the right answer for it. So when I show up, I'm ready to buy, I'm ready to donate, I'm ready to do the action that I was looking for. So the percent of people who are coming from answer engines to your nonprofit carry significantly higher intention than people coming from Google, who also carry [00:27:00] intent. But man, you stood out in an answer engine; you're a gift from God. The person coming thinks you're very important and is likely to engage in some sort of business with you. So even if it's like a hundred people, I care a lot about those hundred people, even if it's not 10,000 at the moment. Does that make sense, George? George Weiner: It does, and I think, I'm glad you pointed to, you know, the good old Google Analytics. I'm like, there has to be a way, and I think I gave maximum effort to this problem inside of Google Analytics, and I'm still frustrated that Search Console is not showing me this; it's just blending it all together into one big soup. But I want you to poke a hole in this thinking, or say yes or no. You can create an AI channel, an AEO channel (and we have a guide on that), clustering together all of those types of referral traffic, as you mentioned. From there, I actually know, thanks to Cloudflare, the ratios of the amount of scrapes versus the actual clicks sent [00:28:00] for roughly 20 to 30% of traffic globally. So is it fair to say I could assume like a 2% clickthrough or a 1% clickthrough, or even worse in some cases, based on that referral, and then reverse engineer: basically divide those clicks by the clickthrough rate and essentially get a rough share-of-voice metric on that platform? Avinash Kaushik: So, kind of. At the moment, the problem is that unlike Google, which gives us some decent amount of data through webmaster tools, none of these LLMs are giving us any data. As a business owner, none of them are giving us any data. So we're relying on third parties like Trakkr, we're relying on third parties like Evertune. You understand: how often are we showing up, so we could even get a damn click-through rate? Right. We don't quite have that for now. So the AI Brand Index in Evertune comes the closest to giving you some information we could use there. So your thinking is absolutely right; your recommendation is directionally right. Even if you can just get the number of clicks, even if you're tracking them very [00:29:00] carefully, it's very important. Please do exactly what you said. Make the channel; it's really important. But don't read too much into the click-through-rate bits, because we're missing a very important piece of information. Now remember, when Google first came out, we didn't have tons of data, and that's okay. These LLMs probably will realize over time, if they get into the advertising business, that it's nice to give data out to other people, and so we might get more data.
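Editor's aside: a minimal sketch of the two ideas just discussed, grouping answer-engine referrers into an "AEO" channel and back-calculating a rough share-of-voice number from clicks and an assumed click-through rate. The domain list, the 2% CTR, and the sample numbers are all assumptions for illustration; in GA4 you would normally do the grouping with a Custom Channel Group in the UI, and, as Avinash cautions, the CTR back-calculation is directional at best.

# Hypothetical referral rows exported from your analytics tool: (source, sessions, conversions)
referrals = [
    ("chatgpt.com", 120, 9),
    ("perplexity.ai", 40, 3),
    ("gemini.google.com", 25, 1),
    ("www.google.com", 5000, 110),
    ("newsletter", 800, 30),
]

# Assumed answer-engine referrers; extend this as new sources show up in your referrals report.
AEO_DOMAINS = {"chatgpt.com", "chat.openai.com", "perplexity.ai",
               "gemini.google.com", "claude.ai", "copilot.microsoft.com"}

def channel(source: str) -> str:
    # Bucket a referral source into the AEO channel or everything else.
    return "AEO" if source in AEO_DOMAINS else "Other"

aeo_sessions = sum(s for src, s, _ in referrals if channel(src) == "AEO")
aeo_conversions = sum(c for src, _, c in referrals if channel(src) == "AEO")

# Very rough share-of-voice proxy: estimated answer-engine mentions = clicks / assumed CTR.
ASSUMED_CTR = 0.02  # a guess, not a measured number
estimated_mentions = aeo_sessions / ASSUMED_CTR

print(f"AEO sessions: {aeo_sessions}, conversions: {aeo_conversions}")
print(f"Estimated answer-engine mentions (directional only): {estimated_mentions:.0f}")

The number worth watching on that channel is conversions divided by sessions, which ties into the higher-intent point that comes up a few minutes later in the conversation.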
Until then, we are relying on these third parties that are hacking these tools to find us some data, so we can use it to understand some of the things we readily understand about keywords and things today related to Google. So we sadly don't have as much visibility today as we would like to have. George Weiner: Yeah. We really don't. Alright. I have a segment that I just invented, just for you, called Avinash's War Corner. And in Avinash's War Corner, I noticed that you go to war on various concepts, which I love, because it brings energy and attention to, [00:30:00] frankly, data and finding answers in there. So if you'll humor me in our War Corner, I wanna go through some classic, classic Avinash. Um, all right, so can you talk to me a little bit about vanity metrics, because I think they are in play every day. Avinash Kaushik: Absolutely. Across the board, I think, in whatever we do. So actually I'll do three. You know, there's vanity metrics, activity metrics and outcome metrics. Basically everything goes into these three buckets, essentially. So vanity metrics are the ones that are very easy to find, but them moving up and down has nothing to do with the number of donations you're gonna get as a nonprofit. They're just there to ease our ego. So, for example, let's say we are a nonprofit and we run some display ads, so we measure the number of impressions that were delivered for our display ad. That's a vanity metric. It doesn't tell you anything. You could have billions of impressions, you could have 10 impressions, doesn't matter, but it is easily [00:31:00] available. The count is easily available, so we report it. Now, what matters? What matters are: did anybody engage with the ad? What was the percent of people who hovered on the ad? What was the number of people who clicked on the ad? Those are activity metrics. Activity metrics are a little more useful than vanity metrics. But what matters for you as a nonprofit? The number of donations you received in the last 24 hours. That's an outcome metric. Vanity, activity, outcome. Focus on activity to diagnose how well our campaigns or efforts are doing in marketing. Focus on outcomes to understand if we're gonna stay in business or not. Sorry, dramatic. Vanity metrics: chasing them is just, like, good for ego. Number of likes is a very famous one. The number of followers on a social platform, a very famous one. Number of emails sent is another favorite one. There's like a whole host of vanity metrics that are very easy to get. I cannot emphasize this enough, but when you unpack and do a meta-analysis of the [00:32:00] relationship between vanity metrics and outcomes, there's no relationship between them. So we always advise people: start by looking at activity metrics to help you understand the user's behavior, and then move to understanding outcome metrics, because they are the reason you'll thrive. You will get more donations, or you will figure out what are the things that drive more donations. Otherwise, what you end up doing is saying, if I post provocative stuff on Facebook, I get more likes. Is that what you really wanna be doing? But if your nonprofit says, get me more likes, pretty soon there's, like, a naked person on Facebook that gets a lot of likes, but it's corrupting. Yeah. George Weiner: So I would go with the cute cat, I would say, you know, you get the generic cute cat. But yeah, same idea.
The internet's built on cats. Avinash Kaushik: And yes, so that's why I actively recommend people stay away from vanity metrics. George Weiner: Yeah. Next up in War Corner: the last-click [00:33:00] fallacy, right? The overweighting of this last moment of purchase, or, as you'd maybe say, the Do column of See-Think-Do-Care. Avinash Kaushik: Yes. George Weiner: Yes. Avinash Kaushik: So when we all started to get Google Analytics, we got Adobe Analytics, WebTrends (remember them), we all wanted to know what drove the conversion. Mm-hmm. I got this donation for a hundred dollars, I got a donation for a hundred thousand dollars; what drove the conversion? And so what people would logically just say is, oh, where did this person come from? And they say, oh, the person came from Google; Google drove this conversion. That is last-click analysis: just before the conversion, where did the person come from? Let's give them credit. But the reality is, it turns out that if you look at consumer behavior, you look at days to donation, visits to donation (those are two metrics available in Google Analytics), it turns out that people visit multiple times before [00:34:00] they make a donation. They may have come through email; their interest might have been triggered through your email. Then they suddenly remembered, oh yeah, yeah, I wanted to go to the nonprofit and donate something. Then they Google you, and Google helps them find you, and they come through. Now, who do you give credit, email or Google, right? And what if they came 5, 7, 8, 10 times? So the last-click fallacy is that it doesn't allow you to see the full consumer journey. It gives credit to whoever was the last channel that sent this person, that introduced this person to your website. And so very soon we moved to looking at what we call MTA, multi-touch attribution, which is a free solution built into Google Analytics. So you just go to your multichannel funnel reports, and it will help you understand that, one, 150 people came from email, then they came from Google, then there was a gap of nine days, and they came back from Facebook, and then they [00:35:00] converted. And what is happening is you're beginning to understand the consumer journey. If you understand the consumer journey better, you can come up with better marketing. Otherwise, you would've said, oh, close shop, we don't need as many marketing people, we'll just buy ads on Google, we'll just do SEO, we're done. Oh, now you realize there's a more complex behavior happening in the consumer. You need to solve for email, you need to solve for Google, you need to solve for Facebook, in my hypothetical example. So I very actively recommend people look at the built-in free MTA reports inside Google Analytics. Understand the path flow that is happening to drive donations, and then undertake more of the activities that are showing up often in the path, and do fewer of those things that are showing up less in the path.
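Editor's aside: a toy illustration of the last-click versus multi-touch contrast Avinash just described. The paths and the equal-credit (linear) split below are made-up assumptions for illustration, not GA4's actual attribution model; GA4's multichannel funnel and attribution reports do this accounting for you.

from collections import Counter

# Hypothetical conversion paths: ordered touchpoints a donor hit before donating.
paths = [
    ["email", "google", "facebook"],
    ["google"],
    ["email", "google"],
    ["facebook", "google"],
]

last_click = Counter()
linear = Counter()

for path in paths:
    # Last-click: the final touchpoint gets all the credit for the donation.
    last_click[path[-1]] += 1.0
    # Linear multi-touch: every touchpoint on the path shares the credit equally.
    for ch in path:
        linear[ch] += 1.0 / len(path)

print("Last-click credit:", dict(last_click))  # Google looks like the only hero
print("Linear MTA credit:", dict(linear))      # email and Facebook get their share back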
You know, like, was that really how we were like hanging our hat at night, being like. Job done. I think there's very much that in play. And then I'm a little concerned that we just told everyone to go create an AEO channel on their Google Analytics and they're gonna come in here. Avinash told me that those people are buyers. They're immediately gonna come and buy, and why aren't they converting? What is going on here? Can you actually maybe couch that last click with the AI channel inbound? Like should I expect that to be like 10 x the amount of conversions? Avinash Kaushik: All we can say is it's, it's going to be people with high intention. And so with the businesses that I'm working with, what we are finding is that the conversion rates are higher. Mm. This game is too early to establish any kind of sense of if anybody has standards for AEO, they're smoking crack. Like the [00:37:00] game is simply too early. So what we I'm noticing is that in some cases, if the average conversion rate is two point half percent, the AEO traffic is converting at three, three point half. In two or three cases, it's converting at six, seven and a half. But there is not enough stability in the data. All of this is new. There's not enough stability in the data to say, Hey, definitely you can expect it to be double or 10% more or 50% more. We, we have no idea this early stage of the game, but, but George, if we were doing this again in a year, year and a half, I think we'll have a lot more data and we'll be able to come up with some kind of standards for, for now, what's important to understand is, first thing is you're not gonna rank in an answer engine. You just won't. If you do rank in an answer engine, you fought really hard for it. The person decided, oh my God, I really like this. Just just think of the user behavior and say, this person is really high intent because somehow [00:38:00] you showed up and somehow they found you and came to you. Chances are they're caring. Very high intent. George Weiner: Yeah. They just left a conversation with a super intelligent like entity to come to your freaking 2001 website, HTML CSS rendered silliness. Avinash Kaushik: Whatever it is, it could be the iffiest thing in the world, but they, they found me and they came to you and they decided that in the answer engine, they like you as the answer the most. And, and it took that to get there. And so all, all, all is I'm finding in the data is that they carry higher intent and that that higher intent converts into higher conversion rates, higher donations, as to is it gonna be five 10 x higher? It's unclear at the moment, but remember, the other reason you should care about it is. Every single day. As more people move away from Google search engines to answer engines, you're losing a ton of traffic. If somebody new showing up, treat them with, respect them with love. Treat them with [00:39:00] care because they're very precious. Just lost a hundred. Check the landing George Weiner: pages. 'cause you may be surprised where your front door is when complexity is bringing them to you, and it's not where you spent all of your design effort on the homepage. Spoiler. That's exactly Avinash Kaushik: right. No. Exactly. In fact, uh, the doping deeper into your websites is becoming even more prevalent with answer engines. Mm-hmm. Um, uh, than it used to be with search engines. The search always tried to get you the, the top things. There's still a lot of diversity. Your homepage likely is still only 30% of your traffic. 
Everybody else is landing on other pages, or, as you call them, landing pages. So it's really, really important to look beyond your homepage. I mean, it was true yesterday; it's even truer today. George Weiner: Yeah, my hunch, and what I'm starting to see in our data, is that it is also much higher on assisted conversions. Like, it is. Yes. Yes, it is. Like, if you have come to us from there, we are going to be seeing you again. That's right. That's right. More likely than others. It over-indexes consistently for us there. Avinash Kaushik: [00:40:00] Yes. Again, it ties back to the person having higher intent; so if they didn't convert in that first session, their higher intent is gonna bring them back to you. So you are absolutely right about the data that you're seeing. George Weiner: Um, alright. War Corner: the 10/90 rule. Can you unpack this, and then maybe apply it to somebody who thinks that their, like, AI strategy is done 'cause they spend $20 or $200 a month on some tool and then, like, call it a day, 'cause they did AI? Avinash Kaushik: Yes, yes. No, it's good. I developed it in the context of analytics, when I was at my job at Intuit; I was at Intuit as senior director for research and analytics. And one of the things I found is people would consistently spend lots of money on tools, at that time web analytics tools, research tools, et cetera. So they're signing a contract of a few hundred thousand dollars, or hundreds of thousands of dollars, and then they give it to a fresh graduate to find insights. [00:41:00] I was like, wait, wait, wait. So you took this $300,000 thing and gave it to somebody you're paying $45,000 a year, who is young in their career, and you're expecting them to make you tons of money using this tool? It's not the tool, it's the human. And so that's why I developed the 10/90 rule, which is: if you have a hundred dollars to invest in making smarter decisions, invest $10 in the tool, $90 in the human. We all have access to so much data, so much complexity; the world is changing so fast that it is the human that is going to figure out how to make sense of these insights, rather than the tool magically spewing out insights and understanding your business enough to tell you exactly what to do. So that's sort of where the 10/90 rule came from. Now, we are in this era, and this is very good for nonprofits, by the way. On the 10 side: look, don't spend insane money on tools; that is just silly. So don't do that. Now the 90, let's talk about the [00:42:00] 90. Up until two years ago, I had to spend all of the 90 on what I now call organic humans. George Weiner: You know, glasses-wearing humans, huh? Avinash Kaushik: The development of LLMs means that every single nonprofit in the world has access to roughly a third-year bachelor's degree student, like a really smart intern, for free. For free. In fact, in some instances, for some nonprofits (let's say, I was just reading about this nonprofit that is cleaning up plastics in the ocean), for this particular nonprofit, they have access to a PhD-level environmentalist using the latest ChatGPT 4.5, like PhD level. So the little caveat I'm beginning to put in the 10/90 rule is on the 90: you give the 90 to the human, and for free, get the human a very smart bachelor's student by using LLMs; in some instances, get [00:43:00] for free a very smart PhD using the LLMs.
So the LLMs now have to be incorporated into your research, into your analysis, into building your next dashboard, into building your next website, into building your next mobile game, into whatever the hell you're doing, for free. You can get that. So you have your organic human, plus the synthetic human for free. Both of those are in the 90. And for nonprofits, so, in my work at Coach and Kate Spade, I have access now to a couple of interns who do free work for me. Well, for $20 a month, because I have to pay for the Plus version of GPT. So the intern costs $20 a month, but I have access to this synthetic human who can do a whole lot of work for me, for $20 a month in my case, but it could also do it for free for you. Don't forget synthetic humans. You no longer have to rely only on the organic humans to do the 90 part. You would be stunned. Upload [00:44:00] your latest, actually, take last year's worth of donations, where they came from, all this data you have in a spreadsheet lying around. Dump it into ChatGPT and ask it to analyze it, help you find where most donations came from, and visualize trends to present to the board of directors. It will blow your mind how good it is at it. Do it with Gemini. I'm not biased, I'm just saying ChatGPT because everybody knows it so much better. Try it with Mistral, a small LLM from France. So I wanna emphasize that what has changed over the last year is the ability for us to complement our organic humans with these synthetic entities. Sometimes I say synthetic humans, but you get the point. George Weiner: Yeah. I think, you know, definitely dump that spreadsheet in. Pull out the PII real quick, just to make me feel better as the person who's gonna be promoting this to everybody. But also, with that, I want to make it clear that inside of Gemini, Google for Nonprofits has opened up access to Gemini for free. It's not a per-user, per-whatever thing. You have that, [00:45:00] you have NotebookLM, and these are sitting in their backyards for free every day, and it's use it or lose it, because you have a certain amount of intelligence tokens a day. I just wanna climb the tallest tree out here and start yelling from a high building about this. Can you make the case for why a nonprofit should be leveraging this free PhD student that is sitting with their hands underneath their butt, doing nothing for them right now? Avinash Kaushik: No, it is such a shame. By the way, I cannot add much to your recommendation to use your Gemini Pro account if it's free, on top of all the benefits you can get. Gemini Pro also comes with restrictions around their ability to use your data; they won't put your data anywhere. Gemini free versus Gemini Pro: the Pro, enterprise version is a very protected environment, so more security, more privacy, et cetera. That's a great benefit. And by the way, as you said, George, they can get it for free. So the posture you should adopt is what big companies are doing, [00:46:00] which is: anytime there is a job to be done, the first question you should ask is, can an AI do the job? You don't say, oh, let me send it to George, let me email Simon, let me email Sarah. No, no, no. The first thing that should hit your head is:
can an AI do the job? Because most of the time, again, remember, third-year bachelor's degree student type experience and intelligence, the AI can do it better than any human. So your instinct should be, let me outsource that kind of work so I can free up George's cycles for the harder problems that the AI cannot solve. And by the way, you can do many things. For example, you got a grant and now Meta allows you to run X number of ads for free. Your first thing: ask it, what kind of ad should I create? Go type in your nonprofit, tell it the kind of things you're doing, tell it the donations you want, tell it the size of donation you want. Let it create the first 10 ads for you for free, and then you pick the one you like. And even if you have an internal [00:47:00] designer who makes ads, they'll start with ideas rather than from scratch. It's just one small example. Or you wanna figure out, you know, my email program is stuck, I'm not getting yield rates for donations. Click the button that is called Deep Research, or Thinking, in the LLM, one of those two buttons, and then say, I'm really struggling, I'm at wits' end, I've tried all these things. Write all the detail about what you've tried and what's not working. Can you please give me three new ideas that have worked for nonprofits who are working in water conservation? This would've taken a human a few days to do. You'll have an answer in under 90 seconds. I just gave two simple use cases where we can use these synthetic entities to do the work for us. So the default posture in nonprofits should be, look, we're resource-strapped anyway. Why not use a free bachelor's degree student, or in some cases a free PhD student, to do the job, or at least get us started on a job? So instead of spending 10 [00:48:00] hours on it, we only spend the last two hours; the entity does the first eight. And that is super attractive. I use it every single day. In one of my browsers, I have three tabs open permanently: I've got Claude, I've got Mistral, I've got ChatGPT. They are doing jobs for me all day long. Like, all day long they're working for me. $20 each. George Weiner: Yeah, it's truly an embarrassment of riches. But also, getting back to the 10/90, it's still sitting there. If you haven't brought that capacity building to the person, on how to prompt, how to play that game of linguistic tennis with these tools, they're still just a hammer on a... Avinash Kaushik: That's exactly right. That's exactly right. Or, in your case, you have access to Gemini for Nonprofits. It's a fantastic tool. It's like a really nice car that could take you different places, and you insist on cycling everywhere. It's okay, cycle once in a while for health reasons. Otherwise, just take the car, it's free. George Weiner: Ha, you've [00:49:00] been so generous with your time. I do have one more quick war, if you have a minute: your war on funnels. And maybe this is not fully fair, and I hear you yelling at me every time I'm showing our marketing funnel, and I'm like, yeah, but I also have a circle over here. Can you unpack your war on funnels and maybe bring us through See, Think, Do, Care in the land of AI? Avinash Kaushik: Yeah, okay. So the marketing funnel is very old.
It's been around for a very long time. And once I started working at Google, with access to lots more consumer research, lots more consumer behavior, like 20 years ago, I began to understand that there's no such thing as the funnel. So what does the funnel say? The funnel says there's a group of people running around the world who are not aware of your brand. Find them, scream at them, spray-and-pray advertising at them, make them aware, and then somehow magically find the exact same people again and shove them down the fricking funnel and make them consider your product. [00:50:00] And now that they're considering, find them again, exactly the same people, shove them one more time, move their purchase intent, and then drag them to your website. The thing is, there's no evidence in the universe that this linearity exists. For example, I like long bike rides, and I just got thirsty. I picked up the first brand of water I could see. No awareness, no consideration, no purchase intent. I just need water. A lot of people will buy your brand because you happen to be the cheapest; I don't give a crap about anything else, right? The other thing to understand is, one of the brands I adore and have lots of is Patagonia. I love Patagonia. I don't think I use the word love for any other brand. For Patagonia, I'm always in the awareness stage, because I always want these incredible stories that brand ambassadors tell about how they're helping the environment. [00:51:00] I have more Patagonia products than I should have. I'm already a customer. I'm always open to new considerations of Patagonia products, new innovations they're bringing. And then once in a while I need to buy a Patagonia product, and I'm evaluating them. So this idea that the human is in one of these stages and your job is to shove them down the funnel is just fatally flawed; there's no evidence for it. Instead, what you want to do is ask, what is Avinash's intent at the moment? He would like environmental stories about how we're improving planet Earth. Patagonia will say, I wanna make him aware of my environmental stories. But if they only thought of marketing and selling, they wouldn't put me in the awareness stage, because I'm already a customer who buys lots of stuff from them, right? Or sometimes I'm like, oh, I'm heading over to London next week, I need a thin jacket. So yeah, show up in consideration even though I'm your customer. So See, Think, Do, Care is a framework that [00:52:00] says, rather than shoving people down things that don't exist and wasting your money, your marketing should be able to discern any human's intent and then respond with a piece of content. Sometimes that piece of content is an ad, sometimes it's a webpage, sometimes it's an email, sometimes it's a video, sometimes it's a podcast. This idea of understanding intent is the bedrock on which See, Think, Do, Care is built, and it creates fully customer-centric marketing. It is harder to do, because intent is harder to infer, but if you wanna build a competitive advantage for yourself, intent is the magic. George Weiner: Well, I think that's a great point to end on. And again, you're so generous with all the work you do, and with supporting nonprofits in the many ways that you do. And I'm always watching and seeing what I'm missing when a new Occam's Razor post or newsletter comes out.
So any final sign-off [00:53:00] here on how do people find you, how do people help you? Let's hear it. Avinash Kaushik: You can just Google me, or answer-engine me. I'm not hard to find. But if you're a nonprofit, you can sign up for my newsletter, TMAI, my marketing analytics newsletter. There's a free one and a paid one, so you can just sign up for the free one. It's a newsletter that comes out every five weeks, completely free, no strings or anything. And that way I'll be happy to share my stories around better marketing and analytics using the free newsletter, so you can sign up for that. George Weiner: Brilliant. Well, thank you so much, Avinash. And maybe we'll have to take you up on that offer to talk sometime next year and see if maybe we're all just sort of hanging out with synthetic humans nonstop. Thank you so much. Avinash Kaushik: It was fun, George. [00:54:00]
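For anyone who wants to try the spreadsheet exercise Avinash describes, here is a minimal sketch of the same analysis done locally in Python with pandas. The file name and column names (donations.csv, donor_name, email, phone, address, amount, source, date) are hypothetical placeholders rather than anything from the episode; the point is simply that you strip the PII first, as George suggests, and only ever share or upload the aggregates.

import pandas as pd

# Hypothetical export of last year's donations (file and column names are placeholders).
df = pd.read_csv("donations.csv", parse_dates=["date"])

# Strip direct identifiers before any analysis, sharing, or uploading.
pii_columns = [c for c in ["donor_name", "email", "phone", "address"] if c in df.columns]
df = df.drop(columns=pii_columns)

# Where did most donations come from?
by_source = (
    df.groupby("source")["amount"]
      .agg(total="sum", gifts="count", average="mean")
      .sort_values("total", ascending=False)
)

# Month-over-month trend to present to the board.
monthly = df.groupby(df["date"].dt.to_period("M"))["amount"].sum()

print(by_source)
print(monthly)

From there, the summary tables, not the raw donor rows, are what you might paste into ChatGPT, Gemini, or Mistral to draft the narrative for the board.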
Officials in St. Paul, Minnesota declare a state of emergency following a cyberattack. Hackers disrupt a major French telecom. A power outage causes widespread service disruptions for cloud provider Linode. Researchers reveal a critical authentication bypass flaw in an AI-driven app development platform. A new study shows AI training data is chock full of PII. Fallout continues for the Tea dating safety app. Hackers are actively exploiting a critical SAP NetWeaver vulnerability to deploy malware. CISA and the FBI update their Scattered Spider advisory. A Florida prison exposes personal information of visitors to all of its inmates. Our guest today is Keith Mularski, Chief Global Ambassador at Qintel, retired FBI Special Agent, and co-host of Only Malware in the Building. CISA and Senator Wyden come to terms —mostly— over the long-buried US Telecommunications Insecurity Report. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign-up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. CyberWire Guest Our guest today is Keith Mularski, Chief Global Ambassador at Qintel, retired FBI Special Agent, and co-host of Only Malware in the Building discussing what it's like to be the new host on the N2K CyberWire network and giving a glimpse into some upcoming episodes. You can catch Keith and his co-hosts Selena Larson, Staff Threat Researcher and Lead, Intelligence Analysis and Strategy at Proofpoint, and our own Dave Bittner the first Tuesday of each month on your favorite podcast app with new episodes of Only Malware. Selected Reading Major cyberattack hits St. Paul, shuts down many services (Star Tribune) French telecom giant Orange discloses cyberattack (Bleeping Computer) Power Outage at Newark Data Center Disrupts Linode, Took LWN Offline (FOSS Force) Critical authentication bypass flaw reported in AI coding platform Base44 (Beyond Machines) A major AI training data set contains millions of examples of personal data (MIT Technology Review) Dating safety app Tea suspends messaging after hack (BBC) Hackers exploit SAP NetWeaver bug to deploy Linux Auto-Color malware (Bleeping Computer) CISA and FBI Release Tactics, Techniques, and Procedures of the Scattered Spider Hacker Group (gb hackers) Florida prison data breach exposes visitors' contact information to inmates (Florida Phoenix) CISA to release long-buried US telco security report (The Register) Audience Survey Complete our annual audience survey before August 31. Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
//The Wire//2300Z July 25, 2025//
//ROUTINE//
//BLUF: "DATING" APP DATA BREACH HIGHLIGHTS NATIONAL SECURITY CONCERNS.//

-----BEGIN TEARLINE-----
-HomeFront-
USA: This morning a major PII leak was exploited on the Tea app, the infamous app that has gained notoriety around the United States. This data leak was not a hack by any means; the selfie ID feature and driver's license images used to register users were stored unencrypted on the app's servers for anyone on the internet to see. Furthermore, the location data was not scrubbed from the images, so the exact GPS coordinates of each user were also leaked, with tens of thousands of users' private location data being exposed online.
-----END TEARLINE-----

Analyst Comments: This app gained infamy as its entire purpose is to serve as a "Yelp" for women to rate men, and to allow women to secretly share personal information regarding prospective dates, all without men being allowed to either face their accusers or even know that they are being gossiped about (thus the name of the app being a slang term that serves as a synonym for "gossip"). Most importantly, the app uses facial recognition to prevent biological males from obtaining an account. Beyond the unfortunate origins of the app and the equally unfortunate data leak, the data that was leaked is likely to pose exceptionally grave risks to national security. The "gossipy" nature of this story doesn't matter, and a bunch of unflattering selfies doesn't matter either; what does matter is that this leak may have inadvertently revealed significant national security concerns.

For instance, preliminary analysis of the datasets indicates that many users of the Tea app downloaded the app, took a selfie, and registered for an account while at work. In some cases, at government facilities or on military bases...such as the rather unfortunate individual who decided it was a good idea to register for this app while stationed at Marine Corps Base Quantico. Or the person who felt that they needed to use this app while on a gunnery range at the Aberdeen Proving Grounds. So far, other interesting sites located via personnel taking a selfie to register for this app at work include the following locations:

- An ammunition storage bunker at Naval Weapons Station Earle in New Jersey.
- The legislative offices at the Connecticut State Capitol building.
- One of the headquarters buildings at Minot Air Force Base.
- A maintenance site on the airfield at Eglin Air Force Base.
- Alumni Hall at the US Naval Academy in Annapolis.
- And the off-base housing complexes at nearly every single military base in the United States.

Of course, these data points only encompass the GPS coordinates that were embedded in the metadata of the selfies taken when users created an account on the app, so the data that was leaked is merely a snapshot of wherever a person was when they registered an account. Most of the GPS points presented in this data were very precise, pinpointing users within a diameter of roughly 36 ft on average. GPS errors are also likely to throw off this dataset, so it's probable that quite a few data points are inaccurate. However, most of the data (as leaked) is good enough for nation-state-level malign actors to have a field day when it comes to espionage. A person who is unhappy with the person they are in a relationship with, and who is also willing to submit their full legal name and street address (or GPS location), makes for a prime espionage target when this data is cross-referenced with other data.
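A short technical aside on the mechanism described above: the location exposure comes from the standard EXIF GPS tags that phones embed in photos, which the app evidently never stripped before storing the images. The sketch below, using Python and the Pillow library, shows both sides of that coin: reading the GPS tags out of an image, and re-saving the image without any metadata. The file names are hypothetical, and this is an illustration of the general technique, not a reconstruction of the actual leak or of the app's systems.

from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def read_gps(path):
    # Return the raw GPS EXIF dictionary embedded in a JPEG, if any.
    exif = Image.open(path)._getexif() or {}
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == "GPSInfo":
            return {GPSTAGS.get(k, k): v for k, v in value.items()}
    return {}

def strip_metadata(src, dst):
    # Re-save only the pixel data, discarding all EXIF (including GPS coordinates).
    img = Image.open(src)
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save(dst)

print(read_gps("selfie.jpg"))             # hypothetical uploaded image
strip_metadata("selfie.jpg", "clean.jpg")  # what should happen before storage

Converting the GPSInfo rationals into decimal latitude and longitude takes only a few more lines, which is exactly why precise coordinates left in unscrubbed uploads are so easy to exploit at scale.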
It takes exactly two clicks to import the leaked data to a map, and overlay that map with known sensitive military sites around the nation...perhaps in the process finding a few new locations as well. It is also easy to cross-reference this data with property ownership documents to find out how many people took a selfie at a different ad