Want more acupuncture patients from Google without feeling salesy? This episode breaks down a calm, practical SEO plan to help your practice show up in local search and turn visibility into booked appointments. You'll learn what to optimize first, which website pages to build, how to improve your Google Business Profile, and a simple content strategy that supports rankings without becoming a full-time content creator.
Episode webpage: https://propelyourcompany.com/acupuncturist-seo-strategy
Send in your questions. ❤ We'd love to hear from you!
NEW Webinar: How to dominate Google Search, Google Maps, AI-driven search results, and get more new patients. >> Save your spot
Do This, NOT That: Marketing Tips with Jay Schwedelson l Presented By Marigold
Marketing attribution often feels like a guessing game where everyone just ends up giving credit to Google Search. Jay Schwedelson connects with Daniel Murray from The Marketing Millennials to break down why you should treat attribution data as a directional compass rather than a precise GPS. They discuss practical ways to validate your data, including the power of holdout groups and geo-testing, plus a random side conversation about whether Olympic curlers are actually the best athletes in the world.
Follow Daniel on LinkedIn and check out The Marketing Millennials podcast for sharp, no-fluff marketing insights. Subscribe to Ari Murray's newsletter at gotomillions.co for sharp, actionable marketing insights.
Best Moments:
(01:50) Why attribution should be viewed as a compass instead of a turn-by-turn GPS
(02:22) The problem with giving Google Search all the credit for last-touch conversions
(03:10) How to set up a holdout group to measure the true lift of your campaigns
(04:09) Using geo-based testing to see if specific channels are actually driving growth
(04:45) Why it is a red flag if one channel claims 100% of the credit in a multi-touch world
(05:15) The surprisingly effective "How did you hear about us?" form field strategy
(06:40) A hot take on whether curling athletes are actually the best in the world
Check out Jay's YOUTUBE Channel: https://www.youtube.com/@schwedelson
Check out Jay's TIKTOK: https://www.tiktok.com/@schwedelson
Check Out Jay's INSTAGRAM: https://www.instagram.com/jayschwedelson/
Pre-order Jay Schwedelson's new book, Stupider People Have Done It (out April 21, 2026). All net proceeds are donated to The V Foundation for Cancer Research, so let's kick cancer's butt: https://www.amazon.com/Stupider-People-Have-Done-Marketing/dp/1637635206
Attribution is one of the most talked-about topics in marketing… and also one of the most misleading. Jay and Daniel explain why most attribution models are basically garbage, especially last-touch attribution, and why marketers keep over-investing in channels like Google Search simply because they get the final click. Jay walks through one of the most underused measurement tactics in marketing: holdout groups, where you intentionally exclude part of your audience from campaigns to measure real lift. Daniel adds the simplest attribution hack of all: just asking customers where they heard about you. If you're tired of dashboards that tell you what you want to hear instead of what's real, this episode is for you.
Follow Jay:
LinkedIn: https://www.linkedin.com/in/schwedelson/
Podcast: Do This, Not That
Follow Daniel:
YouTube: https://www.youtube.com/@themarketingmillennials/featured
Twitter: https://www.twitter.com/Dmurr68
LinkedIn: https://www.linkedin.com/in/daniel-murray-marketing
Sign up for The Marketing Millennials newsletter: https://themarketingmillennials.com/
Daniel is a Workweek friend, working to produce amazing podcasts. To find out more, visit: https://workweek.com/
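The holdout-group idea described above comes down to simple arithmetic: hold a slice of the audience out of the campaign, compare conversion rates between the two groups, and credit the campaign only with the difference. A minimal sketch in Java; the conversion numbers here are made up for illustration:

```java
public class HoldoutLift {
    public static void main(String[] args) {
        // Hypothetical campaign: 90% of the audience sees it, 10% is held out as a baseline.
        double treatedRate = 0.048; // conversions / users who saw the campaign
        double holdoutRate = 0.040; // conversions / held-out users (organic baseline)

        // Incremental lift: how much better the treated group did, relative to baseline.
        double lift = (treatedRate - holdoutRate) / holdoutRate;
        System.out.printf("Incremental lift: %.0f%%%n", 100 * lift);
    }
}
```

With these numbers the campaign gets credit for a 20% lift, not for every conversion it touched last.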
Voice search and AI-powered search are changing how patients find clinics in 2026, but the winning strategy is still simple, clear local SEO and content that answers real patient questions. In this episode, you'll learn how voice and AI queries differ from typed searches, what to update on your website and Google Business Profile, and five practical upgrades to help you show up more often and turn visibility into booked appointments.
Episode Webpage: https://propelyourcompany.com/voice-search/
Live Webinar: Fix Your AI Visibility Blind Spots - https://propelyourcompany.com/fix/
Send in your questions. ❤ We'd love to hear from you!
NEW Webinar: How to dominate Google Search, Google Maps, AI-driven search results, and get more new patients. >> Save your spot
Emmanuel and Guillaume discuss various programming topics, including file systems in Java, data-oriented programming, the challenges of JPA with Kotlin, and the new features in Quarkus. They also explore some slightly crazy subjects, such as building datacenters in space. Plus a fair amount of architecture. Recorded on February 13, 2026. Download the episode LesCastCodeurs-Episode-337.mp3, or watch the video on YouTube.

News

Languages

How to implement a file system in Java: https://foojay.io/today/bootstrapping-a-java-file-system/
- Build a custom Java file system with NIO.2 for varied uses (VCS, archives, remote systems).
- Java's evolution: java.io.File (1.0) -> NIO (1.4) -> NIO.2 (1.7), which allows customization via FileSystem.
- Design up front; the Java API is POSIX-oriented.
- Key components to consider: URI design (unique scheme, path), tree management (DB, metadata, efficiency), and binary storage (location, encryption, versions).
- The minimum to get started (4 components): implement Path (represents a file/directory), extend FileSystem (the file system instance), extend FileSystemProvider (the engine, registered by scheme), and register the FileSystemProvider via META-INF/services.
- Next steps: a DB layer (tree), basic directory/file operations, storage, tests.
- A long and demanding process, but a rewarding one.
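The NIO.2 pieces listed above (Path, FileSystem, FileSystemProvider) are exactly the API surface that the JDK's bundled zip file system already implements, so that provider makes a convenient, runnable illustration of what a custom file system plugs into. A minimal sketch using only the standard library; the entry path and message are made up:

```java
import java.net.URI;
import java.nio.file.*;
import java.util.Map;

public class ZipFsDemo {
    public static void main(String[] args) throws Exception {
        Path zip = Files.createTempFile("demo", ".zip");
        Files.delete(zip); // let the provider create the archive itself

        // "jar:" is the scheme the zip FileSystemProvider registered via META-INF/services.
        URI uri = URI.create("jar:" + zip.toUri());
        try (FileSystem fs = FileSystems.newFileSystem(uri, Map.of("create", "true"))) {
            Path inside = fs.getPath("/notes/hello.txt");
            Files.createDirectories(inside.getParent());
            // The regular Files API works unchanged on the custom file system.
            Files.writeString(inside, "stored inside a zip file system");
        }

        // Reopen the archive and read the entry back through the same API.
        try (FileSystem fs = FileSystems.newFileSystem(URI.create("jar:" + zip.toUri()), Map.of())) {
            System.out.println(Files.readString(fs.getPath("/notes/hello.txt")));
        }
        Files.delete(zip);
    }
}
```

Once a provider is registered by scheme, callers never touch it directly: they resolve it through FileSystems and then use the ordinary Files/Path API, which is what makes the four components above the minimum viable set.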
An article by Brian Goetz on the future of data-oriented programming in Java: https://openjdk.org/projects/amber/design-notes/beyond-records
- Java's Project Amber introduces "carrier classes", an evolution of records that allows more flexibility while keeping the benefits of pattern matching and reconstruction.
- Records impose strict constraints (immutability, exact representation of state) that limit their use for classes with mutable or derived state.
- Carrier classes let you declare a complete, canonical state description without requiring the internal representation to match the public API exactly.
- The "component" modifier on fields lets the compiler automatically derive accessors for the components aligned with the state description.
- Compact constructors are generalized to carrier classes, automatically generating the initialization of component fields.
- Carrier classes support deconstruction via pattern matching just like records, making them usable in instanceof and switch.
- Carrier interfaces allow defining a state description on an interface, requiring implementations to provide the corresponding accessors.
- Extension between carrier classes is possible, with automatic derivation of super() calls when the parent's components are subsumed by the child's.
- Records become a special case of carrier classes with extra constraints (final, extends Record, component fields required to be private and final).
- Compatible evolution of records is improved by allowing components to be appended at the end of the list, and by partial deconstruction by prefix.

How to avoid common pitfalls with JPA and Kotlin: https://blog.jetbrains.com/idea/2026/01/how-to-avoid-common-pitfalls-with-jpa-and-kotlin/
- JPA is a Java specification for object-relational persistence, but using it with Kotlin exposes incompatibilities caused by design differences between the two languages.
- Kotlin classes are final by default, which prevents JPA from creating proxies for lazy loading and transactional operations.
- The kotlin-jpa plugin automatically generates no-arg constructors and makes classes open, solving the compatibility problems.
- Kotlin data classes are a poor fit for JPA entities because they generate equals/hashCode based on all fields, causing problems with lazy relations.
- Using lateinit var for relations can throw exceptions if properties are accessed before JPA initializes them.
- Kotlin's non-nullable types can conflict with JPA's habit of initializing entities with temporary null values.
- Accessing the backing field directly in custom getters/setters can bypass JPA's logic and break lazy loading.
- IntelliJ IDEA 2024.3 introduces inspections that detect these problems automatically and offers quick-fixes.
- The IDE detects final entities, inappropriate data classes, constructor problems, and incorrect use of lateinit.
- These new features help developers avoid the subtle bugs that come with using JPA from Kotlin.

Libraries

A guide to MapStruct's @IterableMapping: https://www.baeldung.com/java-mapstruct-iterablemapping
- MapStruct is a Java library that generates mappers between beans automatically; the @IterableMapping annotation fine-tunes the mapping of collections.
- The dateFormat attribute formats dates automatically when mapping lists, without writing a manual loop.
- The qualifiedByName attribute specifies which custom method to apply to each element of the collection being mapped.
- Example use: filtering out sensitive data such as passwords by mapping only certain fields via a dedicated method.
- The nullValueMappingStrategy attribute controls what happens when the source collection is null (return null or an empty collection).
- The annotation works for all Java collection types (List, Set, etc.) and generates the necessary loop code.
- Numeric formats can be applied with numberFormat to convert numbers into strings with a specific format.
- MapStruct generates the complete mapper implementation at compile time, eliminating boilerplate.
- The annotation can be combined with @Named to create reusable, named mapping methods.
- Collection mapping supports complex type conversions, beyond simple primitive conversions.

Accessing Samba files from Java with JCIFS: https://www.baeldung.com/java-samba-jcifs
- JCIFS is a Java library for accessing Samba/SMB shares without mounting a network drive, with SMB3 protocol support. Spare a thought for those stuck connecting to so-called legacy systems.
- Configuration requires a CIFS context (CIFSContext) and SmbFile objects to represent remote resources.
- Authentication goes through NtlmPasswordAuthenticator with a domain, username, and password.
- The library can list files and folders with listFiles() and check their properties (size, modification date).
- Create files with createNewFile() and directories with mkdir(), or mkdirs() to create a whole tree.
- Deletion via delete(), which can walk and delete entire trees recursively.
- Files can be copied between Samba shares with copyTo(), but not from the local file system.
- To copy from the local system, use the SmbFileInputStream and SmbFileOutputStream streams.
- Operations can target different Samba servers and different shares (anonymous or password-protected).
- The library fits into try-with-resources blocks for automatic resource management.

Quarkus 3.31: full Java 25 support, a new Maven packaging, and Panache Next: https://quarkus.io/blog/quarkus-3-31-released/
- Full Java 25 support, with runtime and native images.
- A new quarkus Maven packaging type with an optimized lifecycle for faster builds; here is a detailed article: https://quarkus.io/blog/building-large-applications/
- Introduction of Panache Next, a new generation with a better developer experience and a unified ORM/Reactive API.
- Updates to Hibernate ORM 7.2, Reactive 3.2, and Search 8.2.
- Hibernate Spatial support for geospatial data.
- Move to Testcontainers 2 and JUnit 6.
- Security annotations supported on Jakarta Data repositories.
- OIDC token encryption for custom TokenStateManager implementations.
- OAuth 2.0 Pushed Authorization Requests support in the OIDC extension.
- Maven 3.9 is now the minimum required for Quarkus projects.

A2A Java SDK 1.0.0.Alpha1, aligned with version 1.0 of the Agent2Agent protocol: https://quarkus.io/blog/a2a-java-sdk-1-0-0-alpha1/
- The A2A Java SDK implements the Agent2Agent protocol, which standardizes communication between AI agents so they can discover capabilities, delegate tasks, and collaborate.
- The move to version 1.0 of the specification marks the transition from experimental to production-ready, with deliberate breaking changes.
- Complete modernization of the spec module, with Java records everywhere, replacing the previous mix of classes and records for consistency.
- Adoption of Protocol Buffers as the source of truth, with MapStruct mappers for conversion and Gson for JSON-RPC.
- Builders now use static factory methods instead of public constructors, following modern Java best practices.
- Introduction of three Maven BOMs to simplify dependency management for the core SDK, the extensions, and the reference implementations.
- The Quarkus AgentCard evolves: a supportedInterfaces list replaces url and preferredTransport, for more flexibility in declaring protocols.
- Pagination added for ListTasks and for the push-notification configuration endpoints, with appropriate Result wrappers.
- A pluggable A2AHttpClient interface allows custom HTTP implementations; a Vert.x implementation is provided.
- Ongoing work toward full conformance with the 1.0 TCK, in development in parallel with the finalization of the specification.

Why Quarkus eventually "clicks": the 10 questions Java developers ask: https://www.the-main-thread.com/p/quarkus-java-developers-top-questions-2025
- An article that surfaces and answers the questions of people who have used Quarkus for 4-6 months; the non-newbie questions.
- Quarkus is a modern, cloud-optimized Java framework with ultra-fast startup times and a small memory footprint.
- Why does Quarkus start so fast? The framework does the heavy lifting at build time (scanning, indexing, bytecode generation) rather than at runtime.
- When should you use the reactive model instead of the imperative one? Reactive makes sense for highly concurrent, I/O-dominated workloads; imperative stays simpler everywhere else.
- What is the difference between Dev Services and Testcontainers? Dev Services uses Testcontainers while automatically managing lifecycle, ports, and configuration, with no ceremony.
- How does Quarkus DI differ from Spring's? CDI is a standard built on type safety and build-time discovery, unlike Spring's framework-driven approach.
- How do you manage configuration across environments? Quarkus scales from local development to Kubernetes with profiles, multiple files, and external configuration.
- How do you test Quarkus applications properly? @QuarkusTest starts the application once for the whole test suite, a different mental model from Spring Boot's.
- What does Panache really do under the hood? Panache is JPA with strong opinions and its own defaults, wrapping Hibernate in an Active Record style.
- Should you use native images, and when? Native images shine for serverless and edge workloads thanks to fast startup and a low memory footprint, but not every app benefits.
- How does Quarkus integrate with Kubernetes? The framework automatically generates Kubernetes resources and handles health checks and metrics as if it were natively designed for that ecosystem.
- How do you integrate AI into a Quarkus application? LangChain4j lets you add embeddings, retrieval, guardrails, and observability directly in Java, without going through Python.

Infrastructure

Alternatives to MinIO: https://rmoff.net/2026/01/14/alternatives-to-minio-for-single-node-local-s3/
- MinIO dropped single-node support at the end of 2025 for commercial reasons, breaking many demos and CI/CD pipelines that used it to emulate S3 locally.
- The author is looking for a simple replacement: a Docker image, S3 compatibility, an open source license, easy single-node deployment, and an active community.
- S3Proxy is very lightweight and easy to configure; it looks like the simplest option but relies on a single contributor.
- RustFS is easy to use and includes a GUI, but it is a very young project, still in alpha, with a recent major security flaw.
- SeaweedFS has been around since 2012, with S3 support since 2018; it is relatively easy to configure and has a basic web UI.
- Zenko CloudServer replaces MinIO easily, but the documentation and branding (cloudserver/zenko/scality) can be confusing.
- Garage requires a complex setup with a TOML file and a separate init container; it is not a simple drop-in replacement.
- Apache Ozone requires at least four nodes to run, far too heavy for simple local use.
- The author recommends SeaweedFS and S3Proxy as viable replacements, puts RustFS in the "maybe" column, and rules out Garage and Ozone for their complexity.
- Garage has a very community-driven history: it comes from the https://deuxfleurs.fr/ collective, which offers a distributed cloud without datacenters.

It is most certainly not a good idea: datacenters in space: https://taranis.ie/datacenters-in-space-are-a-terrible-horrible-no-good-idea/
- Expert opinion (ex-NASA/Google, PhD in space electronics): space datacenters are a "terrible" idea.
- Fundamental incompatibility: electronics (especially AI/GPUs) are unsuited to the space environment.
- Power: limited access. ISS-style solar is insufficient at AI scale; nuclear (RTG) is too weak.
- Cooling: space is not "cold"; there is no convection. Gigantic radiators are required (e.g. 531 m² for 200 kW).
- Radiation: causes errors (SEU, SEL) and damage. GPUs are very vulnerable. Shielding is heavy and inefficient; "hardened" chips are very slow.
- Communications: very limited bandwidth (1 Gbps radio vs 100 Gbps terrestrial). Lasers depend on atmospheric conditions.
- Conclusion: an extremely difficult, costly project with mediocre performance.

Data and Artificial Intelligence

Guillaume built an MCP server for arXiv (the research-paper publication site) in Java with the Quarkus framework: https://glaforge.dev/posts/2026/01/18/implementing-an-arxiv-mcp-server-with-quarkus-in-java/
- An arXiv MCP (Model Context Protocol) server implemented in Java with Quarkus.
- Goal: access arXiv publications and illustrate the lesser-known features of the MCP protocol.
- Implementation: the Quarkus framework (Java) and its extensive MCP support, assisted by Antigravity (an agentic IDE) for the development and the arXiv API integration.
- Interacting with the arXiv API: HTTP requests, XML Atom format for the results, Jackson XML parser.
- MCP features exposed: tools (@Tool) for searching publications (search_papers); resources (@Resource, @ResourceTemplate) for the arXiv category taxonomy and article metadata (via a URI template); prompts (@Prompt) with examples for summarizing articles or building search queries.
- Configuration: the server can run over STDIO (local) or Streamable HTTP (local or remote), with simple configuration in clients such as Gemini CLI.
- Conclusion: Quarkus simplifies building feature-rich MCP servers, making data and services "AI-ready", with the help of AI tools like Antigravity.

Anthropic will not put ads in Claude: https://www.anthropic.com/news/claude-is-a-space-to-think
- This is a reaction to OpenAI's non-public plan to use ads to push people toward the paid tier; OpenAI needs cash and is probably the most-used free product in the world.
- Anthropic announces that Claude will remain ad-free, to preserve its role as a conversational assistant dedicated to work and deep thinking.
- Conversations with Claude are often sensitive or personal, or involve complex software engineering tasks where ads would be inappropriate.
- Analysis of conversations shows that a significant share touch on delicate subjects, similar to those you would raise with a trusted advisor.
- An advertising model would create incentives that contradict the core principle of being "genuinely helpful" written into Claude's Constitution.
- Ads would introduce a potential conflict of interest, where recommendations could be driven by commercial motives rather than the user's interest.
- Anthropic's business model rests on enterprise contracts and paid subscriptions, allowing reinvestment in improving Claude.
- Anthropic keeps free access to frontier models and offers reduced pricing for NGOs and education in more than 60 countries.
- "Agentic" commerce will be supported, but only at the user's initiative, never the advertisers', to preserve trust.
- Third-party integrations such as Figma, Asana, and Canva will keep being developed, with the user in control.
- Anthropic compares Claude to a notebook or a whiteboard: pure thinking spaces, with no advertising.

Infinispan 16.1 is out: https://infinispan.org/blog/2026/02/04/infinispan-16-1
- The release name alone deserves a mention.
- Memory bounding per cache and per set of caches, which is not easy to do in Java.
- A new OpenAPI API; an AOT cache included in the container images.

A local MCP server with just one Java file? It's possible with LangChain4j and JBang: https://glaforge.dev/posts/2026/02/11/zero-boilerplate-java-stdio-mcp-servers-with-langchain4j-and-jbang/
- Build MCP servers in Java quickly, with no boilerplate.
- MCP (Model Context Protocol): a standard for connecting LLMs to tools and data.
- The tutorial answers the lack of simple options for Java developers, given the dominance of Python/TypeScript in the MCP ecosystem.
- The solution combines LangChain4j, which now ships an MCP server module for the STDIO protocol, with JBang, which runs Java files like scripts, eliminating build files (pom.xml, Gradle).
- Implementation: a single .java file. JBang manages the dependencies automatically (//DEPS).
- LangChain4j's @Tool annotation exposes Java methods to the LLMs.
- StdioMcpServerTransport handles the JSON-RPC communication over standard input/output (STDIO).
- Crucial point: logs absolutely must be redirected to System.err to avoid corrupting System.out, which is reserved for the MCP communication (JSON-RPC messages).
- Makes local integration easy with tools such as Gemini CLI, Claude Code, etc.

Reciprocal Rank Fusion: a useful, widely used algorithm for hybrid search, mixing RAG with keyword search: https://glaforge.dev/posts/2026/02/10/advanced-rag-understanding-reciprocal-rank-fusion-in-hybrid-search/
- RAG: LLM output quality depends on retrieval quality.
- Hybrid search: combining vector search and keywords (BM25) is optimal.
- Challenge: fusing scores that live on different scales. Solution: Reciprocal Rank Fusion (RRF).
- RRF: a robust algorithm that merges result lists based solely on each document's rank, ignoring the scores.
- RRF advantages: no score normalization, scalable, an excellent first reordering step.
- A frequent RAG architecture: RRF (broad selection) followed by a cross-encoder / reranking model (fine-grained precision).
- RAG-Fusion: use an LLM to generate several query variants, then let RRF aggregate all the results to reinforce consensus and reduce hallucinations.
- Implementation: LangChain4j uses RRF by default to aggregate the results of multiple retrievers.

The latest Gemini and Nano Banana features supported in LangChain4j: https://glaforge.dev/posts/2026/02/06/latest-gemini-and-nano-banana-enhancements-in-langchain4j/
- New Nano Banana image models (Gemini 2.5/3.0) for generation and editing (up to 4K).
- "Grounding" via Google Search (for images and text) and Google Maps (location, Gemini 2.5).
- URL context tool (Gemini 3.0) for reading web pages directly.
- Multimodal agents (AiServices) able to generate images.
- Thinking configuration (chain-of-thought depth) for Gemini 3.0.
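The Reciprocal Rank Fusion mentioned above boils down to one formula: each document's fused score is the sum of 1/(k + rank) over every result list that contains it, with k conventionally set to 60. A small self-contained Java sketch; the document ids and rankings are made up:

```java
import java.util.*;
import java.util.stream.Collectors;

public class RrfDemo {
    /** Fuses ranked lists by summing 1/(k + rank) per document; ranks are 1-based. */
    static List<String> fuse(int k, List<List<String>> rankings) {
        Map<String, Double> scores = new HashMap<>();
        for (List<String> ranking : rankings) {
            for (int i = 0; i < ranking.size(); i++) {
                // Only the rank matters; the retrievers' raw scores are ignored entirely.
                scores.merge(ranking.get(i), 1.0 / (k + i + 1), Double::sum);
            }
        }
        return scores.entrySet().stream()
                .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> vector  = List.of("docA", "docB", "docC"); // semantic-search order
        List<String> keyword = List.of("docB", "docD", "docA"); // BM25 order
        System.out.println(fuse(60, List.of(vector, keyword))); // → [docB, docA, docD, docC]
    }
}
```

Note how docB, ranked well by both retrievers, beats docA even though docA tops one list: appearing in several lists compounds the score, which is exactly the consensus effect RAG-Fusion relies on.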
- Richer metadata: token usage, and details of the "grounding" sources.

How to configure Gemini CLI as a coding agent in IntelliJ, thanks to the ACP protocol: https://glaforge.dev/posts/2026/02/01/how-to-integrate-gemini-cli-with-intellij-idea-using-acp/
- Goal: integrate Gemini CLI into IntelliJ IDEA via the Agent Client Protocol (ACP).
- Prerequisites: IntelliJ IDEA 2025.3+, Node.js (v20+), Gemini CLI.
- Steps: install Gemini CLI (npm install -g @google/gemini-cli), locate the gemini executable, configure ~/.jetbrains/acp.json (executable path, --experimental-acp, use_idea_mcp: true), restart IDEA, and select "Gemini CLI" in the AI Assistant.
- Usage: Gemini interacts with the code and runs commands (with project context).
- Important: make sure the --experimental-acp flag is present in the configuration.

Tooling

PipeNet, an alternative (also open source) to LocalTunnel, but a bit more advanced: https://pipenet.dev/
- pipenet: a modern, open source alternative to localtunnel (client + server).
- Uses: local development (sharing, webhooks), SDK integration, secure self-hosting.
- Features: client (exposes local ports, subdomains) and server (deployment, custom domains, optimized for single-port cloud setups).
- Advantages over localtunnel: cloud deployment on a single port, multi-domain support, TypeScript/ESM, active maintenance.
- Protocols: HTTP/S, WebSocket, SSE, HTTP streaming.
- Integration: CLI or JavaScript SDK.

JSON-IO, a library like Jackson or GSON, supporting JSON5 and TOON, which could be useful for LLM "structured output" when models don't produce perfect JSON: https://github.com/jdereg/json-io
- json-io: a Java library for JSON/TOON serialization and deserialization.
- Handles complex object graphs, cyclic references, and polymorphic types.
- Full JSON5 support (reading and writing), including features not supported by Jackson/Gson.
Format TOON : Notation orientée token, optimisée pour les LLM, réduisant l'utilisation de tokens de 40 à 50% par rapport au JSON. Légère : Aucune dépendance externe (sauf java-util), taille de JAR réduite (~330K). Compatible JDK 1.8 à 24, ainsi qu'avec les environnements JPMS et OSGi. Deux modes de conversion : vers des objets Java typés (toJava()) ou vers des Map (toMaps()). Options de configuration étendues via ReadOptionsBuilder et WriteOptionsBuilder. Optimisée pour les déploiements cloud natifs et les architectures de microservices. Utiliser mailpit et testcontainer pour tester vos envois d'emails https://foojay.io/today/testing-emails-with-testcontainers-and-mailpit/ l'article montre via SpringBoot et sans. Et voici l'extension Quarkus https://quarkus.io/extensions/io.quarkiverse.mailpit/quarkus-mailpit/?tab=docs Tester l'envoi d'emails en développement est complexe car on ne peut pas utiliser de vrais serveurs SMTP Mailpit est un serveur SMTP de test qui capture les emails et propose une interface web pour les consulter Testcontainers permet de démarrer Mailpit dans un conteneur Docker pour les tests d'intégration L'article montre comment configurer une application SpringBoot pour envoyer des emails via JavaMail Un module Testcontainers dédié à Mailpit facilite son intégration dans les tests Le conteneur Mailpit expose un port SMTP (1025) et une API HTTP (8025) pour vérifier les emails reçus Les tests peuvent interroger l'API HTTP de Mailpit pour valider le contenu des emails envoyés Cette approche évite d'utiliser des mocks et teste réellement l'envoi d'emails Mailpit peut aussi servir en développement local pour visualiser les emails sans les envoyer réellement La solution fonctionne avec n'importe quel framework Java supportant JavaMail Architecture Comment scaler un système de 0 à 10 millions d'utilisateurs https://blog.algomaster.io/p/scaling-a-system-from-0-to-10-million-users Philosophie : Scalabilité incrémentale, résoudre les goulots d'étranglement 
sans sur-ingénierie. 0-100 utilisateurs : Serveur unique (app, DB, jobs). 100-1K : Séparer app et DB (services gérés, pooling). 1K-10K : Équilibreur de charge, multi-serveurs d'app (stateless via sessions partagées). 10K-100K : Caching, réplicas de lecture DB, CDN (réduire charge DB). 100K-500K : Auto-scaling, applications stateless (authentification JWT). 500K-10M : Sharding DB, microservices, files de messages (traitement asynchrone). 10M+ : Déploiement multi-régions, CQRS, persistance polyglotte, infra personnalisée. Principes clés : Simplicité, mesure, stateless essentiel, cache/asynchrone, sharding prudent, compromis (CAP), coût de la complexité. Patterns d'Architecture 2026 - Du Hype à la Réalité du Terrain (Part 1/2) - https://blog.ippon.fr/2026/01/30/patterns-darchitecture-2026-part-1/ L'article présente quatre patterns d'architecture logicielle pour répondre aux enjeux de scalabilité, résilience et agilité business dans les systèmes modernes Il présentent leurs raisons et leurs pièges Un bon rappel L'Event-Driven Architecture permet une communication asynchrone entre systèmes via des événements publiés et consommés, évitant le couplage direct Les bénéfices de l'EDA incluent la scalabilité indépendante des composants, la résilience face aux pannes et l'ajout facile de nouveaux cas d'usage Le pattern API-First associé à un API Gateway centralise la sécurité, le routage et l'observabilité des APIs avec un catalogue unifié Le Backend for Frontend crée des APIs spécifiques par canal (mobile, web, partenaires) pour optimiser l'expérience utilisateur CQRS sépare les modèles de lecture et d'écriture avec des bases optimisées distinctes, tandis que l'Event Sourcing stocke tous les événements plutôt que l'état actuel Le Saga Pattern gère les transactions distribuées via orchestration centralisée ou chorégraphie événementielle pour coordonner plusieurs microservices Les pièges courants incluent l'explosion d'événements granulaires, la complexité du debugging 
distribué, et la mauvaise gestion de la cohérence finale Les technologies phares sont Kafka pour l'event streaming, Kong pour l'API Gateway, EventStoreDB pour l'Event Sourcing et Temporal pour les Sagas Ces patterns nécessitent une maturité technique et ne sont pas adaptés aux applications CRUD simples ou aux équipes junior Patterns d'architecture 2026 : du hype à la réalité terrain part. 2 - https://blog.ippon.fr/2026/02/04/patterns-darchitecture-2026-part-2/ Deuxième partie d'un guide pratique sur les patterns d'architecture logicielle et système éprouvés pour moderniser et structurer les applications en 2026 Strangler Fig permet de migrer progressivement un système legacy en l'enveloppant petit à petit plutôt que de tout réécrire d'un coup (70% d'échec pour les big bang) Anti-Corruption Layer protège votre nouveau domaine métier des modèles externes et legacy en créant une couche de traduction entre les systèmes Service Mesh gère automatiquement la communication inter-services dans les architectures microservices (sécurité mTLS, observabilité, résilience) Architecture Hexagonale sépare le coeur métier des détails techniques via des ports et adaptateurs pour améliorer la testabilité et l'évolutivité Chaque pattern est illustré par un cas client concret avec résultats mesurables et liste des pièges à éviter lors de l'implémentation Les technologies 2026 mentionnées incluent Istio, Linkerd pour service mesh, LaunchDarkly pour feature flags, NGINX et Kong pour API gateway Tableau comparatif final aide à choisir le bon pattern selon la complexité, le scope et le use case spécifique du projet L'article insiste sur une approche pragmatique : ne pas utiliser un pattern juste parce qu'il est moderne mais parce qu'il résout un problème réel Pour les systèmes simples type CRUD ou avec peu de services, ces patterns peuvent introduire une complexité inutile qu'il faut savoir éviter Méthodologies Le rêve récurrent de remplacer voire supprimer les développeurs 
https://www.caimito.net/en/blog/2025/12/07/the-recurring-dream-of-replacing-developers.html Since 1969, every decade has seen an attempt to reduce the need for developers (COBOL, UML, visual builders… now AI). The motivation: executives' frustration with development timelines and costs. Software complexity is intrinsic and intellectual, not a matter of tooling. Each technology wave delivers value but does not eliminate the need for human expertise. AI assists developers and improves efficiency, but it replaces neither judgment nor the management of complexity. Demand for software exceeds supply because the main constraint is the thinking required to manage that complexity. For executives, the right question is: do the tools make our developers more effective on complex problems and reduce repetitive work? The "dream" of replacing developers, unachievable as it is, is an engine of innovation that keeps producing valuable tools.
How to dig into topics in the age of generative AI, and what about sharing and curating that research? https://glaforge.dev/posts/2026/02/04/researching-topics-in-the-age-of-ai-rock-solid-webhooks-case-study/ The author's initial research on webhooks in 2019 was a long, manual process. AI (Deep Research, Gemini, NotebookLM) now makes in-depth research, topic exploration, and sharing of results much easier. The AI identified and validated key practices for resilient webhook deployments, largely the same ones the author had found previously. AI-generated artifacts: a detailed report, a concise summary, a sketchnote illustration, and even a slide deck. Guillaume wonders about publicly sharing these AI-generated research reports, while wanting to avoid "AI slop".
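Two of the resilient-webhook practices that kind of research typically surfaces are signature verification and idempotent handling of retried deliveries. A minimal sketch of both, where the secret, event IDs, and function names are hypothetical illustrations rather than code from the article:

```python
import hashlib
import hmac

# Hypothetical shared secret agreed with the webhook provider.
SECRET = b"whsec_example"

def verify_signature(payload: bytes, signature_hex: str, secret: bytes = SECRET) -> bool:
    """Reject any delivery whose HMAC-SHA256 signature doesn't match the body."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

_seen_event_ids: set[str] = set()

def handle_event(event_id: str, payload: bytes, signature_hex: str) -> str:
    """Process one webhook delivery: verify it, then deduplicate by event ID.

    Providers retry deliveries on timeouts, so the same event can arrive
    more than once; tracking event IDs makes the handler idempotent.
    """
    if not verify_signature(payload, signature_hex):
        return "rejected"      # forged or corrupted delivery
    if event_id in _seen_event_ids:
        return "duplicate"     # already processed: acknowledge, do nothing
    _seen_event_ids.add(event_id)
    # ... real processing goes here (enqueue work, return a 2xx quickly) ...
    return "processed"
```

In a real endpoint the heavy work would be queued so the HTTP response stays fast, letting the provider's retry logic do its job.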
Law, society and organization
Software threatened by vibe coding https://www.techbuzz.ai/articles/we-built-a-monday-com-clone-in-under-an-hour-with-ai Two CNBC journalists with no coding experience built a functional Monday.com clone in under 60 minutes for 5 to 15 dollars. The experiment validates the investor fears that drove a 30% drop in SaaS company stocks. The AI not only reproduced the core features but also autonomously researched Monday.com to identify and recreate its key functionality. The technique, called "vibe coding," lets non-developers build applications through plain-English instructions. The most vulnerable companies are those offering tools that "sit on top of work," such as Atlassian, Adobe, HubSpot, Zendesk, and Smartsheet. Cybersecurity companies like CrowdStrike and Palo Alto are considered better protected thanks to network effects and regulatory barriers. Systems of record like Salesforce remain harder to replicate because of their integration depth and enterprise data. At 5 to 15 dollars per build, companies can prototype several custom solutions for less than the cost of a single Monday.com license. The experiment raises questions about the durability of the 5-billion-dollar project-management tool market in the face of generative AI.
Conferences
In addition to Aurélie Vache's conference agenda, there is also https://javaconferences.org/ (maintained by Brian Vermeer), listing all upcoming Java conferences!
The conference list comes from the Developers Conferences Agenda/List by Aurélie Vache and contributors:
12-13 February 2026 : Touraine Tech #26 - Tours (France)
12-13 February 2026 : World Artificial Intelligence Cannes Festival - Cannes (France)
19 February 2026 : ObservabilityCON on the Road - Paris (France)
6 March 2026 : WordCamp Nice 2026 - Nice (France)
18 March 2026 : Jupyter Workshops: AI in Jupyter: Building Extensible AI Capabilities for Interactive Computing - Saint-Maur-des-Fossés (France)
18-19 March 2026 : Agile Niort 2026 - Niort (France)
20 March 2026 : Atlantique Day 2026 - Nantes (France)
26 March 2026 : Data Days Lille - Lille (France)
26-27 March 2026 : SymfonyLive Paris 2026 - Paris (France)
26-27 March 2026 : REACT PARIS - Paris (France)
27-29 March 2026 : Shift - Nantes (France)
31 March 2026 : ParisTestConf - Paris (France)
31 March-1 April 2026 : FlowCon France 2026 - Paris (France)
1 April 2026 : AWS Summit Paris - Paris (France)
2 April 2026 : Pragma Cannes 2026 - Cannes (France)
2-3 April 2026 : Xen Spring Meetup 2026 - Grenoble (France)
7 April 2026 : PyTorch Conference Europe - Paris (France)
9-10 April 2026 : Android Makers by droidcon 2026 - Paris (France)
9-11 April 2026 : Drupalcamp Grenoble 2026 - Grenoble (France)
16-17 April 2026 : MiXiT 2026 - Lyon (France)
17-18 April 2026 : Faiseuses du Web 5 - Dinan (France)
22-24 April 2026 : Devoxx France 2026 - Paris (France)
23-25 April 2026 : Devoxx Greece - Athens (Greece)
6-7 May 2026 : Devoxx UK 2026 - London (UK)
12 May 2026 : Lead Innovation Day - Leadership Edition - Paris (France)
19 May 2026 : La Product Conf Paris 2026 - Paris (France)
21-22 May 2026 : Flupa UX Days 2026 - Paris (France)
22 May 2026 : AFUP Day 2026 Lille - Lille (France)
22 May 2026 : AFUP Day 2026 Paris - Paris (France)
22 May 2026 : AFUP Day 2026 Bordeaux - Bordeaux (France)
22 May 2026 : AFUP Day 2026 Lyon - Lyon (France)
28 May 2026 : DevCon 27 : I.A. & Vibe Coding - Paris (France)
28 May 2026 : Cloud Toulouse 2026 - Toulouse (France)
29 May 2026 : NG Baguette Conf 2026 - Paris (France)
29 May 2026 : Agile Tour Strasbourg 2026 - Strasbourg (France)
2-3 June 2026 : Agile Tour Rennes 2026 - Rennes (France)
2-3 June 2026 : OW2Con - Paris-Châtillon (France)
3 June 2026 : IA–NA - La Rochelle (France)
5 June 2026 : TechReady - Nantes (France)
5 June 2026 : Fork it! - Rouen - Rouen (France)
6 June 2026 : Polycloud - Montpellier (France)
9 June 2026 : JFTL - Montrouge (France)
9 June 2026 : C: - Caen (France)
11-12 June 2026 : DevQuest Niort - Niort (France)
11-12 June 2026 : DevLille 2026 - Lille (France)
12 June 2026 : Tech F'Est 2026 - Nancy (France)
16 June 2026 : Mobilis In Mobile 2026 - Nantes (France)
17-19 June 2026 : Devoxx Poland - Krakow (Poland)
17-20 June 2026 : VivaTech - Paris (France)
18 June 2026 : Tech'Work - Lyon (France)
22-26 June 2026 : Galaxy Community Conference - Clermont-Ferrand (France)
24-25 June 2026 : Agi'Lille 2026 - Lille (France)
24-26 June 2026 : BreizhCamp 2026 - Rennes (France)
2 July 2026 : Azur Tech Summer 2026 - Valbonne (France)
2-3 July 2026 : Sunny Tech - Montpellier (France)
3 July 2026 : Agile Lyon 2026 - Lyon (France)
6-8 July 2026 : Riviera Dev - Sophia Antipolis (France)
2 August 2026 : 4th Tech Summit on Artificial Intelligence & Robotics - Paris (France)
20-22 August 2026 : 4th Tech Summit on AI & Robotics - Paris (France) & Online
4 September 2026 : JUG Summer Camp 2026 - La Rochelle (France)
17-18 September 2026 : API Platform Conference 2026 - Lille (France)
24 September 2026 : PlatformCon Live Day Paris 2026 - Paris (France)
1 October 2026 : WAX 2026 - Marseille (France)
1-2 October 2026 : Volcamp - Clermont-Ferrand (France)
5-9 October 2026 : Devoxx Belgium - Antwerp (Belgium)
Contact us
To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs
Reach us on X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com
Submit a crowdcast or a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All episodes and all the info at https://lescastcodeurs.com/
This week in search we have more ongoing Google search ranking volatility. Bing Webmaster Tools rolled out new AI Performance reports with a new design. Google AI Overviews tests new overlay cards. Grokipedia is seeing a decline in visibility in Google Search and ChatGPT...
From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.
Jeff joins us to unpack what it really means to "own the Pareto frontier," why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.
We discuss:
* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the "bigger model, more data, better results" mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier "Pro" models and low-latency "Flash" models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and "outrageously large" networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together
Show Notes:
* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's "Software Engineering Advice from Building Large-Scale Distributed Systems" Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on "Important AI Trends" @Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)
—
Jeff Dean
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean
Google
* https://google.com
* https://deepmind.google
Full Video Episode
Timestamps
00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic "Latency numbers every programmer should know"
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 — Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks
Transcript
Alessio Fanelli [00:00:04]: Hey
everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space. Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier.Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together in, like, this latest advance.Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users.
And I think initially when you worked on the CPU, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, you were like, you need to double your CPU number. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or last six months ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader uses. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both. And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with distillation in 2014.Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.Alessio Fanelli [00:03:30]: A long time ago.
But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models and, you know, how do you reevaluate them? How do you think about in the next generation of model, what is worth revisiting? Like, yeah, they're just kind of like, you know, you worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah.Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images. You get much better performance. If you then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that and train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today we're instead of having an ensemble of 50 models. We're having a much larger scale model that we then distill into a much smaller scale model.Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is you can, RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually sometimes... 
It might be lossy in other areas and it's kind of like an uneven technique, but you can probably distill it back and you can, I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think like that, that whole capability merging without loss, I feel like it's like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set and you can get utility out of making many passes over that data set because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed: you can get, you know, very close to your largest model performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people because it enables us to kind of, for multiple Gemini generations now, we've been able to make the Flash version of the next generation as good or even substantially better than the previous generation's Pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.Shawn Wang [00:07:02]: So, Dara asked, so the original map was Flash, Pro, and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother lode?Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro scale model and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have and also inference time scaling.
It can also be a useful thing to improve the capabilities of the model.Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economy of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.Shawn Wang [00:07:50]: No, I mean, there's no I mean, there's just the economics wise, like because Flash is so economical, like you can use it for everything. Like it's in Gmail now. It's in YouTube. Like it's yeah. It's in everything.Jeff Dean [00:08:02]: We're using it more in our search products, in AI Mode and AI Overviews.Shawn Wang [00:08:05]: Oh, my God. Flash powers the AI mode. Oh, my God. Yeah, that's yeah, I didn't even think about that.Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it also has lower latency. And I think latency is actually a pretty important characteristic for these models because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do something until it actually finishes what you asked it to do, because you're going to ask now, not just write me a for loop, but like write me a whole software package to do X or Y or Z. And so having low latency systems that can do that seems really important. And Flash is one direction, one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs. The interconnect between chips on the TPUs is actually quite, quite high performance and quite amenable to, for example, long context kind of attention operations, you know, having sparse models with lots of experts.
These kinds of things really, really matter a lot in terms of how do you make them servable at scale.Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for like the Pro-to-Flash distillation, kind of like one generation delayed? I almost think about the capability like: in certain tasks, the Pro model today has saturated some sort of task. So next generation, that same task will be saturated at the Flash price point. And I think for most of the things that people use models for at some point, the Flash model in two generations will be able to do basically everything. And how do you make it economical to like keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.Jeff Dean [00:09:59]: I mean, I think that's true. If your distribution of what people are asking the models to do is stationary, right? But I think what often happens is as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true, not just of coding, but of, you know, now, you know, can you analyze all the, you know, renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a very complicated, you know, more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in advance of what people ask the models to do. And that also then gives us insight into, okay, where does the, where do things break down?
How can we improve the model in these, these particular areas, uh, in order to sort of, um, make the next generation even better.Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or like test sets they use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's like 99 instead of 97. Like, how do you have to keep pushing the team internally to it? Or like, this is what we're building towards. Yeah.Jeff Dean [00:11:26]: I mean, I think. Benchmarks, particularly external ones that are publicly available. Have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I, I like to think of the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for, uh, whatever it is, the benchmark is trying to assess and get it up to like 80, 90%, whatever. I, I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, cuz it's sort of, it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data or very related kind of data being, being in your training data. Um, so we have a bunch of held out internal benchmarks that we really look at where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have. Um, yeah. Yeah. Um, that it doesn't have now, and then we can work on, you know, assessing, you know, how do we make the model better at these kinds of things? Is it, we need different kind of data to train on that's more specialized for this particular kind of task. 
Do we need, um, you know, a bunch of, uh, you know, architectural improvements or some sort of, uh, model capability improvements, you know, what would help make that better?Shawn Wang [00:12:53]: Is there, is there such an example, uh, a benchmark that inspired an architectural improvement? Like, uh, I'm just kind of. Jumping on that because you just.Jeff Dean [00:13:02]: Uh, I mean, I think some of the long context capability of the, of the Gemini models that came, I guess, first in 1.5 really were about looking at, okay, we want to have, um, you know,Shawn Wang [00:13:15]: immediately everyone jumped to like completely green charts of like, everyone had, I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.Jeff Dean [00:13:23]: I mean, I think, um, as you say, that single needle-in-a-haystack benchmark is really saturated, at least for context lengths up to 128K or something. Models don't actually have, you know, much larger than 128K these days, or 2M or something. We're trying to push the frontier of 1 million or 2 million context, which is good because I think there are a lot of use cases where, yeah, you know, putting a thousand pages of text or putting, you know, multiple hour-long videos in the context and then actually being able to make use of that is useful. The kinds of things you could explore there are fairly large. But the single needle in a haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic, take all this content and produce this kind of answer from a long context that sort of better assesses what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning.
It's interesting because I think the more meta level I'm trying to operate at here is you have a benchmark. You're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say. Exactly the kind of thing. Yeah, you're going to win. Short term. Longer term, I don't know if that's going to scale. You might have to undo that.Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is, can I attend to the internet while I answer my question? Right? But that's not going to happen by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that to a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find, not just for a single video, but across many videos. You know, on a personal Gemini level, you could attend to all of your personal state with your permission. So like your emails, your photos, your docs, your plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens in a meaningful way?
Yeah.Shawn Wang [00:16:26]: But by the way, I think I did some math and it's like, if you spoke all day, every day for eight hours a day, you only generate a maximum of like a hundred K tokens, which like very comfortably fits.Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos.Shawn Wang [00:16:46]: Well, also, I think that the classic example is you start going beyond language into like proteins and whatever else is extremely information dense. Yeah. Yeah.Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video and audio, sort of human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Yeah. Like LIDAR sensor data from, say, Waymo vehicles, or robots, or, you know, various kinds of health modalities, x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed to the fact that this is an interesting modality and has certain meaning in the world. Where even if you haven't trained on all the LIDAR data or MRI data you could have, because maybe that doesn't make sense in terms of trade-offs of, you know, what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Yeah. Because it sort of tells the model that this is a thing.Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic, I just get to ask you all the questions I always wanted to ask, which is fantastic. Like, are there some king modalities, like modalities that supersede all the other modalities? So a simple example was Vision can, on a pixel level, encode text.
And DeepSeek had this DeepSeek-OCR paper that did that. And vision has also been shown to maybe incorporate audio, because you can do audio spectrograms, and that's also a vision-capable thing. So maybe vision is just the king modality. Yeah.Jeff Dean [00:18:36]: I mean, vision and motion are quite important things, right? Motion, well, video as opposed to static images, because there's a reason evolution has evolved eyes like 23 independent ways: it's such a useful capability for sensing the world around you, which is really what we want these models to do. So I think we want the models to be able to interpret the things we're seeing, or the things we're paying attention to, and then help us use that information to do things. Yeah.Shawn Wang [00:19:05]: I think motion, you know, I still want to shout out, I think Gemini is still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.Jeff Dean [00:19:15]: Yeah. I mean, I think people are not necessarily aware of what the Gemini models can actually do. Yeah. Like, I have an example I've used in one of my talks. It was a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has Michael Jordan hitting some jump shot at the end of the finals, and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are, the date when they happened, and a short description. And so you now get an 18-row table of that information extracted from the video, which is, you know, not something most people think of: turning a video into a SQL-like table.Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of, like, you mentioned attending to the whole internet, right?
Google is almost built because a human cannot attend to the whole internet, and you need some sort of ranking to find what you need. Yep. That ranking is much different for an LLM, because you can expect a person to look at maybe the first five, six links in a Google search, versus for an LLM, should you expect to have 20 links that are highly relevant? Like, how do you internally figure out, how do we build the AI mode that is maybe a much broader search in span versus the more human one? Yeah.Jeff Dean [00:20:47]: I mean, I think even in pre-language-model-based work, you know, our ranking systems would be built to start with a giant number of web pages in our index, many of which are not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that, applying more and more sophisticated algorithms and more and more sophisticated signals of various kinds, in order to get down to what you ultimately show, which is, you know, the final 10 results, or 10 results plus other kinds of information. And I think an LLM-based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000-ish documents, with maybe 30 million interesting tokens. And then how do you go from that to the 117 documents I really should be paying attention to in order to carry out the task the user has asked for? And I think, you know, you can imagine systems where you have a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models.
Then you have some system that helps you narrow down from 30,000 to the 117, with maybe a slightly more sophisticated model or set of models. And then maybe the final model, the thing that looks at the 117 things, might be your most capable model. So I think it's going to be some system like that that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, but you are searching the internet, yet you're finding a very small subset of things that are relevant.Shawn Wang [00:22:47]: Yeah. I often tell a lot of people who are not steeped in Google search history that BERT was basically put inside of Google search almost immediately, and that improved results a lot, right? I don't have any numbers off the top of my head, but I'm sure you do; those are obviously the most important numbers to Google. Yeah.Jeff Dean [00:23:08]: I mean, I think going to an LLM-based representation of text and words and so on enables you to get out of the explicit hard notion of particular words having to be on the page, and really get at the notion that the topic of this page or this paragraph is highly relevant to this query. Yeah.Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high-traffic systems. Yeah. Like, it's Google, it's YouTube. YouTube has this semantic ID thing where every token, every item in the vocab, is a YouTube video or something that predicts the video using a codebook, which is absurd to me at YouTube's size.Jeff Dean [00:23:50]: And then most recently Grok also, for xAI, which is like, yeah.
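The retrieval funnel Jeff describes, cheap filters over a huge corpus feeding progressively more expensive models over shrinking candidate sets, can be sketched roughly like this. The stage sizes (30,000, 117, 10) echo the numbers in the conversation; the corpus layout and scoring functions are illustrative stand-ins, not Google's actual ranking signals.

```python
# Illustrative three-stage retrieval cascade: a cheap lexical filter,
# a lightweight scorer, then an expensive "most capable model" stage.

def lexical_filter(corpus, query_terms, keep=30_000):
    # Stage 1: very lightweight -- keep docs sharing any query term.
    hits = [d for d in corpus if query_terms & d["terms"]]
    return hits[:keep]

def light_scorer(candidates, query_terms, keep=117):
    # Stage 2: slightly more work -- rank by term overlap.
    ranked = sorted(candidates,
                    key=lambda d: len(query_terms & d["terms"]),
                    reverse=True)
    return ranked[:keep]

def expensive_model(finalists, keep=10):
    # Stage 3: stand-in for the most capable model; here, a stored
    # per-document quality score plays the role of deep scoring.
    return sorted(finalists, key=lambda d: d["quality"], reverse=True)[:keep]

def search(corpus, query_terms):
    return expensive_model(
        light_scorer(lexical_filter(corpus, query_terms), query_terms))

# Toy corpus: every third doc mentions "cafe"/"bistro".
corpus = [{"id": i,
           "terms": {"cafe", "bistro"} if i % 3 == 0 else {"other"},
           "quality": i % 7}
          for i in range(100_000)]
top = search(corpus, {"cafe", "restaurant"})
print(len(top), top[0]["quality"])  # → 10 6
```

The point of the cascade is that the expensive stage only ever sees 117 candidates, so its cost is independent of corpus size.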
I mean, I'll call out that even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.Shawn Wang [00:24:06]: So do you have a history of, like, what's the progression? Oh yeah.Jeff Dean [00:24:09]: I mean, I actually gave a talk at, I guess, the web search and data mining conference in 2009, where, we never actually published any papers about the origins of Google search, but we went through four or five or six generations of redesigning the search and retrieval system from about 1999 through 2004 or five. And that talk is really about that evolution. And one of the things that really happened in 2001 was we were working to scale the system in multiple dimensions. One was we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general, because if you don't have the page in your index, you're not going to do well. And then we also needed to scale our capacity, because our traffic was growing quite extensively. And so we had a sharded system where you have more and more shards as the index grows; you have like 30 shards, and then if you want to double the index size, you make 60 shards, so that you can bound the latency with which you respond to any particular user query. And then as traffic grows, you add more and more replicas of each of those. And so we eventually did the math and realized that in a data center where we had, say, 60 shards and 20 copies of each shard, we now had 1,200 machines with disks. And we did the math and we're like, hey, one copy of that index would actually fit in memory across 1,200 machines. So in 2001, we put our entire index in memory, and what that enabled from a quality perspective was amazing.
Before, you had to be really careful about how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards, and as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three- or four-word query, because now you can add synonyms: restaurant and restaurants and cafe and bistro and all these things. And you can suddenly start really getting at the meaning of the word as opposed to the exact form the user typed in. And that was, you know, 2001, very much pre-LLM, but it was really about softening the strict definition of what the user typed in order to get at the meaning.Alessio Fanelli [00:26:47]: What are principles that you use to design these systems, especially when, I mean, in 2001 the internet is doubling, tripling every year in size. And I think today you kind of see that with LLMs too, where every year the jumps in size and capabilities are just so big. Are there any principles that you use to think about this? Yeah.Jeff Dean [00:27:08]: I mean, I think, first, whenever you're designing a system, you want to understand what the design parameters are that are going to be most important, you know? So, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? What happens if traffic were to double or triple; will that system work well?
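The synonym-expansion idea above, growing a short user query into a larger "soft" term set once extra terms no longer cost a disk seek each, can be sketched as a toy. The hand-built synonym table here is an illustrative assumption; a production system would learn these relationships rather than hard-code them.

```python
# Hypothetical query expansion: a three- or four-word query grows into
# a much larger term set via a synonym table, which is affordable once
# the index lives in memory.
SYNONYMS = {
    "restaurant": ["restaurants", "cafe", "bistro", "eatery"],
    "cheap": ["inexpensive", "affordable", "budget"],
}

def expand(query):
    terms = query.lower().split()
    expanded = list(terms)          # keep the user's original terms first
    for t in terms:
        expanded.extend(SYNONYMS.get(t, []))
    return expanded

print(expand("cheap restaurant"))
# → ['cheap', 'restaurant', 'inexpensive', 'affordable', 'budget',
#    'restaurants', 'cafe', 'bistro', 'eatery']
```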
And I think a good design principle is that you're going to want to design a system so that the most important characteristics could scale by factors of five or ten, but probably not beyond that, because often what happens is, if you design a system for X and something suddenly becomes a hundred X, that enables a very different point in the design space that would not make sense at X but all of a sudden at a hundred X makes total sense. So, like, going from a disk-based index to an in-memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the state on disk that those machines actually can hold a full copy of the index in memory. Yeah. And that all of a sudden enabled a completely different design that wouldn't have been practical before. Yeah. So I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little, before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index quite extensively. We were growing the update rate of the index. The update rate actually is the parameter that changed the most, surprisingly. It used to be once a month.Shawn Wang [00:28:55]: Yeah.Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in sub one minute. Okay.Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?Jeff Dean [00:29:04]: Because all of a sudden, for news-related queries, you know, if you've got last month's news index, it's not actually that useful.Shawn Wang [00:29:11]: News is a special beast. Was there any, like, you could have split it onto a separate system.Jeff Dean [00:29:15]: Well, we did. We launched a Google News product, but you also want news-related queries that people type into the main index to be updated too.Shawn Wang [00:29:23]: So, yeah, it's interesting.
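The disk-to-memory realization can be reproduced as a back-of-envelope calculation. The 60-shard, ~20-replica, 1,200-machine shape comes from the story above; the index size and per-machine RAM are illustrative assumptions, not historical figures.

```python
# Back-of-envelope behind the disk-to-memory switch: once replication
# for traffic makes the fleet large enough, one full copy of the index
# fits in the fleet's aggregate RAM.
shards = 60
replicas = 20
machines = shards * replicas             # 1,200 machines with disks
index_size_tb = 1.0                      # assumed total index size
ram_per_machine_gb = 2.0                 # assumed early-2000s RAM

total_ram_tb = machines * ram_per_machine_gb / 1024
print(machines, total_ram_tb > index_size_tb)  # → 1200 True
```

This is the "hundred X" effect Jeff describes: at low traffic (few replicas) the in-memory design is impossible; at high traffic it falls out almost for free.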
And then you have to decide which pages should be updated and at what frequency. Oh yeah.Jeff Dean [00:29:30]: There's a whole system behind the scenes that's trying to decide update rates and the importance of pages. So even if the update rate seems low, you might still want to recrawl important pages quite often, because the likelihood they change might be low, but the value of having them updated is high.Shawn Wang [00:29:50]: Yeah, yeah. This mention of latency and saving things to disk reminds me of one of your classics, which I have to bring up: latency numbers every programmer should know. Was there a general story behind that? Did you just write it down?Jeff Dean [00:30:06]: I mean, this has, like, eight or ten different kinds of metrics: how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send a packet from the US to the Netherlands or something?Shawn Wang [00:30:21]: Why the Netherlands, by the way? Is that because of Chrome?Jeff Dean [00:30:25]: We had a data center in the Netherlands. So, I mean, I think this gets to the point of being able to do back-of-the-envelope calculations. These are the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing of the result page or something, how would I do that? I could pre-compute the image thumbnails, or I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk seeks would I do? And you can actually do thought experiments in 30 seconds or a minute with those basic numbers at your fingertips.
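The thumbnailing thought experiment can be run with rough "latency numbers" values. Every constant here is an order-of-magnitude assumption for disk-era hardware, chosen only to show the style of calculation, not to be exact.

```python
# A sketch of the 30-second back-of-envelope Jeff describes, using
# rough "latency numbers every programmer should know" magnitudes.
DISK_SEEK_S = 10e-3          # ~10 ms per disk seek (assumed)
DISK_READ_BPS = 100e6        # ~100 MB/s sequential read (assumed)
IMAGE_BYTES = 256 * 1024     # assumed average full-size image

def time_to_fetch_images(n_images):
    # One seek plus a sequential read per image.
    return n_images * (DISK_SEEK_S + IMAGE_BYTES / DISK_READ_BPS)

# Thumbnailing 30 result-page images on the fly:
t = time_to_fetch_images(30)
print(round(t * 1000), "ms")  # → 379 ms
```

Hundreds of milliseconds of pure disk time per query is the kind of number that immediately tells you to pre-compute the thumbnails instead.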
And then as you build software using higher-level libraries, you kind of want to develop the same intuitions for how long it takes to, you know, look up something in this particular kind of.Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder, if you were to update your...Jeff Dean [00:31:58]: I mean, I think it's really good to think about the calculations you're doing in a model, either for training or inference.Jeff Dean [00:32:09]: Often a good way to view that is: how much state will you need to bring in from memory, either on-chip SRAM, or HBM, the accelerator-attached memory, or DRAM, or over the network? And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because, depending on your precision, I think it's like sub one picojoule.Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah.Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy, and how do you make the most energy-efficient system. And then moving data from the SRAM, not even off the chip, but from the other side of the same chip, can be, you know, a thousand picojoules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching. Because if you move, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you'd better make use of that thing you moved many, many times. That's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.Shawn Wang [00:33:40]: Yeah. Yeah.
Right.Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one-picojoule multiply.Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Yeah. Ideally, you'd like to use batch size one, because the latency would be great.Shawn Wang [00:33:56]: The best latency.Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.Shawn Wang [00:34:04]: Is there a similar trick, like what you did with putting everything in memory? Like, you know, obviously Groq has caused a lot of waves with betting very hard on SRAM. I wonder if that's something you already saw with the TPUs, right? To serve at your scale, you probably saw that coming. What hardware innovations or insights were formed because of what you were seeing there?Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. I think for serving some kinds of models, you pay a lot higher cost and latency bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you get quite good throughput improvements and latency improvements from doing that. So you're now striping your smallish model over, say, 16 or 64 chips, and if you do that and it all fits in SRAM, that can be a big win. So yeah, that's not a surprise, but it is a good technique.Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like, how much do you decide where the improvements have to go?
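The energy framing of batching above reduces to one line of arithmetic: moving a weight (~1,000 pJ across the chip, per the conversation) dwarfs the multiply (~1 pJ), so each moved weight must be reused across a large batch dimension. The exact constants are the order-of-magnitude figures quoted in the discussion.

```python
# Amortizing weight movement over the batch dimension.
MOVE_PJ = 1000.0   # picojoules to move one weight to the multiplier
MULT_PJ = 1.0      # picojoules per multiply (order of magnitude)

def energy_per_multiply(batch):
    # The one-time move cost is shared by `batch` multiplies.
    return MOVE_PJ / batch + MULT_PJ

print(energy_per_multiply(1))     # → 1001.0 pJ: pay 1000 pJ per 1 pJ of math
print(energy_per_multiply(256))   # → 4.90625 pJ: movement nearly amortized away
```

Batch size one gives the best latency but a ~1000x energy overhead per multiply; batch 256 brings the overhead down to a few x.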
So, like, this is a good example: is there a way to bring the thousand picojoules down to 50? Is it worth designing a new chip to do that? The extreme is when people say, oh, you should burn the model onto an ASIC; that's kind of the most extreme version. How much of it is worth doing in hardware when things change so quickly? Like, what was the internal discussion? Yeah.Jeff Dean [00:35:57]: I mean, we have a lot of interaction between, say, the TPU chip design and architecture team and the higher-level modeling experts, because you really want to take advantage of being able to co-design what future TPUs should look like based on where we think the ML research puck is going, in some sense. Because, you know, as a hardware designer for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center. And then it has to have a reasonable lifetime; the chip has to take you three, four, five years. So you're trying to predict what ML computations people will want to run two to six years out, in a very fast-changing field. And so having people with interesting ML research ideas about things we think will start to work in that timeframe, or will be more important in that timeframe, really enables us to get interesting hardware features put into, you know, TPU N plus two, where TPU N is what we have today.Shawn Wang [00:37:10]: Oh, the cycle time is plus two.Jeff Dean [00:37:12]: Roughly. Wow. Because, I mean, sometimes you can squeeze some changes into N plus one, but bigger changes are going to require the chip design to be earlier in its lifetime design process. So whenever we can do that, it's generally good.
And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something ten times as fast. And if it doesn't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Sometimes it's a very big change, and we want to be pretty sure it's going to work out, so we'll do lots of careful ML experimentation to show us this is actually the way we want to go. Yeah.Alessio Fanelli [00:37:58]: Is there a reverse of that? Like, we already committed to this chip design, so we cannot take the model architecture that way because it doesn't quite fit?Jeff Dean [00:38:06]: Yeah. I mean, you definitely have cases where you're going to adapt what the model architecture looks like so that it's efficient on the chips you're going to have for both training and inference of that generation of model. So I think it goes both ways. You know, sometimes you can take advantage of, say, lower-precision things that are coming in a future generation, so you might train at that lower precision even if the current generation doesn't quite do it. Mm.Shawn Wang [00:38:40]: Yeah. How low can we go in precision? Because people are saying, like, ternary.Jeff Dean [00:38:43]: Yeah, I mean, I'm a big fan of very low precision, because I think that saves you a tremendous amount. Right? It's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. You know, I think people have gotten a lot of mileage out of having very low bit-precision things, but then having scaling factors that apply to a whole bunch of those weights.Shawn Wang [00:39:15]: Interesting. So low precision, but scaled weights. Yeah. Huh. Yeah.
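A minimal sketch of "very low bit precision plus a scaling factor shared by a whole block of weights": 4-bit integers with one float scale per block. Real quantization schemes (blockwise INT4/FP8 formats and the like) are more involved; this just shows the mechanic.

```python
# Blockwise quantization toy: store each block of weights as 4-bit
# integers (-8..7) plus a single shared float scale.
def quantize_block(weights):
    # One scale per block, sized so the largest weight maps near 7.
    scale = max(abs(w) for w in weights) / 7 or 1.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_block(q, scale):
    return [x * scale for x in q]

block = [0.12, -0.7, 0.33, 0.21]
q, s = quantize_block(block)
approx = dequantize_block(q, s)
print(q, [round(a, 2) for a in approx])
# → [1, -7, 3, 2] [0.1, -0.7, 0.3, 0.2]
```

The payoff is exactly the picojoules-per-bit argument: each weight moves as 4 bits instead of 16 or 32, while the per-block scale keeps the dynamic range roughly intact.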
Never considered that. Yeah. Interesting. While we're on this topic, you know, the concept of precision at all is weird when we're sampling. At the end of this, we're going to have all these chips that do very good math, and then we're just going to throw a random number generator on top. So, I mean, there's a movement towards energy-based models and processors. I'm just curious, obviously you've thought about it, but what's your commentary?Jeff Dean [00:39:50]: Yeah. I mean, I think there are a bunch of interesting trends. Energy-based models is one. Diffusion-based models, which don't sequentially decode tokens, is another. Speculative decoding is a way that you can get sort of an equivalent, very small...Shawn Wang [00:40:06]: Draft.Jeff Dean [00:40:07]: ...batch factor, where you predict eight tokens out, and that enables you to increase the effective batch size of what you're doing by a factor of eight, even, and then you maybe accept five or six of those tokens. So you get a 5x improvement in the amortization of moving weights into the multipliers to do the prediction for the tokens. So these are all really good techniques, and I think it's really good to look at them through the lens of energy, real energy, not energy-based models, and also latency and throughput, right? If you look at things through that lens, that guides you to solutions that are going to be better at serving larger models, or equivalent-size models, more cheaply and with lower latency.Shawn Wang [00:41:03]: Yeah.
Well, I think it's appealing intellectually. I haven't seen it really hit the mainstream, but I do think there's some poetry in the sense that, you know, we don't have to do a lot of shenanigans if we fundamentally design it into the hardware. Yeah, yeah.Jeff Dean [00:41:23]: I mean, there are also the more exotic things, like analog-based computing substrates as opposed to digital ones. I think those are super interesting, because they can potentially be low power. But you often end up wanting to interface them with digital systems, and you end up losing a lot of the power advantages in the digital-to-analog and analog-to-digital conversions you end up doing at the boundaries and periphery of that system. I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency, with much better and specialized hardware for the models we care about.Shawn Wang [00:42:05]: Yeah.Alessio Fanelli [00:42:06]: Any other interesting research ideas that you've seen, or maybe things that you cannot pursue at Google that you would be interested in seeing researchers take a stab at? I guess you have a lot of researchers.Jeff Dean [00:42:21]: Our research portfolio is pretty broad, I would say. I mean, in terms of research directions, there's a whole bunch of open problems in how you make these models reliable and able to do much longer, more complex tasks that have lots of subtasks. How do you orchestrate, you know, maybe one model that's using other models as tools, in order to build things that can accomplish much more significant pieces of work collectively than you would ask a single model to do? So that's super interesting.
How do you get RL to work for non-verifiable domains? I think that's a pretty interesting open problem, because I think it would broaden out the capabilities of the models. The improvements you're seeing in both math and coding, if we could apply those to other, less verifiable domains, because we've come up with RL techniques that actually enable us to do that effectively, that would really make the models improve quite a lot, I think.Alessio Fanelli [00:43:26]: I'm curious. When we had Noam Brown on the podcast, he said they already proved you can do it with deep research. You kind of have it with AI Mode; in a way, it's not verifiable. I'm curious if there's any thread you think is interesting there. Like, what is it? Both are information-retrieval adjacent. So I wonder if the retrieval is the verifiable part that you can score, or what? How would you model that problem?Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even retrieval: can you have another model that says, are these things you retrieved relevant? Or can you rate these 2,000 things you retrieved to assess which ones are the 50 most relevant, or something? I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a critic as opposed to an actual retrieval system. Yeah.Shawn Wang [00:44:28]: I do think there is that weird cliff where it feels like we've done the easy stuff, and now the next part is super hard and nobody's figured it out. But it always feels like that every year. And exactly so with this RLVR thing, where everyone's talking about, well, okay, how do we do
the next stage, the non-verifiable stuff? And everyone's like, I don't know, you know, LLM judge.Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there are lots and lots of smart people thinking about creative solutions to the problems that we all see. Because I think everyone sees that the models are great at some things, and they fall down around the edges of those things, and are not as capable as we'd like in those areas. And coming up with good techniques, trying them, and seeing which ones actually make a difference is what the whole research aspect of this field is pushing forward. And I think that's why it's super interesting. You know, if you think about two years ago, we were struggling with GSM8K problems, right? Like, you know, Fred has two rabbits. He gets three more rabbits. How many rabbits does he have? That's a pretty far cry from the kinds of mathematics the models can do now: you're doing IMO and Erdős problems in pure language. Yeah. Pure language. So that is a really, really amazing jump in capabilities in, you know, a year and a half or something. And for other areas, it'd be great if we could make that kind of leap. We don't exactly see how to do it for some areas, but we do see it for others, and we're going to work hard on making that better. Yeah.Shawn Wang [00:46:13]: Yeah.Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI.Shawn Wang [00:46:20]: That would be, as far as content creators go.Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess many people do.Shawn Wang [00:46:27]: It does matter. People do judge books by their covers, as it turns out. Just to dwell a bit on the IMO gold:
I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. Yeah. What's your reflection? I think this question about the merger of symbolic systems and LLMs was very much a core belief, and then somewhere along the line people just said, nope, we'll just do it all in the LLM.Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me, because, you know, humans manipulate symbols, but we probably don't have a symbolic representation in our heads. Right? We have some distributed representation, neural-net-like in some way, of lots of different neurons and activation patterns firing when we see certain things. And that enables us to reason and plan and, you know, do chains of thought, and roll them back: now that that approach for solving the problem doesn't seem like it's going to work, I'm going to try this one. And, you know, in a lot of ways we're emulating what we intuitively think is happening inside real brains in neural-net-based models. So it never made sense to me to have completely separate, discrete, symbolic things and then a completely different way of thinking about those things.Shawn Wang [00:47:59]: Interesting. Yeah. I mean, it maybe seems obvious to you, but it wasn't obvious to me a year ago. Yeah.Jeff Dean [00:48:06]: I mean, I do think that IMO progression, with, you know, translating to Lean and using Lean, plus a specialized geometry model, and then this year switching to a single unified model that is roughly the production model with a little bit more inference budget, is actually quite good, because it shows you that the capabilities of that general model have improved dramatically, and now you don't need the specialized model.
This is actually very similar to the 2013-to-2016 era of machine learning, right? It used to be that people would train separate models for each different problem. I want to recognize street signs, so I train a street sign recognition model; or I want to do speech recognition, so I have a speech model. I think now the era of unified models that do everything is really upon us, and the question is how well those models generalize to new things they've never been asked to do, and they're getting better and better.Shawn Wang [00:49:10]: And you don't need domain experts. Like, I interviewed ETA, who was on that team, and he was like, yeah, I don't know how they work. I don't know where the IMO competition was held. I don't know the rules of it. I just trained the models. Yeah. And it's kind of interesting that people with this universal skill set of machine learning, you just give them data and give them enough compute, and they can kind of tackle any task, which is the bitter lesson, I guess. I don't know. Yeah.Jeff Dean [00:49:39]: I mean, I think general models will win out over specialized ones in most cases.Shawn Wang [00:49:45]: So I want to push there a bit. I think there's one hole here, which is this concept of the capacity of a model: abstractly, a model can only contain the number of bits that it has. And, you know, God knows, Gemini Pro is like one to ten trillion parameters; we don't know. But the Gemma models, for example, right? A lot of people want open-source local models like that, and they have some knowledge which is not necessary, right? Like, they can't know everything.
The luxury of you have the big model and big model should be able to capable of everything. But like when, when you're distilling and you're going down to the small models, you know, you're actually memorizing things that are not useful. Yeah. And so like, how do we, I guess, do we want to extract that? Can we, can we divorce knowledge from reasoning, you know?Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space. To remembering obscure facts that could be looked up is actually not the best use of that parameter space, right? Like you might prefer something that is more generally useful in more settings than this obscure fact that it has. Um, so I think that's always attention at the same time. You also don't want your model to be kind of completely detached from, you know, knowing stuff about the world, right? Like it's probably useful to know how long the golden gate be. Bridges just as a general sense of like how long are bridges, right? And, uh, it should have that kind of knowledge. It maybe doesn't need to know how long some teeny little bridge in some other more obscure part of the world is, but, uh, it does help it to have a fair bit of world knowledge and the bigger your model is, the more you can have. Uh, but I do think combining retrieval with sort of reasoning and making the model really good at doing multiple stages of retrieval. Yeah.Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a, a pretty effective way of making the model seem much more capable, because if you think about, say, a personal Gemini, yeah, right?Jeff Dean [00:52:01]: Like we're not going to train Gemini on my email. 
We'd probably rather have a single model that we can then use, with the ability to retrieve from my email as a tool, have the model reason about it, retrieve from my photos or whatever, make use of that, and have multiple stages of interaction. That makes sense.

Alessio Fanelli [00:52:24]: Do you think the vertical models are an interesting pursuit? When people are like, oh, we're building the best healthcare LLM, we're building the best law LLM, are those kind of short-term stopgaps, or...?

Jeff Dean [00:52:37]: No, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain, for healthcare, say, or for robotics. We're probably not going to train Gemini on all possible robotics data we could train it on, because we want it to have a balanced set of capabilities. So we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that base and then train it on more robotics data. And maybe that would hurt its multilingual translation capability but improve its robotics capabilities. We're always making these kinds of trade-offs in the data mix that we train the base Gemini models on. We'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, say, Perl programming. It'll still be good at Python programming, because we'll include enough of that, but there are other long-tail programming languages or coding capabilities it may suffer on, or its multimodal reasoning capabilities may suffer,
because we didn't get to expose it to as much data there, but it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models, would help. It'd be nice to have the capability to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare module, all of which can be knitted together to work in concert and called upon in different circumstances, right? If I have a health-related thing, then it should enable using this health module in conjunction with the main base model to be even better at those kinds of things.

Shawn Wang [00:54:36]: Installable knowledge.

Jeff Dean [00:54:37]: Right.

Shawn Wang [00:54:38]: Just download it as a package.

Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, say, a hundred billion tokens or a trillion tokens of health data.

Shawn Wang [00:54:51]: And for listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think.

Alessio Fanelli [00:54:56]: I guess the question is, how many billions of tokens do you need to outpace the frontier model improvements? If I have to make this model better at healthcare and the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred billion? If I need a trillion healthcare tokens, they're probably not out there.

Jeff Dean [00:55:21]: Well, I think healthcare is a particularly challenging domain. There's a lot of healthcare data that we don't have access to, appropriately, but there are a lot of healthcare organizations that want to train models on their own data, which is not public healthcare data.
So I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are going to be more bespoke, but might be better than a general model trained on, say, public data.

Shawn Wang [00:55:58]: I believe it. By the way, this is somewhat related to the language conversation. I think one of your favorite examples was that you can put a low-resource language in the context and it just learns.

Jeff Dean [00:56:09]: Oh yeah, I think the example we used was Kalamang, which is truly low resource because it's only spoken by, I think, 120 people in the world, and there's no written text.

Shawn Wang [00:56:20]: So you can just do it that way: put the whole data set in the context, right?

Jeff Dean [00:56:27]: If you take a language like Somali, or Ethiopian Amharic or something, there is a fair bit of text in the world for it, and we're probably not putting all the data from those languages into the base Gemini training. We put some of it, but if you put more of it, you'll improve the capabilities of those models.

Shawn Wang [00:56:49]: Yeah.
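The multi-stage loop Dean describes (retrieve, reason over the intermediate results, refine the query, retrieve again) can be sketched in a few lines. This is a toy sketch under stated assumptions: the corpus, the keyword scorer, and the hard-coded reason() stub are invented stand-ins for real retrieval tools and an LLM call, and none of it reflects an actual Gemini API.

```python
import re

# Toy sketch of "multiple stages of retrieval": retrieve, reason over the
# intermediate results, refine the query, retrieve again. The reasoning
# stub is hard-coded; a real system would call a model here.

def tokens(text):
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(corpus, query, k=1):
    """Rank documents by naive keyword overlap with the query."""
    q = tokens(query)
    return sorted(corpus, key=lambda d: -len(q & tokens(d)))[:k]

def reason(question, evidence):
    """Stub for the model's reasoning step: either answer from the
    evidence gathered so far, or propose a follow-up query."""
    if "golden gate" in " ".join(evidence).lower():
        return {"done": True, "answer": "about 2.7 km"}
    return {"done": False, "next_query": "golden gate bridge length"}

def answer(corpus, question, max_stages=3):
    evidence, query = [], question
    for _ in range(max_stages):              # stage 1, stage 2, ...
        evidence += retrieve(corpus, query)   # retrieval as a tool
        step = reason(question, evidence)     # reason over intermediate results
        if step["done"]:
            return step["answer"]
        query = step["next_query"]            # refine and retrieve again
    return "unknown"

corpus = [
    "The Golden Gate Bridge has a total length of about 2.7 km.",
    "Email: dinner with Sam on Friday at the bridge cafe.",
    "Photo album: hiking near a big red span last summer.",
]
print(answer(corpus, "how long is the span in my hiking photo"))
# -> about 2.7 km
```

With k=1, the first stage surfaces only the photo note; the reasoning stub proposes a sharper follow-up query, and the second stage surfaces the bridge fact, which is the "reasoning through the intermediate retrieval results" pattern from the conversation.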
Therapy practices need SEO, but the usual marketing advice can cross privacy and ethics lines fast. This episode breaks down a privacy-first SEO plan that helps you rank locally and in AI-driven search without relying on reviews, testimonials, or client stories. You'll learn what pages to create, how to structure content for clarity, and how to set boundaries that protect clients, your license, and your peace.
Episode webpage: https://propelyourcompany.com/seo-for-therapists/
Book a discovery call: https://calendly.com/propelyourcompany/discovery-call
Company website: https://propelyourcompany.com/
Send in your questions. ❤ We'd love to hear from you!
NEW Webinar: How to dominate Google Search, Google Maps, AI-driven search results, and get more new patients. >> Save your spot
In this episode of Search Off the Record, Martin and Gary from the Google Search Relations team tackle a deceptively simple question: do you still need a website in 2026? Starting from the recurring industry claim that "the web is dead," they explore how the web has evolved through the rise of apps, AI chatbots, and social platforms, and why the answer almost always ends up being "it depends." Tune in for an engaging discussion on how websites remain relevant and what it means for content creation and discovery. Episode transcript → https://goo.gle/sotr103-transcript Listen to more Search Off the Record → https://goo.gle/sotr-yt Subscribe to Google Search Channel → https://goo.gle/SearchCentral Search Off the Record is a podcast series that takes you behind the scenes of Google Search with the Search Relations team. #SOTRpodcast #SEO #GoogleSearch Speakers: Martin Splitt, Gary Illyes
Your website might not need more pages; it may need a cleanup. This 2026 “remove list” walks clinic owners through what to delete, replace, or rewrite on their website to improve SEO, trust, and conversions, including outdated content, generic messaging, weak calls to action, and slow elements that hurt mobile performance.
Episode webpage, checklist, & shownotes: https://propelyourcompany.com/what-to-delete-or-rewrite-on-clinic-websites-now/
Send in your questions. ❤ We'd love to hear from you!
NEW Webinar: How to dominate Google Search, Google Maps, AI-driven search results, and get more new patients. >> Save your spot
In this episode of Around the Desk, Sean Emory, Founder and Chief Investment Officer at Avory & Co., steps back from the AI noise to focus on what actually matters right now.
Using recent earnings from Google, Microsoft, Amazon, and Meta, this conversation breaks down what the massive AI CapEx buildout really signals, how different business models monetize AI very differently, and why many of the fears around software disruption may be overstated.
This episode explores AI through a capital allocation lens, separating defensive spending from offensive opportunity, and what Big Tech behavior tells us about the true health of the underlying economy.
Topics covered include:
• The scale of Big Tech AI CapEx and why it matters more than feature launches
• Defensive vs offensive AI spending and how to think about moats
• Why AI CapEx is also an economic confidence signal
• Different monetization paths at Amazon, Microsoft, Meta, and Google
• Why Meta may be the cleanest AI beneficiary
• The narrative vs data gap around Google Search and AI disruption
• Why the “AI breaks software” panic may be overdone
• Enterprise security, governance, and why AI rollout feels fast and slow at the same time
• Platforms vs single-purpose tools and where risk actually sits
• What recent software earnings say about demand, renewals, and long-term contracts
• How AI likely becomes embedded inside platforms rather than replacing them
This conversation is for informational purposes only and should not be considered investment advice. Avory & Co. may hold positions in some of the companies discussed. Please do your own research before making any investment decisions.
_____
Disclaimer
Avory is not an investor in either company mentioned.
Avory & Co. is a Registered Investment Adviser. This platform is solely for informational purposes. Advisory services are only offered to clients or prospective clients where Avory & Co. and its representatives are properly licensed or exempt from licensure.
Past performance is no guarantee of future returns. Investing involves risk and possible loss of principal capital. No advice may be rendered by Avory & Co. unless a client service agreement is in place.
Listeners and viewers are encouraged to seek advice from a qualified tax, legal, or investment adviser to determine whether any information presented may be suitable for their specific situation. Past performance is not indicative of future performance.
“Likes” are not intended to be endorsements of our firm, our advisors, or our services. While we monitor comments and “likes,” we do not endorse or necessarily share the opinions expressed by site users. Any form of testimony from current or past clients about their experience with our firm is strictly forbidden under current securities laws. Please limit posts to industry-related educational information and comments.
Third-party rankings and recognitions are no guarantee of future investment success and do not ensure that a client or prospective client will experience a higher level of performance or results. These ratings should not be construed as an endorsement of the advisor by any client, nor are they representative of any one client's evaluation.
Please reach out to Houston Hess, our Head of Compliance and Operations, for any further details.
What if search becomes... proactive?
In this episode, we explore the strange signals people use to interpret global events, from Pentagon pizza orders and satellite data to the Big Mac Index and other unconventional measures of economic reality. We examine the decline of Google search, the rise of AI-powered alternatives, and why new tools are changing how people actually find information. For the “foolishness of the week”, we detail an unfortunate incident involving a piece of World War I artillery, before turning to a broader cultural debate about nostalgia for the 1950s. With guest Andrew Heaton, we unpack myths about work, gender roles, housing, healthcare, and prosperity, comparing mid-century life to modern standards of living. Along the way, we discuss food abundance, technological progress, wage compensation, inequality, and whether people genuinely want to return to the past or simply romanticize it from a distance.
00:00 Introduction and Overview
00:28 Pentagon Pizza Orders and “Pizza Intelligence”
02:51 Proxy Signals, Satellite Data, and the Waffle House Index
04:25 The Big Mac Index and Measuring Cost of Living
05:00 The Decline of Google Search and Sponsored Results
07:19 Switching Search Engines and the Myth of Google Monopoly
09:54 AI Search Tools and Why They Actually Work
11:28 Foolishness of the Week: World War I Artillery Incident
13:43 How Bad Ideas Escalate at Parties
15:51 Introducing Andrew Heaton
16:39 Was the 1950s a Time or a Place?
18:43 Economic Reality vs 1950s Nostalgia
20:58 Women's Work, Household Labor, and Misleading Myths
23:56 Food Costs, Eating Out, and Modern Abundance
25:46 Medicine, Lifespan, and Why 50s Healthcare Was Worse
27:57 Housing Size, Zoning, and the Cost of Homes
30:01 Cars, Air Conditioning, and Quality of Life Improvements
31:17 Mortgage Rates and Why Housing Feels Unaffordable Now
34:02 Manufacturing, Exports, and the “We Don't Make Anything” Myth
35:35 Agricultural Productivity and Modern Farming
37:19 Food Waste as a Measure of Prosperity
37:42 Great Depression Scarcity and Generational Habits
39:59 Transportation Costs and Higher Quality Modern Vehicles
42:50 Car Safety, Seatbelts, and Survival Rates
43:42 Wages, Benefits, and What “Compensation” Really Means
45:29 What the 1950s Actually Did Better
47:52 Inequality, Community, and Social Capital in the 50s
49:44 Technology, Isolation, and Choosing Modern Life
52:05 Longing for Silence from Technology
53:18 The Mythology of Happy Days
Learn more about your ad choices. Visit podcastchoices.com/adchoices
Physical therapists are in a competitive industry which means their SEO (search engine optimization) must be up to par to compete online. SEO is a crucial element of your digital marketing campaign and an effective way to get patients through your doors. So, how do you find new patients? Or, should we say: How do new patients find you? Getting your website in front of your target audience is the answer. This is done through a strategic search engine optimization (SEO) campaign. Whether you're a physical therapist new to SEO or have been utilizing it for years, this post will give you invaluable insight into creating a stellar SEO campaign that makes your visitors, Google, and ROI happy.
In today's MadTech Daily, we cover the DOJ appealing the Google search monopoly ruling, WeChat blocking a Tencent AI chatbot giveaway, and Netflix and Warner Bros struggling over a potential merger.
2/4/26
Episode Summary
Scott selects several real Shopify stores by searching a specific long-tail query (“Valentine's gift for wife of 30 years”) and examines how they appear in different search experiences (Google Search, Google Shopping, Google Gemini) and how well the stores are optimized for that query.
He walks through multiple Shopify stores and product pages to evaluate how they communicate value, structure their content, and use structured data (for things like holiday relevance), highlighting areas where many stores could better tailor for specific shopping scenarios like Valentine's.
Throughout the episode he discusses practical aspects such as SEO structured data, visual merchandising, how stores promote seasonal offers, and accessibility (e.g., ADA compliance scores).
The episode is focused on actionable insights to help Shopify merchants improve product visibility and on-site experience by learning from real examples.
Show Links
Products
Artic Angel - https://articangel.com/products/special-gift-for-wife-i-cant-live-without-you-gold-heart-necklace
Bearaby - https://bearaby.com/products/the-napper
Lola Blankets - https://lolablankets.com/products/rosewater
Master & Dynamic - https://www.masterdynamic.com/products/mw75-active-noise-cancelling-wireless-headphones
Prime Choice - https://primechoiceshop.com/products/i-cant-live-without-you-to-my-wife-necklace-1
Pure Enrichment - https://pureenrichment.com/products/purebliss-luxury-towel-warmer
Apps
Datify - https://apps.shopify.com/datify
Badgezilla - https://apps.shopify.com/badgezilla
Linear Shopping Experiences - https://apps.shopify.com/linear-shopping-experience
Video & Transcript
https://jadepuma.com/blogs/the-shopify-solutions-podcast/episode-178-lets-review-some-shopify-stores
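One concrete form the structured-data advice can take is Product JSON-LD. The sketch below is a generic, hypothetical example rather than markup from any store reviewed in the episode: the product name, price, and keywords are invented, and since schema.org has no dedicated "holiday" property, occasion relevance is carried in the name, description, and keywords fields.

```python
import json

# Hypothetical Product structured data (JSON-LD) for a gift item.
# All values are invented for illustration.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Gold Heart Necklace",
    "description": ("An anniversary gift for a wife of 30 years and a "
                    "popular Valentine's Day pick."),
    "keywords": "valentines gift for wife, 30th anniversary gift",
    "offers": {
        "@type": "Offer",
        "price": "79.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Emit the tag a theme would place in the product page's <head>.
snippet = ('<script type="application/ld+json">\n'
           + json.dumps(product, indent=2)
           + '\n</script>')
print(snippet)
```

A Shopify theme would typically render a tag like this from the product template; validating the output with Google's Rich Results Test is the usual next step.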
What's your most embarrassing Google search? "Do turtles get itchy?"See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Agent Marketer Podcast - Real Estate Marketing for the Modern Agent
Send us a text
Thinking about running paid ads in 2026? Pump the brakes.
Frazier and Michael are breaking down the truth about paid advertising — what works, what doesn't, and where most loan officers go wrong. Whether you're sitting on a 30K marketing budget or barely spending $300 a month, this episode is a tactical deep dive into interruption vs. intent, social vs. search, and why Google still wins if you're serious about ROI.
No fluff. No theory. Just straight talk on what actually drives deals — and what will drain your wallet.
What You'll Learn
Why social media ads aren't dead — but most of them are dumb
The difference between passive brand amplification and transactional lead gen
Why low-cost leads are usually garbage (and what actually matters)
How intent changes everything — and why interruption-based ads have a short fuse
What to realistically budget for Google or social ads in 2026
Real Talk Quotes
“You don't own the customer. You're just renting attention.”
“Lead cost means nothing if the leads suck.”
“You're never gonna get a CTC on your feelings.”
“Intent-based leads wait for you. Interruption leads forget you.”
“We can all get 50-cent leads. That's not the flex you think it is.”
Tactical Takeaways
✅ Expect to spend $1,500/month minimum if you want transactional ROI
✅ Social media ads work best to amplify brand — not chase cold leads
✅ Google Search ads are still the king of high-intent lead generation
✅ YouTube & TikTok are the only social platforms with real search intent
✅ If you're not following up with automation, you're burning money
✅ Don't run your own ads unless you want to waste time and budget
Why This Episode Matters
Most LOs are playing checkers when it comes to paid ads — this episode helps you play chess.
Whether you're doing consumer direct or trying to convert your social audience, you'll learn where to invest, what to expect, and how to actually get deals from your dollars.
Want Help Running Ads That Work?
This episode is powered by Empower LO, the trusted team behind hundreds of top-producing LOs running scalable Google ad campaigns.
Learn more at empowerlo.com
Join Martin and Gary as they dive into Search Off the Record's Episode 103, unpacking the 2025 Year-End report on crawling issues. Discover fascinating insights on faceted navigation, action parameters, irrelevant parameters and more, highlighting the biggest challenges faced by web crawlers last year. With humor and expert analysis, this episode reveals critical takeaways for webmasters and SEO professionals. Don't miss valuable tips to enhance your site's crawl efficiency! Resources: URL structure best practices for Google Search → https://developers.google.com/search/docs/crawling-indexing/url-structure Crawling December: Faceted navigation → https://developers.google.com/search/blog/2024/12/crawling-december-faceted-nav Xkcd (programmer humor) → https://xkcd.com/327/
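One way to act on the crawl-efficiency themes above (faceted navigation, action parameters, irrelevant parameters) is to collapse parameter variants into a single canonical URL. A minimal sketch follows; the allowlist of content-bearing parameters and the example URL are invented for illustration, not a Google rule.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Keep only parameters that change page content; drop session/tracking
# parameters so faceted pages collapse to fewer crawlable URLs.
# This allowlist is a hypothetical example for one site.
CONTENT_PARAMS = {"category", "page"}

def canonicalize(url):
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k in CONTENT_PARAMS]
    kept.sort()  # stable parameter order: one URL per distinct page
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), ""))

print(canonicalize(
    "https://example.com/shoes?sessionid=abc123&page=2&utm_source=x&category=boots"
))
# -> https://example.com/shoes?category=boots&page=2
```

Applied in redirects or rewrite rules, this kind of normalization keeps tracking and session variants from multiplying the crawlable URL space; the page's rel=canonical should agree with the normalized form.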
AI took center stage at NRF 2026, and few moments underscored its importance more than Google CEO Sundar Pichai's keynote, where he outlined how shopping is evolving in an increasingly agentic, AI-driven world.
This episode of Retail Remix, recorded live from the show floor, features host Nicole Silberstein in conversation with Anil Jain, who leads Global Strategic Industries at Google Cloud. Anil shares how Google Cloud is working with retailers to reimagine everything from product discovery to post-purchase service, and why agentic AI represents a fundamental shift in how consumers will interact with brands.
Key Takeaways
Why AI is becoming the great equalizer, helping smaller companies compete with limited resources;
How AI experiences in general-use platforms like Google Search are upping the ante for everyone, and how to keep up;
What multimodal search unlocks when consumers can shop using not just text, but also voice, images and video;
Why hyper-personalization is finally within reach after decades of promise;
The change management that will be required as AI shifts the way we all work;
How Google and its Cloud division are building for this future.
Related Links
Related reading: Google Launches Direct Checkout in Search, Gemini
Learn how Google Cloud is helping retailers adopt AI at scale
Explore more NRF26 coverage and retail insights from Retail TouchPoints
Subscribe so you don't miss more episodes of Retail Remix from the show floor of NRF26
Google is phasing out the classic Q&A section on Google Business Profiles and replacing it with AI-powered "Ask" features in Google Maps, driven by Gemini. For clinic owners, chiropractors, physical therapists, acupuncturists, med spas, and other healthcare providers, this shift means patients now get instant AI-generated answers about your services, insurance, hours, accessibility, and more - pulled from your profile, reviews, website, photos, and beyond.
⚡ Episode guide, blog & podcast notes: https://propelyourcompany.com/google-ai-answers/
If your info isn't clear and consistent, the AI might say "I don't have enough information" or get it wrong - costing you leads to competitors.
In this episode of the Clinic Marketing Podcast, Darcy Sullivan from Propel Marketing and Design breaks down:
Why the old Q&A vanished and where the new AI "Ask about this place" button is appearing (especially in Google Maps).
Why healthcare categories (like many medical clinics) are rolling out unevenly - but the change is coming.
Real patient questions clinics face: "Do you take my insurance?", "Same-day appointments?", "Do you treat kids/sciatica/migraines?", "Parking available?", "Wheelchair accessible?"
A clinic-specific AI-feeding checklist: GBP basics, categories/services/attributes, strategic photos/videos, review prompts for detailed language, website FAQs, social posts, and more.
7-day action plan to audit and optimize your Google Business Profile + website this week.
How to monitor AI answers and avoid misinformation risks.
Even if the feature hasn't hit your listing yet, building an "AI-ready" info ecosystem is one of the top local SEO moves for clinics right now - boosting visibility in Maps and search.
Tune in for practical steps to make sure Google's AI answers questions the right way... your way.
Send in your questions. ❤ We'd love to hear from you!
NEW Webinar: How to dominate Google Search, Google Maps, AI-driven search results, and get more new patients. >> Save your spot
AI is everywhere in marketing right now. But here's the truth — it's not a solution. It's a tool. And like any tool, it's only as powerful as the person using it. Our guest, Kaspar Szymanski, knows this better than most. A former Google Search team member and one of the world's leading SEO experts,…
Send us a text
AI isn't killing search — it's reshaping how people discover, evaluate, and choose businesses.
In this interview with Crystal Carter of Wix, we unpack how large language models, AI assistants, and emerging “agent” experiences are changing consumer behavior, local search, and brand visibility. From ChatGPT and Google's AI-driven results to personalization, intent modeling, and task completion, we explore what actually changes — and what doesn't.
We also dig into:
• Why AI acts more like a complement to search than a replacement
• How “choice” and “consideration” evolve in AI-first experiences
• What happens when interfaces collapse research, comparison, and action into one flow
• Why local, reviews, and brand signals still matter — just differently
• What businesses should prepare for as agents begin acting on behalf of us
If you care about search, local, UX, or how consumers actually make decisions, this is the conversation you want to hear.
Subscribe to our newsletters and other content at https://www.nearmedia.co/subscribe/
Marc Andreessen is a founder, investor, and co-founder of Netscape, as well as co-founder of the venture capital firm Andreessen Horowitz (a16z). In this conversation, we dig into why we're living through a unique and one of the most incredible times in history, and what comes next.
We discuss:
1. Why AI is arriving at the perfect moment to counter demographic collapse and declining productivity
2. How Marc has raised his 10-year-old kid to thrive in an AI-driven world
3. What's actually going to happen with AI and jobs (spoiler: he thinks the panic is “totally off base”)
4. The “Mexican standoff” that's happening between product managers, designers, and engineers
5. Why you should still learn to code (even with AI)
6. How to develop an “E-shaped” career that combines multiple skills, with AI as a force multiplier
7. The career advice he keeps coming back to (“Don't be fungible”)
8. How AI can democratize one-on-one tutoring, potentially transforming education
9. His media diet: X and old books, nothing in between
Brought to you by:
DX—The developer intelligence platform designed by leading researchers
Brex—The banking solution for startups
Datadog—Now home to Eppo, the leading experimentation and feature flagging platform
Episode transcript: https://www.lennysnewsletter.com/p/marc-andreessen-the-real-ai-boom
Archive of all Lenny's Podcast transcripts: https://www.dropbox.com/scl/fo/yxi4s2w998p1gvtpu4193/AMdNPR8AOw0lMklwtnC0TrQ?rlkey=j06x0nipoti519e0xgm23zsn9&st=ahz0fj11&dl=0
Where to find Marc Andreessen:
• X: https://x.com/pmarca
• Substack: https://pmarca.substack.com
• Andreessen Horowitz's website: https://a16z.com
• Andreessen Horowitz's YouTube channel: https://www.youtube.com/@a16z
Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/
In this episode, we cover:
(00:00) Introduction to Marc Andreessen
(04:27) The historic moment we're living in
(06:52) The impact of AI on society
(11:14) AI's role in education and parenting
(22:15) The future of jobs in an AI-driven world
(30:15) Marc's past predictions
(35:35) The Mexican standoff of tech roles
(39:28) Adapting to changing job tasks
(42:15) The shift to scripting languages
(44:50) The importance of understanding code
(51:37) The value of design in the AI era
(53:30) The T-shaped skill strategy
(01:02:05) AI's impact on founders and companies
(01:05:58) The concept of one-person billion-dollar companies
(01:08:33) Debating AI moats and market dynamics
(01:14:39) The rapid evolution of AI models
(01:18:05) Indeterminate optimism in venture capital
(01:22:17) The concept of AGI and its implications
(01:30:00) Marc's media diet
(01:36:18) Favorite movies and AI voice technology
(01:39:24) Marc's product diet
(01:43:16) Closing thoughts and recommendations
Referenced:
• Linus Torvalds on LinkedIn: https://www.linkedin.com/in/linustorvalds
• The philosopher's stone: https://en.wikipedia.org/wiki/Philosopher%27s_stone
• Alexander the Great: https://en.wikipedia.org/wiki/Alexander_the_Great
• Aristotle: https://en.wikipedia.org/wiki/Aristotle
• Bloom's 2 sigma problem: https://en.wikipedia.org/wiki/Bloom%27s_2_sigma_problem
• Alpha School: https://alpha.school
• In Tech We Trust? A Debate with Peter Thiel and Marc Andreessen: https://a16z.com/in-tech-we-trust-a-debate-with-peter-thiel-and-marc-andreessen
• John Woo: https://en.wikipedia.org/wiki/John_Woo
• Assembly: https://en.wikipedia.org/wiki/Assembly_language
• C programming language: https://en.wikipedia.org/wiki/C_(programming_language)
• Python: https://www.python.org
• Netscape: https://en.wikipedia.org/wiki/Netscape
• Perl: https://www.perl.org
• Scott Adams: https://en.wikipedia.org/wiki/Scott_Adams
• Larry Summers's website: https://larrysummers.com
• Nano Banana: https://gemini.google/overview/image-generation
• Bitcoin: https://bitcoin.org
• Ethereum: https://ethereum.org
• Satoshi Nakamoto: https://en.wikipedia.org/wiki/Satoshi_Nakamoto
• Inside ChatGPT: The fastest-growing product in history | Nick Turley (Head of ChatGPT at OpenAI): https://www.lennysnewsletter.com/p/inside-chatgpt-nick-turley
• Anthropic co-founder on quitting OpenAI, AGI predictions, $100M talent wars, 20% unemployment, and the nightmare scenarios keeping him up at night | Ben Mann: https://www.lennysnewsletter.com/p/anthropic-co-founder-benjamin-mann
• Inside Google's AI turnaround: The rise of AI Mode, strategy behind AI Overviews, and their vision for AI-powered search | Robby Stein (VP of Product, Google Search): https://www.lennysnewsletter.com/p/how-google-built-ai-mode-in-under-a-year
• DeepSeek: https://www.deepseek.com
• Cowork: https://support.claude.com/en/articles/13345190-getting-started-with-cowork
• Definite vs. indefinite thinking: Notes from Zero to One by Peter Thiel: https://boxkitemachine.net/posts/zero-to-one-peter-thiel-definite-vs-indefinite-thinking
• Henry Ford: https://www.thehenryford.org/explore/stories-of-innovation/visionaries/henry-ford
• Lex Fridman Podcast: https://lexfridman.com/podcast
• $46B of hard truths from Ben Horowitz: Why founders fail and why you need to run toward fear (a16z co-founder): https://www.lennysnewsletter.com/p/46b-of-hard-truths-from-ben-horowitz
• Eddington: https://www.imdb.com/title/tt31176520
• Joaquin Phoenix: https://en.wikipedia.org/wiki/Joaquin_Phoenix
• Pedro Pascal: https://en.wikipedia.org/wiki/Pedro_Pascal
• George Floyd: https://en.wikipedia.org/wiki/George_Floyd
• Replit: https://replit.com
• Behind the product: Replit | Amjad Masad (co-founder and CEO): https://www.lennysnewsletter.com/p/behind-the-product-replit-amjad-masad
• Grok Bad Rudi: https://grok.com/badrudi
• Wispr Flow: https://wisprflow.ai
• Star Trek: The Next Generation: https://www.imdb.com/title/tt0092455
• Star Trek: Starfleet Academy: https://www.imdb.com/title/tt8622160
• a16z: The Power Brokers: https://www.notboring.co/p/a16z-the-power-brokers
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.
Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com
If you're a chiropractor trying to improve your website traffic and get more patients through Google, this episode is for you. Discover where most clinics go wrong with SEO — and the four key areas you should be focusing on instead.
Episode Webpage & Show Notes: https://propelyourcompany.com/seo-for-chiropractors-what-works/
Send in your questions. ❤ We'd love to hear from you!
NEW Webinar: How to dominate Google Search, Google Maps, AI-driven search results, and get more new patients. >> Save your spot
In this episode, we're talking about preventive care marketing: how to attract patients who want to stay ahead of problems, not just react when something hurts.
If your marketing mostly speaks to pain and urgent symptoms, you can end up in a cycle of one-time visits and inconsistent momentum. Preventive care content helps you reach the “I feel fine, but…” crowd: the desk workers, active adults, busy parents, and anyone noticing early warning signs who wants a clear plan before things spiral.
You'll learn a simple framework for what to publish, how to talk about prevention without sounding pushy, and how to guide someone from awareness to taking action. I'll also share an easy monthly content strategy you can repeat without posting every day, plus the language that helps this kind of content convert.
If you want to build a steady stream of patients who value consistency and long-term progress, this is for you.
When you hear the words “data privacy,” what do you first imagine?
Maybe you picture going into your social media apps and setting your profile and posts to private. Maybe you think about who you've shared your location with and deciding to revoke some of that access. Maybe you want to remove a few apps entirely from your smartphone, maybe you want to try a new web browser, maybe you even want to skirt the type of street-level surveillance provided by Automated License Plate Readers, which can record your car model, license plate number, and location on your morning drive to work.
Importantly, all of these are “data privacy,” but trying to do all of these things at once can feel impossible.
That's why, this year, for Data Privacy Day, Malwarebytes Senior Privacy Advocate (and Lock and Code host) David Ruiz is sharing the one thing he's doing differently to improve his privacy. And it's this: He's given up Google Search entirely.
When Ruiz requested the data that Google had collected about him last year, he saw that the company had recorded an eye-popping 8,000 searches in just the span of 18 months. And those 8,000 searches didn't just reveal what he was thinking about on any given day—including his shopping interests, his home improvement projects, and his late-night medical concerns—they also revealed when he clicked on an ad based on the words he searched.
This type of data, which connects a person's searches to the likelihood of engaging with an online ad, is vital to Google's revenue, and it's the type of thing that Ruiz is seeking to finally cut off.
So, for 2026, he has switched to a new search engine, Brave Search.
Today, on the Lock and Code podcast, Ruiz explains why he made the switch, what he values about Brave Search, and why he also refused to switch to any of the major AI platforms in replacing Google.
Tune in today.
You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use.
For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.
Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it.
Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
This week, we covered the doubly heated Google Search ranking volatility, but nothing was confirmed by Google. OpenAI will soon test ads in ChatGPT responses and they will charge on an impression basis...
Confused about Instagram hashtags and whether you should still be using them? Let's clear it up.
In 2025, Adam Mosseri shared that Instagram was prioritizing SEO keywords over hashtags. Instead, keywords in captions, bios, and especially on-screen text started playing a much bigger role in discoverability.
Fast forward to 2026, and Instagram made another update that's important to understand. Hashtags are back — but with guardrails. You can now use up to five hashtags per post, and they're meant to support SEO, not replace it. They only work if they're highly intentional.
Why this works is pretty simple. It guarantees keyword coverage, so you're not relying only on captions. It reinforces SEO discipline without cluttering the actual caption. And it creates consistency across posts and across accounts.
It is also important to note that Instagram content is now Google-searchable. Public posts from Business and Creator accounts can show up in Google Search. That includes static posts, carousels, Reels, and captions. This unlocks reach beyond the Instagram feed. It increases discoverability through Google. It gives content a longer shelf life. And it strengthens your brand's overall SEO footprint.
The takeaway is that Instagram isn't just a social platform anymore — it's part of your search strategy.
Most leaders assume AI and search already see the whole internet. In reality, they all operate on the same tiny slice of the web.
In this episode of IT Visionaries, host Chris Brandt sits down with Sudheesh Nair, Co-Founder and CEO of TinyFish and former CEO of ThoughtSpot, to unpack why only a small percentage of the web is indexable and how that cripples enterprise AI.
Sudheesh explains why the next breakthrough won't come from bigger models or better search, but from agents that can operate the web at scale: logging in, filling forms, running workflows, and surfacing the long tail of opportunities that never appear on page one. He also shares why human craft, taste, and presence will matter more than ever in an agent-driven world.
Key Moments:
00:00 - The Deep Web Problem
02:48 - The Amazon Search Trap
04:26 - Why Search is Broken
07:01 - Internet is No Longer a Library
08:29 - AI Answers vs Blue Links
13:05 - Introducing TinyFish's Mission
16:00 - Search as a Poor Experience
18:29 - The Deep Web: APIs, Workflows & Logins
22:11 - Tackling the 93% Problem
25:47 - The Eight-Room Hotel Success Story
29:04 - Operating the Web vs Skimming It
32:42 - Real-Time Personalized Workflows
38:31 - Enterprise B2B Strategy
40:18 - Taste Over Tools
43:08 - AI Freeing Human Experience
46:36 - Travel Experiences & Local Discovery
50:00 - Democratizing the Internet
56:39 - The Waving Guide in China
1:01:12 - Optimism for AI's Future
--
This episode of IT Visionaries is brought to you by Meter - the company building better networks. Businesses today are frustrated with outdated providers, rigid pricing, and fragmented tools. Meter changes that with a single integrated solution that covers everything wired, wireless, and even cellular networking. They design the hardware, write the firmware, build the software, and manage it all so your team doesn't have to.
That means you get fast, secure, and scalable connectivity without the complexity of juggling multiple providers. Thanks to Meter for sponsoring.
Go to meter.com/itv to book a demo.
---
IT Visionaries is made by the team at Mission.org. Learn more about our media studio and network of podcasts at mission.org. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Five fast AI moves. One stronger Google Business Profile. More of the right patients finding you. This episode shows you how to use AI to keep your Google Business Profile listing active and credible in a few minutes. You'll hear the key areas where AI helps, the pitfalls to avoid, and the prompts waiting for you in the show notes. Press play and turn views into appointments.
Episode webpage, resources, copy-and-paste prompts, & more: https://propelyourcompany.com/gbp-ai-hacks/
Send in your questions. ❤ We'd love to hear from you!
NEW Webinar: How to dominate Google Search, Google Maps, AI-driven search results, and get more new patients.
>> Save your spot
Text me your questions. If you are interested in working together, please include your email address. The system doesn't let me respond. Thanks!
What matters most for SEO this year? Is AI search or Google search more important? How do they differ? Where do you want your business to show up? Find out what matters to your online marketing program so your business can be visible and grow in 2026.
Support the show
Register now for the free SEO class - https://www.etchedmarketing.com/registration-seo-class
My free resources are here - https://www.etchedmarketing.com/freebies
Want to work with me 1:1? https://www.etchedmarketing.com/marketing-consulting
Join me in Simple SEO Content - https://www.etchedmarketing.com/yes
Join Simple Podcast SEO and learn how to grow your show quickly and easily in the self-study podcast SEO program. - https://www.etchedmarketing.com/enroll
My favorite marketing tools (affiliate links)
Podcast recording and editing - Descript
Podcast hosting - Buzzsprout
Email Marketing - Active Campaign
Marketing Website Analytics - Clicky
SEO Tool - Ubersuggest
Do you have a question you'd like me to answer on the podcast? Ask it here - https://forms.gle/Fbrqpmss6gxUnaMj7
In this short episode, we are talking about what actually moves the needle in clinic SEO right now, and why so many clinics feel stuck even when they are “doing all the things.”
You will hear two real outcomes from clinics that focused on the right fixes in the right order, including a 30+ position jump in Google for the keyword phrase “chiropractor in their city,” plus what we are seeing with patients finding clinics through AI searches.
I also walk you through a simple way to stop guessing, figure out what matters most for your clinic, and track progress without drowning in data. If you want clearer rankings in Google Search, better visibility in Google Maps, and a plan that keeps up with how search is changing in 2026, this episode will help you know what to do first.
Learn more about the Ready. Set. Rank! Accelerator Program: https://propelyourcompany.com/ready-set-rank-accelerator/
Send in your questions. ❤ We'd love to hear from you!
NEW Webinar: How to dominate Google Search, Google Maps, AI-driven search results, and get more new patients.
>> Save your spot
It is another week and more Google Search ranking volatility, but this was a weird one. Google launched Personal Intelligence in the Gemini app and it is coming to AI Mode in Google Search. Google AI Mode new ad format...
Boost your chiropractic practice with these 10 easy website tips! Learn practical fixes to attract and retain more patients, from mobile-friendly design to online booking and SEO. Implement these changes this week to grow your clinic!
Episode webpage & show notes: https://propelyourcompany.com/10-chiropractic-website-tips/
Send in your questions. ❤ We'd love to hear from you!
NEW Webinar: How to dominate Google Search, Google Maps, AI-driven search results, and get more new patients.
>> Save your spot
1. Equity
I'm not sure how Google would define “equity,” but my definition of equity is “stored value.”
As a homeowner, you understand home equity as the stored value that it offers you. Your equity in your home is a product of all the time, energy, and money that you have put into it, plus the value that has been added by the passage of time.
Relational equity is accumulated in the same way.
“What have we invested in each other? What have we endured? How many years have we traveled through life together?”
Relational equity is why we tolerate annoyances and troubles from the people we love. They have added value to our lives, so they have relational equity in us.
Likewise, customer-bonding ads create relational equity between today's businesses and tomorrow's customers. They do this by highlighting shared perspectives, beliefs, and values.
Customer-bonding ads communicate authenticity and vulnerability. And they are always there, 52 weeks a year. Authenticity, vulnerability, and the passage of time are not easy to fake or accelerate.
Keep those things in mind as you read on.
2. AI
Eighty-seven Wizards of Ads who stay in regular touch with nearly 1,000 businesses are a reliable finger on the pulse of what is happening. This is what is happening:
Google Search results have been altered in a dramatic and unexpected way. Some companies have benefited greatly from Google's new methodology while other companies have been devastated by it. You'll understand what separates the winners from the losers in just a moment.
With 6,000 employees, Edelman is the world's largest PR agency.
They help companies worldwide manage their reputations and trust through stories published in mass media. Edelman has been doing what they do since 1952.
On October 27, 2025, Christmas decorations were vibrating in anticipation of replacing Halloween decor when Brent Nelson – Chief Strategy Officer at Edelman – was quoted in Ad Age magazine. Explaining why Google dramatically expanded their results-ranking criteria, Nelson said,
“What drives visibility isn't your ad budget or keyword bids; it's earned media. Analysis shows that 90% of what appears in AI summaries is ‘earned-driven'—pulled from reviews, press, blogs, forums and cultural chatter. Paid now plays a different role, amplifying what's already there.”
“The new shelf space isn't a store; it's the AI summary. Brands need to understand their earned footprint across AI-generated answers.”
“Who gets cited? Who's trusted? Who's missing? That's the new baseline of visibility.”
In other words, Google is now rewarding Relational Equity.
3. A Prediction
Hundreds of new companies are about to leap into the Public Relations business. Their goal will be to get their clients mentioned in online press, blogs, forums and cultural chatter.
PR is an easy business to get into. It won't be long before you are approached by someone who has a PR solution to help you improve your AEO (Ask Engine Optimization).
If you remember any of today's Monday Morning Memo, let it be this:
“If you don't have anything interesting to say, don't let anyone convince you to pay money to say it.”
Company slogans, mush-mouth clichés and traditional ad-speak are not going to move the needle. Every month or two, you are going to need something new, exciting, different, and entirely real to say.
4. A Suggestion
Radio stations would be smart to start a daily or weekly blog that is fun, quick, entertaining, easy-to-read, and full of valuable things that every consumer would want to know about.
If I owned a station in Austin, I would call my blog “Cool Things Austin Needs to Know.” If my blog was well written
You don't need to be a full-time YouTuber to get results from YouTube. In this episode, you'll learn a simple YouTube SEO strategy for clinics so your videos can actually get found by the right patients. We'll cover what topics to record first, how to choose one main keyword per video, and how to write titles, descriptions, tags, and thumbnails that support YouTube search. You'll also get a quick checklist, a four-video starter plan, and the key metrics to track so you know it's working.
>> https://propelyourcompany.com/youtube-seo-for-clinics/
Send in your questions. ❤ We'd love to hear from you!
NEW Webinar: How to dominate Google Search, Google Maps, AI-driven search results, and get more new patients.
>> Save your spot
This week, yep, we had another story on Google Search ranking volatility. I posted the Google webmaster report for January 2026. Google Discover seems to be showing too many...
Stay ahead of the roofing industry changes. Download the 2025-2028 Roofing Market Report: https://roofmarketreport.com/
=============
FREE TRAINING CENTER
https://adamsfreestuff.com/
FREE ROOFING MARKET REPORT:
https://roofmarketreport.com/
FREE COACHING FROM MY AI CLONE
https://secure.rsra.org/adams-clone
JOIN THE ROOFING & SOLAR REFORM ALLIANCE (RSRA)
https://www.rsra.org/join/
GET MY BOOK
https://a.co/d/7tsW3Lx
GET A ROOFING SALES JOB
https://secure.rsra.org/find-a-job
CONTACT
Email: help@rsra.org
Call/Text: 303-222-7133
PODCAST
Apple Podcasts: https://apple.co/3fSQiev
Spotify: https://bit.ly/3eMAqJe
Available everywhere else :)
FOLLOW ADAM BENSMAN
https://www.facebook.com/adam.bensman/
https://www.facebook.com/RoofStrategist/
https://www.instagram.com/roofstrategist/
https://www.tiktok.com/@roofstrategist
https://www.linkedin.com/in/roofstrategist/
#roofstrategist #roofsales #d2d #solar #solarsales #roofing #roofer #canvassing #hail #wind #hurricane #sales #roofclaim #rsra #roofingandsolarreformalliance #reformers #adambensman
Google's new anonymous reviews are changing how your clinic shows up in Google Search and Google Maps. In this episode, I break down what the new anonymous review feature actually is, how it affects your reputation, and the simple steps you can take to manage it without losing your mind. We cover when to flag a review, how to respond without breaking privacy rules, and I share copy-and-paste reply templates you can customize for your own clinic so you are never stuck wondering what to say again.
In this episode, John Mueller and Danny Sullivan continue their conversation about the changing landscape of Search. They discuss the practicalities of hiring an SEO professional in the age of AI: Do you really need one? How do you evaluate them? Danny also shares his thoughts on third-party SEO tools, "domain scores," and why creators shouldn't rush to break their content into "bite-sized chunks" just to please Large Language Models (LLMs). Listen to more Search Off the Record → https://goo.gle/sotr-yt Subscribe to Google Search Channel → https://goo.gle/SearchCentral Search Off the Record is a podcast series that takes you behind the scenes of Google Search with the Search Relations team. #SOTRpodcast #SEO #GoogleSearch
Larry Page said in the early days, a guiding principle was Do No Evil. I wonder if we can say that today or is it just business as usual? Dave Young: Welcome to the Empire Builders Podcast, teaching business owners the not-so-secret techniques that took famous businesses from mom-and-pop to major brands. Stephen Semple is a marketing consultant, story collector, and storyteller. I’m Stephen’s sidekick and business partner, Dave Young. Before we get into today’s episode, a word from our sponsor, which is, well, it’s us, but we’re highlighting ads we’ve written and produced for our clients. So, here’s one of those. [Out of this World Plumbing Ad] Dave Young: This is the Empire Builders Podcast, by the way. Dave Young here, Steve Semple there. I wonder, Stephen, if we could do this whole episode without mentioning the name of the company that we’re going to be talking about. I ask that for the simple reason of they already know. They already know what we’re talking about. They already know we’re talking about them. They probably knew we were going to talk about them. Stephen Semple: Because of all the research I’ve done on my computer. Dave Young: No, because they’re listening to everything. They probably already know the date that this is going to come out and how long it’s… I don’t know, right? When they first started, and I don’t think we felt that way about them, and I can remember back in the early 2000s, just after the turn- Stephen Semple: In the early days, they had a statement. Larry Page was very famous. Dave Young: Yeah, “Do no evil.” Stephen Semple: “Do no evil,” and that was a very, very big part. In fact, in the early stages, they made a bunch of decisions that challenged the company financially because they were like, “This is not a good experience for the person on the other end.” I wonder if anybody’s guessed yet what we’re going to be talking about. Dave Young: Well, then you go public, and it’s all about shareholders, right? 
It’s like the shareholders are like, “Well, we don’t care if you do evil or not. We want you to make money.” That’s what it’s about because you have [inaudible 00:03:01]. Stephen Semple: All those things happen. Dave Young: Yeah. Stephen Semple: This company that we’re talking about, we’ll go a little while before we’ll let the name out, was founded… On September 4th in 1998 was when it was actually founded. Dave Young: Oh, ’98. It goes back before the turn of the century [inaudible 00:03:14]. Stephen Semple: Yeah. It was founded by Larry Page and Sergey Brin, who met at Stanford. Interesting note, the Stanford grads also created Yahoo. Dave Young: Okay, yeah. Stephen Semple: That’s giving you another little clue about the company that we might be talking about. Dave Young: In the same geek club. Stephen Semple: Yeah, so 1998. I was thinking back, one year after I graduated from university, Windows 98 is launched and, believe it or not, the last Seinfeld episode aired. Dave Young: Are you kidding me? Stephen Semple: No, isn’t that crazy? Dave Young: ’98. Stephen Semple: Yeah. Dave Young: I mean, I was busy raising four daughters in ’98. Stephen Semple: Yeah. Today, this company, as you said, because you didn’t want me to name the company, has more net income than any other business in US history. It has, now, I got to let the cat out of the bag, eight and a half billion searches a day happen. And yes, we’re talking about the birth of Google, which is also now known as part of the Alphabet group. Dave Young: Alphabet, yeah. It’s funny how they got to get a name that means everything. Did they have a name before Google? I know Google was like… Oh, it’s a number really, right? It’s a gazillion, bazillion Googleplex. Stephen Semple: As we’ll go into a little bit later, they actually spelled it wrong when they registered the site. That’s not actually the way that the word is spelled. 
I’ll have to go… But yeah, the first iteration was a product called BackRub was the name of it. Dave Young: Backrub, okay. Stephen Semple: Alphabet also owns the second largest search engine, which is YouTube. Together, basically, it’s a $2 trillion business, which is larger than the economy of Canada. It’s this amazing thing. Going back to 1998, there are dozens of search engines all using different business models. Now, today Alphabet’s like 90% in the market. Up until this point, it’s been unassailable, and it’s going to be really interesting to see what the future of AI and whatnot brings to that business. But we’re not talking about the future, we’re talking about the past here, so back to the start. Larry Page was born in Lansing, Michigan. His dad is a professor of computer science. His mom is also a computer academic. This is in the ’70s. Between 1979 and ’80, his dad does a stint at Stanford and then also goes to work at Microsoft. Now, Larry and Sergey meet at Stanford, and they’re very ambitious, they’re equal co-founders, but Larry had this thing he also talked about where he said, “You need to do more than just invent things.” It wasn’t about inventing things, it was about creating things that people would use. Here’s what’s going on in the world of the web at this time to understand what’s going on. Here’s some web stats. In 1993, there’s 130 websites in the world. In 1996, three years later, there’s 600,000 websites. That’s a 723% growth year over year. The world has never seen growth like that before. Dave Young: Right, yeah. It was amazing to experience it. People that are younger than us don’t realize what it was. Josh Johnson, the comedian, has a great routine on trying to explain to people what it was like before Google. You needed to know something- Stephen Semple: What it was like for the internet. Dave Young: Yeah. You had to ask somebody who knew. If you needed the answer to a question, you had to ask somebody. 
And if they didn’t know, then you had to find somebody else, or you had to go to the library and ask a librarian and they would help you find the answer- Stephen Semple: Well, I don’t think it’s like a- Dave Young: … maybe by giving you a book that may or may not have the answer. Stephen Semple: Here’s an important point. I want you to put a pin in that research. We’re going to come back to it. I was about to go down a rabbit hole, but let’s come back to this in just a moment, because this is a very, very important point here about the birth of Google. Larry and Sergey first worked on systems to allow people to make annotations and notes directly on websites with no human involved, but the problem is that that could just overrun a site because there were no systems for ranking or order or anything along those lines. The other question they started to ask is, “Which annotations should someone look at? What are the ones that have authority?” This then created the idea of page rankings. All of this became messy, and this led them to asking the question, “What if we just focused on ranking webpages?” which led to ranking search. Now, the whole idea was that ranking was based upon authority and credibility, and they drew this idea from academia. So when we would do research, David, and you’d find that one book, what did you do to figure out who the authority was on the topic? You went and you saw what book did that cite, what research did this book cite. The further you went back in those citations, the closer you got to the true authority, right? Do you remember doing that type of research? Dave Young: Yeah, sure. Stephen Semple: Right. They looked at that and they went, “Well, that’s how you establish credibility and authority is who’s citing who.” Okay. They decided that what they were going to do was do that for the web, and the way the web did that was links, especially in the early days where a lot of it was research. Dave Young: Yeah. 
If a whole bunch of people linked to you, then that gives you authority over the words that they used to link on and- Stephen Semple: Well, and also in the early days, those links carried a lot of metadata around what the author thought, like, “Why was the link there?” In the early days, backlinks were incredibly important. Now, SEO weasels are still today talking about backlinks, which is complete. Dude, backlinks, yeah, they kind of matter, but they’re… Anyway, I could go down a rabbit hole. Dave Young: Yeah. It’s like anything, the grifters figure out a way to hack the system and make something that’s not authoritative seem like it is. Stephen Semple: Yeah. It’s not that you can’t hack the system today; it’s just harder. Anyway, but the technology challenge, how do you figure out who’s backlinked to who? Well, the only way you can do it is you have to crawl the entire web, copy the entire web, and reverse engineer the computation to do this. Dave Young: Yeah. It’s huge. We’ve been talking about Google’s algorithm for as long as Google’s been around. That’s the magic of it, right? Stephen Semple: Yeah. In the early days, with them doing it as a research project, they could do it because there were hundreds of sites. If this happened even two years later, like 1996, it would’ve been completely impossible because the sheer size to do it as a research project, right? Now, they called this system BackRub, and they started to shop this technology to other search engines because, again, remember there was HotBot and Lycos and Archie and AltaVista and Yahoo and Excite and Infoseek. There were a ton of these search engines. Dave Young: Don’t forget Ask Jeeves. Stephen Semple: Ask Jeeves? Actually, Ask Jeeves might’ve even been a little bit later, but yeah, Ask Jeeves was one of them once when it was around. Dave Young: There was one that was Dogpile that was… It would search a bunch of search engines. Stephen Semple: Right, yeah. There was all sorts of things. Dave Young: Yeah. 
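The citation-based authority idea Stephen and Dave walk through here is essentially PageRank, and it can be sketched in a few lines of Python. This is a simplified illustration, not Google's actual implementation: the three-page "toy web" is invented for the example, and the 0.85 damping factor is just the value commonly associated with the original algorithm.

```python
# Simplified PageRank sketch. A page's score is a damped sum of the scores
# of the pages that link to it, each divided by that linker's out-degree.
# Assumes every page has at least one outgoing link (no dangling pages).
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}  # start with equal authority
    for _ in range(iterations):
        new_rank = {}
        for p in pages:
            # Authority flowing into p from every page q that links to it.
            incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new_rank[p] = (1 - damping) / len(pages) + damping * incoming
        rank = new_rank
    return rank

# Hypothetical toy web of three pages (names invented for the example).
toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(toy_web)
# "c" scores highest: everyone links to it, directly or via "b".
```

In the sketch, "c" ends up ranked highest because the other pages' authority flows to it, which is exactly the "who cites whom" logic borrowed from academic citations. Computing this for the real web required crawling and copying the whole thing to reconstruct the link graph, which is the engineering problem Stephen describes.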
Stephen Semple: There was another one called Excite, and they got close to doing a deal with Excite. They got a meeting with them, and they’re looking at a license deal, a million dollars for BackRub, and they would go into the summer and they would implement it because they were still students at Stanford. They got so far as running for the executives there a side-by-side test. They demo this test and the results were so good with BackRub. Here’s what execs at Excite said, “Why on earth would we want to use your engine? We want people to stay on our site,” because, again, it would push people off the site because web portals had this mentality of keeping people on the site instead of having them leave. So it was a no deal. They go back to school and no one wants BackRub, so they decide to build it for themselves at Stanford. The original name was going to be Whatbox. Dave Young: Whatbox? I’m glad they didn’t use Whatbox. Stephen Semple: Yeah. They thought it sounded too close to a porn site or something like that. Dave Young: Okay, I’ll give them that. Stephen Semple: Larry’s dorm mate suggested Google, which is the mathematical term of 10 to the 100th power, but it’s spelled G-O-O-G-O-L. Dave Young: Googol, mm-hmm. Stephen Semple: Correct. Now, there’s lots of things here. Did Larry Page misregister? Did he decide purposely? There’s all sorts of different stories there, but the one that seems to be the most popular, at least liked the most, is that he misspelled it when he did the registration to G-O-O-G-L-E. Dave Young: I think that’s probably a good thing because when you hear it said, that’s kind of the first thing you go- Stephen Semple: That’s kind of how you spell it. Dave Young: … how you spell it. I think we’d have figured it out, but- Stephen Semple: We would’ve, but things that are easier are always better, right? Dave Young: Yeah. Stephen Semple: By spring of ’98, they’re doing 10,000 searches a day all out of Stanford University. 
Dave Young: Wait, 10,000 a day out of one place. Stephen Semple: All using university resources. Everyone else is just using keywords on a page, which led to keyword stuffing, again, another one of these BS SEO tactics. Now, at one point, one half of the entire computing power at Stanford University is being used for Google searches. It’s the end of the ’98 academic year, and these guys are still students there. Now, sidebar, to this day, Stanford still owns a chunk of Google. Dave Young: Okay. Stephen Semple: Worked out well for Stanford. Dave Young: Yeah, I guess. Stephen Semple: Yeah. Now, Larry and Sergey need some seed round financing because they’ve got to get it off of Stanford. They’ve got to start building computers. They raise a million dollars. Here’s the interesting thing: I had no idea. Guess who one of the first round investors was, who ended up owning 25% of the company in the seed round? Dave Young: Stay tuned. We’re going to wrap up this story and tell you how to apply this lesson to your business right after this. [Using Stories To Sell Ad] Dave Young: Let’s pick up our story where we left off and trust me you haven’t missed a thing. Stephen Semple: Guess who one of the first round investors was, who ended up owning 25% of the company in the seed round? Jeff Bezos. Dave Young: Oh, no kidding. Stephen Semple: Yeah, yeah. Jeff Bezos was one of the first four investors in Google. Dave Young: Okay. Well, here we are. Stephen Semple: Isn’t that incredible? Dave Young: Yeah. Stephen Semple: Now, AltaVista created a very interesting technology because AltaVista grew out of DEC computers who were building supercomputers at the time. They were basically one of the pre-leaders in search because what they would do is everybody else crawled the internet in series. They were crawling the internet in parallel, and this was a big technological breakthrough. In other words, they didn’t have to do it one at a time. 
They could send out a whole ton of crawlers, crawling all sorts of different things, all sorts of different pieces, bringing it back and could reassemble it. Dave Young: Got you. Stephen Semple: AltaVista therefore also had the most sites indexed. I remember back in the day, launching websites, like pre-2000, and yeah, you would launch a site and you would have to wait for it to be indexed and it could take weeks- Dave Young: You submit it. Yeah, there were things you could do to submit- Stephen Semple: There were things you could submit. Dave Young: … the search engines. Stephen Semple: Yes, yeah, and you would sit and you would wait and you’d be like, “Oh, it got crawled.” Yeah, it was crazy. We don’t think about that today. [inaudible 00:15:57] websites crawl. Dave Young: You’d make updates to your site and you’d need to resubmit it, so it would get crawled again- Stephen Semple: Oh, yeah. Yeah. Dave Young: … if there was new information. Stephen Semple: People would search your site and it would be different than the site that you would have because the updates hadn’t come through and all those other things. In 1998, Yahoo was the largest player. They were a $20 billion business, and they had a hand-curated guide to the internet, which worked at the time, but the explosive growth killed that. There was a point where Yahoo just couldn’t keep up with it. Then Yahoo went to this hybrid where the top part was hand-curated and then backfilled with search engine results. Now, originally, Google was very against the whole idea of banner ads, and this was the way everyone else was making money, because what they knew is people didn’t like banner ads, but you’re tracking eyeballs, you’re growing, you need more infrastructure, because basically their way of doing it is they’re copying the entire internet and putting it on their servers and you need more money. 
Now, one of the other technological breakthroughs is Google figured out how to do this on a whole pile of cheap computers that they just stacked on top of each other, but you still needed money. At this moment, they had no model for making money. They were getting all these eyeballs, and they were faster because they built data centers around the world, because they also figured out that, by decentralizing it, it was faster. They had lots of constraints. What they needed to do at this point was create a business model. What does one do when one needs to create a business model? Well, it's early 1999, and they're running out of money. They hire Salar Kamangar, who's a Stanford student, and they give him the job of writing a business plan. "Here, intern, you're writing the business plan for how we're going to make money. Go put together a pitch deck."
Dave Young: I wonder if they're still using the plan.
Stephen Semple: What they found at that point was there were basically three ways to make money. Way number one was sell Google Search technology to enterprises. In other words, companies could use this to search their own documents and intranets.
Dave Young: I remember that, yeah.
Stephen Semple: Yeah. Number two, sell ads, banner ads. And number three, license search results to other search engines.
Dave Young: Okay.
Stephen Semple: Based upon this plan, in the spring of '99, they do a Series A fundraise. They raise more money, and they also meet Omid [inaudible 00:18:22], who's from Netscape, and he's kind of done with Netscape because Netscape had just been bought by AOL, and they recruit him as chief revenue officer. Omid tries to sell the enterprise model and kind of fails, so things are not looking good on the revenue front. It's the year 2000, and the technology bubble is starting to burst. The customer base is still growing, because people love Google, but they're running out of money again. They decide to do banner ads, because they've just got no money.
Here's the interesting thing: in this day, 2000, I want you to think about this, you have to set up a sales force to go out and sell banner ads to agencies, people picking up the phone and walking into offices, reaching out to ad agencies.
Dave Young: Yeah, they didn't have a platform for buying and selling... And banner ads, gosh, they were never... Google ads, in most recent memory, are always context-related, right?
Stephen Semple: Yes.
Dave Young: But if you're just selling banner ads to an agency, you might be looking for dog food and you're going to see car ads, and you're going to see ads for high-tech servers, and all kinds of things that don't have anything to do with what you're looking for.
Stephen Semple: That's how the early banner ads worked. Hold that thought. You're always one step ahead of me, Dave.
Dave Young: Oh, sorry.
Stephen Semple: Hold that thought. No, this is awesome.
Dave Young: I'm holding it.
Stephen Semple: What I want to stress is, when we talk about how the world has changed: in 2000, Google decides to do banner ads, and the way they have to do it is a sales force going out, reaching out to agencies, and agencies faxed in the banner ads.
Dave Young: Okay. Yeah, sure. It would take too long for them-
Stephen Semple: I'm not making this up. This is how much the world has changed in 25 years.
Dave Young: "Fax me the banner."
Stephen Semple: Salespeople going out to sell ads to agencies for banners on Google, where the insertions were sent back by fax.
Dave Young: For the people under 20 listening to us, a fax machine-
Stephen Semple: Who don't even know what the hell a fax machine is, yeah.
Dave Young: A fax machine, yeah, well, we won't go there.
Stephen Semple: Yeah. Now, here's what they do. They also say to the advertisers at this point, "Google will only accept text for banner ads, for speed." Again, they start with the model of CPM, cost per thousand views, which is basically how all the agencies were doing it, but they did do a twist on it.
They sold around this idea of intent: the ads were shown keyword-based, and they were the first to do that. What they did is they ran a test to prove this. This was really cool. They set themselves up as an Amazon affiliate, dynamically generated a link on a book search, and served up an affiliate ad, and they were able to show they could sell a whole pile of books. The test proved the idea worked. And then what they did is they went out and white-labeled this for others. For example, Yahoo did it, and it would show at the bottom of Yahoo, "Powered by Google." But here's the thing: as soon as you start saying, "Powered by Google," what are you doing? You're creating share of voice. Share of voice, right?
Dave Young: Well, yeah, why don't I just go to Google?
Stephen Semple: Why don't I just go to Google? Look, we had seen this a few years earlier when Hotmail was launched by Microsoft, where you would get this email saying, "Powered by Hotmail," and you'd be like, "What's this Hotmail thing?" Suddenly, everybody was getting Hotmail accounts, right?
Dave Young: Yeah.
Stephen Semple: No one has a Hotmail account any longer, they have Gmail accounts, and they hardly have Gmail accounts anymore.
Dave Young: No, I can tell you that we've got a lot of people at Wizard Academy that email us from a Hotmail account.
Stephen Semple: They still have Hotmail accounts?
Dave Young: Sure.
Stephen Semple: Oh, wow. So it's still around? Okay.
Dave Young: And then some Yahoos, yeah.
Stephen Semple: Wow, that's amazing. That's amazing. Well, still-
Dave Young: Yahoo, the email, not the customer. They're not a Yahoo, but they have an account there.
Stephen Semple: In October 2000, they launch AdWords with a test of 350 advertisers. Then, in 2002, they launch pay-per-click advertising. And then, in 2004, they go public. Now, here's one of the other things I want to talk about in terms of share of voice. They had a couple of things going on with share of voice.
They had that "Powered by Google," which created share of voice, because... We often think of share of voice as being just advertising, in terms of how much people know about us. I remember knowing nothing about Google and then learning about Google when Google went public, because Google dragged out going public. They talked about it for a long time, which meant it was in the financial press, it was front-page news. It got a lot of PR and a lot of press around the time that they went public. Going public also created massive share of voice for them, because there was suddenly a whole community that was not technologically savvy that was now suddenly aware of, "Oh, there's this Google thing."
Dave Young: And they're in the news, yeah. So I've got an idea for us, Steve.
Stephen Semple: Yep, okay.
Dave Young: All right.
Stephen Semple: Let's hear it.
Dave Young: Let's pick up part 2 of Google at the point they go public.
Stephen Semple: All right, let's do that. That'll be an episode we'll do in the future, yeah.
Dave Young: We don't do very many two-parters, but we're already kind of a lengthy Empire Builder Podcast here.
Stephen Semple: Oh, yeah. I was just taking it to this point, but I think that would be very interesting-
Dave Young: Oh, okay.
Stephen Semple: ... because look, Google is a massive force in the world today-
Dave Young: Unbelievable, yeah.
Stephen Semple: ... and I think it would be interesting to do the next part, because there's all sorts of things that they did to continue this path of attracting eyeballs.
Dave Young: We haven't even touched on Gmail yet. No, we have not. We have not.
Stephen Semple: Because that happened after they went public. Correct. Let's do that.
Dave Young: Okay.
Stephen Semple: Here's the lesson that I want people to understand: share of voice comes from other things, and we're going to explore that even more in part 2. I like the idea of doing this part 2.
They really looked at this problem through a completely different set of eyeballs, and this is where I commend Google: there's all this stuff on the internet, and what we really want to know is who is the authority. They looked at the academic world for how it establishes authority, and how authority is established there is by how much your work is cited by others. Now, Google has of course expanded that to direct search and all these other things, but they've always looked at it from the standpoint of, "Who in this space has the most authority? Who is really and truly the expert on this topic? We're going to try to figure that out and serve that up."
Dave Young: Yeah.
Stephen Semple: That's core to what their objective has been.
Dave Young: We could talk about Google for four or five episodes, probably.
Stephen Semple: We may, but we know we're going to do one more.
Dave Young: All right.
Stephen Semple: Awesome.
Dave Young: Well, thanks for bringing it up. We did mention their name. Actually, if we just put this out there: "Hey, Google, why don't you send us all the talking points we need for part 2?" There, I put it out there. Let me know how that works.
Stephen Semple: My email's about to get just slammed. All right. Thanks, David.
Dave Young: You won't know it's from them, though. You won't know. You won't know. Isn't that good?
Stephen Semple: That's true. That's true.
Dave Young: Thank you, Stephen.
Stephen Semple: All right. Thanks, David.
Dave Young: Thanks for listening to the podcast. Please share us, subscribe on your favorite podcast app, and leave us a big, fat, juicy five-star rating and review at Apple Podcasts. And if you'd like to schedule your own 90-minute Empire Building session, you can do it at empirebuildingprogram.com.
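The citation-based authority idea Stephen describes is, at its core, the PageRank algorithm: a page's score is built from shares of the scores of the pages that cite it. Here is a minimal power-iteration sketch over an invented four-page link graph; the graph, damping factor, and iteration count are illustrative assumptions, not Google's production system.

```python
# Invented link graph: each key cites ("links to") the pages in its list.
links = {
    "A": ["B", "C"],  # A cites B and C
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],       # C is cited by A, B, and D, so it should rank highest
}

def pagerank(links, damping=0.85, iters=50):
    """Power-iteration PageRank: repeatedly redistribute each page's score
    among the pages it cites, with a damping factor for random jumps."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs)  # a page splits its vote among its citations
            for q in outs:
                new[q] += damping * share
        rank = new
    return rank

ranks = pagerank(links)
```

Because C is cited by three of the four pages, it ends up with the highest score, matching the intuition in the episode: the most-cited work carries the most authority.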
In this episode, you'll learn how to diversify beyond Google without trying to be everywhere. You'll get a simple 3-tier framework to decide which platforms still matter for clinic SEO, reputation, and bookings, plus an easy scorecard to prioritize what's worth your time.

We cover:
- The Tier 1 foundation: website, Apple Maps, Bing, and reviews
- Tier 2 high-leverage platforms: one social channel and the right directories
- Why YellowPages can still support local visibility and consistency
- How to get "AI-ready" so your clinic is easier to find in tools like ChatGPT and Grok
- The metrics that matter (calls, clicks, direction requests, and appointments)

Walk away with a clear 90-day focus plan and a "good enough" checklist for each platform.

>> Episode webpage, blog, and show notes: https://propelyourcompany.com/clinic-seo-beyond-google-platforms-that-still-matter/

Send in your questions. ❤ We'd love to hear from you!

NEW Webinar: How to dominate Google Search, Google Maps, AI-driven search results, and get more new patients.

>> Save your spot
Welcome to Episode 381 of New Dad Gaming! The 2025 "No New Games" challenge is finally complete, and the floodgates are open. In this 2026 kickoff special, Trevor and Jeff break down the massive list of games they denied themselves last year and the upcoming titles they can't wait to pre-order. We discuss the shifting landscape of 2026, including the looming release of GTA 6, the slump in console sales, and why we are both pivoting toward handhelds like the PlayStation Portal and Steam Deck to fit gaming into dad life. We also do a live "Google Search" deep dive into the top PC games coming this year, debating everything from Kingdom Come: Deliverance 2 to cozy farming sims like Fantasy Life. Plus, we discuss what a household looks like after a Roblox ban and whether Fortnite or Star Wars Jedi will take its place.

Topics Covered:
- The Floodgates Open: How it feels to finally be allowed to buy games again.
- Hardware Shift: Why we are ignoring the PS6 rumors to focus on portable gaming (Steam Deck & PS Portal).
- The 2026 Shopping List: Kingdom Come: Deliverance 2, Death Stranding 2, and Clair Obscur: Expedition 33.
- Backlog Catch-Up: Why Space Marine 2 is the first "old" game we are buying.
- Dad Life: Embracing "mobile gaming" (Red Dead on iPad) and why we are hoping for a Final Fantasy 7 Part 3 demo.
- The Kids' Rotation: Life after banning Roblox and the move to Star Wars Jedi: Survivor.

Timestamps:
0:00 - Intro: Welcome to 2026!
2:15 - The State of Gaming: GTA 6 & The Steam Deck Juggernaut
5:00 - Jeff's Wishlist: Final Fantasy 7 Part 3 & The PS Portal
8:30 - Trevor's Wishlist: Retro Handhelds & Red Dead on iPad
13:10 - Kids' Gaming Forecast: Can we keep Roblox out of the house?
17:45 - The "Immediate Buy" List: KCD2, Expedition 33, & Space Marine 2
22:00 - Live Reaction: Googling the "Top PC Games of 2026" (Arc Raiders & Donkey Kong?)
27:30 - Final Thoughts & Call for New Challenges

Connect with New Dad Gaming:
SEO is one of the most powerful ways to attract new patients to your clinic—but here's the big question: should you handle it yourself or hire experts to do it for you?

In this episode, we dive deep into the debate between DIY SEO vs. working with professionals, breaking down the pros, cons, and even a middle-ground option that gives you expert guidance without the overwhelm.
Steve Gruber discusses news and headlines
Welcome back to The Weekly Scroll with Call Her Creator, where I break down the biggest Instagram updates of the week. If you want to understand how Instagram SEO and discovery are changing (using AI) and how to adjust your content strategy without overthinking it, this is the episode to listen to.

We're talking about major shifts in:
- How Instagram posts are showing up in Google Search and what they're using to scrape your SEO on posts
- Why Instagram is prioritizing recommendations and non-follower reach
- What recent Reels updates mean for content creation and growth
- How these changes impact smaller creators and businesses

I'll explain what's changing, what to pay attention to, and what creators should stop stressing about moving into the new year. This episode is for creators, entrepreneurs, and business owners who want clarity instead of chaos when it comes to the Instagram algorithm. No fluff. No fear-mongering. Just the updates that matter and how to think about them. Listen now to stay ahead of what's coming next on Instagram.

Join Club Enfluence for viral Reels prompts, trends, and Canva templates delivered every Monday:
Thinking about running Google ads but not sure where to start? In this episode, we break down the differences between Google Local Service Ads and Google Search (PPC) Ads for lawn care and landscaping businesses. You'll learn the pros and cons of each, how they work, and what to consider before investing—so you can choose the option that fits your goals, budget, and growth plans. Important Links: https://www.brandedbull.com/ https://www.instagram.com/brandedbull/ https://www.facebook.com/brandedbullinc https://www.lawntrepreneuracademy.com/
Real estate agents, professionals, coaches, and local service experts: this new FIRED UP Podcast episode featuring YouTube Ads Strategist Adam Holland breaks down exactly how to turn Google Search and YouTube into high-intent real estate leads. Adam has managed more than $25 million in YouTube ad spend and has helped countless brands scale through paid traffic, Google Search campaigns, and advanced tracking strategies. You will learn why YouTube is a buyer-intent platform, how Google Search ads let you rank for your name, your real estate brand, and even protect your online reputation, plus how local agents can target movers, buyers, and sellers who are already searching. Adam breaks down separate channels for organic and paid, search-based ad campaigns, and how to track leads so Google sends you more of the right clients. If you are serious about real estate marketing, lead generation, and growing your business, do not quit.
SpaceX IPO coming – huge increase in valuation over past 3 months
Happy Hanukkah – Eight Crazy Nights
Now Kevin AND Kevin
PLUS we are now on Spotify and Amazon Music/Podcasts!

Click HERE for Show Notes and Links

DHUnplugged is now streaming live - with listener chat. Click on the link on the right sidebar.

Love the Show? Then how about a Donation?

Follow John C. Dvorak on Twitter
Follow Andrew Horowitz on Twitter

Warm-Up
- Last chance for CTP Cup 2025 participants
- Happy Hanukkah - Eight Crazy Nights
- Sad news - Rob Reiner
- Fed decision is out...
- Overdue eco reports coming this week

Markets
- Oracle still problematic
- SpaceX IPO coming - huge increase in valuation over past 3 months
- Another bankruptcy - cleaning up is not good business
- Oh my - now Kevin AND Kevin
- Weight loss game continues
- One thing saved for last - a doozie...

Tesla
- All-time high
- Prospect of Robotaxi
- Even though sales are hitting multi-year lows

Wall Street Never Sleeps?
- Nasdaq files to extend trading to 23 hours on weekdays
- Banks concerned about investor protections, costs, liquidity, and volatility risks of nonstop trading
- Proponents argue round-the-clock trading benefits global investors
- That may create some additional volatility potential

SpaceX
- SpaceX aims for a potential $1.5 trillion market cap with an Initial Public Offering in 2026, which could become the largest IPO in history
- July 2025 tender valuation was $400B; by Dec 14th (4 months later), $800B
- Starlink is the primary money winner of this deal
- Tesla shares are climbing even with nothing behind it - seemingly in sympathy with this IPO, though Tesla does not have ownership of SpaceX
- OH - this could be the reason... U.S. deliveries dropped significantly in November, the lowest since early 2022, but this weakness has been overshadowed by the enthusiasm for autonomy.
Rob Reiner
- Nick Reiner, a son of legendary Hollywood director Rob Reiner and his wife, producer Michele Singer Reiner, is being held on suspicion of murder following their deaths, according to Los Angeles Police Department Chief Jim McDonnell. He's being held on $4 million bail.
- Citing law enforcement sources and family friends, ABC News reported on Monday that Nick Reiner had recently returned to live at his parents' South Chadbourne Avenue home. The move was described as a temporary arrangement intended to help him stabilize.
- Not going to discuss the Truth Social post about this tragedy

HEADLINE ALERT
- "Copper could hit 'stratospheric new highs' as hoarding of the metal in U.S. continues"
- Copper has gone from 5.77 to 5.30 (July to today)
- 6 tops at this price since 2011
- Not seeing this as per the headline - seems like a Hunt Brothers special from the 1980s - CORNERING THE MARKET
- In 1980, silver went from $11 to $50, then crashed, bankrupting the Hunt Brothers after COMEX changed rules, forcing them to cover positions

Bankruptcy
- After 35 years, the maker of the Roomba robot vacuum filed for bankruptcy protection late Sunday night. Following warnings issued earlier this year that it was fast running out of options, iRobot says it is entering Chapter 11 protection and will be acquired by its contract manufacturer, China-based Picea Robotics.
- The company says it will continue to operate "with no anticipated disruption to its app functionality, customer programs, global partners, supply chain relationships, or ongoing product support."
- Remember Amazon: the Amazon buyout of iRobot, maker of Roomba, was announced in 2022 for $1.7 billion but ultimately failed in January 2024 due to significant regulatory pushback, primarily from the EU, over anti-competitive concerns.
- Amazon walked away with a $94 million termination fee

Fed Pick
- President Donald Trump said Friday that Kevin Warsh has moved to the top of his list as the next Federal Reserve chair, though Kevin Hassett also remains in contention, according to the Wall Street Journal.
- Interesting that this comes days after Hassett said that he would not let outside suggestions influence his voting.
- In addition to putting heavier weight on Warsh getting the job, Trump repeated an assertion he has made in the past that the Fed chair ought to consult the president about interest rate decisions.
- Also of interest: prediction markets had Hassett at 95% probability - now it has moved to 50% - a big payday for people in the know.

Housing Prices
- Average home price is DOWN on a year-over-year basis - first time at the national level since 2024
- Active listings in November were nearly 13% higher than November 2024, but new listings were just 1.7% higher - houses are on the market longer
- Prices in Austin, Texas, are down 10% from last year; in Denver, they're down 5%, according to Parcl Labs. Tampa, Florida, and Houston both saw prices fall 4%, and Atlanta and Phoenix saw price decreases of 3%.

More Housing Related
- Zillow shares plunged more than 9% on Monday on worries that the online real estate platform could have a big new competitor: Google Search.
- Google appears to be running tests on putting real estate sale listings into its search results.

Overdue Eco - Black Hole
- The U.S. Bureau of Labor Statistics on Tuesday releases its long-awaited combined employment reports for October and November, but a number of key details will be missing after the government shutdown prevented data collection, including October's unemployment rate, resulting in the first-ever gap in that critical data series since its inception in 1948.
- NICE JOB GANG!
- Some of the data will be estimated.
- It said it would not publish the headline CPI number or the so-called core CPI, which strips out the volatile food and energy components, for October. "BLS cannot provide specific guidance to data users for navigating the missing October observations," the agency said.

Some Updates
- Some of the incoming info consists of estimates - some is delayed
- Unemployment at 4.6%
- Latest report shows +64,000 jobs added
- ISM Manufacturing and Non-Manufacturing - both slowed over the last month

The Fed
- Meanwhile, the Fed cuts rates...
- A Federal Reserve split over where its priorities should lie cut its key interest rate Wednesday in a 9-3 vote, but signaled a tougher road ahead for further reductions.
- The FOMC's "dot plot" indicated just one more reduction in 2026 and another in 2027, amid considerable disagreement from members about where rates should head.
- In addition to the rate decision, the Fed also announced it will resume buying Treasury securities. The central bank will start by buying $40 billion in Treasury bills, beginning Friday.
- Markets were all over the place on this, as it was a little confusing at first - then it seemed that everyone loved it (for one day).
- Why is the Fed moving up Treasury purchases to "immediately" from a few months from now?
- AND - dissension! A larger group than usual of regional Fed bank presidents signaled they opposed the cut, and six policymakers said the benchmark federal funds rate should end 2025 in a range of 3.75% to 4%, suggesting they opposed the move.
- Long bonds have not moved at all on this news.

Costco Earnings
- Costco beat Wall Street's fiscal first-quarter sales and revenue expectations.
- Sales rose 8.2% and digital sales jumped 20.5% compared with the year-ago quarter, as the retailer attracted more digital sales and opened new locations.
- Earnings per share: $4.50 vs. $4.27 expected
- Revenue: $67.31 billion vs.
$67.14 billion expected
- Costco does not provide year-ahead guidance
- Shares are down from a recent high of $855

Costco Fun Facts
- About 4.5 million pies were sold in the three days before Thanksgiving, which is equivalent to roughly 7,000 pies per warehouse.
- These were bakery pies (e.g., pumpkin, apple).
- Costco had more than $250 million in non-food online orders on Black Friday, a record for Costco's U.S. e-commerce business.
- Approximately 358,000 whole pizzas were served at Costco's U.S. food courts, a 31% jump from last year (about 500 pizzas per store).

Fat No More - Retatrutide
- Eli Lilly said its next-generation obesity drug delivered what appears to be the highest weight loss seen so far in a late-stage trial and reduced knee arthritis pain, clearing the first of several upcoming studies on the weekly injection.
- In a 48-week Phase 2 study, participants on the highest dose lost an average of 24% of their body weight.
- Recent Phase 3 results showed patients on the highest dose lost an average of 28.7% of their body weight after 68 weeks.
- The trials also showed improvements in related health conditions, including knee osteoarthritis pain, blood pressure, and liver fat.
- This triple action is what makes retatrutide potentially more effective for weight loss than existing medications like Zepbound (tirzepatide), which targets two receptors, or Wegovy (semaglutide), which targets only one.

PayPal
- PayPal Holdings Inc. applied to become a bank in the US, looking to take advantage of the Trump administration's openness to financial-technology companies entering the banking system.
- The payments-focused firm submitted applications to the Federal Deposit Insurance Corp. and the Utah Department of Financial Institutions to form a Utah-chartered industrial loan company, PayPal said in a statement Monday.
- If approved, PayPal Bank would help the firm bolster its small-business lending capabilities, according to the statement, which said the company has provided access to more than $30 billion in loans and capital since 2013.

Ford - Management Confused
- Instead of planning to make enough electric vehicles to account for 40 percent of global sales by 2030, as it pledged just four years ago, Ford says it will focus on a broader range of hybrids, extended-range electrics, and battery-electric models, which executives now say will account for 50 percent of sales by the end of the decade.
- The automaker will make hybrid versions of almost every vehicle in its lineup, the company says.
- Going all in on EVs cost them: Ford expects to record about $19.5 billion in special items, mostly during the fourth quarter. The charges are related to a restructuring of its business priorities and a pullback in its all-electric vehicle investments.

Australia
- Australia has implemented a groundbreaking ban preventing children under 16 from accessing major social media platforms like TikTok, Instagram, and Facebook, effective December 2025, to protect them from harm, with significant fines for companies failing to enforce it, though messaging apps and gaming platforms are currently exempt.
- Reddit is suing.
- Facebook, Instagram, Snapchat, Threads, TikTok, X (Twitter), YouTube, Reddit, Kick, and Twitch are all banned for kids under 16.
- Thoughts on this?

Saved For Last
- Of all the eye-popping numbers that Oracle Corp. published last week on the costs of its artificial-intelligence data center buildout, the most striking didn't appear until the day after its earnings press release and analyst call.
- The more comprehensive 10-Q earnings report that appeared on Thursday detailed $248 billion of lease-payment commitments, "substantially all" related to data centers and cloud capacity arrangements, the business-software firm said.
These are due to commence between now and its 2028 financial year, but they're not yet included on its balance sheet.
- That's almost $150 billion more than was disclosed in the footnotes of September's earnings update.

Love the Show? Then how about a Donation?

The Winner for iShares Bitcoin Trust ETF (IBIT)
Winners will be getting great stuff like the new "OFFICIAL" DHUnplugged Shirt!

CTP Cup 2025 Participants:
Jim Beaver
Mike Kazmierczak
Joe Metzger
Ken Degel
David Martin
Dean Wormell
Neil Larion
Mary Lou Schwarzer
Eric Harvey (2024 Winner)

FED AND CRYPTO LIMERICKS

See this week's stock picks HERE

Follow John C. Dvorak on Twitter
Follow Andrew Horowitz on Twitter