Podcasts about AGI

  • 1,685 podcasts
  • 5,452 episodes
  • 41m average duration
  • 4 daily new episodes
  • Latest: Jul 17, 2025

Best podcasts about AGI


Latest podcast episodes about AGI

Camp Gagnon
Is AI Actually Causing Unemployment? (Expert Reveals the Truth)

Camp Gagnon

Jul 17, 2025 · 95:10


What is the current state of AI? Today, we explore artificial intelligence and see how it is becoming a regular part of human life. We'll discuss how AI has changed education, the difference between AI and AGI, AI's role in convenience, the seemingly paranormal ability to read people, and other interesting topics. WELCOME TO CAMP!

Personal Development Mastery
#519 Your voices matter: reflections from listeners of Personal Development Mastery.

Personal Development Mastery

Jul 17, 2025 · 6:51 · Transcription available


What if a podcast conversation could spark real change in your life? In this unique episode, Agi shares heartfelt reflections from four listeners whose stories reveal the deep impact of connection, authenticity, and personal growth. If you're striving to bridge the gap between learning and doing, this episode will resonate deeply. Hear how Agi's conversations with listeners like Laura, Karen, and T.R. highlight the power of community and the importance of turning insight into consistent action, all while shaping the future of the podcast itself. Listen now to feel seen, inspired, and reminded that your journey matters.

VALUABLE RESOURCES: Join our growing community at MasterySeekersTribe.com, where self-mastery seekers come together for connection and growth.

The Other Side of Midnight with Frank Morano
Hour 3: Bourdain, Bots & Beyond | 07-16-25

The Other Side of Midnight with Frank Morano

Jul 16, 2025 · 57:05


Lionel examines the suspicious death of Anthony Bourdain, highlighting inconsistencies in the official suicide narrative, his outspoken criticism of elites and trafficking, and parallels to other celebrity deaths. Lionel also discusses artificial intelligence (AI) and artificial general intelligence (AGI), including the existential threat posed by AGI's recursive self-improvement. He talks to an 88-year-old "conspiracy person" and former radio professional. Another caller, an addict in recovery, discusses the "free high" of anesthesia during medical procedures and a potential future of brain-stimulated highs. A third caller asks about the implications of AI-generated performers like Elvis holograms. Learn more about your ad choices. Visit megaphone.fm/adchoices

The Dawn of Dynamic AI: RFT Comes Online, w/ Predibase CEO Dev Rishi, from Inference by Turing Post

Jul 16, 2025 · 38:47


This crossover episode from Inference by Turing Post features CEO Dev Rishi of Predibase discussing the shift from static to continuously learning AI systems that can adapt and improve from ongoing user feedback in production. Rishi provides grounded insights from deploying these dynamic models to real enterprise customers in healthcare and finance, exploring both the massive potential upside and the significant safety challenges of reinforcement learning at scale. The conversation examines how "practical specialized intelligence" could reshape the AI landscape by filling economic niches efficiently, potentially offering a more stable alternative to AGI development. This discussion bridges theoretical concepts with real-world deployment experience, offering a practical preview of AI systems that "train once and learn forever."

Turing Post channel: @RealTuringPost
Turing Post website: https://www.turingpost.com

Sponsors:
Google Gemini 2.5 Flash: Build faster, smarter apps with customizable reasoning controls that let you optimize for speed and cost. Start building at https://aistudio.google.com
Labelbox: Labelbox pairs automation, expert judgment, and reinforcement learning to deliver high-quality training data for cutting-edge AI. Put its data factory to work for you at https://labelbox.com
Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud that delivers better performance, faster speeds, and significantly lower costs, including up to 50% less for compute, 70% for storage, and 80% for networking. Run any workload, from infrastructure to AI, in a high-availability environment and try OCI for free with zero commitment at https://oracle.com/cognitive
The AGNTCY: The AGNTCY is an open-source collective dedicated to building the Internet of Agents, enabling AI agents to communicate and collaborate seamlessly across frameworks. Join a community of engineers focused on high-quality multi-agent software and support the initiative at https://agntcy.org
NetSuite by Oracle: NetSuite by Oracle is the AI-powered business management suite trusted by over 42,000 businesses, offering a unified platform for accounting, financial management, inventory, and HR. Gain total visibility and control to make quick decisions and automate everyday tasks. Download the free ebook, Navigating Global Trade: Three Insights for Leaders, at https://netsuite.com/cognitive

PRODUCED BY: https://aipodcast.ing

CHAPTERS:
(00:00) Sponsor: Google Gemini 2.5 Flash
(00:31) About the Episode
(03:46) Training Models Continuously
(05:03) Reinforcement Fine-Tuning Revolution
(09:31) Agentic Workflows Challenges (Part 1)
(12:51) Sponsors: Labelbox | Oracle Cloud Infrastructure
(15:28) Agentic Workflows Challenges (Part 2)
(15:41) ChatGPT Pivot Moment
(19:59) Planning AI Future
(24:45) Open Source Gaps (Part 1)
(28:35) Sponsors: The AGNTCY | NetSuite by Oracle
(30:50) Open Source Gaps (Part 2)
(30:54) AGI vs Specialized
(35:26) Happiness and Success
(37:04) Outro

Les Cast Codeurs Podcast
LCC 328 - Expert généraliste cherche Virtual Thread

Les Cast Codeurs Podcast

Jul 16, 2025 · 90:13


In this episode, Emmanuel and Antonio discuss various development topics: applets (yes, really), iOS apps built on Linux, the A2A protocol, accessibility, command-line AI coding assistants (you won't escape them)... but also methodological and architectural approaches such as hexagonal architecture, tech radars, the expert generalist, and much more. Recorded on July 11, 2025. Download the episode (LesCastCodeurs-Episode-328.mp3) or watch the video on YouTube.

News

Languages

Java applets are finally gone for good... well, soon: https://openjdk.org/jeps/504 Web browsers no longer support applets. The Applet API and the appletviewer tool were deprecated in JDK 9 (2017). The appletviewer tool was removed in JDK 11 (2018); since then it has been impossible to run applets with the JDK. The Applet API was marked for removal in JDK 17 (2021). The Security Manager, essential for running applets securely, was permanently disabled in JDK 24 (2025).

Libraries

Quarkus 3.24 introduces extensions that can provide capabilities to assistants: https://quarkus.io/blog/quarkus-3-24-released/ Assistants, typically AI ones, get access to extension capabilities: for example, generating a client from an OpenAPI spec, or offering access to the database schema in dev mode.

Hibernate 7 integration in Quarkus: https://quarkus.io/blog/hibernate7-on-quarkus/ The Jakarta Data API, the new restrictions feature, SchemaManager injection.

Micronaut 4.9 is out: https://micronaut.io/2025/06/30/micronaut-framework-4-9-0-released/ Core: upgrade to Netty 4.2.2 (careful, this may affect performance). New experimental "event loop carrier" mode to run virtual threads on the Netty event loop. New @ClassImport annotation to process already-compiled classes.
@Mixin annotations (Java only) arrive to modify Micronaut annotation metadata without altering the original classes. HTTP/3: dependency change for the experimental support. Graceful shutdown: new API to shut applications down cleanly. Cache Control: fluent API to easily build the HTTP Cache-Control header. KSP 2: support for KSP 2 (from 2.0.2), tested with Kotlin 2. Jakarta Data: implementation of the Jakarta Data 1.0 specification. gRPC: JSON support for sending serialized messages via an HTTP POST. ProjectGen: new experimental module to generate JVM projects (Gradle or Maven) via an API.

A great article on experimenting with reactive event loops on virtual threads: https://micronaut.io/2025/06/30/transitioning-to-virtual-threads-using-the-micronaut-loom-carrier/ Unfortunately it required hacking the JDK. It is a Micronaut article, but the work was done in collaboration with the Red Hat OpenJDK, Red Hat performance, Quarkus, and Vert.x teams. A good read for the curious.

Ubuntu offers a container-building tool, notably for Spring: https://canonical.com/blog/spring-boot-containers-made-easy It creates OCI images for Spring Boot applications based on Ubuntu base images, and of course uses jlink to reduce image size. Not sure the advantage is significant over other, more portable solutions; in any case, Canonical is joining the dance of OpenJDK builds.

The Java SDK for A2A, contributed by Red Hat, has been released: https://quarkus.io/blog/a2a-project-launches-java-sdk/ A2A is a protocol initiated by Google and donated to the Linux Foundation. It lets agents describe themselves and interact with each other: agent cards, skills, tasks, context. A2A complements MCP. Red Hat implemented the Java SDK with guidance from the Google teams; with a few annotations and classes you get an agent card, an A2A client, and a server exchanging messages over the A2A protocol.

How to configure Mockito without warnings after Java 21: https://rieckpil.de/how-to-configure-mockito-agent-for-java-21-without-warning/ Dynamically loaded agents are discouraged and will soon be forbidden. One of their uses is Mockito via Byte Buddy; the advantage was that the configuration was transparent, but security obliges, that is over. The article describes how to configure Maven and Gradle to attach the agent when tests start, and how to configure the same thing in IntelliJ IDEA. Less simple, unfortunately.

Web

"Selfish" reasons to make UIs more accessible: https://nolanlawson.com/2025/06/16/selfish-reasons-for-building-accessible-uis/ Selfish reasons: personal benefits for developers in building accessible user interfaces (UIs), beyond the moral arguments. Easier debugging: an accessible interface with a clear semantic structure is easier to debug than messy "div soup". Standardized names: accessibility provides a standard vocabulary (for example, the WAI-ARIA guidelines) for naming UI components, which helps code clarity and structure. Simpler tests: it is easier to write automated tests for accessible UI elements, because they can be targeted more reliably and semantically.

After 20 years of stagnation, the PNG image format specification is finally evolving: https://www.programmax.net/articles/png-is-back/ Goal: keep the format relevant and competitive. Endorsement: backed by institutions such as the US Library of Congress. Key new features: HDR (High Dynamic Range) support for a wider color gamut; official recognition of animated PNGs (APNG); Exif metadata support (copyright, geolocation, etc.). Current support: already integrated in Chrome, Safari, Firefox, iOS, macOS, and Photoshop. Next: the following edition will focus on HDR/SDR interoperability.
The edition after that: compression improvements.

With the open-source Xtool project, you can now build iOS applications on Linux or Windows, without necessarily needing a Mac: https://xtool.sh/tutorials/xtool/ A very well-made tutorial explains how: create a new project with the xtool new command; generate a Swift package with key files like Package.swift and xtool.yml; build and run the app on an iOS device with xtool dev; connect the device over USB and handle pairing and Developer Mode. xtool automatically manages certificates, provisioning profiles, and app signing. Modify the UI code (e.g., ContentView.swift), then quickly rebuild and reinstall the updated app with xtool dev. On the IDE side, xtool is based on VS Code.

Data and Artificial Intelligence

A new edition of the worldwide best seller "Understanding LangChain4j": https://www.linkedin.com/posts/agoncal_langchain4j-java-ai-activity-7342825482830200833-rtw8/ API updates (from LangChain4j 0.35 to 1.1.0), new chapters on MCP / Easy RAG / JSON response, new models (GitHub Models, DeepSeek, Foundry Local), updates to existing models (GPT-4.1, Claude 3.7...).

Google donates A2A to the Linux Foundation: https://developers.googleblog.com/en/google-cloud-donates-a2a-to-linux-foundation/ Announcement of the Agent2Agent (A2A) project: at the Open Source Summit North America, the Linux Foundation announced the creation of the Agent2Agent project, in partnership with Google, AWS, Microsoft, Cisco, Salesforce, SAP, and ServiceNow. Goal of the A2A protocol: establish an open standard that lets artificial intelligence (AI) agents communicate, collaborate, and coordinate complex tasks with each other, regardless of vendor.
Transfer from Google to the open-source community: Google handed the A2A protocol specification, the associated SDKs, and the development tools over to the Linux Foundation to guarantee neutral, community-driven governance. Industry support: more than 100 companies already back the protocol, AWS and Cisco being the latest to endorse it; every partner company stressed the importance of interoperability and open collaboration for the future of AI. Goals of the A2A foundation: establish a universal standard for AI agent interoperability, foster a global ecosystem of developers and innovators, guarantee neutral and open governance, and accelerate secure, collaborative innovation. We will surely have the opportunity to come back to this specification.

Gemini CLI: https://blog.google/technology/developers/introducing-gemini-cli-open-source-ai-agent/ An AI agent in the terminal: Gemini CLI lets you use the Gemini AI directly from the terminal. Free with a Google account: access to Gemini 2.5 Pro with generous limits. Powerful features: generates code, runs commands, automates tasks. Open source: customizable and extensible by the community. Complements Code Assist: also works with IDEs such as VS Code.

Instead of blocking AIs from your sites, you may be able to guide them with LLMs.txt files: https://llmstxt.org/ Examples from the Angular project: llms.txt, a simple index with links: https://angular.dev/llms.txt; llms-full.txt, a much more detailed version: https://angular.dev/llms-full.txt

Tooling

Commits in Git are immutable, but did you know you can add or update "notes" on existing commits? https://tylercipriani.com/blog/2022/11/19/git-notes-gits-coolest-most-unloved-feature/ Little-known feature: git notes is a powerful but rarely used Git feature.
Adding metadata: lets you attach information to existing commits without changing their hash. Use cases: ideal for attaching data from automated systems (builds, tickets, etc.). Distributed code review: tools like git-appraise were built on git notes to enable fully distributed code review, independent of forges (GitHub, GitLab). Unpopular: its clunky interface and the lack of support from forge platforms have limited its adoption (GitHub does not even display notes anymore). Forge independence: git notes offers a path toward greater independence from centralized platforms, by distributing the project's history along with the code itself.

A look at the Spring Boot debugger in IntelliJ IDEA Ultimate: https://blog.jetbrains.com/idea/2025/06/demystifying-spring-boot-with-spring-debugger/ Shows this tool, which provides Spring-specific context such as inactive beans, mocked beans, config values, and transaction state. It displays all Spring beans directly in the project view, with non-instantiated beans grayed out and mocked beans marked in orange for tests. It solves property resolution by showing the effective value in real time in properties and YAML files, along with the exact source of overridden values. It shows visual indicators for methods executed inside active transactions, with full transaction details and a visual hierarchy for nested transactions. It automatically detects all active DataSource connections and integrates them with IntelliJ IDEA's Database tool window for inspection. It enables auto-completion and invocation of all loaded beans in the expression evaluator, working like a REPL for the Spring context. It works without an additional runtime agent, using non-suspending breakpoints in the Spring Boot libraries to analyze data locally.

A community-maintained list of AI coding assistants, started by Lize Raes: https://aitoolcomparator.com/ A comparison table showing which features each tool supports.

Architecture

An article on hexagonal architecture in Java: https://foojay.io/today/clean-and-modular-java-a-hexagonal-architecture-approach/ An introductory article, with an example, on hexagonal architecture split across domain, application, and infrastructure. The domain has no dependencies; the application layer is specific to the application but has no technical dependencies, and the article explains the flow; the infrastructure carries the dependencies on your frameworks (Spring, Quarkus, Micronaut, Kafka, etc.). I am naturally not a fan of hexagonal architecture in terms of code volume versus gain, especially in microservices, but it is always interesting to challenge yourself and weigh cost against benefit.

Keep an eye on technologies with tech radars: https://www.sfeir.dev/cloud/tech-radar-gardez-un-oeil-sur-le-paysage-technologique/ The tech radar is crucial for continuous technology watch and informed decision-making. It categorizes technologies as Adopt, Trial, Assess, or Hold, according to their maturity and relevance. It is recommended to build your own tech radar, adapted to your specific needs and inspired by the public radars. Use discovery tools (AlternativeTo), trend tools (Google Trends), end-of-life tracking (End-of-life.date), and learning resources (roadmap.sh). Stay informed through blogs, podcasts, newsletters (TLDR), and social networks/communities (X, Slack). The goal is to stay competitive and make strategic technology choices.
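As a minimal Java sketch of the hexagonal split described in the architecture article above (all names here are illustrative, not taken from the article): the domain defines a port, the application drives it, and the infrastructure supplies the adapter that would carry the framework dependencies.

```java
import java.util.ArrayList;
import java.util.List;

// Domain: framework-free, defines the port it needs (driven side).
interface OrderRepository {
    void save(String orderId);
}

// Application: use-case logic, still without technical dependencies.
final class PlaceOrder {
    private final OrderRepository repository;
    PlaceOrder(OrderRepository repository) { this.repository = repository; }
    String handle(String orderId) {
        repository.save(orderId);   // persistence goes through the port
        return "placed:" + orderId;
    }
}

// Infrastructure: the adapter; in a real app this is where Spring,
// Quarkus, Micronaut, or Kafka dependencies would live.
final class InMemoryOrderRepository implements OrderRepository {
    final List<String> saved = new ArrayList<>();
    @Override public void save(String orderId) { saved.add(orderId); }
}

public class HexagonalDemo {
    public static void main(String[] args) {
        InMemoryOrderRepository adapter = new InMemoryOrderRepository();
        System.out.println(new PlaceOrder(adapter).handle("42")); // prints "placed:42"
    }
}
```

Swapping the in-memory adapter for a database-backed one changes nothing in the domain or application layers, which is the whole point of the pattern, at the cost of the extra interfaces the episode's critique mentions.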
Be careful not to underestimate its maintenance cost.

Methodologies

The concept of the expert generalist: https://martinfowler.com/articles/expert-generalist.html The industry pushes toward narrow specialization, but the most effective colleagues excel in several domains at once. An experienced Python developer can quickly become productive on a Java team thanks to shared fundamental concepts. Real expertise has two sides: depth in one domain, and the ability to learn quickly. Expert generalists build durable mastery at the level of fundamental principles rather than specific tools. Curiosity is essential: they explore new technologies and make sure they understand the answers instead of copy-pasting code. Collaboration is vital, because they know they cannot master everything and they work effectively with specialists. Humility leads them to first understand why things work a certain way before questioning them. Customer focus channels their curiosity toward what actually helps users excel at their work. The industry should treat "Expert Generalist" as a first-class skill to be named, assessed, and trained. It reminds me of the technical staff role.

An article on business metrics and their value: https://blog.ippon.fr/2025/07/02/monitoring-metier-comment-va-vraiment-ton-service-2/ A reminder of the value of business monitoring. Traditional technical monitoring (CPU, servers, APIs) does not guarantee that the service actually works for the end user. Business monitoring complements technical monitoring by focusing on the real user experience rather than on isolated components. It watches concrete critical journeys, such as "can a customer complete their order?", instead of abstract indicators.
Business metrics are directly actionable: success rates, average delays, and error volumes make it possible to prioritize actions. It is a strategic steering tool that improves responsiveness, prioritization, and the dialogue between technical and business teams. Setting it up takes five steps: a reliable technical dashboard, identification of critical journeys, translation into indicators, centralization, and long-term follow-up. A Definition of Done should formalize objective criteria before instrumenting any business journey. Measurable indicators include passed/failed checkpoints, time between actions, and compliance with business rules. Dashboards should be part of daily rituals, with understandable real-time alerts. The setup must evolve continuously with product changes, questioning every incident to improve detection. The difficulty is indeed business variation, for example few orders at night; this is part of the SRE toolbox.

Security

Still looking for the S for Security in MCP: https://www.darkreading.com/cloud-security/hundreds-mcp-servers-ai-models-abuse-rce An analysis of open, publicly reachable MCP servers. Many do no sanity checking of parameters: if you use them in your GenAI calls, you expose yourself. They are not fundamentally bad, but there is no security standardization yet. For local use, prefer stdio, or restrict SSE to 127.0.0.1.

Law, society, and organizations

Nicolas Martignole, the same person who created the Cast Codeurs logo, wonders about the paths open to developers facing AI's impact on our craft: https://touilleur-express.fr/2025/06/23/ni-manager-ni-contributeur-individuel/ The evolution of developer careers: AI is transforming the traditional paths (manager or technical expert).
AI orchestra conductor: a former manager who drives AIs, defines architectures, and validates generated code. Augmented artisan: a developer using AI as a tool to code faster and solve complex problems. Code philosopher: a new role centered on the "why" of code, the conceptualization of systems, and AI ethics. Validation cognitive load: a new mental burden created by the need to verify the AIs' work. Reflection on impact: the article invites you to choose your impact: orchestrate, create, or guide.

Training AIs on copyrighted books is acceptable (fair use), but storing them is not: https://www.reuters.com/legal/litigation/anthropic-wins-key-ruling-ai-authors-copyright-lawsuit-2025-06-24/ A win for Anthropic (until the next trial): the company prevailed in a closely watched lawsuit over training its AI, Claude, on copyrighted works. "Fair use" prevails: the judge found that using the books to train the AI qualified as fair use, since it is a transformation of the content, not a simple reproduction. An important nuance: however, storing those works in a "central library" without authorization was ruled illegal, which underlines the complexity of data management for AI models.

Luc Julia's hearing before the French Senate: https://videos.senat.fr/video.5486945_685259f55eac4.ia–audition-de-luc-julia-concepteur-de-siri Love him or not, here is Luc Julia and his vision of AI. It is an even longer version of the same themes as his Devoxx France 2025 keynote (https://www.youtube.com/watch?v=JdxjGZBtp_k). Nature and limits of AI: Luc Julia insisted that artificial intelligence is an "evolution" rather than a "revolution", and reminded the audience that it rests on mathematics and is not "magic".
He also warned about the unreliability of information produced by generative AIs such as ChatGPT, stressing that "you cannot trust them": they can be wrong, and their relevance declines over time. Regulating AI: he argued for "intelligent, informed" regulation, applied a posteriori so as not to stifle innovation; in his view, this regulation must be based on facts, not on an a priori risk analysis. France's position: Luc Julia stated that France has top-level researchers and ranks among the world's best in AI, though he raised the problem of funding research and innovation in France. AI and society: the hearing covered AI's impact on privacy, the world of work, and education. Luc Julia stressed the importance of developing critical thinking, especially among young people, to learn to verify AI-generated information. Concrete and future applications: the case of the self-driving car was discussed, with Luc Julia explaining the different autonomy levels and the remaining challenges. He also stated that artificial general intelligence (AGI), an AI that would surpass humans in every domain, is "impossible" with current technologies.

Beginner's corner

Weak references and finalize: https://dzone.com/articles/advanced-java-garbage-collection-concepts A useful little reminder of the pitfalls of the finalize method, which may never be invoked, and of the bug risks if finalize never finishes. finalize makes the garbage collector's job much more complex and inefficient. Weak references are useful, but when they are released is not under your control, so do not overuse them. There are also soft and phantom references, but their usage is quite subtle and complex, depending on the GC.
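A tiny Java sketch of the WeakReference behavior discussed above. Clearing is entirely at the GC's discretion, so the example only asserts what is guaranteed: while a strong reference exists, the referent cannot be cleared.

```java
import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) {
        Object value = new Object();
        WeakReference<Object> ref = new WeakReference<>(value);

        // Guaranteed: the referent is reachable while 'value' holds it.
        System.out.println(ref.get() == value);   // true

        value = null;   // drop the only strong reference
        System.gc();    // only a hint; the JVM may or may not collect now

        // After collection the reference is cleared; before it, get()
        // still returns the object. Neither outcome is under our control.
        System.out.println(ref.get() == null ? "collected" : "still reachable");
    }
}
```

This is exactly the "release is not controllable" point from the show notes: correct code must handle get() returning null at any moment once the referent is only weakly reachable.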
The serial collector processes weak references before soft ones; the parallel collector does not. For G1 it depends on the region; for Z it depends, because the processing is asynchronous.

Conferences

The list of conferences, from the Developers Conferences Agenda/List by Aurélie Vache and contributors:
14-19 July 2025: DebConf25 - Brest (France)
5 September 2025: JUG Summer Camp 2025 - La Rochelle (France)
12 September 2025: Agile Pays Basque 2025 - Bidart (France)
18-19 September 2025: API Platform Conference - Lille (France) & Online
22-24 September 2025: Kernel Recipes - Paris (France)
23 September 2025: OWASP AppSec France 2025 - Paris (France)
25-26 September 2025: Paris Web 2025 - Paris (France)
2 October 2025: Nantes Craft - Nantes (France)
2-3 October 2025: Volcamp - Clermont-Ferrand (France)
3 October 2025: DevFest Perros-Guirec 2025 - Perros-Guirec (France)
6-7 October 2025: Swift Connection 2025 - Paris (France)
6-10 October 2025: Devoxx Belgium - Antwerp (Belgium)
7 October 2025: BSides Mulhouse - Mulhouse (France)
9 October 2025: DevCon #25: quantum computing - Paris (France)
9-10 October 2025: Forum PHP 2025 - Marne-la-Vallée (France)
9-10 October 2025: EuroRust 2025 - Paris (France)
16 October 2025: PlatformCon25 Live Day Paris - Paris (France)
16 October 2025: Power 365 - 2025 - Lille (France)
16-17 October 2025: DevFest Nantes - Nantes (France)
17 October 2025: Sylius Con 2025 - Lyon (France)
17 October 2025: ScalaIO 2025 - Paris (France)
20 October 2025: Codeurs en Seine - Rouen (France)
23 October 2025: Cloud Nord - Lille (France)
30-31 October 2025: Agile Tour Bordeaux 2025 - Bordeaux (France)
30-31 October 2025: Agile Tour Nantais 2025 - Nantes (France)
30 October-2 November 2025: PyConFR 2025 - Lyon (France)
4-7 November 2025: NewCrafts 2025 - Paris (France)
5-6 November 2025: Tech Show Paris - Paris (France)
6 November 2025: dotAI 2025 - Paris (France)
6 November 2025: Agile Tour Aix-Marseille 2025 - Gardanne (France)
7 November 2025: BDX I/O - Bordeaux (France)
12-14 November 2025: Devoxx Morocco - Marrakech (Morocco)
13 November 2025: DevFest Toulouse - Toulouse (France)
15-16 November 2025: Capitole du Libre - Toulouse (France)
19 November 2025: SREday Paris 2025 Q4 - Paris (France)
20 November 2025: OVHcloud Summit - Paris (France)
21 November 2025: DevFest Paris 2025 - Paris (France)
27 November 2025: DevFest Strasbourg 2025 - Strasbourg (France)
28 November 2025: DevFest Lyon - Lyon (France)
1-2 December 2025: Tech Rocks Summit 2025 - Paris (France)
5 December 2025: DevFest Dijon 2025 - Dijon (France)
9-11 December 2025: APIdays Paris - Paris (France)
9-11 December 2025: Green IO Paris - Paris (France)
10-11 December 2025: Devops REX - Paris (France)
10-11 December 2025: Open Source Experience - Paris (France)
28-31 January 2026: SnowCamp 2026 - Grenoble (France)
2-6 February 2026: Web Days Convention - Aix-en-Provence (France)
3 February 2026: Cloud Native Days France 2026 - Paris (France)
12-13 February 2026: Touraine Tech #26 - Tours (France)
22-24 April 2026: Devoxx France 2026 - Paris (France)
23-25 April 2026: Devoxx Greece - Athens (Greece)
17 June 2026: Devoxx Poland - Krakow (Poland)

Contact us

To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs
Contact us via X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com
Submit a crowdcast or a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All episodes and all the info at https://lescastcodeurs.com/

ESPURNA
Inteligencia artificial, Elon Musk, AGI y el futuro del ser humano con A. Losada y JL Chulilla

ESPURNA

Jul 16, 2025 · 91:59


Artificial intelligence, Elon Musk, AGI, and the future of the human being, with A. Losada and JL Chulilla (part 2). Listen to the full episode in the iVoox app, or discover the full iVoox Originals catalog.

SEO Unfiltered
Ep 4.2: LLM, GEO, AEO—What Do All These AI Names Mean?

SEO Unfiltered

Jul 16, 2025 · 10:11 · Transcription available


We remember back in the day when SEO and PPC were just about the only abbreviations we had to worry about, but then came along AI, and boy oh boy, have things gotten just a little bit more alphabetty. In this episode, Genny's going to run through the list of the most common AI-related terms that marketers are most likely to encounter on their mission to maximise their visibility on AI-powered search. In case you want spoilers, here's the list:

AI (Artificial Intelligence): Any technology that simulates human intelligence in machines, allowing them to perform tasks like recognising patterns, making predictions, and understanding language.

GenAI (Generative AI): A type of AI that can generate new content (i.e., text, images, music, or videos) based on its training data. Examples include ChatGPT, Midjourney, and deepfakes.

GEO (Generative Engine Optimization): The process of optimising web content to maximise your brand's visibility in generative AI systems (like ChatGPT) by ensuring your content gets selected as answers in AI-generated responses.

AEO (Answer Engine Optimization): Largely synonymous with GEO, AEO focuses on optimising content so that it appears in AI-generated answers, particularly in answer engines.

AIO (AI Overviews): The AI-generated content summaries at the top of Google's search results, which include a short overview with relevant links.

LLM (Large Language Model): A type of AI trained on vast amounts of text data that can process and generate human-like language. LLMs like GPT-3 can answer questions, generate text, and more, but they are based on fixed training data (they don't learn in real time).

NLP (Natural Language Processing): A subset of AI focused on helping machines understand, interpret, and generate human language, e.g., speech recognition, text analysis, and machine translation.

ML (Machine Learning): Another subset of AI, in which machines learn from data to recognise patterns and make predictions.

DL (Deep Learning): A part of machine learning that uses neural networks with multiple layers (hence the term "deep") to process and understand complex data.

AGI (Artificial General Intelligence): A type of AI that could learn and apply knowledge across various domains, just like humans. Unlike narrow AI, which excels at specific tasks (e.g., chatbots), AGI could perform a wide variety of tasks with human-like reasoning.

Hallucinations (AI Hallucinations): In this context, hallucinations are instances where a model generates incorrect or entirely made-up information that appears convincing.

GPT (Generative Pre-trained Transformer): A type of LLM (like the models behind ChatGPT) that uses the transformer architecture to generate human-like text based on large-scale pretraining. GPT models can generate, summarise, or answer questions based on the data they have been trained on.

Remember, we're at the very start of our AI journey, so don't let information overwhelm get in the way of AI adoption! Happy listening.
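The GenAI and LLM entries above share one core idea: a model learns statistical patterns from training text and then samples new text from those patterns. As a deliberately tiny, hypothetical sketch (a bigram chain, nothing like a production transformer), the mechanism looks like this:

```python
import random

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    model = {}
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Sample a short word sequence from the learned bigram counts."""
    rng = random.Random(seed)  # fixed seed so runs are repeatable
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:  # dead end: no observed continuation
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the model learns patterns and the model generates text from patterns"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Because such a model can only recombine sequences it has seen, it is also a crude intuition pump for the hallucinations entry: the output is always fluent-looking, but nothing in the sampling step checks it against facts.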

DeFi Slate
How Blockchain Fixes AI's Biggest Problem with Anand Iyer

DeFi Slate

Play Episode Listen Later Jul 14, 2025 53:04


What if the future of AGI isn't owned by a handful of tech giants? In today's episode, we sit down with Anand Iyer, a prominent AI crypto investor who's been living in the trenches of both worlds, to explore how $300 billion has flowed into centralized AI versus just $1 billion into decentralized AI, and why this disparity creates the biggest opportunity. We dig into the push to give AI ownership back to the people, and a bold prediction: the next billion-dollar project might be coded by just one developer and launched onchain. Let's explore.

The Rollup
---
Newton is the trust layer for autonomous finance. Smart. Secure. Verifiable. Built for a future where AI agents replace apps and interfaces. Learn more here: https://www.magicnewton.com/
Get effortless access to crypto's best DeFi yields, continually rebalanced by AI-powered Keepers to earn you more while saving you time and reducing costs. Learn more here: https://summer.fi/earn?referralCode=2000096
----

Secrets of Success
Andy Sack & Adam Brotman - AI First

Secrets of Success

Play Episode Listen Later Jul 14, 2025 27:50


Bill Horan talks with Adam Brotman and Andy Sack, authors of AI FIRST. Andy and Adam discuss the differences between AI, AGI, and ASI; why they say they were dumbfounded by ChatGPT 3.5; how people can learn more about AI just by using it for everyday simple tasks; and how we can be hurt by scammers if we don't keep up with AI.

En.Digital Podcast
La Tertul-IA #58 GROK 4, Navegadores Agénticos y Robots de Amazon

En.Digital Podcast

Play Episode Listen Later Jul 14, 2025 58:34


In this weekly Product Hackers roundtable, we discuss the hottest topics in artificial intelligence. We start with Grok 4, the new xAI model that promises to outperform PhDs on several benchmarks. Is Elon closer than anyone to AGI? We explore the rise of agentic browsers like Perplexity's Comet and DIA, which are reimagining the browsing experience with built-in AI. And plenty more news.

This Week in Startups
Grok 4 Beats OpenAI + The $300 AI Agent Era | E2150

This Week in Startups

Play Episode Listen Later Jul 12, 2025 71:04


Today's show: Grok 4 just leapfrogged OpenAI to become the top AI model, and it's not just hype. In this episode, @Jason and @alex break down Grok's AGI-level performance, the massive drop in LLM pricing, and why some companies are raising prices anyway. They also dive into the Missouri AG's investigation into AI “bias,” the future of First Amendment protections for LLMs, and how autonomous vehicles are creating a new category: “autonomous commerce.” If you're building with AI or betting on the future of tech, don't miss this one.

Timestamps:
(1:55) AI models: Grok 4 and performance benchmarks
(3:51) Detailed analysis of AI model performance and price trends
(10:11) Lemon.io - Get 15% off your first 4 weeks of developer time at https://Lemon.io/twist
(11:25) AI models' problem-solving capabilities and timeline for solving math problems
(16:53) Legal and regulatory challenges for AI
(19:56) Retool - Visit https://www.retool.com/twist and try it out today.
(21:12) Bias in AI models and political implications
(30:41) Vouched - Trust for agents that's built for builders like you. Check it out at http://vouched.id/twist
(32:07) Infinite energy potential and AI impact; Bitcoin's new high
(37:11) Crypto regulation and fintech under new administration
(45:50) Future of storage, computing power, and GPU lifespan in data centers
(53:40) Claude segment by Anthropic
(55:09) Guest Ben Seidl of Autolane introduction
(57:19) Autolane's impact on autonomous vehicles and commerce
(59:05) Rise of autonomous commerce and logistics
(1:06:37) Retailer issues with autonomous vehicle integration and orchestration

Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com
Check out the TWIST500: https://www.twist500.com
Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp
Follow Lon: X: https://x.com/lons
Follow Alex: X: https://x.com/alex LinkedIn: https://www.linkedin.com/in/alexwilhelm
Follow Jason: X: https://twitter.com/Jason LinkedIn: https://www.linkedin.com/in/jasoncalacanis

Thank you to our partners:
(10:11) Lemon.io - Get 15% off your first 4 weeks of developer time at https://Lemon.io/twist
(19:56) Retool - Visit https://www.retool.com/twist and try it out today.
(30:41) Vouched - Trust for agents that's built for builders like you. Check it out at http://vouched.id/twist

Great TWIST interviews: Will Guidara, Eoghan McCabe, Steve Huffman, Brian Chesky, Bob Moesta, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarland

Check out Jason's suite of newsletters: https://substack.com/@calacanis

Follow TWiST:
Twitter: https://twitter.com/TWiStartups
YouTube: https://www.youtube.com/thisweekin
Instagram: https://www.instagram.com/thisweekinstartups
TikTok: https://www.tiktok.com/@thisweekinstartups
Substack: https://twistartups.substack.com

Subscribe to the Founder University Podcast: https://www.youtube.com/@founderuniversity1916

4biddenknowledge Podcast
The End of Jobs? Post-Labor Economy & AI Revolution Explained | Wealth in Automation Era

4biddenknowledge Podcast

Play Episode Listen Later Jul 11, 2025 39:59


Is this the end of jobs as we know them? Join Billy Carson as he dives into the post-labor economy and the unstoppable rise of AI and automation reshaping the workforce. Discover how robotics, AI surgeons, and automation are replacing traditional jobs and what this means for the future of work. Learn how to pivot your career, leverage decentralized finance (DeFi), and survive the coming economic shifts with strategies for passive income and financial sovereignty. Whether you're worried about job loss or curious about the future of wealth, this video covers it all, from universal basic income to the rise of artificial general intelligence (AGI) and beyond. Learn more about ARC and the future of earning money here - https://go.arcrypto.io/3Hd8h23

Podcasty Aktuality.sk
AI prehľad: Temná stránka OpenAI, Grok 4 a Zuckova nákupná horúčka

Podcasty Aktuality.sk

Play Episode Listen Later Jul 11, 2025 50:58


Is it right for AI models to train on our data without our knowledge, and for people in developing countries to work in traumatizing conditions for a few dollars? At the same time, AI is showing immense power in medicine, where it already outperforms human doctors and solves problems that had been intractable for years. Into this steps the European Union with strict regulation that is causing panic among the tech giants.

In this new episode of the SHARE podcast, Živé.sk editors Maroš Žofčin and Lucia Kobzová discuss the most important AI news of the past two weeks and what it means for our future.

The editors of Živé.sk are publishing a book: from a series of interviews, you'll learn how artificial intelligence will soon fundamentally change the world around us. Currently available with a 30% summer discount:
TIP: https://zive.aktuality.sk/clanok/0RfdZVW/nahliadnite-do-buducnosti-vydavame-knihu-o-umelej-inteligencii/

In the podcast we also talk about:
Why OpenAI's practices are being compared to colonialism.
How OpenAI traumatized its workers in Kenya.
Is Elon Musk's new Grok 4 just another marketing trick?
Will Mark Zuckerberg's new lab manage to develop superintelligence?
Why the tech giants fear the new rules from Brussels.
How AI helped a couple conceive a child after 18 years.

We also cover these topics here:
https://zive.aktuality.sk/clanok/5oD71Bu/zatial-najpokrocilejsia-ai-od-elona-muska-je-realitou-tymto-chce-zaujat-novy-grok-4/
https://zive.aktuality.sk/clanok/uS7CFeW/je-radovo-presnejsia-ako-skuseni-lekari-predstavili-revolucnu-ai-ako-to-funguje/
https://zive.aktuality.sk/clanok/xEX9Kd1/18-rokov-sa-im-nedarilo-otehotniet-ai-za-par-minut-ich-problem-vyriesila/
https://zive.aktuality.sk/clanok/wTIp0IR/onkolog-ai-moze-vytvarat-lieky-na-rakovinu-ktorych-vyvoj-je-dnes-nerentabilny-podcast/

Other links mentioned in the podcast:
Karen Hao's essay: We'll definitely build a bunker before launching AGI
The seals that help people cope with serious illnesses
A humanoid robot that delivers medication and accompanies children to examinations in hospitals

The SHARE podcast is produced by the Živé.sk magazine.

MIT Sloan Management Review Polska
Limity AI: #9 Czego AI nie umie i nie będzie umieć? Prekursorskie badania w obszarze inżynierii promptów

MIT Sloan Management Review Polska

Play Episode Listen Later Jul 11, 2025 114:56


In the ninth episode of "Limity AI" ("The Limits of AI"), we talk about what the big-tech marketing narrative leaves unsaid: the objective limitations of the LLM/LRM architectures being developed today. How do human reasoning and human intelligence differ from the workings of language models? What does general intelligence actually consist of? How should we treat declarations about work on constructing its synthetic counterparts? Why might it be better to speak of "computational simulators" rather than "AI"? What are the practical business risks of uncritically anthropomorphizing them? And also: what the instability of LLMs/LRMs consists of, what prompt effectiveness depends on, and how to improve that effectiveness. Special Guest: dr Paweł Szczęsny.

The Glenn Beck Program
How Trump Can Fix Epstein Files Fallout Overnight | Guests: Douglass Mackey & Edwin Black | 7/10/25

The Glenn Beck Program

Play Episode Listen Later Jul 10, 2025 131:41


After X's artificial intelligence chatbot, Grok, appeared to go rogue and begin posting anti-Semitic posts and vulgar descriptions of politicians, Elon Musk rolled out the newest iteration, Grok 4. Glenn warns that this is just the biggest, and possibly the last, step toward AGI. Soon, society won't be able to keep up with the speed at which AI will progress. Douglass Mackey, the man sentenced to seven months in prison for posting an election meme, joins to discuss his conviction being recently overturned. Mackey also details how he was targeted, the obscure law used against him, and how much money this political targeting cost him. Bill O'Reilly joins the program to discuss what President Trump told him regarding the Epstein files, as Americans are still demanding answers. Stu reviews some of the successful policies implemented by Argentina's recently elected President Javier Milei. Host of "The Edwin Black Show" Edwin Black joins to discuss his newest book, "Israel Strikes Iran," which delves into the backstory behind Israel's Operation Rising Lion. The guys discuss the recent statement made by Supreme Court Justice Ketanji Brown Jackson, in which she revealed that she believes her job is to use her position to make decisions based on her own feelings.  Learn more about your ad choices. Visit megaphone.fm/adchoices

Everyday AI Podcast – An AI and ChatGPT Podcast
Ep 564: Dr. Ben Goertzel: The Road to Creating Benevolent Decentralized AGI

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Jul 10, 2025 40:19


What's at stake for humanity amid the arms race to AGI? Dr. Ben Goertzel should know. He legit coined the term AGI.

Personal Development Mastery
#517 A subtle moment that changed my sense of purpose (personal development reflections)

Personal Development Mastery

Play Episode Listen Later Jul 10, 2025 6:50 Transcription Available


Have you ever wondered if your quiet, consistent efforts are truly making a difference? In a world that often rewards loud achievements and immediate results, it's easy to overlook the power of consistent, purposeful action. This reflective episode explores the moments when inner clarity and alignment emerge; not from a master plan, but from simply showing up with intention and presence.

Discover how true purpose often unfolds through aligned action rather than big revelations.
Learn the importance of internal validation and recognizing your own growth.
Be inspired by a real story of personal insight sparked through consistency and presence.

Press play now to uncover the unseen impact of your efforts and connect with the deeper meaning behind your daily actions.

VALUABLE RESOURCES:
Click here to get in touch with Agi and discuss mentoring/coaching.
You can find the previous episodes of this series here: #489, #495, #501, #505, #509, #513
Join our growing community at MasterySeekersTribe.com, where self-mastery seekers come together for connection and growth.

In the Pit with Cody Schneider | Marketing | Growth | Startups
50 ai agents are running this guy's business. no employees?

In the Pit with Cody Schneider | Marketing | Growth | Startups

Play Episode Listen Later Jul 10, 2025 52:30


In this episode, Adam Silverman — co-founder & CEO of Agent Ops — dives deep into what “AI agents” actually are, why observability matters, and the very real marketing & growth automations companies are shipping today. From social-listening bots that draft Reddit replies to multi-agent pipelines that rebalance seven-figure ad budgets in real time, Adam lays out a practical playbook for founders, heads of growth, and non-technical operators who want to move from hype to hands-on results.

Guest socials:
LinkedIn: https://www.linkedin.com/in/adamsil

Cloud Realities
CR0105: How little we still understand about GreenOps with James Hall, Green Pixie

Cloud Realities

Play Episode Listen Later Jul 10, 2025 32:39


GreenOps is a cultural transformation that empowers developers to turn emissions data into meaningful action, bridging the communication gap with ESG teams and exposing the critical truth that cloud cost and carbon cost are not the same, which fundamentally reshapes how we approach sustainable IT. This week, Dave, Esmee and Rob talk to James Hall, Head of GreenOps at Green Pixie, to unpack the real state of GreenOps today, and why we've only just scratched the surface.

TLDR
01:57 Rob is confused about AGI
06:11 Cloud conversation with James Hall
22:10 Esmee, as media archaeologist, finds that GreenOps is 50 years old
30:46 Having some drinks in the summer

Guest
James Hall: https://www.linkedin.com/in/james-f-hall/

Hosts
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/
Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/

Production
Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/
Dave Chapman: https://www.linkedin.com/in/chapmandr/

Sound
Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
Louis Corbett: https://www.linkedin.com/in/louis-corbett-087250264/

'Cloud Realities' is an original podcast from Capgemini

Interpreting India
Beyond Superintelligence: A Realist's Guide to AI

Interpreting India

Play Episode Listen Later Jul 10, 2025 39:21


The episode begins with Kapoor explaining the origins of AI Snake Oil, tracing it back to his PhD research at Princeton on AI's limited predictive capabilities in social science domains. He shares how he and co-author Arvind Narayanan uncovered major methodological flaws in civil war prediction models, which later extended to other fields misapplying machine learning.

The conversation then turns to the disconnect between academic findings and media narratives. Kapoor critiques the hype cycle around AI, emphasizing how its real-world adoption is slower, more fragmented, and often augmentative rather than fully automating human labor. He cites the enduring demand for radiologists as a case in point.

Kapoor introduces the concept of “AI as normal technology,” which rejects both the notion of imminent superintelligence and the dismissal of AI as a passing fad. He argues that, like other general-purpose technologies (electricity, the internet), AI will gradually reshape industries, mediated by social, economic, and organizational factors, not just technical capabilities.

The episode also examines the speculative worldviews put forth by documents like AI 2027, which warn of AGI-induced catastrophe. Kapoor outlines two key disagreements: current AI systems are not technically on track to achieve general intelligence, and even capable systems require human and institutional choices to wield real-world power.

On policy, Kapoor emphasizes the importance of investing in AI complements, such as education, workforce training, and regulatory frameworks, to enable meaningful and equitable AI integration. He advocates for resilience-focused policies, including cybersecurity preparedness, unemployment protection, and broader access to AI tools.

The episode concludes with a discussion on recalibrating expectations. Kapoor urges policymakers to move beyond benchmark scores and collaborate with domain experts to measure AI's real impact. In a rapid-fire segment, he names the myth of AI predicting the future as the most misleading, and humorously imagines that a superintelligent AI, if it ever emerged, would fix global cybersecurity first.

Episode Contributors
Sayash Kapoor is a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. His research focuses on the societal impact of AI. He previously worked on AI in industry and academia at Facebook, Columbia University, and EPFL Switzerland. He is a recipient of a best paper award at ACM FAccT and an impact recognition award at ACM CSCW.

Nidhi Singh is a senior research analyst and program manager at Carnegie India. Her current research interests include data governance, artificial intelligence, and emerging technologies. Her work focuses on the implications of information technology law and policy from a Global Majority and Asian perspective.

Suggested Readings
AI as Normal Technology by Arvind Narayanan and Sayash Kapoor.

Every two weeks, Interpreting India brings you diverse voices from India and around the world to explore the critical questions shaping the nation's future. We delve into how technology, the economy, and foreign policy intertwine to influence India's relationship with the global stage. Hosted by Carnegie scholars, Interpreting India, a Carnegie India production, provides insightful perspectives on the defining questions that will chart India's course through the next decade.

Stay tuned for thought-provoking discussions, expert insights, and a deeper understanding of India's place in the world. Don't forget to subscribe, share, and leave a review to join the conversation and be part of Interpreting India's journey.

Mundo Futuro
179: Guerra por cerebros: Meta pagando millones para contratar genios de AI. ¿Qué es el bono demográfico y por qué se está extinguiendo?. Reporte XPRIZE 2025 sobre Longevidad y AI.

Mundo Futuro

Play Episode Listen Later Jul 10, 2025 81:53


Meta is "poaching" AI geniuses with bonuses of up to $100M, an ex-CEO crosses over to the rival team, and leadership in ethical AGI is being redefined. The demographic dividend is running out in Japan, China, the West, and Latin America; longevity is becoming the new frontier thanks to the Longevity 2025 report, AI and epigenetics in human trials, and the XPRIZE driving the promise of rejuvenation. We also recommend Eternal Sunshine of the Spotless Mind as a way to rethink how memories and technology shape who we are. Hit play and join the conversation about the future that's already here! /// www.mundofuturo.vip We can't predict the future, but we can explore it. Jorge Alor, Mario Valle and Jaime Limón analyze the technology and innovation trends that will change the world more in the next 10 years than it has changed in the last 100. /// Jorge Alor | @elpadrino Mario Valle | @bilbeny Jaime Limón | @mrlemon /// Learn more about your ad choices. Visit megaphone.fm/adchoices

Pondering AI
A Question of Humanity with Pia Lauritzen, PhD

Pondering AI

Play Episode Listen Later Jul 9, 2025 55:48


Pia Lauritzen questions our use of questions, the nature of humanity, the premise of AGI, the essence of tech, whether humans can be optimized, and why thinking is required. Pia and Kimberly discuss the function of questions, curiosity as a basic human feature, AI as an answer machine, why humans think, the contradiction at the heart of AGI, grappling with the three big Es, the fallacy of human optimization, respecting humanity, Heidegger's eerily precise predictions, the skill of critical thinking, and why it's not really about the questions at all.

Pia Lauritzen, PhD is a philosopher, author and tech inventor asking big questions about tech and transformation. As the CEO and founder of Qvest and a Thinkers50 Radar member, Pia is on a mission to democratize the power of questions.

Related Resources:
Questions (book): https://www.press.jhu.edu/books/title/23069/questions
TEDx Talk: https://www.ted.com/talks/pia_lauritzen_what_you_don_t_know_about_questions
Question Jam: www.questionjam.com
Forbes column: forbes.com/sites/pialauritzen
LinkedIn Learning: www.linkedin.com/learning/pialauritzen
Personal website: pialauritzen.dk

A transcript of this episode is here.

Don't Stop Us Now! Podcast
AI Will Replace Jobs But …. - Peter Guagenti

Don't Stop Us Now! Podcast

Play Episode Listen Later Jul 9, 2025 49:02


This week we're sharing an AI conversation many executives are afraid to have. Our guest, Peter Guagenti, CEO of agentic platform and AI agent builder EverWorker, is one of those rare voices willing to say what others won't: "AI will replace jobs". Perhaps surprisingly, he goes on to explain why that's not necessarily a bad thing. While many in the industry underestimate or downplay AI's capabilities, Peter speaks plainly about what he's seeing happen right now in businesses across multiple industries.

Peter has been at the forefront of enterprise AI adoption for nearly a decade. Before EverWorker, he was President of Tabnine, a pioneering AI code assistant, and he's held leadership roles at major data infrastructure and dev tools companies including Cockroach Labs.

What you'll hear Peter share in this episode:
Why job replacement fears are missing the bigger picture
Which "boring" businesses are actually leading early AI agent adoption
Skills you need to successfully implement AI for automation in any business
Why Peter calls AGI talk "nonsense" and what leaders should focus on instead, and
The one thing about AI's future that genuinely worries him most.

Peter has a gift for cutting through the hype with plain talking, concrete examples and practical insights in an easy-to-understand way. This is a refreshing and informative conversation for anyone wanting a glimpse into early adoption of AI agents and much more. Enjoy this conversation with the plain-speaking and savvy Peter Guagenti.

Useful Links
EverWorker website
EverWorker AI Academy - free AI fundamentals course for business leaders

Hosted on Acast. See acast.com/privacy for more information.

IT Privacy and Security Weekly update.
EP 250. Cry me an AI. The IT Privacy and Security Weekly Update for the Week Ending July 8th., 2025

IT Privacy and Security Weekly update.

Play Episode Listen Later Jul 9, 2025 15:29


This week takes us from blueteeth to AI emotional support.

Jack Dorsey's innovative Bitchat app pioneers secure, internet-free messaging via Bluetooth, redefining decentralized communication. Google's Gemini AI introduces context-aware assistance on Android, sparking privacy debates with its opt-out data access model. A major breach exposes Catwatchful's invasive stalkerware, compromising thousands of Android devices with covert surveillance. AT&T's Account Lock feature empowers customers to safeguard their accounts against rising SIM swapping threats. Let's Encrypt revolutionizes online security by offering free TLS/SSL certificates for IP addresses, enhancing accessibility. The elusive definition of AGI fuels debate, challenging tech giants like Microsoft and OpenAI in their race for innovation. Finally, Microsoft's AI-driven layoff support sparks discussion, as displaced employees are encouraged to use Copilot for emotional resilience.

Obviously lots of news and emotion packed into this week's update. Let's go cry an AI.

For a full transcript click here.

Eftermiddag i P3
Skithistorier, sommarplågor & Horny Horner

Eftermiddag i P3

Play Episode Listen Later Jul 9, 2025 41:32


Our feeds have been full of crap news lately, so we take the opportunity to ask you about the times you've been crapped on. Evelina Galli tells us everything we need to know about AGI, and Tina Mehrafzoon serves up summer hit songs for connoisseurs. Listen to all episodes in the Sveriges Radio Play app.

AI DAILY: Breaking News in AI
AI'S IDEOLOGICAL WAR

AI DAILY: Breaking News in AI

Play Episode Listen Later Jul 9, 2025 3:35


Plus: Are We About To Lose Every Job To Robots?

Like this? Get AIDAILY, delivered to your inbox, 3x a week. Subscribe to our newsletter at https://aidaily.us

AI Is Radicalizing Both Sides—Welcome to the New Culture War
AI isn't just changing tools—it's fueling a full-blown ideological war. Some see AI as the ultimate evolution of human progress, while skeptics dismiss it as a bubble or worse. The tech's strengths and flaws—automation, hallucinations, social impact—are driving both hype and backlash. This split isn't about tech, it's about belief.

Would You Replace Your CEO with an AI Avatar? Tech CEOs Are Testing the Waters
Some tech bosses are literally sending AI versions of themselves to earnings calls: Klarna's Sebastian Siemiatkowski and Zoom's Eric Yuan let digital avatars do their talking for them. Meanwhile, Klarna's CEO even admitted AI could eventually replace his own job—though real-world chaos means humans aren't fully out… yet.

Winning the AI Race Means More Than Just Tech—The U.S. Needs Strategy
The U.S.–China AI competition isn't just about building smarter bots—it's a full-on geopolitical showdown. It's a race on three fronts: developing AGI, embedding AI across societies, and securing chips, data lanes, and regulation. To stay on top, the U.S. needs a holistic plan that blends innovation, smart policy, and defense—not just private-sector hype.

Left Tech for Welding—Here's Why It Was the Best Move Ever
Tabby Toney got laid off from her software gig in May and bailed on tech because AI was making everything feel shallow. Instead, she's welding again—tapping into creativity, problem-solving, and actual hands-on work. No more burnout, no more prep for grueling interviews—just real craft.

We're About to Lose Almost Every Job to Robots—Here's the Deal
Futurist Adam Dorr says in the next ~20 years, robots and AI will snatch nearly all jobs—cooking, coding, caring—faster and cheaper than us. Some human roles may hang on, but not nearly enough. Society's gotta rethink how we share value, income, and purpose before chaos hits.

AI That Promises to ‘Solve All Diseases' Is Heading Into Human Trials
A stealthy Google-owned lab, Isomorphic Labs (spun out from DeepMind), is now testing AI-designed cancer drugs in humans. Backed by AlphaFold 3, it designs molecules in silico, aiming to slash the 10–15 year, billion-dollar drug timeline. But with no clue how the AI makes decisions, questions around safety, transparency, pricing, and monopoly loom large.

P3 Spel
Allt om AGI: Här är nya tekniken som techjättarna hyllar – och andra bävar inför

P3 Spel

Play Episode Listen Later Jul 9, 2025 8:00


Artificial General Intelligence has become a real buzzword in the tech industry, but what is it really, and should we be afraid of it? Hear more about AGI in P3's podcast Paradiso. Listen to all episodes in the Sveriges Radio Play app.

Retirement Planning Education, with Andy Panko
#159 - SPECIAL EDITION...Provisions of the "One Big Beautiful Bill Act" most likely to impact your tax return

Retirement Planning Education, with Andy Panko

Play Episode Listen Later Jul 8, 2025 45:34


In this SPECIAL EDITION episode, Andy summarizes the key provisions of the recently signed into law One Big Beautiful Bill Act that are most likely to impact you and your tax return. The topics summarized are:

Permanency of the current federal tax rates
Permanency of, and a slight increase to, the current standard deduction amounts
A new temporary personal exemption of up to $6,000 per person 65 or older
Permanency of, and a slight increase to, the lifetime gift and estate tax exemption
Permanency of the current Alternative Minimum Tax exclusion amount, but reduction/reversion of its income phase-out levels
Permanency of the $750,000 limit on residential mortgage principal against which interest can be deducted
Permanency of the elimination of miscellaneous itemized deductions
A temporary increase to $40,000 for State and Local Tax ("SALT") deductions
A new permanent charitable deduction for people who use the standard deduction
A new minimum AGI-based floor on charitable donations before donations can be itemized deductions
A temporary exclusion from income tax of up to $25,000 of tip income
A temporary exclusion from income tax of up to $25,000 of overtime income
A temporary deduction of up to $10,000 of interest on loans to buy cars whose final assembly was in the U.S.
Rescissions of multiple "Green New Deal" tax credits, such as electric vehicle credits and residential clean energy credits
Creation of new "Trump" savings accounts for children under 18
And the bill making NO changes to how Social Security is taxed (i.e., the bill did NOT make Social Security not taxable)

Links in this episode:
Final text of the One Big Beautiful Bill Act - here
My written summary of the key individual income tax provisions of the One Big Beautiful Bill - here
To send Andy questions to be addressed on future Q&A episodes, email andy@andypanko.com
My company newsletter - Retirement Planning Insights
Facebook group - Retirement Planning Education (formerly Taxes in Retirement)
YouTube channel - Retirement Planning Education (formerly Retirement Planning Demystified)
Retirement Planning Education website - www.RetirementPlanningEducation.com
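A note on the AGI-based floor on charitable donations: "AGI" here means adjusted gross income, not artificial general intelligence. A floor works like the long-standing percentage-of-AGI floor on medical expense deductions: only giving above some percentage of AGI counts toward the itemized deduction. As a hedged sketch of the arithmetic only, the 0.5% rate below is an illustrative assumption, not a figure taken from the bill text:

```python
def deductible_charity(agi, donations, floor_rate=0.005):
    """Charitable giving that counts as an itemized deduction once an
    AGI-based floor applies. The default floor_rate is illustrative,
    not a rate quoted from the One Big Beautiful Bill Act."""
    floor = agi * floor_rate  # giving below this amount is disallowed
    return max(0.0, donations - floor)

# With $100,000 of AGI and the illustrative 0.5% floor, the first $500
# of giving doesn't count:
print(deductible_charity(100_000, 2_000))  # → 1500.0
```

The practical effect of any such floor is that small donors who itemize lose a slice of the deduction, while large gifts are barely affected, which is why the exact rate matters for planning.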

80,000 Hours Podcast with Rob Wiblin
#220 – Ryan Greenblatt on the 4 most likely ways for AI to take over, and the case for and against AGI in <8 years

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Jul 8, 2025 170:32


Ryan Greenblatt — lead author on the explosive paper "Alignment faking in large language models" and chief scientist at Redwood Research — thinks there's a 25% chance that within four years, AI will be able to do everything needed to run an AI company, from writing code to designing experiments to making strategic and business decisions.

As Ryan lays out, AI models are "marching through the human regime": systems that could handle five-minute tasks two years ago now tackle 90-minute projects. Double that a few more times and we may be automating full jobs rather than just parts of them.

Will setting AI to improve itself lead to an explosive positive feedback loop? Maybe, but maybe not.

The explosive scenario: Once you've automated your AI company, you could have the equivalent of 20,000 top researchers, each working 50 times faster than humans with total focus. "You have your AIs, they do a bunch of algorithmic research, they train a new AI, that new AI is smarter and better and more efficient… that new AI does even faster algorithmic research." In this world, we could see years of AI progress compressed into months or even weeks.

With AIs now doing all of the work of programming their successors and blowing past the human level, Ryan thinks it would be fairly straightforward for them to take over and disempower humanity, if they thought doing so would better achieve their goals. In the interview he lays out the four most likely approaches for them to take.

The linear progress scenario: You automate your company but progress barely accelerates. Why? Multiple reasons, but the most likely is "it could just be that AI R&D research bottlenecks extremely hard on compute." You've got brilliant AI researchers, but they're all waiting for experiments to run on the same limited set of chips, so can only make modest progress.

Ryan's median guess splits the difference: perhaps a 20x acceleration that lasts for a few months or years. Transformative, but less extreme than some in the AI companies imagine.

And his 25th percentile case? Progress "just barely faster" than before. All that automation, and all you've been able to do is keep pace.

Unfortunately the data we can observe today is so limited that it leaves us with vast error bars. "We're extrapolating from a regime that we don't even understand to a wildly different regime," Ryan believes, "so no one knows."

But that huge uncertainty means the explosive growth scenario is a plausible one — and the companies building these systems are spending tens of billions to try to make it happen.

In this extensive interview, Ryan elaborates on the above and the policy and technical response necessary to insure us against the possibility that they succeed — a scenario society has barely begun to prepare for.

Summary, video, and full transcript: https://80k.info/rg25

Recorded February 21, 2025.

Chapters:
Cold open (00:00:00)
Who's Ryan Greenblatt? (00:01:10)
How close are we to automating AI R&D? (00:01:27)
Really, though: how capable are today's models? (00:05:08)
Why AI companies get automated earlier than others (00:12:35)
Most likely ways for AGI to take over (00:17:37)
Would AGI go rogue early or bide its time? (00:29:19)
The "pause at human level" approach (00:34:02)
AI control over AI alignment (00:45:38)
Do we have to hope to catch AIs red-handed? (00:51:23)
How would a slow AGI takeoff look? (00:55:33)
Why might an intelligence explosion not happen for 8+ years? (01:03:32)
Key challenges in forecasting AI progress (01:15:07)
The bear case on AGI (01:23:01)
The change to "compute at inference" (01:28:46)
How much has pretraining petered out? (01:34:22)
Could we get an intelligence explosion within a year? (01:46:36)
Reasons AIs might struggle to replace humans (01:50:33)
Things could go insanely fast when we automate AI R&D. Or not. (01:57:25)
How fast would the intelligence explosion slow down? (02:11:48)
Bottom line for mortals (02:24:33)
Six orders of magnitude of progress... what does that even look like? (02:30:34)
Neglected and important technical work people should be doing (02:40:32)
What's the most promising work in governance? (02:44:32)
Ryan's current research priorities (02:47:48)

Tell us what you thought! https://forms.gle/hCjfcXGeLKxm5pLaA

Video editing: Luke Monsour, Simon Monsour, and Dominic Armstrong
Audio engineering: Ben Cordell, Milo McGuire, and Dominic Armstrong
Music: Ben Cordell
Transcriptions and web: Katy Moore

CMO Confidential
Adam Brotman and Andy Sack | Co-Founders, Forum3 | It's a Bird! It's a Plane! Holy Sh!t, It's AI!

CMO Confidential

Play Episode Listen Later Jul 8, 2025 67:59


A CMO Confidential interview with Andy Sack and Adam Brotman, co-founders and co-CEOs of Forum3, authors of the book AI First, previously at Microsoft and Starbucks. They discuss why AI is different from previous technology advances and the series of "Holy Shit!" moments experienced when interviewing Sam Altman, Bill Gates and others. Key topics include: their belief that AI is "moving faster than you think" since it isn't constrained by an adoption curve or infrastructure; the power of artificial general intelligence, which will be smarter than most experts; why trying to calculate the ROI of AI is comparable to measuring the return on electricity; and the possibility of 95% of marketing and agency jobs being impacted over the next 5 years. Tune in to hear how ChatGPT scored a top grade on the AP Biology exam, how Moderna became an AI leader, and their tips for staying near the front of the wave.

This week on CMO Confidential, host Mike Linton sits down with Adam Brotman, former Chief Digital Officer of Starbucks and co-CEO of J.Crew, and Andy Sack, venture capitalist and Managing Partner at Keen Capital. Together they co-authored AI First and co-founded Forum3, a company on a mission to educate businesses on how to thrive in the AI era.

In this episode, Adam and Andy recount their interviews with leaders like Sam Altman, Bill Gates, and Reid Hoffman—and unpack why we are at a true "Holy Sh*t Moment" in technology. Learn how generative AI is poised to replace 95% of marketing tasks, what agentic AI means for the future of work, and why marketers need to shift from campaign thinking to orchestration and system design—fast.

Topics Covered:
• What Adam and Andy learned from interviewing tech's top minds
• Why artificial general intelligence (AGI) is closer than you think
• How AI tools will transform agency and in-house marketing roles
• Why marketers must experiment now—or risk irrelevance
• The unexpected productivity ROI of adopting AI tools

This episode isn't just about AI—it's about how business leaders and marketers must transform to remain relevant in the age of exponential change.

00:00 - Intro & AI-Powered Marketing by Publicis Sapient
01:42 - Welcome + Adam Brotman & Andy Sack intro
04:45 - Why "AI First" started as "Our AI Journey"
08:13 - The "Holy Sh*t" moment explained
10:00 - Interviewing Sam Altman and the AGI revelation
15:50 - Bill Gates' AI holy sh*t moment
20:30 - What AGI means for marketers and agencies
25:20 - Agentic AI and spinning up marketing agents
30:40 - Consumer behavior and synthetic influencers
34:50 - How agencies must evolve or die
38:20 - The case study of Moderna's AI-first approach
41:00 - Evaluating AI vendors + building internal councils
45:10 - The ROI of AI: Productivity & Unlocks
49:00 - Playbook for becoming an AI-first org
52:30 - Funny poker shirt story + parting advice
56:00 - Closing thoughts and next episode teaser

#GenerativeAI #CMOConfidential #AdamBrotman #AndySack #Forum3 #MarketingAI #AIInMarketing #AIRevolution #HolyShitMoment #AIFirst #SamAltman #BillGates #AGI #MarketingPodcast #DigitalTransformation #FutureOfWork #AIProductivity #ChiefMarketingOfficer #CMOLife #AIPlaybook #MarketingLeadership #AIForBusiness

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

The Court Reporter Podcast
#64 - AI & the Future of Court Reporting: Separating Hype from Reality with Matt Moss

The Court Reporter Podcast

Play Episode Listen Later Jul 8, 2025 36:24 Transcription Available


In this episode, Brynn is joined by court reporter and tech-savvy expert Matt Moss to explore one of the hottest topics in the legal world today—AI in court reporting. With widespread concern about artificial intelligence replacing human professionals, Matt brings clarity to the conversation, breaking down what's real, what's hype, and how court reporters can stay ahead of the curve.

You'll hear how Matt went from waiting tables to becoming a respected realtime reporter, how he relearned his theory mid-training, and why his curiosity for lifelong learning led him deep into the world of artificial intelligence. He also explains the crucial distinction between AI, AGI, and LLMs—and why understanding these terms matters.

This episode is essential for anyone who's ever wondered:
- Will AI take over court reporting?
- How do tools like ChatGPT actually work?
- What's irreplaceable about a human court reporter?

Plus, Matt gives a sneak peek into his upcoming panel at the NCRA Convention and shares his favorite resources to become more tech-literate in today's fast-moving landscape.

Whole Grain
Port Pressures: Navigating Grain Facility Challenges with Smart Equipment Solutions

Whole Grain

Play Episode Listen Later Jul 8, 2025 39:39 Transcription Available


Have questions, feedback, or thoughts on the show? We want to hear from you! Click on this link to send us a text message.

Port Pressures: Navigating Grain Facility Challenges with Smart Equipment Solutions
Sponsored by AGI – Ag Growth International

In this episode of the Whole Grain Podcast, host Jim Lenz, Director of Global Training and Education at GEAPS, is joined by Justin Paterson of AGI (Ag Growth International) to dive into the modern-day challenges faced by grain port facilities — and how innovative equipment and systems from AGI are helping operators tackle these head-on.

With 20 years of experience in the grain industry across both North and South America, Justin brings a unique global perspective to the discussion. Before joining AGI in 2018 as Vice President of Global Engineering, he served as Director of Engineering for a major grain handler in Canada. He holds degrees in Civil Engineering and Agriculture, and is a registered Professional Engineer and Professional Agronomist. Originally from Winnipeg, Manitoba, Justin is now based at AGI Brazil, just outside São Paulo, where he leads global engineering strategy for AGI's commercial infrastructure.

From navigating logistical bottlenecks to enhancing throughput, safety, and operational efficiency, AGI offers scalable, smart solutions tailored to commercial grain operations. Justin shares insights from the field and explains how AGI collaborates with customers to design systems that meet the unique demands of port terminals.

Tune in to learn:
- What makes grain ports unique compared to inland facilities
- How AGI approaches problem-solving through integration and customization
- Trends shaping the future of commercial grain handling at scale

Whether you're new to the grain industry or a seasoned pro, this episode sheds light on the evolving needs of port operations and how forward-thinking companies like AGI are rising to the challenge.

Explore more about AGI:
Website: https://www.aggrowth.com
Commercial Solutions Overview: AGI Commercial Landing Page
YouTube Channel: AGI on YouTube
LinkedIn: AGI on LinkedIn

Grain Elevator and Processing Society champions, connects and serves the global grain industry and its members. Be sure to visit GEAPS' website to learn how you can grow your network, support your personal professional development, and advance your career. Thank you for listening to another episode of GEAPS' Whole Grain podcast.

Sales and Marketing Built Freedom
AgentForce Decoded: Inside Salesforce's $80B AI Revolution

Sales and Marketing Built Freedom

Play Episode Listen Later Jul 7, 2025 49:24


Your competitors are already using AI. Don't get left behind. Weekly strategies used by PE-backed and publicly traded companies → https://hi.switchy.io/U6H7S

In this conversation, Ryan Staley interviews Ajay Kumar, the head of AI product growth at Salesforce, discussing the deployment and innovative use cases of Agent Force. Ajay shares surprising applications of AI in various industries, particularly in customer service and marketing, and highlights the integration with OpenAI. The discussion also covers the future of AI, including predictions about AGI and the potential for background agents to revolutionize workflows.

Chapters:
00:00 Introduction to AI and Agent Force at Salesforce
02:33 Surprising Use Cases of Agent Force
06:49 Impactful Use Cases in Sales and Marketing
10:36 Integration with OpenAI and Future Roadmap
14:53 Demonstration of Agent Force Features
26:14 Top Use Cases and Agent Types
29:00 Acquisition Insights and Technology Integration
33:17 The Future of AI Agents
38:55 Personal AGI Experiences and Innovations
44:31 Predictions for AI's Future and Accessibility

Around the Prompt
An unfiltered conversation with Brett Adcock, CEO of Figure AI

Around the Prompt

Play Episode Listen Later Jul 6, 2025 50:04


Join Nolan Fortman and Logan Kilpatrick for a conversation with Brett Adcock, CEO of Figure AI, a general purpose robotics company. We talk about how robotics are the ultimate deployment vector of AGI, the challenges of robotics, and the timeline until home robots hit mainstream.

AXRP - the AI X-risk Research Podcast
45 - Samuel Albanie on DeepMind's AGI Safety Approach

AXRP - the AI X-risk Research Podcast

Play Episode Listen Later Jul 6, 2025 75:42


In this episode, I chat with Samuel Albanie about the Google DeepMind paper he co-authored called "An Approach to Technical AGI Safety and Security". It covers the assumptions made by the approach, as well as the types of mitigations it outlines.

Patreon: https://www.patreon.com/axrpodcast
Ko-fi: https://ko-fi.com/axrpodcast
Transcript: https://axrp.net/episode/2025/07/06/episode-45-samuel-albanie-deepminds-agi-safety-approach.html

Topics we discuss, and timestamps:
0:00:37 DeepMind's Approach to Technical AGI Safety and Security
0:04:29 Current paradigm continuation
0:19:13 No human ceiling
0:21:22 Uncertain timelines
0:23:36 Approximate continuity and the potential for accelerating capability improvement
0:34:29 Misuse and misalignment
0:39:34 Societal readiness
0:43:58 Misuse mitigations
0:52:57 Misalignment mitigations
1:05:20 Samuel's thinking about technical AGI safety
1:14:02 Following Samuel's work

Samuel on Twitter/X: x.com/samuelalbanie

Research we discuss:
An Approach to Technical AGI Safety and Security: https://arxiv.org/abs/2504.01849
Levels of AGI for Operationalizing Progress on the Path to AGI: https://arxiv.org/abs/2311.02462
The Checklist: What Succeeding at AI Safety Will Involve: https://sleepinyourhat.github.io/checklist/
Measuring AI Ability to Complete Long Tasks: https://arxiv.org/abs/2503.14499

Episode art by Hamish Doodles: hamishdoodles.com

Recomendados de la semana en iVoox.com Semana del 5 al 11 de julio del 2021
PC Superintelligence: The Model Mercenaries

Recomendados de la semana en iVoox.com Semana del 5 al 11 de julio del 2021

Play Episode Listen Later Jul 6, 2025 17:05


Who controls artificial intelligence? And how much does it cost to sign the future? In this episode we lay bare the secret war for the planet's most valuable talent: the people who train models. Fair warning: there are millions of dollars, spurned CEOs, startups with no product... and a shower of cologne with the scent of tech ego.

KEY POINTS OF THE EPISODE
- Meta is out hunting premium brains: offers, obscene salaries, and signings that seem straight out of PC Fútbol.
- OpenAI feels plundered and responds with drama, recalibrations, and ethical perfume.
- Thinking Machines and other startups with no product but valuations of $10 billion remind us that narrative rules here.
- Mira Murati, Daniel Gross, Ilya Sutskever… everyone has a price or a proposal.
- Musk and Trump premiere a new telenovela: piggy parties, deportation threats, and wars of egos.
- Surprise ranking: which model respects your privacy the most? (Spoiler: it's not Meta, nor Gemini, nor Copilot.)
- And yes, nobody talks about AGI anymore. Now the buzzword is Superintelligence.

Related episodes:
Piensa Poco, Scrollea Mucho: El Capitalismo Límbico Nos Tiene https://go.ivoox.com/rf/140187412
Ilya Sutskever y la Superinteligencia Segura: ¿Está el Ex-Jefe de OpenAI un Paso Adelante? https://go.ivoox.com/rf/134801029
HUMANIA: WIN-WIN Corporativo. La Era Trump-Musk https://go.ivoox.com/rf/135752500

Reference articles:
https://www.wired.com/story/mark-zuckerberg-welcomes-superintelligence-team
https://www.wired.com/story/mark-zuckerberg-meta-offer-top-ai-talent-300-million
https://www.entrepreneur.com/business-news/ai-startup-tml-from-ex-openai-exec-mira-murati-pays-500000/494108
https://www.elconfidencial.com/tecnologia/novaceno/2025-07-02/zuckerberg-inteligencia-artificial-openia-futuro-tencologia_4164371
https://www.xataka.com/robotica-e-ia/industria-ia-se-ha-convertido-juego-tronos-eso-revela-verdad-inquietante-ia-casi-todo-humo
https://www.wired.com/story/sam-altman-meta-ai-talent-poaching-spree-leaked-messages
https://www.businessinsider.es/economia/elon-musk-arremete-nuevo-partido-republicano-ley-presupuestaria-trump-ha-sido-batalla-1470327
https://www.businessinsider.es/economia/ultima-disputa-musk-trump-clavo-ataud-tesla-inversor-ross-gerber-1470868
https://es-us.noticias.yahoo.com/chatbot-inteligencia-artificial-protege-datos-183103697.html

Cheat on Everything: Cluely's Vision for Always-On AI Assistance

Play Episode Listen Later Jul 5, 2025 82:36


Roy Lee, founder and CEO of Cluely, discusses his AI startup's $15 million Andreessen Horowitz investment and their provocative "cheat on everything" marketing approach that has gone viral across the tech industry. They explore Cluely's real-time AI assistant that provides undetectable information during meetings and interviews, Roy's philosophy of "AI maximalism," and his vision for a post-AGI world where humans are freed from economic necessity to pursue intrinsic interests. The conversation covers his controversial stance on dissolving copyright and privacy norms for efficiency gains, the resonance of his message with young people, and how he believes society should adapt to increasingly capable AI systems. Despite the edgy messaging, Roy presents thoughtful perspectives on competing with tech giants and building technology that anticipates entirely new social contracts in an AI-dominated future.

Sponsors:

Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud that delivers better performance, faster speeds, and significantly lower costs, including up to 50% less for compute, 70% for storage, and 80% for networking. Run any workload, from infrastructure to AI, in a high-availability environment and try OCI for free with zero commitment at https://oracle.com/cognitive

The AGNTCY: The AGNTCY is an open-source collective dedicated to building the Internet of Agents, enabling AI agents to communicate and collaborate seamlessly across frameworks. Join a community of engineers focused on high-quality multi-agent software and support the initiative at https://agntcy.org

NetSuite by Oracle: NetSuite by Oracle is the AI-powered business management suite trusted by over 42,000 businesses, offering a unified platform for accounting, financial management, inventory, and HR. Gain total visibility and control to make quick decisions and automate everyday tasks—download the free ebook, Navigating Global Trade: Three Insights for Leaders, at https://netsuite.com/cognitive

PRODUCED BY: https://aipodcast.ing

CHAPTERS:
(00:00) About the Episode
(03:24) Introduction and Cluely Overview
(10:55) Future Rules and Privacy
(13:20) Positive Vision for Future (Part 1)
(18:01) Sponsors: Oracle Cloud Infrastructure | The AGNTCY
(20:01) Positive Vision for Future (Part 2)
(21:23) Entrepreneurship and Impact Theory
(24:22) Anti-Establishment Marketing Strategy
(27:26) AI in Universities
(30:16) Columbia Expulsion Story
(32:48) AI Maximalism Ethics (Part 1)
(32:53) Sponsor: NetSuite by Oracle
(34:17) AI Maximalism Ethics (Part 2)
(38:29) AI Identification Debate
(46:00) Output vs Input Philosophy
(51:35) Learning and Skill Building
(56:40) Trust and Market Effects
(01:03:42) Assessment and Hiring Revolution
(01:06:47) Viral Marketing Strategy
(01:12:39) Long-term Company Strategy
(01:15:59) High-End Talent Acquisition
(01:18:56) Outro

Behind the Numbers: eMarketer Podcast
AGI Is Coming: How It Will Change Everything—and When | Behind the Numbers

Behind the Numbers: eMarketer Podcast

Play Episode Listen Later Jul 3, 2025 28:12


On today's podcast episode, we discuss what area of people's lives artificial general intelligence (AGI) will change the most, the argument for AI developers asking permission from society to build these models, and when AGI might actually get here. Join Senior Director of Podcasts and host Marcus Johnson, and Analysts Jacob Bourne and Grace Harmon. Listen everywhere and watch on YouTube and Spotify.

To learn more about our research and get access to PRO+, go to EMARKETER.com
Follow us on Instagram at: https://www.instagram.com/emarketer/
For sponsorship opportunities contact us: advertising@emarketer.com
For more information visit: https://www.emarketer.com/advertise/
Have questions or just want to say hi? Drop us a line at podcast@emarketer.com

For a transcript of this episode click here: https://www.emarketer.com/content/podcast-agi-coming-how-will-change-everything-and-behind-numbers

© 2025 EMARKETER

Quad is a global marketing experience company that gives brands a frictionless way to go to market using an array of innovative, data-driven offerings. With a platform built for integrated execution, Quad helps clients maximize marketing effectiveness across all channels. It ranks among Ad Age's 25 largest agency companies. For more information, visit quad.com.

Personal Development Mastery
#515 "If you die before you die, you won't die when you die" (Personal development wisdom snippets)

Personal Development Mastery

Play Episode Listen Later Jul 3, 2025 7:04 Transcription Available


What does it really mean to "die before you die", and how can this insight radically transform the way you live?

Snippet of wisdom 079. In this series, I select my favourite, most insightful moments from previous episodes of the podcast.

Today, my guest Martin O'Toole talks about unplugging from the illusion of modern life, the deep regrets people face at the end of their journey, and how embracing presence, gratitude, and awareness can lead to a more meaningful existence. Press play to learn how to escape the hamster wheel, live with fewer regrets, and choose a life of conscious fulfillment.

VALUABLE RESOURCES:
Listen to the full conversation with Martin O'Toole in episodes #316-317:
https://personaldevelopmentmasterypodcast.com/316
https://personaldevelopmentmasterypodcast.com/317

Click here to get in touch with Agi and discuss mentoring/coaching.

Join our growing community at MasterySeekersTribe.com, where self-mastery seekers come together for connection and growth.

The Lunar Society
Why I don't think AGI is right around the corner

The Lunar Society

Play Episode Listen Later Jul 3, 2025 14:01


I've had a lot of discussions on my podcast where we haggle out timelines to AGI. Some guests think it's 20 years away - others 2 years. Here's an audio version of where my thoughts stand as of June 2025. If you want to read the original post, you can check it out here. Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Bitcoin Magazine
$2 Trillion to $10 Trillion: How BTC Will Re-Architect Political Power | Bitcoin Policy Hour Ep. 9

Bitcoin Magazine

Play Episode Listen Later Jul 2, 2025 28:51


What happens when Bitcoin grows from $2 trillion to $10 trillion? In this special edition of Bitcoin Policy Hour recorded in Washington, DC, BPI Executive Director Matthew Pines and Head of Policy Zack Shapiro break down how Bitcoin's exponential monetization is forcing a re-architecture of political and economic power in real time.

They dive into:
- Why policymakers are finally paying attention to Bitcoin
- How AI, quantum computing, and global instability intersect with BTC policy
- What a $10T Bitcoin means for U.S. national security and global influence
- The legislative battlefield ahead (Clarity Act, market structure bills, non-custodial dev protections)
- How BPI is building the next generation of Bitcoin policy leaders in D.C.

Chapters:
00:00 - Intro: From the Bitcoin Policy Summit in DC
04:00 - Why Bitcoin touches every policy domain
06:00 - National security and dual-use tech with Patrick Witt
09:00 - Government's evolving view on Bitcoin
10:30 - Balancing privacy, surveillance & freedom
12:00 - Private intel conversations on Bitcoin geopolitics
15:45 - What's next for BPI in 2025
16:50 - Stablecoins, Clarity Act, and legislative strategy
18:30 - Strategic outlook: quantum, AI & China
22:00 - Lightning, AGI, and machine-to-machine payments
25:20 - Preparing for a $5–10T Bitcoin market
28:00 - Bitcoin's monetization: who leads and why it matters

Behind the Numbers: eMarketer Podcast
Artificial General Intelligence Explained: When Will AI Be Smarter Than Us? | Behind the Numbers

Behind the Numbers: eMarketer Podcast

Play Episode Listen Later Jun 30, 2025 25:36


On today's podcast episode, we discuss the various definitions of artificial general intelligence (AGI) and try to come up with the best one we can. Then we look at how smart humans are compared to current AI models. Join Senior Director of Podcasts and host Marcus Johnson, and Analysts Jacob Bourne and Gadjo Sevilla. Listen everywhere and watch on YouTube and Spotify.

To learn more about our research and get access to PRO+, go to EMARKETER.com
Follow us on Instagram at: https://www.instagram.com/emarketer/
For sponsorship opportunities contact us: advertising@emarketer.com
For more information visit: https://www.emarketer.com/advertise/
Have questions or just want to say hi? Drop us a line at podcast@emarketer.com

For a transcript of this episode click here: https://www.emarketer.com/content/podcast-btn-artificial-general-intelligence-explained-will-ai-smarter-than-us

© 2025 EMARKETER

Cint is a global insights company. Our media measurement solutions help advertisers, publishers, platforms, and media agencies measure the impact of cross-platform ad campaigns by leveraging our platform's global reach. Cint's attitudinal measurement product, Lucid Measurement, has measured over 15,000 campaigns and has over 500 billion impressions globally. For more information, visit cint.com/insights.

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 556: Choosing the Right AI:  Agents, LLMs, or Algorithms?

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Jun 27, 2025 32:27


Everyone wants the latest and greatest AI buzzword. But at what cost? And what the heck is the difference between algos, LLMs, and agents anyway? Tune in to find out.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
Choosing AI: Algorithms vs. Agents
Understanding AI Models and Agents
Using Conditional Statements in AI
Importance of Data in AI Training
Risk Factors in Agentic AI Projects
Innovation through AI Experimentation
Evaluating AI for Business Solutions

Timestamps:
00:00 AWS AI Leader Departs Amid Talent War
03:43 Meta Wins Copyright Lawsuit
07:47 Choosing AI: Short or Long Term?
12:58 Agentic AI: Dynamic Decision Models
16:12 "Demanding Data-Driven Precision in Business"
20:08 "Agentic AI: Adoption and Risks"
22:05 Startup Challenges Amidst Tech Giants
24:36 Balancing Innovation and Routine
27:25 AGI: Future of Work and Survival

Keywords: AI algorithms, Large Language Models, LLMs, agents, agentic AI, multi-agentic AI, Amazon Web Services, AWS, Vazhi Philemon, gen AI efforts, Amazon Bedrock, talent wars in tech, OpenAI, Google, Meta, copyright lawsuit, AI training, Sarah Silverman, Llama, fair use in AI, Anthropic, AI deep research model, API, webhooks, MCP, code interpreter, Keymaker, data labeling, training datasets, computer vision models, block out time to experiment, decision-making, if-else conditional statements, data-driven approach, AGI, teleporting, innovation in AI, experiment with AI, business leaders, performance improvements, sustainable business models, corporate blade.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Try Gemini 2.5 Flash! Sign up at AIStudio.google.com to get started.

Tech Won't Save Us
Chatbots Are Repeating Social Media's Harms w/ Nitasha Tiku

Tech Won't Save Us

Play Episode Listen Later Jun 26, 2025 53:48


Paris Marx is joined by Nitasha Tiku to discuss how AI companies are preying on users to drive engagement and how that's repeating many of the problems we're belatedly trying to address with social media companies at an accelerated pace.

Nitasha Tiku is a technology reporter at the Washington Post.

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon. The podcast is made in partnership with The Nation. Production is by Kyla Hewson.

Also mentioned in this episode:
- Nitasha wrote about how chatbots are messing with people's minds.
- Paris wrote about Mark Zuckerberg's comments about people needing AI friends.
- AI companies are facing ongoing lawsuits over harmful content.

Support the show

AI For Humans
Big AI Vs Humans: OpenAI's Office, Google's Free AI Agent and more AI News

AI For Humans

Play Episode Listen Later Jun 26, 2025 55:27


OpenAI, Google & Anthropic are all eating different parts of the business & creative worlds, but where does that leave us? For only 25 cents, you too can sponsor a human in a world of AGI. In the big news this week, OpenAI takes on Microsoft Office, while Google cuts the cost of AI coding with its new Gemini CLI (Command Line Interface) and drops an on-device robotics platform. Oh, and Anthropic just won a massive lawsuit around AI training and fair use. Plus, Tesla's rocky rollout of their Robotaxis, Eleven Labs' new MCP-centric 11ai voice agent, Runway's Game Worlds, the best hacker in the world is now an AI bot, AND Gavin defends AI slop. US HUMANS AIN'T GOING AWAY. UNLESS THE AI GIVES US ENDLESS TREATS.

#ai #ainews #openai

Join the discord: https://discord.gg/muD2TYgC8f
Join our Patreon: https://www.patreon.com/AIForHumansShow
AI For Humans Newsletter: https://aiforhumans.beehiiv.com/
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow
To book us for speaking, please visit our website: https://www.aiforhumans.show/

// Show Links //

OpenAI Developing Microsoft Office / Google Workspace Competitor
https://www.theinformation.com/articles/openai-quietly-designed-rival-google-workspace-microsoft-office?rc=c3oojq

OpenAI io / trademark drama:
https://www.theguardian.com/technology/2025/jun/23/openai-jony-ive-io-amid-trademark-iyo
Sam's receipts from Jason Rugolo (founder of iYo, the headphone company)
https://x.com/sama/status/1937606794362388674

Google's Open-Source Command Line Interface for Gemini is Free?
https://blog.google/technology/developers/introducing-gemini-cli-open-source-ai-agent/
1,000 free Gemini 2.5 Pro requests per day
https://x.com/OfficialLoganK/status/1937881962070364271

Anthropic's Big AI Legal Win
https://www.reuters.com/legal/litigation/anthropic-wins-key-ruling-ai-authors-copyright-lawsuit-2025-06-24/
More detail: https://x.com/AndrewCurran_/status/1937512454835306974

Gemini's On-Device Robotics
https://deepmind.google/discover/blog/gemini-robotics-on-device-brings-ai-to-local-robotic-devices/

AlphaGenome: an AI model to help scientists better understand our DNA
https://x.com/GoogleDeepMind/status/1937873589170237738

Tesla Robotaxi Roll-out
https://www.cnbc.com/2025/06/23/tesla-robotaxi-incidents-caught-on-camera-in-austin-get-nhtsa-concern.html
Kinda Scary Looking: https://x.com/binarybits/status/1936951664721719383
Random slamming of brakes: https://x.com/JustonBrazda/status/1937518919062856107

Mira Murati's Thinking Machines Raises $2B Seed Round
https://thinkingmachines.ai/
https://www.theinformation.com/articles/ex-openai-cto-muratis-startup-plans-compete-openai-others?rc=c3oojq&shared=2c64512f9a1ab832

Eleven Labs 11ai Voice Assistant
https://x.com/elevenlabsio/status/1937200086515097939
Voice Design for V3 just released: https://x.com/elevenlabsio/status/1937912222128238967

Runway's Game Worlds
https://x.com/c_valenzuelab/status/1937665391855120525
Example: https://x.com/aDimensionDoor/status/1937651875408675060
AI Dungeon: https://aidungeon.com/

The best hacker in the US is now an autonomous AI bot
https://www.pcmag.com/news/this-ai-is-outranking-humans-as-a-top-software-bug-hunter
https://x.com/Xbow/status/1937512662859981116

Simple & Good AI Work Flow From AI Warper
https://x.com/AIWarper/status/1936899718678008211

RealTime Natural Language Photo Editing
https://x.com/zeke/status/1937267796146290952

Bunker J Squirrel
https://www.tiktok.com/t/ZTjc3hb38/

Bigfoot Sermons
https://www.tiktok.com/t/ZTjcEq17Y/

John Oliver's Episode about AI Slop
https://youtu.be/TWpg1RmzAbc?si=LAdktGWlIVVDqAjR

Jabba Kisses Han
https://www.reddit.com/r/CursedAI/comments/1ljjdw3/what_the_hell_am_i_looking_at/

Daniel Alonzo's Wealth On The Beach Podcast
The Scary Truth About Artificial Intelligence & Jobs | Wealth on the Beach Podcast


Jun 25, 2025 · 70:20


Artificial Intelligence isn't coming — it's already here. And it's changing everything. In the latest episode of the Wealth on the Beach Podcast, I sat down with AI strategist Adriana to explore:
✅ Will AI take your job in the next 3–5 years?
✅ What is AGI — and why are tech leaders warning us?
✅ Is Universal Basic Income a solution or a silent threat?
✅ What makes us human in a machine-driven future?
"You will become irrelevant if you don't pivot." — A line from the episode that hit hard.
This isn't just about AI. It's about YOU, your future, and how to stay ahead of the curve. Let's reclaim the future — before it's too late.

Lenny's Podcast: Product | Growth | Career
From ChatGPT to Instagram to Uber: The quiet architect behind the world's most popular products | Peter Deng


Jun 22, 2025 · 115:28


Peter Deng has led product teams at OpenAI, Instagram, Uber, Facebook, Airtable, and Oculus and helped build products used by billions — including Facebook's News Feed, the standalone Messenger app, Instagram filters, Uber Reserve, ChatGPT, and more. Currently he's investing in early-stage founders at Felicis. In this episode, Peter dives into his most valuable lessons from building and scaling some of tech's most iconic products and companies.

What you'll learn:
1. Peter's one-sentence test for hiring superstars
2. Why your product (probably) doesn't matter
3. Why you don't need a tech breakthrough to build a huge business
4. The five PM archetypes, and how to build a team of Avengers
5. Counterintuitive lessons on growing products from 0 to 1, and 1 to 100
6. The importance of data flywheels and workflows

Brought to you by:
Paragon — Ship every SaaS integration your customers want
Pragmatic Institute — Industry-recognized product, marketing, and AI training and certifications
Contentsquare — Create better digital experiences

Where to find Peter Deng:
• X: https://x.com/pxd
• LinkedIn: https://www.linkedin.com/in/peterxdeng/

In this episode, we cover:
(00:00) Introduction to Peter Deng
(05:41) AI and AGI insights
(11:35) The future of education with AI
(16:53) The power of language in leadership
(21:01) Building iconic products
(36:44) Scaling from zero to 100
(41:56) Balancing short- and long-term goals
(47:12) Creating a healthy tension in teams
(50:02) The five archetypes of product managers
(55:39) Primary and secondary archetypes
(58:47) Hiring for growth mindset and autonomy
(01:15:52) Effective management and communication strategies
(01:19:23) Presentation advice and self-advocacy
(01:25:50) Balancing craft and practicality in product management
(01:30:40) The importance of empathy in design thinking
(01:35:45) Career decisions and learning opportunities
(01:42:05) Lessons from product failures
(01:45:42) Lightning round and final thoughts

Referenced:
• OpenAI: https://openai.com/
• Artificial general intelligence (AGI): https://en.wikipedia.org/wiki/Artificial_general_intelligence
• Head of ChatGPT answers philosophical questions about AI at SXSW 2024 with SignalFire's Josh Constine: https://www.youtube.com/watch?v=mgbgI0R6XCw
• Professors Are Using A.I., Too. Now What?: https://www.npr.org/2025/05/21/1252663599/kashmir-hill-ai#:~:text=Now%20What
• Herbert H. Clark: https://web.stanford.edu/~clark/
• Russian speakers get the blues: https://www.newscientist.com/article/dn11759-russian-speakers-get-the-blues/
• Ilya Sutskever (OpenAI Chief Scientist) — Building AGI, Alignment, Future Models, Spies, Microsoft, Taiwan, & Enlightenment: https://www.dwarkesh.com/p/ilya-sutskever
• Anthropic's CPO on what comes next | Mike Krieger (co-founder of Instagram): https://www.lennysnewsletter.com/p/anthropics-cpo-heres-what-comes-next
• Kevin Systrom on LinkedIn: https://www.linkedin.com/in/kevinsystrom/
• Building a magical AI code editor used by over 1 million developers in four months: The untold story of Windsurf | Varun Mohan (co-founder and CEO): https://www.lennysnewsletter.com/p/the-untold-story-of-windsurf-varun-mohan
• Microsoft CPO: If you aren't prototyping with AI, you're doing it wrong | Aparna Chennapragada: https://www.lennysnewsletter.com/p/microsoft-cpo-on-ai
• The rise of Cursor: The $300M ARR AI tool that engineers can't stop using | Michael Truell (co-founder and CEO): https://www.lennysnewsletter.com/p/the-rise-of-cursor-michael-truell
• Building Lovable: $10M ARR in 60 days with 15 people | Anton Osika (CEO and co-founder): https://www.lennysnewsletter.com/p/building-lovable-anton-osika
• Granola: https://www.granola.ai/
• Inside Bolt: From near-death to ~$40m ARR in 5 months — one of the fastest-growing products in history | Eric Simons (founder and CEO of StackBlitz): https://www.lennysnewsletter.com/p/inside-bolt-eric-simons
• OpenAI's CPO on how AI changes must-have skills, moats, coding, startup playbooks, more | Kevin Weil (CPO at OpenAI, ex-Instagram, Twitter): https://www.lennysnewsletter.com/p/kevin-weil-open-ai
• Fidji Simo on LinkedIn: https://www.linkedin.com/in/fidjisimo/
• Airtable: https://www.airtable.com/
• George Lee on LinkedIn: https://www.linkedin.com/in/geolee/
• Andrew Chen on LinkedIn: https://www.linkedin.com/in/andrewchen/
• Lauryn Motamedi on LinkedIn: https://www.linkedin.com/in/laurynmotamedi/
• Twilio: https://www.twilio.com/
• Nick Turley on LinkedIn: https://www.linkedin.com/in/nicholasturley/
• Ian Silber on LinkedIn: https://www.linkedin.com/in/iansilber/
• Thomas Dimson on LinkedIn: https://www.linkedin.com/in/thomasdimson/
• Joey Flynn on LinkedIn: https://www.linkedin.com/in/joey-flynn-8291586b/
• Ryan O'Rourke's website: https://www.rourkery.com/
• Joanne Jang on LinkedIn: https://www.linkedin.com/in/jangjoanne/
• Behind the founder: Marc Benioff: https://www.lennysnewsletter.com/p/behind-the-founder-marc-benioff
• Jill Hazelbaker on LinkedIn: https://www.linkedin.com/in/jill-hazelbaker-3aa32422/
• Guy Kawasaki's website: https://guykawasaki.com/
• Eric Antonow on LinkedIn: https://www.linkedin.com/in/antonow/
• Sachin Kansal on LinkedIn: https://www.linkedin.com/in/sachinkansal/
• IDEO design thinking: https://designthinking.ideo.com/
• The 7 Steps of the Design Thinking Process: https://www.ideou.com/blogs/inspiration/design-thinking-process
• Linear's secret to building beloved B2B products | Nan Yu (Head of Product): https://www.lennysnewsletter.com/p/linears-secret-to-building-beloved-b2b-products-nan-yu
• Jeff Bezos's quote: https://news.ycombinator.com/item?id=27778175
• Friendster: https://en.wikipedia.org/wiki/Friendster
• Myspace: https://en.wikipedia.org/wiki/Myspace
• How LinkedIn became interesting: The inside story | Tomer Cohen (CPO at LinkedIn): https://www.lennysnewsletter.com/p/how-linkedin-became-interesting-tomer-cohen
• "Smile" by Jay-Z: https://www.youtube.com/watch?v=SSumXG5_rs8&list=RDSSumXG5_rs8&start_radio=1
• The Wire on HBO: https://www.hbo.com/the-wire
• Felicis: https://www.felicis.com/

Recommended books:
• Sapiens: A Brief History of Humankind: https://www.amazon.com/Sapiens-Humankind-Yuval-Noah-Harari/dp/0062316095
• The Design of Everyday Things: https://www.amazon.com/Design-Everyday-Things-Revised-Expanded/dp/0465050654
• The Silk Roads: A New History of the World: https://www.amazon.com/Silk-Roads-New-History-World/dp/1101912375

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.lennysnewsletter.com/subscribe