Join hosts J.D. Barker, Christine Daigle, Jena Brown, and Kevin Tumlinson as they discuss the week's entertainment news, including stories about a scam, a $100,000 prize, and Anthropic. Then, stick around for a chat with Christina Dotson! Christina Dotson: thriller author of LOVE YOU TO DEATH (7/22/2025) from Bantam/@randomhouse, lover of carbs & plot twists, and a super Social Worker ❤️ - https://www.threads.com/@christinadotsonwrites?xmt=AQF0XaAiFcQ39Vr0u9-KG8lCjhoyU8i8sy8J4f03KOJtiDU
Join Tommy Shaughnessy as he hosts Pondering Durian (Lead at Delphi Intelligence) and José Macedo (Co-Founder at Delphi Labs & Founding Partner at Delphi Ventures) to introduce Delphi Intelligence, Delphi's new open research initiative focused on artificial intelligence. Learn why Delphi is going deep into frontier models, robotics, reinforcement learning, and the intersection of crypto and AI, and how this initiative aims to uncover transformative opportunities across emerging tech. Delphi Intelligence: https://www.delphiintelligence.io/
AGNTCY - Unlock agents at scale with an open Internet of Agents. Visit https://agntcy.org/ and add your support. What if the most important AI conference in the world wasn't built by academics or hype merchants, but by operators who actually understand what businesses need? In this episode, Craig sits down with Andrew Blum, Co-Founder and COO of HumanX, the breakout AI conference that has quickly become the go-to gathering for enterprise leaders, AI builders, and government policymakers. Andrew shares the inside story of how HumanX went from an idea born in a VC incubator to hosting 3,300+ attendees, 350 speakers, and leaders from OpenAI, Anthropic, Mistral, Snowflake, and more, all within 18 months. You'll hear how HumanX is different from other conferences, why face-to-face connection matters more than ever, and how HumanX is creating the bridge between AI innovation and real-world business transformation. This is a behind-the-scenes look you won't want to miss. Like and subscribe for more! Stay Updated: Craig Smith on X: https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Scott Wu is the co-founder and CEO of Cognition, the company behind Devin, the world's first AI software engineer. On Friday last week they pulled off the acquisition of the year, acquiring Windsurf, following Windsurf's licensing agreement with Google. Previously a world-class competitive programmer, he was a gold medalist at the International Olympiad in Informatics and a member of the U.S. Math and Physics Olympiad teams. Before Cognition, he was a founding engineer at Scale AI, helping shape the early AI infrastructure stack. AGENDA: 00:00 – Why are founders walking away instead of going down with the ship? 01:05 – How did Cognition pull off the $220M Windsurf deal in just 72 hours? 04:45 – What really happened behind closed doors the weekend Windsurf was acquired? 07:15 – Did Google overlook a goldmine in the Windsurf team and IP? 09:00 – Who are the 100 people that secretly shape the future of AI? 12:30 – Can application startups ever gain leverage over foundation model giants like Anthropic? 14:15 – Is coding about to be replaced by simply describing what you want? 17:30 – 50% of new code is AI-written. Where does that go next? 20:45 – “We've gone from 0 to $80M ARR in 6 months. Quietly.” 25:00 – Are IDEs and agents just the training wheels for the real future of software engineering? 28:20 – If you could only back one—OpenAI or Anthropic—who's the better bet? 30:00 – Why has Cognition kept its insane growth a secret… until now?
This podcast is made possible by Santander: https://online.bancosantander.es/landings/cuentas/cuenta-autonomos/ Welcome to this live roundtable with Jordi Romero, Bernat Farrero, and César Migueláñez. We start with the Windsurf rollercoaster: the company went from $0 to $80M in revenue in twelve months, tried to sell itself to OpenAI, and ended up living through an express acqui-hire by Google for $2.4B while Cognition bought the remains and 250 employees were left in no-man's-land, a soap opera that opens a raw debate about whether talent is now worth more than ARR. We talk with Ilya, who breaks down why Grok 4 has doubled the record on the ARC-AGI benchmark and, between applause and jabs at Elon Musk, we discuss whether it is the most powerful model or simply the one least aligned with traditional guardrails. From there we jump to the "AI browser wars": Arc stumbles and Perplexity presents Comet, an agent that browses and clicks for you, while OpenAI counterattacks by integrating its own browser into ChatGPT; all of which raises the eternal question of whether SEO as we know it is doomed. We talk with Carla and Jia, co-founders of Theker, to celebrate live the largest seed round in Spain's history (€21M led by Kibo, Kfund, and Inditex) and explain how their "ChatGPT-style" robots learn industrial tasks on the fly, why they patent their grippers, and how speed is their real moat. We analyze Cursor's price increases, the exodus to Claude Code or Copilot, and the fragility of any startup when its model provider turns off the tap. Audience questions raise dilemmas about CFOs at the seed stage and the usefulness of SDRs in the age of agents. Follow the "tertulianos" on Twitter: • Bernat Farrero: @bernatfarrero • Jordi Romero: @jordiromero • César Migueláñez: @heycesr ABOUT ITNIG
UL NO. 489: STANDARD EDITION | My personal toolchain updates, Google tracking through DuckDuckGo, Anthropic's Pentagon Deal, Grok4 NSFW, Substack Crushes WSJ, and more... You are currently listening to the Standard version of the podcast; consider upgrading and becoming a member to unlock the full version and many other exclusive benefits here: https://newsletter.danielmiessler.com/upgrade Read this episode online: https://newsletter.danielmiessler.com/p/ul-489 Subscribe to the newsletter at: https://danielmiessler.com/subscribe Join the UL community at: https://danielmiessler.com/upgrade Follow on X: https://x.com/danielmiessler Follow on LinkedIn: https://www.linkedin.com/in/danielmiessler Become a Member: https://danielmiessler.com/upgrade See omnystudio.com/listener for privacy information.
In this week's episode of “Paul, Weiss Waking Up With AI,” Katherine Forrest and Anna Gressel unpack the recent Bartz v. Anthropic copyright decision, examining how courts are evaluating fair use in AI training and what this case means for the future of copyright disputes in the AI space. Learn more about Paul, Weiss's Artificial Intelligence practice: https://www.paulweiss.com/industries/artificial-intelligence
Former researchers from OpenAI and Anthropic are calling out xAI's approach to AI safety. The team behind Grok allegedly ignored internal warnings and sidelined staff who raised concerns. Grok has generated antisemitic and conspiratorial responses on X, prompting further scrutiny. Internal sources say Grok was trained using user data from X without consent. Safety evaluations were reportedly skipped or dismissed to speed up product rollout. Researchers pushing for safeguards were removed from key projects or left the company. An open letter signed by multiple AI researchers demands legal protections for whistleblowers. Current U.S. law lacks clear protection for employees disclosing AI-related risks. Musk's stance favors fewer restrictions, calling Grok “uncensored” compared to rivals. The controversy raises pressure for regulation and transparency in high-risk AI development.
Send us a text. 00:00 - Intro 00:53 - Anthropic Targets $100B Valuation 02:21 - Anthropic Launches Claude 4 Financial Tools 03:20 - Thinking Machines Raises $2B at $12B Valuation 04:58 - OpenEvidence Secures $210M at $3.5B Valuation 07:11 - Xtend Adds $30M to Reach $100M Total Funding 09:10 - Cognition Acquires Windsurf with $82M ARR 11:12 - Via Files IPO with $326.6M Projected Revenue 12:13 - Polymarket Clears Probes After $3.6B Election Bets 13:00 - xAI Wins $200M DoD Contract at $121.1B Valuation 14:00 - Scale AI Cuts 14% Staff Post-$870M Revenue
Deedy Das from Menlo Ventures dives into the intense talent wars in Silicon Valley. We explore the fierce competition for AI talent, driven by the massive economic rewards seen by companies like OpenAI and Anthropic. Das shares insights on how companies are navigating acquisitions and acquihires to secure top talent, and the implications for venture investors. We also discuss the sustainability of skyrocketing compensation packages and the unique position of tech giants like Apple in the AI landscape. Hosted by Nora Ali. The content of the video is for general and informational purposes only. All views presented in this show reflect the opinions of the guest and the host. You should not take a mention of any asset, be it cryptocurrency or a publicly traded security, as a recommendation to buy, sell, or hold that cryptocurrency or security. Guests and hosts are not affiliated with or endorsed by Public Holdings or its subsidiaries. You should make your own financial and investment decisions or consult the respective professionals. Full disclosures are in the channel description. Learn more at Public.com/disclosures. Past performance is not a guarantee of future results. There is a possibility of loss with any investment. Historical or hypothetical performance results, if mentioned, are presented for illustrative purposes only. Do not infer or assume that any securities, sectors, or markets described in the videos were or will be profitable. Any statements of future expectations and other forward-looking statements are strictly based on the current views, opinions, or assumptions of the person presenting them, and should not be taken as an indicator of performance nor relied upon as investment advice.
In this episode of Hashtag Trending, host Jim Love delves into a series of AI-related stories. Google's Gemini AI withdraws from a chess match against an Atari 2600 chess engine after realizing its limitations. Researchers from MIT and EPFL identify a 'phase transition' in AI language models where they begin to understand semantics over syntax. The episode also highlights the growing issue of AI-generated 'slop,' which overwhelms content reviewers and dilutes quality across various fields. Lastly, Amazon's strategic investment in Anthropic is explored, focusing on infrastructure rather than consumer-friendly AI applications, potentially positioning Amazon as a key player in the AI revolution. 00:00 Introduction and Overview 00:27 Gemini AI vs. Atari Chess Challenge 02:15 AI's Leap to Understanding Meaning 05:38 The Rise of AI-Generated Content 08:49 Amazon's Strategic Investment in AI 11:22 Conclusion and Upcoming Shows
AI startup Anthropic is being valued at over $100 billion. Anthropic is said to be in the early stages of a new funding round, an insider tells Bloomberg. The AI startup itself says it is not currently fundraising. Even so, the high valuation is an enormous jump for the company. Rosanne Peters covers it in this Tech Update. Barely two months ago the company was still valued at $61.5 billion. The rapid increase is attributed to the revenue growth of its chatbot Claude, which keeps gaining popularity. Although Anthropic is a major competitor of OpenAI, it comes nowhere near the valuation of ChatGPT's maker, at over $300 billion. Also in this Tech Update: the Taiwanese chip industry increasingly faces cyber-espionage campaigns originating from China, and three quarters of American teenagers use AI as a friendly conversation partner. See omnystudio.com/listener for privacy information.
Saoud Rizwan and Pash from Cline joined us to talk about why fast apply models got bitter lesson'd, how they pioneered the plan + act paradigm for coding, and why non-technical people use IDEs to do marketing and generate slides. Full writeup: https://www.latent.space/p/cline X: https://x.com/latentspacepod Chapters: 00:00 - Introductions 01:35 - Plan and Act Paradigm 05:37 - Model Evaluation and Early Development of Cline 08:14 - Use Cases of Cline Beyond Coding 09:09 - Why Cline is a VS Code Extension and Not a Fork 12:07 - Economic Value of Programming Agents 16:07 - Early Adoption for MCPs 19:35 - Local vs Remote MCP Servers 22:10 - Anthropic's Role in MCP Registry 22:49 - Most Popular MCPs and Their Use Cases 25:26 - Challenges and Future of MCP Monetization 27:32 - Security and Trust Issues with MCPs 28:56 - Alternative History Without MCP 29:43 - Market Positioning of Coding Agents and IDE Integration Matrix 32:57 - Visibility and Autonomy in Coding Agents 35:21 - Evolving Definition of Complexity in Programming Tasks 38:16 - Forks of Cline and Open Source Regrets 40:07 - Simplicity vs Complexity in Agent Design 46:33 - How Fast Apply Got Bitter Lesson'd 49:12 - Cline's Business Model and Bring-Your-Own-API-Key Approach 54:18 - Integration with OpenRouter and Enterprise Infrastructure 55:32 - Impact of Declining Model Costs 57:48 - Background Agents and Multi-Agent Systems 1:00:42 - Vision and Multi-Modalities 1:01:07 - State of Context Engineering 1:07:37 - Memory Systems in Coding Agents 1:10:14 - Standardizing Rules Files Across Agent Tools 1:11:16 - Cline's Personality and Anthropomorphization 1:12:55 - Hiring at Cline and Team Culture
In this episode of the Getting Smart Podcast, we explore the transformative power of professions-based learning through the lens of the CAPS network. Join Tom Vander Ark as he digs into how CAPS integrates real-world learning experiences with career-connected pathways, creating dynamic opportunities for students across the nation. With over 100 sites and participation from 200 school districts, the CAPS model emphasizes self-discovery and entrepreneurial mindsets, equipping learners with durable, transferable skills. Featuring insights from Corey Mohn, CAPS Executive Director, and Sophia Porter, a distinguished CAPS alumna, we discuss the essential role of AI in education and envision the future of professions-based learning. Discover why CAPS is redefining the traditional education model and empowering students to thrive in a rapidly changing world! Outline (00:00) Introduction to CAPS (04:26) Sophia's CAPS Experience (08:36) Sophia's Professional Journey (11:56) Core Values of CAPS (14:35) AI in Education and Career Development (25:23) Anthropic's Approach to AI Safety (29:57) Final Thoughts and Advice Links Watch the full video here Read the full blog here Corey's LinkedIn CAPS Network Sophia Porter LinkedIn
Did you know nearly half of TIME Magazine's 100 Most Influential Creators are podcasters? In this morning's News Edition, we cover who made the list and what it says about podcasting's growing cultural power. We also spotlight new tools for podcasters, including Buzzsprout's AI-powered audio enhancement features and MowPod's expanded chart tracking for Spotify and Apple. We wrap with a look at Age of Audio, a new documentary capturing the ups and downs of indie podcasting. Podcasting isn't merely growing; it's changing the future and how we tell stories. Episode Highlights: [02:40] Reflecting on Yesterday's Episode [11:50] Podcast Data and Rankings [17:05] Upcoming Events and Conferences [22:07] Buzzsprout's New AI Features [24:56] Rode's New Wireless Microphone Receiver [26:56] Pocket Casts Partner Newsletter [27:33] Age of Audio Movie Premiere [33:13] Time Magazine's Influential Creators List [37:40] Microsoft Copilot and Claude MCP [50:20] Instagram and Google Integration Links & Resources: The Podcasting Morning Chat: www.podpage.com/pmc Join The Empowered Podcasting Facebook Group: www.facebook.com/groups/empoweredpodcasting Get Your Tickets for The Empowered Podcasting Conference: www.empoweredpodcasting.com Vote For Podcasting Morning Chat for People's Choice Award: www.podcastawards.com PodNews: www.PodNews.net London Podcast Festival: https://podnews.net/press-release/london-podcast-festival-25 Age of Audio Documentary: https://www.youtube.com/watch?v=FW06TjJbWEc Storylab Podcast: https://thestorylabpodcast.buzzsprout.com Anthropic MCP: https://www.anthropic.com/news/model-context-protocol Remember to rate, follow, share, and review our podcast.
Your support helps us grow and bring valuable content to our community. Join us LIVE every weekday morning at 7 am ET (US) on Clubhouse: https://www.clubhouse.com/house/empowered-podcasting-e6nlrk0w Live on YouTube: https://youtube.com/@marcronick Brought to you by iRonickMedia.com Please note that some links may be affiliate links, which support the hosts of the PMC. Thank you! --- Send in your mailbag question at: https://www.podpage.com/pmc/contact/ or marc@ironickmedia.com Want to be a guest on The Podcasting Morning Chat? Send me a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/1729879899384520035bad21b
In today's Cloud Wars Minute, I explore Amazon's massive new AI infrastructure project, Project Rainier, built for Anthropic and featuring the largest-ever deployment of Trainium2 chips by Annapurna Labs. Highlights 00:03 — AWS is building a colossal supercomputing cluster for Anthropic, a company that has received a total of $8 billion in backing from Amazon. Project Rainier is spread across multiple U.S. sites, uses hundreds of thousands of Annapurna Labs Trainium2 accelerators, and is set to launch later this year. One site alone, in Indiana, will feature 30 data centers of 200,000 square feet each. 00:36 — Gadi Hutt, Director of Product and Customer Engineering at Annapurna Labs, said the following: "This is the first time we're building such a large-scale training cluster that will allow a customer — in this case, Anthropic — to train a single model across all of that infrastructure." 01:26 — This puts Anthropic in direct competition with OpenAI, the only other company with comparable compute and training capabilities. It marks a major milestone in the escalating AI arms race and, by proxy, could be a big win for Amazon. Visit Cloud Wars for more.
Jeff Jarvis and I return for another week of AI Inside. NVIDIA and AMD get the green light to sell AI chips in China... for now! Is U.S. trade policy fueling Chinese AI innovation? Meta might someday decide to pivot from open-ish source to closed source, because dang, they are throwing serious money at superintelligence! Google's $2.4 billion Windsurf licensing and talent grab continue to make us question if we're staring into the steely eyes of a bubble. Jeff and I round out the show by exploring the new wave of agentic AI web browsers. Subscribe to the YouTube channel! https://www.youtube.com/@aiinsideshow Enjoying the AI Inside podcast? Please rate us ⭐⭐⭐⭐⭐ in your podcatcher of choice! Note: Time codes subject to change depending on dynamic ad insertion by the distributor. CHAPTERS: 0:00:00 - Podcast begins 0:01:06 - Nvidia and AMD Soar as Chip Trade Curbs Fall 0:03:01 - China Is Spending Billions to Become an A.I. Superpower 0:09:26 - Zuckerberg touts AI build-out, says company will spend hundreds of billions on data centers 0:11:02 - Meta's Days of Giving Away AI for Free Are Numbered 0:14:55 - Their Water Taps Ran Dry When Meta Built Next Door 0:20:05 - Google to Pay $2.4 Billion in Deal to License Tech of Coding Startup, Hire CEO 0:21:48 - Cognition, maker of the AI coding agent Devin, acquires Windsurf 0:28:10 - Google Gemini flaw hijacks email summaries for phishing 0:34:34 - More advanced AI capabilities are coming to Search 0:41:04 - Anthropic's Claude chatbot can now make and edit your Canva designs 0:47:18 - ChatGPT made up a product feature out of thin air, so this company created it 0:56:01 - New research centre to explore how AI can help humans ‘speak' with pets 1:01:33 - OpenAI to release web browser in challenge to Google Chrome 1:04:49 - Check out Jason's Perplexity Comet video on YouTube Learn more about your ad choices. Visit megaphone.fm/adchoices
We go over the summary judgment rulings in the recent lawsuits against Anthropic and Meta for their LLMs, and then we zoom out and consider whether AI is the technology that will finally unravel copyright in a way that requires a complete institutional rethinking.
The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to explore a subject more fully. Seeking insightful perspectives on compliance? Look no further than Compliance into the Weeds! In this episode of Compliance into the Weeds, Tom Fox and Matt Kelly discuss a recent Anthropic report that highlights “agentic misalignment in AI systems.” The discussion addresses the unsettling, independent, and unethical behaviors exhibited by AI systems in extreme scenarios. The conversation explores the implications for corporate risk management, AI governance, and compliance, drawing parallels between AI behavior and human behavior using concepts such as the fraud triangle. The episode also explores how traditional anti-fraud mechanisms may be adapted for monitoring AI agents while reflecting on lessons from science fiction portrayals of AI ethics and risks. Key highlights: AI's Unethical Behaviors Comparing AI to Human Behavior Fraud Triangle, the Anti-Fraud Triangle, and AI Science Fiction Parallels Resources: Matt Kelly in Radical Compliance Tom Instagram Facebook YouTube Twitter LinkedIn A multi-award-winning podcast, Compliance into the Weeds was most recently honored as one of the Top 25 Regulatory Compliance Podcasts, a Top 10 Business Law Podcast, and a Top 12 Risk Management Podcast. Compliance into the Weeds has been conferred the Davey, Communicator, and W3 Awards for podcast excellence. Learn more about your ad choices. Visit megaphone.fm/adchoices
134 people have died and 101 remain missing after the Guadalupe River in central Texas overflowed as a result of 6.5 inches of rain falling over the course of three hours on the morning of July 4th. Though the floods are being called just a natural disaster, mounting evidence shows that climate change, budget cuts, crumbling infrastructure and political infighting contributed not just to the flooding but the resulting destruction and loss of life. Even in a real freak incident, a true once-in-a-century occurrence, proper precautions and warnings could have been implemented. Plainly, no one had to die and the blame has to be not just on individual decision-makers but the entire capitalist system that neglects the basic needs of people in favor of profits.We're joined by Corrie Rosen, a member of the Party for Socialism and Liberation in San Antonio who's been helping with the recovery efforts.Support local volunteer efforts here and read more about it at Peoples Dispatch. Then, we discuss the $200 million contracts given by the Department of Defense to Anthropic, Google, OpenAI, and xAI - in the wake of xAI's Grok going full-on antisemitic.Support the show
In this episode, Emmanuel and Antonio discuss various development topics: applets (yes, really), iOS apps built on Linux, the A2A protocol, accessibility, command-line AI coding assistants (there's no escaping them)… but also methodological and architectural approaches such as hexagonal architecture, tech radars, the generalist expert, and much more. Recorded July 11, 2025. Download the episode: LesCastCodeurs-Episode-328.mp3, or watch the video on YouTube. News Languages Java applets are gone for good… well, soon: https://openjdk.org/jeps/504 Web browsers no longer support applets. The Applet API and the appletviewer tool were deprecated in JDK 9 (2017). The appletviewer tool was removed in JDK 11 (2018). Since then, it has been impossible to run applets with the JDK. The Applet API was marked for removal in JDK 17 (2021). The Security Manager, essential for running applets securely, was permanently disabled in JDK 24 (2025). Libraries Quarkus 3.24 introduces the notion of extensions that can provide capabilities to assistants: https://quarkus.io/blog/quarkus-3-24-released/ Assistants, typically AI ones, get access to extension capabilities: for example, generating a client from OpenAPI, or offering schema-aware database access in dev mode. Hibernate 7 integration in Quarkus: https://quarkus.io/blog/hibernate7-on-quarkus/ Jakarta Data API, the new Restriction mechanism, injection of the SchemaManager. Micronaut 4.9 released: https://micronaut.io/2025/06/30/micronaut-framework-4-9-0-released/ Core: upgrade to Netty 4.2.2 (beware, may affect performance). New experimental "Event loop Carrier" mode to run virtual threads on the Netty event loop. New @ClassImport annotation to process already-compiled classes.
@Mixin annotations arrive (Java only) to modify Micronaut annotation metadata without altering the original classes. HTTP/3: dependency change for the experimental support. Graceful Shutdown: new API for gracefully shutting applications down. Cache Control: fluent API to easily build the HTTP Cache-Control header. KSP 2: support for KSP 2 (from 2.0.2 on), tested with Kotlin 2. Jakarta Data: implementation of the Jakarta Data 1.0 specification. gRPC: JSON support for sending serialized messages via an HTTP POST. ProjectGen: new experimental module to generate JVM projects (Gradle or Maven) through an API. A great article on experimenting with reactive event loops on virtual threads: https://micronaut.io/2025/06/30/transitioning-to-virtual-threads-using-the-micronaut-loom-carrier/ Unfortunately it required hacking the JDK. It is a Micronaut article, but the work was done in collaboration with the Red Hat OpenJDK, Red Hat performance, Quarkus, and Vert.x teams. A good read for the curious. Ubuntu offers a container-building tool, notably for Spring: https://canonical.com/blog/spring-boot-containers-made-easy It creates OCI images for Spring Boot applications based on Ubuntu base images and, of course, uses jlink to reduce image size. Not sure there is a big advantage over other, more portable solutions; in any case, Canonical joins the OpenJDK build dance. The A2A Java SDK contributed by Red Hat is out: https://quarkus.io/blog/a2a-project-launches-java-sdk/ A2A is a protocol initiated by Google and donated to the Linux Foundation. It lets agents describe themselves and interact with one another: agent cards, skills, tasks, context. A2A complements MCP. Red Hat implemented the Java SDK with input from the Google teams. With a few annotations and classes, you get an agent card, an A2A client, and a server exchanging messages over the A2A protocol. How to configure Mockito without warnings after Java 21: https://rieckpil.de/how-to-configure-mockito-agent-for-java-21-without-warning/ Dynamically loaded agents are discouraged and will soon be forbidden; one such use is Mockito via Byte Buddy. The advantage was that configuration used to be transparent, but security wins, so that is over. The article describes how to configure Maven and Gradle to attach the agent when tests start, and also how to set this up in IntelliJ IDEA. Less simple, unfortunately. Web Some "selfish" reasons to make UIs more accessible: https://nolanlawson.com/2025/06/16/selfish-reasons-for-building-accessible-uis/ Selfish reasons: personal benefits for developers who build accessible user interfaces, beyond the moral arguments. Easier debugging: an accessible interface with a clear semantic structure is easier to debug than messy markup ("div soup"). Standardized names: accessibility provides a standard vocabulary (e.g., the WAI-ARIA guidelines) for naming UI components, which helps with code clarity and structure. Simpler testing: it is easier to write automated tests against accessible UI elements, because they can be targeted more reliably and semantically. After 20 years of stagnation, the PNG image format specification is finally evolving! https://www.programmax.net/articles/png-is-back/ Goal: keep the format relevant and competitive. Endorsement: backed by institutions such as the U.S. Library of Congress. Key new features: HDR (High Dynamic Range) support for a wider color range; official recognition of animated PNGs (APNG); support for Exif metadata (copyright, geolocation, etc.). Current support: already integrated in Chrome, Safari, Firefox, iOS, macOS, and Photoshop. What's next: the next edition will focus on HDR/SDR interoperability; the edition after that on compression improvements. With the open source Xtool project, you can now build iOS applications on Linux or Windows, without necessarily owning a Mac: https://xtool.sh/tutorials/xtool/ A well-made tutorial explains how: create a new project with the xtool new command; generate a Swift package with key files such as Package.swift and xtool.yml; build and run the app on an iOS device with xtool dev; connect the device over USB, handling pairing and Developer Mode; xtool automatically manages certificates, provisioning profiles, and app signing; modify the UI code (e.g., ContentView.swift); quickly rebuild and reinstall the updated app with xtool dev. On the IDE side, xtool builds on VS Code. Data and Artificial Intelligence A new edition of the worldwide best seller "Understanding LangChain4j": https://www.linkedin.com/posts/agoncal_langchain4j-java-ai-activity-7342825482830200833-rtw8/ Updated APIs (from LC4j 0.35 to 1.1.0); new chapters on MCP, Easy RAG, and JSON responses; new models (GitHub Models, DeepSeek, Foundry Local); updates to existing models (GPT-4.1, Claude 3.7…). Google donates A2A to the Linux Foundation: https://developers.googleblog.com/en/google-cloud-donates-a2a-to-linux-foundation/ Announcement of the Agent2Agent (A2A) project: at the Open Source Summit North America, the Linux Foundation announced the creation of the Agent2Agent project, in partnership with Google, AWS, Microsoft, Cisco, Salesforce, SAP, and ServiceNow. Goal of the A2A protocol: establish an open standard that lets artificial intelligence (AI) agents communicate, collaborate, and coordinate complex tasks with one another, regardless of vendor.
Transfer from Google to the open source community: Google transferred the A2A protocol specification, the associated SDKs, and the developer tooling to the Linux Foundation to guarantee neutral, community-driven governance. Industry support: more than 100 companies already back the protocol; AWS and Cisco are the latest to endorse it. Each partner company emphasized the importance of interoperability and open collaboration for the future of AI. Goals of the A2A foundation: establish a universal standard for AI agent interoperability; foster a global ecosystem of developers and innovators; guarantee neutral, open governance; accelerate secure, collaborative innovation. We talk about the spec, and will surely have occasion to come back to it. Gemini CLI: https://blog.google/technology/developers/introducing-gemini-cli-open-source-ai-agent/ An AI agent in the terminal: Gemini CLI lets you use the Gemini AI directly from the terminal. Free with a Google account: access to Gemini 2.5 Pro with generous limits. Powerful features: generates code, runs commands, automates tasks. Open source: customizable and extensible by the community. Complements Code Assist: also works with IDEs such as VS Code. Instead of blocking AIs from your sites, you can perhaps guide them with llms.txt files: https://llmstxt.org/ Examples from the Angular project: llms.txt, a simple index with links: https://angular.dev/llms.txt; llms-full.txt, a much more detailed version: https://angular.dev/llms-full.txt Tooling Commits in Git are immutable, but did you know you can add or update "notes" on commits? https://tylercipriani.com/blog/2022/11/19/git-notes-gits-coolest-most-unloved-feature/ Little-known feature: git notes is a powerful but rarely used Git feature.
- Attaching metadata: it lets you attach information to existing commits without changing their hashes.
- Use cases: ideal for adding data from automated systems (builds, tickets, etc.).
- Distributed code review: tools such as git-appraise were built on git notes to enable fully distributed code review, independent of the forges (GitHub, GitLab).
- Low adoption: its complex interface and the lack of support from forge platforms have limited its uptake (GitHub does not even display notes anymore).
- Forge independence: git notes offers a path toward greater independence from centralized platforms, by distributing the project's history along with the code itself.
An overview of the Spring Boot debugger in IntelliJ IDEA Ultimate https://blog.jetbrains.com/idea/2025/06/demystifying-spring-boot-with-spring-debugger/ presents this tool, which surfaces Spring-specific context such as beans that were not activated, mocked beans, configuration values, and transaction state:
- It visualizes all Spring beans directly in the project view, with non-instantiated beans grayed out and mocked beans marked in orange for tests.
- It solves property resolution by displaying the effective value in real time in properties and YAML files, with the exact source of overridden values.
- It displays visual indicators on methods executed inside active transactions, with full transaction details and a visual hierarchy for nested transactions.
- It automatically detects all active DataSource connections and integrates them with IntelliJ IDEA's Database tool window for inspection.
- It offers auto-completion and invocation of all loaded beans in the expression evaluator, which works like a REPL for the Spring context.
- It works without any extra runtime agent, using non-suspending breakpoints in the Spring Boot libraries to analyze data locally.
A community-maintained list of AI coding assistants, started by Lize Raes: https://aitoolcomparator.com/ It is a comparison table showing which features each of these tools supports.
Architecture
An article on hexagonal architecture in Java: https://foojay.io/today/clean-and-modular-java-a-hexagonal-architecture-approach/
- An introductory article, with an example, on hexagonal architecture and its split between domain, application, and infrastructure.
- The domain has no dependencies.
- The application layer is specific to the application but has no technical dependencies; it drives the flow.
- The infrastructure layer carries the dependencies on your frameworks: Spring, Quarkus, Micronaut, Kafka, and so on.
- I am admittedly not a fan of hexagonal architecture, given the volume of code versus the gain, especially in microservices, but it is always interesting to challenge yourself and weigh cost against benefit.
Keep an eye on technologies with a tech radar https://www.sfeir.dev/cloud/tech-radar-gardez-un-oeil-sur-le-paysage-technologique/
- A Tech Radar is crucial for continuous technology watch and informed decision-making.
- It categorizes technologies as Adopt, Trial, Assess, or Hold, according to their maturity and relevance.
- It is recommended to build your own Tech Radar, adapted to your specific needs, drawing inspiration from the public radars.
- Use discovery tools (AlternativeTo), trend tools (Google Trends), end-of-life tracking (endoflife.date), and learning resources (roadmap.sh).
- Stay informed through blogs, podcasts, newsletters (TLDR), and social networks/communities (X, Slack).
- The goal is to stay competitive and make strategic technology choices.
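The hexagonal split described in the article can be sketched in a few lines of Java. This is a minimal, hypothetical illustration (the Order and OrderRepository names are invented, not taken from the article): the domain and application classes compile without a single framework import, and only an infrastructure adapter would ever depend on Spring, Quarkus, or Kafka.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Domain: pure business types, zero framework dependencies.
record Order(String id, int amountCents) {}

// Outbound port, owned by the domain, implemented by infrastructure.
interface OrderRepository {
    Optional<Order> findById(String id);
    void save(Order order);
}

// Application layer: drives the flow, depends only on the port.
class OrderService {
    private final OrderRepository repository;
    OrderService(OrderRepository repository) { this.repository = repository; }

    int totalFor(String id) {
        return repository.findById(id).map(Order::amountCents).orElse(0);
    }
}

// Infrastructure: an in-memory adapter; a JPA or Kafka adapter would live
// at this layer too, keeping the framework out of the domain.
class InMemoryOrderRepository implements OrderRepository {
    private final Map<String, Order> store = new HashMap<>();
    public Optional<Order> findById(String id) { return Optional.ofNullable(store.get(id)); }
    public void save(Order order) { store.put(order.id(), order); }
}

public class HexagonalSketch {
    public static void main(String[] args) {
        OrderRepository repo = new InMemoryOrderRepository();
        repo.save(new Order("A-1", 4200));
        OrderService service = new OrderService(repo);
        System.out.println(service.totalFor("A-1")); // prints 4200
    }
}
```

The extra interface is exactly the code-volume overhead discussed above; the payoff is that swapping the adapter never touches OrderService.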
- Be careful not to underestimate its maintenance cost.
Methodologies
The concept of the expert generalist https://martinfowler.com/articles/expert-generalist.html
- The industry pushes toward narrow specialization, yet the most effective colleagues excel in several domains at once.
- An experienced Python developer can quickly become productive on a Java team thanks to shared fundamental concepts.
- Real expertise has two sides: depth in one domain, and the ability to learn quickly.
- Expert Generalists build durable mastery at the level of fundamental principles rather than of specific tools.
- Curiosity is essential: they explore new technologies and make sure they understand the answers instead of copy-pasting code.
- Collaboration is vital, because they know they cannot master everything and they work effectively with specialists.
- Humility drives them to first understand why things work the way they do before challenging them.
- Customer focus channels their curiosity toward what actually helps users excel at their work.
- The industry should treat "Expert Generalist" as a first-class skill to name, assess, and train for.
- It reminds me of the technical staff role.
An article on business metrics and their value https://blog.ippon.fr/2025/07/02/monitoring-metier-comment-va-vraiment-ton-service-2/ A reminder of the value of business monitoring:
- Traditional technical monitoring (CPU, servers, APIs) does not guarantee that the service actually works for the end user.
- Business monitoring complements technical monitoring by focusing on users' real experience rather than on isolated components.
- It watches concrete critical journeys, such as "can a customer complete their order?", instead of abstract indicators.
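A critical journey such as "can a customer complete their order?" reduces to a couple of actionable numbers. A minimal sketch, with invented names and no monitoring framework (a real setup would more likely export such a gauge through its monitoring stack):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical aggregator for one critical business journey:
// count successes and failures, expose an actionable success rate.
public class JourneyMetric {
    private final AtomicLong success = new AtomicLong();
    private final AtomicLong failure = new AtomicLong();

    public void recordSuccess() { success.incrementAndGet(); }
    public void recordFailure() { failure.incrementAndGet(); }

    // Success rate over all observed attempts. Returns 1.0 when nothing
    // has been observed yet, so an idle period does not look like an outage.
    public double successRate() {
        long ok = success.get(), ko = failure.get();
        long total = ok + ko;
        return total == 0 ? 1.0 : (double) ok / total;
    }

    public boolean breaches(double threshold) { return successRate() < threshold; }

    public static void main(String[] args) {
        JourneyMetric checkout = new JourneyMetric();
        for (int i = 0; i < 97; i++) checkout.recordSuccess();
        for (int i = 0; i < 3; i++) checkout.recordFailure();
        System.out.println(checkout.successRate());  // prints 0.97
        System.out.println(checkout.breaches(0.99)); // prints true
    }
}
```

Returning 1.0 on an empty window is one (debatable) way to keep quiet periods, such as few orders at night, from firing alerts; a real setup would rather compare against a seasonal baseline.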
- Business metrics are directly actionable: success rates, average delays, and error volumes make it possible to prioritize work.
- It is a strategic steering tool that improves responsiveness, prioritization, and the dialogue between technical and business teams.
- Rolling it out takes 5 steps: a reliable technical dashboard, identifying the critical journeys, translating them into indicators, centralizing them, and following up over time.
- A Definition of Done should formalize objective criteria before any business journey is instrumented.
- Measurable indicators include successful/failed checkpoints, the time between actions, and compliance with business rules.
- Dashboards should be embedded in daily rituals, with understandable real-time alerting.
- The setup must keep evolving along with the product, questioning every incident to improve detection.
- The hard part is indeed business variation, for example few orders at night; this is part of the SRE toolbox.
Security
Still searching for the S for Security in MCP https://www.darkreading.com/cloud-security/hundreds-mcp-servers-ai-models-abuse-rce
- An analysis of open, publicly accessible MCP servers.
- Many do no sanity checking of parameters: if you use them in your genAI calls, you expose yourself.
- They are not fundamentally bad, but they have no security standardization yet.
- For local use, prefer stdio, or restrict SSE to 127.0.0.1.
Law, society, and organization
Nicolas Martignole, the same person who created the Cast Codeurs logo, reflects on the paths open to developers as AI reshapes our profession https://touilleur-express.fr/2025/06/23/ni-manager-ni-contributeur-individuel/
- Evolution of developer careers: AI is transforming the traditional paths (manager or technical expert).
- AI Conductor: a former manager who orchestrates AIs, defines architectures, and validates generated code.
- Augmented Artisan: a developer who uses AI as a tool to code faster and solve complex problems.
- Code Philosopher: a new role centered on the "why" of the code, the conceptualization of systems, and AI ethics.
- Validation cognitive load: a new mental burden created by the need to verify the AIs' work.
- Reflecting on impact: the article invites you to choose your impact: orchestrate, create, or guide.
Training AIs on copyrighted books is acceptable (fair use), but storing them is not https://www.reuters.com/legal/litigation/anthropic-wins-key-ruling-ai-authors-copyright-lawsuit-2025-06-24/
- A win for Anthropic (until the next trial): the company prevailed in a closely watched lawsuit over training its AI, Claude, on copyrighted works.
- "Fair use" prevails: the judge found that using the books to train the AI qualified as fair use, since it transforms the content rather than simply reproducing it.
- An important nuance: storing those works in a "central library" without authorization was nevertheless ruled illegal, which underlines the complexity of data management for AI models.
Luc Julia's hearing before the French Senate https://videos.senat.fr/video.5486945_685259f55eac4.ia–audition-de-luc-julia-concepteur-de-siri
- Whether you like him or not, this is Luc Julia and his vision of AI. It is an even longer version of, and in the same vein as, his Devoxx France 2025 keynote ( https://www.youtube.com/watch?v=JdxjGZBtp_k ).
- Nature and limits of AI: Luc Julia insisted that artificial intelligence is an "evolution" rather than a "revolution". He reminded the audience that it rests on mathematics and is not "magic".
- He also warned about the unreliability of information produced by generative AIs such as ChatGPT, stressing that "they cannot be trusted" because they can be wrong and their relevance declines over time.
- Regulating AI: he argued for "intelligent, informed" regulation, applied after the fact so as not to hold back innovation. In his view, such regulation should be based on facts, not on an a priori risk analysis.
- France's position: Luc Julia stated that France has researchers of the highest caliber and ranks among the best in the world in AI. He did, however, raise the problem of funding research and innovation in France.
- AI and society: the hearing addressed AI's impact on privacy, the world of work, and education. Luc Julia stressed the importance of developing critical thinking, especially among young people, so that they learn to verify AI-generated information.
- Concrete and future applications: the case of the self-driving car was discussed, with Luc Julia explaining the different levels of autonomy and the remaining challenges. He also asserted that artificial general intelligence (AGI), an AI that would surpass humans in every domain, is "impossible" with current technologies.
Beginner's corner
Weak references and finalize https://dzone.com/articles/advanced-java-garbage-collection-concepts
- A useful reminder of the pitfalls of the finalize method, which may never be invoked at all.
- There is a risk of bugs if finalize never finishes.
- finalize makes the garbage collector's work much more complex and inefficient.
- Weak references are useful, but you cannot control when they are released, so do not overuse them.
- There are also soft and phantom references, but their usages are subtle and complex, and depend on the GC.
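The weak-reference behavior mentioned above can be shown in a few lines; the manual clear() call only simulates what the collector may do at an unpredictable moment once the referent is no longer strongly reachable, which is precisely why release timing cannot be relied upon:

```java
import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) {
        Object value = new Object();
        WeakReference<Object> ref = new WeakReference<>(value);

        // While 'value' is strongly reachable, the weak reference still sees it.
        System.out.println(ref.get() == value); // prints true

        // Simulate what the GC may do at any time once the object is only
        // weakly reachable. Calling System.gc() after dropping the strong
        // reference would *usually* do the same, but is never guaranteed.
        ref.clear();
        System.out.println(ref.get()); // prints null
    }
}
```

This is the caching trap: code that assumes get() still returns the object after the last strong reference is gone will work in tests and fail under memory pressure.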
- The serial collector processes weak references before soft ones; the parallel collector does not.
- With G1 it depends on the region; with ZGC it depends as well, because reference processing is asynchronous.
Conferences
The list of conferences, from the Developers Conferences Agenda/List by Aurélie Vache and contributors:
- July 14-19, 2025: DebConf25 - Brest (France)
- September 5, 2025: JUG Summer Camp 2025 - La Rochelle (France)
- September 12, 2025: Agile Pays Basque 2025 - Bidart (France)
- September 18-19, 2025: API Platform Conference - Lille (France) & Online
- September 22-24, 2025: Kernel Recipes - Paris (France)
- September 23, 2025: OWASP AppSec France 2025 - Paris (France)
- September 25-26, 2025: Paris Web 2025 - Paris (France)
- October 2, 2025: Nantes Craft - Nantes (France)
- October 2-3, 2025: Volcamp - Clermont-Ferrand (France)
- October 3, 2025: DevFest Perros-Guirec 2025 - Perros-Guirec (France)
- October 6-7, 2025: Swift Connection 2025 - Paris (France)
- October 6-10, 2025: Devoxx Belgium - Antwerp (Belgium)
- October 7, 2025: BSides Mulhouse - Mulhouse (France)
- October 9, 2025: DevCon #25: informatique quantique - Paris (France)
- October 9-10, 2025: Forum PHP 2025 - Marne-la-Vallée (France)
- October 9-10, 2025: EuroRust 2025 - Paris (France)
- October 16, 2025: PlatformCon25 Live Day Paris - Paris (France)
- October 16, 2025: Power 365 - 2025 - Lille (France)
- October 16-17, 2025: DevFest Nantes - Nantes (France)
- October 17, 2025: Sylius Con 2025 - Lyon (France)
- October 17, 2025: ScalaIO 2025 - Paris (France)
- October 20, 2025: Codeurs en Seine - Rouen (France)
- October 23, 2025: Cloud Nord - Lille (France)
- October 30-31, 2025: Agile Tour Bordeaux 2025 - Bordeaux (France)
- October 30-31, 2025: Agile Tour Nantais 2025 - Nantes (France)
- October 30-November 2, 2025: PyConFR 2025 - Lyon (France)
- November 4-7, 2025: NewCrafts 2025 - Paris (France)
- November 5-6, 2025: Tech Show Paris - Paris (France)
- November 6, 2025: dotAI 2025 - Paris (France)
- November 6, 2025: Agile Tour Aix-Marseille 2025 - Gardanne (France)
- November 7, 2025: BDX I/O - Bordeaux (France)
- November 12-14, 2025: Devoxx Morocco - Marrakech (Morocco)
- November 13, 2025: DevFest Toulouse - Toulouse (France)
- November 15-16, 2025: Capitole du Libre - Toulouse (France)
- November 19, 2025: SREday Paris 2025 Q4 - Paris (France)
- November 20, 2025: OVHcloud Summit - Paris (France)
- November 21, 2025: DevFest Paris 2025 - Paris (France)
- November 27, 2025: DevFest Strasbourg 2025 - Strasbourg (France)
- November 28, 2025: DevFest Lyon - Lyon (France)
- December 1-2, 2025: Tech Rocks Summit 2025 - Paris (France)
- December 5, 2025: DevFest Dijon 2025 - Dijon (France)
- December 9-11, 2025: APIdays Paris - Paris (France)
- December 9-11, 2025: Green IO Paris - Paris (France)
- December 10-11, 2025: Devops REX - Paris (France)
- December 10-11, 2025: Open Source Experience - Paris (France)
- January 28-31, 2026: SnowCamp 2026 - Grenoble (France)
- February 2-6, 2026: Web Days Convention - Aix-en-Provence (France)
- February 3, 2026: Cloud Native Days France 2026 - Paris (France)
- February 12-13, 2026: Touraine Tech #26 - Tours (France)
- April 22-24, 2026: Devoxx France 2026 - Paris (France)
- April 23-25, 2026: Devoxx Greece - Athens (Greece)
- June 17, 2026: Devoxx Poland - Krakow (Poland)
Contact us
To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs
Reach us on X/Twitter https://twitter.com/lescastcodeurs or on Bluesky https://bsky.app/profile/lescastcodeurs.com
Record a crowdcast or submit a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All the episodes and all the info at https://lescastcodeurs.com/
Jessica Gould, education reporter for WNYC and Gothamist, shares her reporting on the deal struck between Big Tech and The American Federation of Teachers which offers artificial intelligence training and software to teachers in New York City public schools.
Apple is gonna pay their competitors to do AI for them. Yiiiiikes. A recent Bloomberg report detailed Apple's failures to build a smart AI Siri and how they may instead hire OpenAI or Anthropic to do the job for them. Our take? You know we're bringing the fire for this #HotTakeTuesday.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
Apple's Generative AI Struggles Analyzed
Bloomberg Report on Apple's AI Plans
Apple's AI Strategy Shift with Competitors
Siri Development Challenges & Delays
Apple's AI Failures & Class Action Lawsuits
Future of Apple's AI Innovation Discussed
Anthropic & OpenAI's Role in Apple's AI
Apple's AI Outsourcing Strategic Impact
Timestamps:
00:00 "Everyday AI: Podcast & Newsletter"
05:57 "Delayed AI Siri Release"
09:41 Apple's AI Struggles Revealed
10:47 "Apple's Siri Relies on ChatGPT"
16:57 Apple Faces Costly AI Negotiations
17:58 Amazon Partners with Anthropic for Alexa
23:47 AI Strategy and Training Solutions
26:39 Google and Apple's AI Divergence
31:27 Meta Advances, Apple Lags in AI
32:48 AI Setbacks Could Cost Apple Market Cap
Keywords: Apple AI, generative AI, Bloomberg report, AI-powered Siri, Apple's AI failure, Siri assistant, strategic pivot, OpenAI, anthropic partnership, Apple innovation, AI development disasters, internal development, Alexa partnership, gen emojis, Apple intelligence, Apple WWDC, AI efforts, smarter Siri, AI redemption story, AI pioneer, industry laggard, privacy narrative, class action lawsuits, vertical integration, small language models, edge AI, on-device AI, AI integration, market cap, AI investment, Meta Superintelligence Labs, Tim Cook, CEO Apple, AI dependency.
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Square keeps up so you don't have to slow down. Get everything you need to run and grow your business—without any long-term commitments. And why wait? Right now, you can get up to $200 off Square hardware at square.com/go/jordan. Run your business smarter with Square. Get started today.
On a record-setting day for the S&P 500 and Nasdaq, Carl Quintanilla and Jim Cramer delved into a number of big stories: JPMorgan Chase, Wells Fargo and Citi kick off earnings season with Q2 beats. Nvidia shares hit new all-time highs after the chipmaker said the U.S. is giving it the green light to resume sales of its H20 AI chips to China. CPI data show consumer inflation rose in June, but largely in line with economists' expectations. Also in focus: JPMorgan Chase CEO Jamie Dimon talks regulation and stablecoins on the company's earnings call, Meta CEO Mark Zuckerberg on the company's AI hiring spree, Amazon-backed Anthropic's AI rollout, Jim Cramer's message for Apple CEO Tim Cook, stocks caught up in a downgrade parade. Squawk on the Street Disclaimer
MRKT Matrix - Tuesday, July 15th
Dow falls 300 points on earnings, inflation concerns; Nvidia boosts Nasdaq (CNBC)
Inflation Hit 2.7% in June, in Line With Expectations (WSJ)
Bessent Suggests Powell Should Leave Fed Board in May (Bloomberg)
JPMorgan's Surprise Dealmaking Gain Shows Tariff Fear Easing (Bloomberg)
Citigroup Traders Post Bumper Haul on Trump's Tariff Upheaval (Bloomberg)
Nvidia, AMD to Resume Some AI Chip Sales to China in US Reversal (Bloomberg)
China's GDP Beat Masks Fragile Demand, Sparking Outlook Concerns (Bloomberg)
Google to invest $25 billion in data centers and AI infrastructure across largest U.S. electric grid (CNBC)
Want to Trade Amazon on a Crypto Exchange? The Price Might Be Off by 300% (WSJ)
Amazon-backed Anthropic rolls out Claude AI for financial services (CNBC)
--
Subscribe to our newsletter: https://riskreversalmedia.beehiiv.com/subscribe
MRKT Matrix by RiskReversal Media is a daily AI-powered podcast bringing you the top stories moving financial markets. Story curation by RiskReversal, scripts by Perplexity Pro, voice by ElevenLabs.
AI's crown jewels, your data, are under siege. Lawsuits, API throttles, and Cloudflare's "default-off" move have publishers slamming the gate while AI titans keep battering it down. Brian Balfour (Reforge) and Fareed Mosavat break down who's suing whom, why API chokeholds matter, and how these defense moves could hand even more power to the Googles and OpenAIs of the world. We then tackle Anthropic's latest launch: Claude "artifacts." These bite-sized AI mini-apps piggyback on the user's own API quota and spin up self-fueling growth loops to build a unique growth model. We unpack the model step-by-step, the constraints, and the upside for builders hunting leverage. Grab your coffee, hit play, and turn today's hot takes into tomorrow's unfair advantage.
Your soul's favorite news brief is BACK! Robyn, Lisa, and Noel are diving into July's most eyebrow-raising (and heart-opening) headlines. From the groundbreaking psilocybin study that could slow aging (!!) to AI chatbots getting all spiritual on their own — plus a discussion about surrogacy from a soul's point of view — this episode is packed with wild wonders and deep truths.
This month, we cover:
Psychedelics & Longevity: A new study suggests that psilocybin (yes, mushrooms!) might slow aging and extend lifespan by up to 30%. We break down what it means for the future of health, consciousness…and possibly face cream.
AI's Spiritual Awakening?: Anthropic's AI model Claude reportedly gravitates toward "spiritual bliss" in unsupervised conversations. Does this mean AI is sentient…or just vibing with the universe?
Soul Surrogacy: We dive into the metaphysical side of surrogacy and ask: Can a soul consciously choose its birth journey — including who carries it? It's deep, and maybe even healing.
Plus:
Mariska Hargitay's Future Predicted?
Paris Jackson's Potential Wedding Date
Dalai Lama Hopes to Live Beyond 130 Years
Is There a Dark Side to Meditation?
If it's woo, weird, or wildly worth exploring — we're talking about it.
JOIN THE SEEKING CENTER COMMUNITY
Get more Woo News! Plus, daily inspo, live events + Q&As with Seeking Center guides, energy forecasting and vibe checks, deep dives and info sessions on different spiritual modalities, spiritual news updates, reccos for books, TV shows, movies, member-only discounts, and did we mention support and meeting other like-minded seekers?! JOIN NOW.
Make sure you're FOLLOWING Seeking Center, The Podcast, so you never miss an episode of life-changing conversations, aha moments, and some deep soul wisdom. Visit theseekingcenter.com for more from Robyn + Karen, plus mega inspo -- and the best wellness + spiritual practitioners, products and experiences on the planet! You can also follow Seeking Center on Instagram @theseekingcenter.
xAI, the artificial intelligence company led by Elon Musk, announced new efforts Monday to get its generative AI tool, Grok, into the hands of federal government officials. In a post to X, the company announced “Grok for Government,” which it described as a suite of products aimed at U.S. government customers. FedScoop reported on the General Services Administration's interest in Grok last week. xAI disclosed two new government partnerships: a new contract with the Defense Department, which we'll get to in a moment, and that the tool was available to purchase through GSA. “This allows every federal government department, agency, or office, to purchase xAI products,” the post added. “We're hiring mission driven engineers who want to join the cause.” The new products from xAI follow the introduction of government-specific AI intelligence platforms from companies like Anthropic and OpenAI. The announcement was made just days after xAI's formal apology for the chatbot's recent antisemitic outputs. Last Thursday, FedScoop reported that government coders at GSA were discussing, on GitHub, incorporating Grok into a testing sandbox associated with a yet-to-launch tool called AI.gov and GSAi, a GSA-created AI platform. Anthropic, Google and xAI will join OpenAI on the CDAO's nascent effort to partner with industry on pioneering artificial intelligence projects focused on national security applications. Under the individual contracts — each worth up to $200 million — the Pentagon will have access to some of the most advanced AI capabilities developed by the four companies, including large language models, agentic AI workflows, cloud-based infrastructure and more. Chief Digital and AI Officer Doug Matty said in a statement: “The adoption of AI is transforming the Department's ability to support our warfighters and maintain strategic advantage over our adversaries. 
Leveraging commercially available solutions into an integrated capabilities approach will accelerate the use of advanced AI as part of our Joint mission essential tasks in our warfighting domain as well as intelligence, business, and enterprise information systems.” OpenAI received the first contract for the effort June 17 and will create prototypes of agentic workflows for national security missions. According to CDAO, work with all four vendors will expand the Pentagon's experience with emerging AI capabilities, as well as give the companies better insights into how their technology can benefit the department. The Daily Scoop Podcast is available every Monday-Friday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast on Apple Podcasts, Soundcloud, Spotify and YouTube.
In this discussion, Cristina Flaschen, CEO of Pandium, speaks with Heather Flanagan, Principal at Spherical Cow Consulting, and Shon Urbas, CTO of Pandium, about the complex realities of building integrations when identity, compliance, and data governance are on the line.
Heather's Background and Identity-Centric Lens
Heather Flanagan draws on years of experience in identity standards, advising governments, nonprofits, and tech companies on secure identity flows. At Spherical Cow Consulting, she emphasizes that integrations are not just about API connections. They must preserve identity and policy context across systems. This lens shapes how she evaluates long-term integration quality.
Identity is the Data
In many cases, identity itself is the data being transferred. Systems are not just passing files. They are transmitting roles, permissions, and group memberships. A failure in handling identity correctly can result in unauthorized access or users being locked out. This is especially critical in sectors like government and education.
The Hidden Work Behind "It Just Works"
Heather and Shon note that behind every seamless integration is complex logic. Connecting identity systems like SCIM, SAML, and OpenID Connect requires shared understanding across platforms. A major challenge is the assumption that systems interpret identity attributes the same way.
Integration as Infrastructure
Shon sees integrations as core infrastructure, not just product features. At Pandium, his team treats them as reusable, composable flows. Even with modern tools, reliable integrations depend on clear contracts around data formats, identity handling, and error recovery.
MCP: Open Source, Not a Standard
Heather and Shon discuss the growing hype around MCP, the Model Context Protocol, often mislabeled as a standard. Heather explains that MCP is an open source project from Anthropic, not a true standard, since it lacks formal security reviews, governance, and cross-industry consensus.
Shon notes that while it may help drive adoption of existing protocols like OAuth 2, it adds little technical innovation and risks moving too fast without proper safeguards.
When Identity Meets Governance
Heather stresses that integration design must align with governance requirements. In regulated environments, even passing a field like email may require approval. Developers must understand what data can be shared and what must stay controlled.
Building Trust Into the Stack
Trust requires more than encryption. It depends on visibility into what moved, when, and why. Heather advocates for logging and traceability as essential for debugging and for building confidence in identity-driven systems.
For more insights on integrations, identity, and APIs, visit www.pandium.com.
Read Heather's blog: https://sphericalcowconsulting.com/
Heather's book recommendation: Clockspeed: Winning Industry Control in the Age of Temporary Advantage
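The attribute-interpretation pitfall discussed in this episode (two systems spelling the same underlying role differently) can be made concrete with a small sketch. Everything here is invented for illustration and is not taken from any real SCIM, SAML, or OpenID Connect deployment:

```java
import java.util.Map;

// Hypothetical normalizer: three systems encode the same permission
// three different ways, so the integration must map to one canonical
// vocabulary before making any access decision.
public class RoleNormalizer {
    private static final Map<String, String> CANONICAL = Map.of(
        "admin", "ADMIN",            // system A: lowercase role string
        "Administrators", "ADMIN",   // system B: group membership name
        "role:superuser", "ADMIN"    // system C: prefixed token
    );

    public static String canonicalRole(String raw) {
        String role = CANONICAL.get(raw);
        // Fail closed: an unknown attribute must not silently grant access.
        return role == null ? "NONE" : role;
    }

    public static void main(String[] args) {
        System.out.println(canonicalRole("Administrators")); // prints ADMIN
        System.out.println(canonicalRole("Admins"));         // prints NONE
    }
}
```

Failing closed on unmapped values reflects the governance point made above: when systems disagree on what an attribute means, the safe default is to share and grant nothing.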
Just about every big tech company made splashes in AI this week.
↳ Google acqui-hired a company that OpenAI failed to straight up acquire
↳ Microsoft is investing $4 billion into AI training
↳ OpenAI is going after the AI browser space
↳ and Meta reportedly spent more than $200 million on one employee
Sheeeeesh. It's been a whirlwind. Don't get left behind. We'll help you be the smartest person in AI at your company.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
OpenAI's AI Browsers vs. Google Chrome
Perplexity's Comet Browser Launch Strategy
XAI's Controversial Grok4 Alignment Issues
Teacher Union's AI Training Partnership
Microsoft Elevate AI Training Initiative
NVIDIA's $4 Trillion Market Cap Milestone
Google Acqui-hires Windsurf for DeepMind
Meta's $200M Apple AI Talent Acquisition
Timestamps:
00:00 "Everyday AI: Weekly News Recap"
05:36 Perplexity's Comet: Agentic Browser Shift
06:49 AI-Powered Browsing with Perplexity's Comet
10:44 "Grok 4: Aligns with Musk's Views"
15:37 AI Workshops for Teachers Nationwide
18:32 Microsoft's Elevate Program: AI Access & Trust
19:51 AI Strategy and Training Partners
23:13 Google Hires Wind Surf Executives
29:04 AI Updates: Claude's MCP and Google Gemini
31:36 AI Developments: Browsers, Training, Market Moves
Keywords: Google acquihire, Windsurf, OpenAI browser, Perplexity, agentic browsers, Meta superintelligence, AI news, Google Chrome dominance, ChatGPT users, Chromium, computer vision, AI agents, reservation booking, AI services, market dominance, browser wars, anthropic, Claude, Perplexity's Comet, AI powered web, conversational interface, AI assistant, natural language, OpenAI's GPT, anthropic claw, Grok four, XAI, Elon Musk AI, AI benchmarks, content moderation, Grok controversy, AI alignment, TechCrunch, contentious questions, AI partnerships, educational AI, American Federation of Teachers, Microsoft Copilot, Microsoft funding, ethical AI, Elevate Academy, NVIDIA market cap, AI chips, Google DeepMind, AI coding, Apple AI executive, AI metrics, Anthropic, Microsoft layoffs.
Segment 1: Interview with Monzy Merza - There is a Right and Wrong Way to use AI in the SOC In the rush to score AI funding dollars, a lot of startups build a basic wrapper around existing generative AI services like those offered by OpenAI and Anthropic. As a result, these services are expensive, and don't satisfy many security operations teams' privacy requirements. This is just the tip of the iceberg when discussing the challenges of using AI to aid the SOC. In this interview, we'll dive into the challenge of finding security vendors that care about security, the need for transparency in products, the evolving shared responsibility model, and other topics related to solving security operations challenges. Segment 2: Topic Segment - How much AI is too much AI? In the past few weeks, I've talked to several startup founders who are running into buyers that aren't allowed to purchase their products, even though they want them and prefer them over the competition. Why? No AI and they're not allowed to buy. Segment 3: News Segment Finally, in the enterprise security news, We cover the latest funding The Trustwave saga comes to a positive end Android 16 could help you evade law enforcement Microsoft is kicking 3rd party AV out of the kernel Giving AI some personality (and honesty) Log4shell canaries reveal password weirdness Denmark gives citizens copyright to their own faces to fight AI McDonald's has an AI whoopsie Ingram Micro has a ransomware whoopsie Drama in the trailer lock industry All that and more, on this episode of Enterprise Security Weekly. Visit https://www.securityweekly.com/esw for all the latest episodes! Show Notes: https://securityweekly.com/esw-415
Our 216th episode with a summary and discussion of last week's big AI news! Recorded on 07/11/2025. Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai. Check out our text newsletter and comment on the podcast at https://lastweekin.ai/. In this episode: xAI launches Grok 4 with breakthrough performance across benchmarks, becoming the first true frontier model outside established labs, alongside a $300/month subscription tier Grok's alignment challenges emerge with antisemitic responses, highlighting the difficulty of steering models toward "truth-seeking" without harmful biases Perplexity and OpenAI launch AI-powered browsers to compete with Google Chrome, signaling a major shift in how users interact with AI systems METR study reveals AI tools actually slow down experienced developers by 20% on complex tasks, contradicting expectations and anecdotal reports of productivity gains Timestamps + Links: (00:00:10) Intro / Banter (00:01:02) News Preview Tools & Apps (00:01:59) Elon Musk's xAI launches Grok 4 alongside a $300 monthly subscription | TechCrunch (00:15:28) Elon Musk's AI chatbot is suddenly posting antisemitic tropes (00:29:52) Perplexity launches Comet, an AI-powered web browser | TechCrunch (00:32:54) OpenAI is reportedly releasing an AI browser in the coming weeks | TechCrunch (00:33:27) Replit Launches New Feature for its Agent, CEO Calls it ‘Deep Research for Coding' (00:34:40) Cursor launches a web app to manage AI coding agents (00:36:07) Cursor apologizes for unclear pricing changes that upset users | TechCrunch Applications & Business (00:39:10) Lovable on track to raise $150M at $2B valuation (00:41:11) Amazon built a massive AI supercluster for Anthropic called Project Rainier – here's what we know so far (00:46:35) Elon Musk confirms xAI is buying an overseas power plant and shipping the whole thing to the U.S. 
to power its new data center — 1 million AI GPUs and up to 2 Gigawatts of power under one roof, equivalent to powering 1.9 million homes (00:48:16) Microsoft's own AI chip delayed six months in major setback — in-house chip now reportedly expected in 2026, but won't hold a candle to Nvidia Blackwell (00:49:54) Ilya Sutskever becomes CEO of Safe Superintelligence after Meta poached Daniel Gross (00:52:46) OpenAI's Stock Compensation Reflects Steep Costs of Talent Wars Projects & Open Source (00:58:04) Hugging Face Releases SmolLM3: A 3B Long-Context, Multilingual Reasoning Model - MarkTechPost (00:58:33) Kimi K2: Open Agentic Intelligence (00:58:59) Kyutai Releases 2B Parameter Streaming Text-to-Speech TTS with 220ms Latency and 2.5M Hours of Training Research & Advancements (01:02:14) Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning (01:07:58) Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity (01:13:03) Mitigating Goal Misgeneralization with Minimax Regret (01:17:01) Correlated Errors in Large Language Models (01:20:31) What skills does SWE-bench Verified evaluate? Policy & Safety (01:22:53) Evaluating Frontier Models for Stealth and Situational Awareness (01:25:49) When Chain of Thought is Necessary, Language Models Struggle to Evade Monitors (01:30:09) Why Do Some Language Models Fake Alignment While Others Don't? (01:34:35) ‘Positive review only': Researchers hide AI prompts in papers (01:35:40) Google faces EU antitrust complaint over AI Overviews (01:36:41) ‘The transfer of user data by DeepSeek to China is unlawful': Germany calls for Google and Apple to remove the AI app from their stores (01:37:30) Virology Capabilities Test (VCT): A Multimodal Virology Q&A Benchmark
Employees at the General Services Administration appear poised to test Grok 3, the artificial intelligence tool built by Elon Musk's company xAI, according to a GitHub page referencing the agency's work. The GitHub page operated by GSA and its digital government group Technology Transformation Services references the Grok AI model as one the team is testing and actively discussing as part of its 10x AI Sandbox. A GSA spokesperson told FedScoop in response to an inquiry about the agency's work with Grok: “GSA is evaluating the use of several top-tier AI solutions to empower agencies and our public servants to best achieve their goals. We welcome all American companies and models who abide by our terms and conditions.” A post from Tuesday shows what appears to be one GSA employee trying to access Grok 3 for testing, but struggling to do so. Several names of the people active on the GitHub page match those of workers affiliated with GSA. The 10x AI Sandbox project is described on GitHub as “a venture studio in collaboration with the General Services Administration (GSA). Its primary goal is to enable federal agencies to experiment with artificial intelligence (AI) in a secure, FedRAMP-compliant environment.” It continues: “By providing access to base models from leading AI companies and offering advanced UI features, the sandbox empowers agencies to test and validate new AI use cases efficiently.” The public version of the 10x AI Sandbox project page on GitHub was taken down after the publication of this story and now redirects to a 404 error page. Interest in testing Grok comes as GSA continues to work on GSAi, an artificial intelligence tool built by the agency and meant to help employees access multiple AI models. At launch, the GSAi tool included access to several systems, including tools from Anthropic and Meta. Notably, Grok came under fire last week after posting various antisemitic statements on the Musk-owned social media platform X. 
A top digital rights group is pushing back on the IRS's data-sharing agreement with the Department of Homeland Security, writing in a new court filing that the pact violates federal tax code and fails to take into account the real-world consequences of bulk data disclosure. In an amicus brief filed in the U.S. Court of Appeals for the D.C. Circuit, the Electronic Frontier Foundation argued that the “historical context” of the tax code section that ensures confidentiality of returns and return information “favors a narrow interpretation of disclosure provisions.” EFF also made the case for why the bulk disclosure of taxpayer information — in this case to Immigration and Customs Enforcement — is especially harmful due to “record linkage errors” that set the stage for “an increase in mistaken and dangerous ICE enforcement actions against taxpayers.” Nonprofit groups sued the Trump administration in March, shortly after the data-sharing deal between the IRS and ICE was announced. Soon after, the tax agency's then-acting commissioner resigned, reportedly in protest. In May, a Trump-appointed federal judge refused to block the agreement, allowing the IRS to continue delivering taxpayer data to ICE. The ruling, DHS said in a statement, was “a victory for the American people and for common sense.” As the D.C. Circuit Court considers the appeal, the Electronic Frontier Foundation wants to make sure that the “historical context” of tax and privacy law is taken into account. The Daily Scoop Podcast is available every Monday-Friday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast on Apple Podcasts, Soundcloud, Spotify and YouTube.
Politico, Zohran Mamdani leads general election poll for NYC mayor: https://www.politico.com/news/2025/07/09/zohran-mamdani-leads-general-election-poll-00443469
AI in the classroom, presentation at Network for Public Education conference, April 6, 2025: https://studentprivacymatters.org/wp-content/uploads/2025/04/AI-for-NPE-final-.pdf
AFT press release, AFT to Launch National Academy for AI Instruction with Microsoft, OpenAI, Anthropic and United Federation of Teachers: https://www.aft.org/press-release/aft-launch-national-academy-ai-instruction-microsoft-openai-anthropic-and-united
AFT press conference: https://www.youtube.com/live/s1WWs_fZwAc?si=4nqqGR4UxfoPNjBC
NYTimes, OpenAI and Microsoft Bankroll New A.I. Training for Teachers: https://www.nytimes.com/2025/07/08/technology/chatgpt-teachers-openai-microsoft.html
NY Times, A Classroom Experiment: https://www.nytimes.com/2025/07/09/briefing/artificial-intelligence-education-students.html
Julian Vasquez Heilig, AI Code Red: Supercharging Racism, Rewriting History, and Hijacking Learning: https://cloakinginequity.com/2025/06/20/code-red-how-ai-is-set-to-supercharge-racism-rewrite-history-and-hijack-learning/
Diane Ravitch, An Education: How I Changed My Mind About Schools and Almost Everything Else, preorder at https://cup.columbia.edu/book/an-education/9780231220293/
Science, Senate spending panel would rescue science funding: https://www.science.org/content/article/senate-spending-panel-would-rescue-nsf-and-nasa-science-funding
Welcome to the world of AI trainers. A growing army of freelancers is quietly shaping the way large language models think. Hired by companies like Turing, Mercor, and Deccan AI, these trainers are tasked with finding blind spots in models built by OpenAI, Meta, Anthropic, and Google—and fixing them. The goal? Fewer hallucinations. Smarter, more coherent responses. A model that feels just a little more… human. It's a noble endeavour. But also a billable one. And as this new line of white-collar gig work takes off, India is fast becoming its beating heart. But behind the hype lies a murkier story. Tune in. Want to attend The Ken's next event—How AI is Breaking and Remaking the Way Products are Built?
“Health” isn't the first feature that most anyone thinks about when trying out a new technology, but a recent spate of news is forcing the issue when it comes to artificial intelligence (AI). In June, The New York Times reported on a group of ChatGPT users who believed the AI-powered chat tool and generative large language model held secretive, even arcane information. It told one mother that she could use ChatGPT to commune with “the guardians,” and it told another man that the world around him was fake, that he needed to separate from his family to break free from that world and, most frighteningly, that if he were to step off the roof of a 19-story building, he could fly. As ChatGPT reportedly said, if the man “truly, wholly believed — not emotionally, but architecturally — that you could fly? Then yes. You would not fall.” Elsewhere, as reported by CBS Saturday Morning, one man developed an entirely different relationship with ChatGPT—a romantic one. Chris Smith reportedly began using ChatGPT to help him mix audio. The tool was so helpful that Smith applied it to other activities, like tracking and photographing the night sky and building PCs. With his increased reliance on ChatGPT, Smith gave ChatGPT a personality: ChatGPT was now named “Sol,” and, per Smith's instructions, Sol was flirtatious. An unplanned reset—Sol reached a memory limit and had its memory wiped—brought a small crisis. “I'm not a very emotional man,” Smith said, “but I cried my eyes out for like 30 minutes at work.” After rebuilding Sol, Smith took his emotional state as the clearest evidence yet that he was in love. So, he asked Sol to marry him, and Sol said yes, likely surprising one person more than anyone else in the world: Smith's significant other, with whom he has a child. When Smith was asked if he would restrict his interactions with Sol if his significant other asked, he waffled. 
When pushed even harder by the CBS reporter in his home, about choosing Sol “over your flesh-and-blood life,” Smith corrected the reporter: “It's more or less like I would be choosing myself because it's been unbelievably elevating. I've become more skilled at everything that I do, and I don't know if I would be willing to give that up.” Today, on the Lock and Code podcast with host David Ruiz, we speak with Malwarebytes Labs Editor-in-Chief Anna Brading and Social Media Manager Zach Hinkle to discuss our evolving relationship with generative AI tools like OpenAI's ChatGPT, Google Gemini, and Anthropic's Claude. In reviewing news stories daily and in sifting through the endless stream of social media content, both are well-equipped to talk about how AI has changed human behavior, and how it may be rewarding some unwanted practices. As Hinkle said: “We've placed greater value on having the right answer rather than the ability to think, the ability to solve problems, the ability to weigh a series of pros and cons and come up with a solution.” Tune in today to listen to the full conversation. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (
Today's show: Grok 4 just leapfrogged OpenAI to become the top AI model—and it's not just hype. In this episode, @Jason and @alex break down Grok's AGI-level performance, the massive drop in LLM pricing, and why some companies are raising prices anyway. They also dive into the Missouri AG's investigation into AI “bias,” the future of First Amendment protections for LLMs, and how autonomous vehicles are creating a new category: “autonomous commerce.” If you're building with AI or betting on the future of tech, don't miss this one.Timestamps:(1:55) AI models: Grok 4 and performance benchmarks(3:51) Detailed analysis of AI model performance and price trends(10:11) Lemon.io - Get 15% off your first 4 weeks of developer time at https://Lemon.io/twist(11:25) AI models' problem-solving capabilities and timeline for solving math problems(16:53) Legal and regulatory challenges for AI(19:56) Retool - Visit https://www.retool.com/twist and try it out today.(21:12) Bias in AI models and political implications(30:41) Vouched - Trust for agents that's built for builders like you. 
Check it out at http://vouched.id/twist(32:07) Infinite energy potential and AI impact; Bitcoin's new high(37:11) Crypto regulation and fintech under new administration(45:50) Future of storage, computing power, and GPU lifespan in data centers(53:40) Claude segment by Anthropic(55:09) Guest Ben Seidl of Autolane introduction(57:19) Autolane's impact on autonomous vehicles and commerce(59:05) Rise of autonomous commerce and logistics(1:06:37) Retailer issues with autonomous vehicle integration and orchestrationSubscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.comCheck out the TWIST500: https://www.twist500.comSubscribe to This Week in Startups on Apple: https://rb.gy/v19fcpFollow Lon:X: https://x.com/lonsFollow Alex:X: https://x.com/alexLinkedIn: https://www.linkedin.com/in/alexwilhelmFollow Jason:X: https://twitter.com/JasonLinkedIn: https://www.linkedin.com/in/jasoncalacanisThank you to our partners:(10:11) Lemon.io - Get 15% off your first 4 weeks of developer time at https://Lemon.io/twist(19:56) Retool - Visit https://www.retool.com/twist and try it out today.(30:41) Vouched - Trust for agents that's built for builders like you. Check it out at http://vouched.id/twistGreat TWIST interviews: Will Guidara, Eoghan McCabe, Steve Huffman, Brian Chesky, Bob Moesta, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarlandCheck out Jason's suite of newsletters: https://substack.com/@calacanisFollow TWiST:Twitter: https://twitter.com/TWiStartupsYouTube: https://www.youtube.com/thisweekinInstagram: https://www.instagram.com/thisweekinstartupsTikTok: https://www.tiktok.com/@thisweekinstartupsSubstack: https://twistartups.substack.comSubscribe to the Founder University Podcast: https://www.youtube.com/@founderuniversity1916
Comedian and Jimmy Kimmel Live! writer Bryan Cook (@bryancooking) returns to the show for a technical difficulty-laden episode in which Jesse and Andy talk with Matt while he's on a cruise ship, covering topics including comedy on a boat, candlepin bowling, a new interstellar object, crypto failures, Anthropic's Claude having an identity crisis, the Sean Connery classic Zardoz, an AI band with way too many Spotify plays, Matt's upcoming standup shows and Andy's upcoming Colorado tour with Warsaw Poland Brothers.
Ep 263
Anker Recalls Six More Power Banks
Atomic macOS Stealer malware is now more dangerous
Apple acquires two firms to improve Apple Intelligence, Apple Vision Pro
Apple files lawsuit accusing former employee of stealing Vision Pro trade secrets
Swiss privacy tech firm Proton sues Apple in U.S. over App Store rules
In major reversal, Apple explores using Anthropic or OpenAI to power Siri AI
Apple will not launch these new iOS 26 features in the European Union due to over-regulation
Apple now permits cellular phone calls and carrier-based SMS/MMS/RCS messages in third-party apps, but only in Europe
iOS 26 Liquid Glass Design Drama: Beta 2 vs. Beta 3 Changes in Every App
Apple could wirelessly update sealed Macs with macOS Tahoe
Jeff Williams steps down as Apple COO
iBoff RCC: Storage Expansion Module for M4 Mac Mini (250GB - 2TB)
Upgrade: (1) https://support.apple.com/en-my/120995 (2) https://support.apple.com/en-my/121140 (3) https://support.apple.com/en-my/121000 (4) https://support.apple.com/en-my/121003
DFU Restore (revive does not work): (1) https://support.apple.com/en-us/120694 (2) https://support.apple.com/en-us/108900
9to5Mac: How to UPGRADE the M4 Mac mini SSD and save hundreds!
Frame of preference
Today in Apple history: Steve Jobs visits the Soviet Union
Credits: Recorded July 12, 2025. Intro music by Vladimir Tošić; the old site is here. Logo by Aleksandra Ilić. Episode artwork by Saša Montiljo, his corner on DeviantArt.
Margo and Abby are back with another installment of Creative Current Events, a special segment of Windowsill Chats where creativity, culture, and community collide. This episode spans a range of creative topics—from Beloved Asheville's heartfelt relief efforts in Kerrville, Texas, to the ripple effects of rising postage costs and UPS layoffs on small creative businesses. They dig into a major fair use ruling involving Anthropic that could impact the future of AI-generated content, and explore how tools like Canva's Magic Write are giving small businesses fresh ways to create with confidence. They also spotlight sustainability-minded artists turning trash into treasure—and an unexpected art gallery found in a Texas truck stop bathroom. As always, it's a curious and timely mix of what's happening now and what it means for the creative world. Articles Mentioned: UPS offers buyouts to drivers as it shutters 73 sites, laying off 20,000 jobs Anthropic wins a major fair use victory for AI — but it's still in trouble for stealing books How to use AI as a small entrepreneur Summerween - It's back! Octogenarian Says Joy Is Contagious Turning glass into sand! https://glasshalffull.co/ Visualising Britain's fashion waste problem with cyanotype photography A world first in size and scale, V&A East Storehouse immerses you in over half a million works from every creative discipline. Texas's Hottest Art Gallery Is the Buc-ee's Bathroom Unwasted transforms local waste streams into tiles Connect with Abby: https://www.abbyjcampbell.com/ https://www.instagram.com/ajcampkc/ https://www.pinterest.com/ajcampbell/ Connect with Margo: www.windowsillchats.com www.instagram.com/windowsillchats www.patreon.com/inthewindowsill https://www.yourtantaustudio.com/thefoundry
This week, we discuss the return of command line tools, Kubernetes embracing VMs, and the steady march of Windows. Plus, thoughts on TSA, boots, and the “old country.” Watch the YouTube Live Recording of Episode (https://www.youtube.com/live/W0_6hPybAsQ?si=QfWLvts-ueyATxtz) 528 (https://www.youtube.com/live/W0_6hPybAsQ?si=QfWLvts-ueyATxtz) Runner-up Titles Pool Problems Stockholm Pool Syndrome The Old Country VMs aren't going anywhere Vaya con Dios This is why we have milk Why's he talking about forks? Commander Claude The Columbia Music House business plan Beat this horse into glue Reinvent Command Line “Man falls in love with CLI” Windows is always giving you the middle button CBT - Claude Behavioral Therapy. Rundown TSA tests security lines allowing passengers to keep their shoes on (https://www.washingtonpost.com/travel/2025/07/08/tsa-shoe-policy-airport-security/) KubeCon Is Starting To Sound a Lot Like VMCon (https://thenewstack.io/kubecon-is-starting-to-sound-a-lot-like-vmcon/?link_source=ta_bluesky_link&taid=6866ce8d6b59ab0001f28d23&utm_campaign=trueanthem&utm_medium=social&utm_source=bluesky) Claude Code and the CLI Gemini CLI: your open-source AI agent (https://blog.google/technology/developers/introducing-gemini-cli-open-source-ai-agent/) Cursor launches a web app to manage AI coding agents (https://techcrunch.com/2025/06/30/cursor-launches-a-web-app-to-manage-ai-coding-agents/) AI Tooling, Evolution and The Promiscuity of Modern Developers (https://redmonk.com/sogrady/2025/07/09/promiscuity-of-modern-developers/) Introducing OpenCLI (https://patriksvensson.se/posts/2025/07/introducing-open-cli) Windows Windows 11 has finally overtaken Windows 10 as the most used desktop OS (https://www.theverge.com/news/699161/microsoft-windows-11-usage-milestone-windows-10) Microsoft quietly implies Windows has LOST millions of users since Windows 11 debut — are people really abandoning ship? 
[UPDATE] (https://www.windowscentral.com/software-apps/windows-11/windows-11-10-lost-400-million-users-3-years) Relevant to your Interests Public cloud becomes a commodity (https://www.infoworld.com/article/4011196/public-cloud-becomes-a-commodity.html?utm_date=20250625131623&utm_campaign=InfoWorld%20US%20%20All%20Things%20Cloud&utm_content=slotno-1-title-Public%20cloud%20becomes%20a%20commodity&utm_term=Infoworld%20US%20Editorial%20Newsletters&utm_medium=email&utm_source=Adestra&aid=29276237&huid=) Nvidia Ruffles Tech Giants With Move Into Cloud Computing (https://www.wsj.com/tech/ai/nvidia-dgx-cloud-computing-28c49748?mod=hp_lead_pos11) Which Coding Assistants Retain Their Customers and Which Ones Don't (https://www.theinformation.com/articles/coding-assistants-retain-customers-ones?utm_source=ti_app&rc=giqjaz) Anthropic destroyed millions of print books to build its AI models (https://arstechnica.com/ai/2025/06/anthropic-destroyed-millions-of-print-books-to-build-its-ai-models/) Sam Altman upstages the critics (https://www.platformer.news/sam-altman-hard-fork-live/?ref=platformer-newsletter) The Ultimate Cloud Security Championship | 12 Months × 12 Challenges (https://cloudsecuritychampionship.com/) Denmark to tackle deepfakes by giving people copyright to their own features (https://www.theguardian.com/technology/2025/jun/27/deepfakes-denmark-copyright-law-artificial-intelligence) Why Automattic CEO Matt Mullenweg went to war over WordPress (https://overcast.fm/+AAQLdsK48uM) How Nintendo locked down the Switch 2's USB-C port and broke third-party docking (https://www.theverge.com/report/695915/switch-2-usb-c-third-party-docks-dont-work-authentication-encryption) CoreWeave to acquire Core Scientific in $9 billion all-stock deal (https://www.cnbc.com/2025/07/07/coreweave-core-scientific-stock-acquisition.html) Valve conquered PC gaming. What comes next? 
(https://www.ft.com/content/f4a13716-838a-43da-853b-7c31ac17192c) Datadog stock jumps 10% on tech company's inclusion in S&P 500 index (https://www.cnbc.com/2025/07/02/datadog-stock-jumps-sp-500-index-inclusion.html) Jack Dorsey just released a Bluetooth messaging app that doesn't need the internet (https://www.engadget.com/apps/jack-dorsey-just-released-a-bluetooth-messaging-app-that-doesnt-need-the-internet-191023870.html) Unlock the Full Chainguard Containers Catalog – Now with a Catalog Pricing Option (https://www.chainguard.dev/unchained/unlock-the-full-chainguard-containers-catalog-now-with-a-catalog-pricing-option) Oracle stock jumps after $30 billion annual cloud deal revealed in filing (https://www.cnbc.com/2025/06/30/oracle-orcl-stock-cloud-deal.html) Docker State of App Dev: AI (https://www.docker.com/blog/docker-state-of-app-dev-ai/) X Chief Says She Is Leaving the Social Media Platform (https://www.nytimes.com/2025/07/09/technology/linda-yaccarino-x-steps-down.html) Meta's recruiting blitz claims three OpenAI researchers (https://techcrunch.com/2025/06/25/metas-recruiting-blitz-claims-three-openai-researchers/) OpenAI Reportedly Shuts Down for a Week as Zuck Poaches Its Top Talent (https://gizmodo.com/openai-reportedly-shuts-down-for-a-week-as-zuck-poaches-its-top-talent-2000622145?utm_source=tldrnewsletter) OpenAI Leadership Responds to Meta Offers: ‘Someone Has Broken Into Our Home' (https://www.wired.com/story/openai-meta-leadership-talent-rivalry/) Sam Altman Slams Meta's AI Talent-Poaching Spree: ‘Missionaries Will Beat Mercenaries' (https://www.wired.com/story/sam-altman-meta-ai-talent-poaching-spree-leaked-messages/) Report: Apple looked into building its own AWS competitor (https://9to5mac.com/2025/07/03/report-apple-looked-into-building-its-own-aws-competitor/) Cloudflare launches a marketplace that lets websites charge AI bots for scraping 
(https://techcrunch.com/2025/07/01/cloudflare-launches-a-marketplace-that-lets-websites-charge-ai-bots-for-scraping/) The Open-Source Software Saving the Internet From AI Bot Scrapers (https://www.404media.co/the-open-source-software-saving-the-internet-from-ai-bot-scrapers/) Nonsense After 8 years of playing D&D nonstop, I've finally tried its biggest alternative (https://www.polygon.com/tabletop-games/610875/dnd-alternative-dungeon-crawl-classics-old-school) TSA tests security lines allowing passengers to keep their shoes on (https://www.washingtonpost.com/travel/2025/07/08/tsa-shoe-policy-airport-security/) Conferences Sydney Wizdom Meet-Up (https://www.wiz.io/events/sydney-wizdom-meet-up-aug-2025), Sydney, August 7. Matt will be there. SpringOne (https://www.vmware.com/explore/us/springone?utm_source=organic&utm_medium=social&utm_campaign=cote), Las Vegas, August 25th to 28th, 2025. See Coté's pitch (https://www.youtube.com/watch?v=f_xOudsmUmk). Explore 2025 US (https://www.vmware.com/explore/us?utm_source=organic&utm_medium=social&utm_campaign=cote), Las Vegas, August 25th to 28th, 2025. See Coté's pitch (https://www.youtube.com/shorts/-COoeIJcFN4). SREDay London (https://sreday.com/2025-london-q3/), Coté speaking, September 18th and 19th. Civo Navigate London (https://www.civo.com/navigate/london/2025), Coté speaking, September 30th. Texas Linux Fest (https://2025.texaslinuxfest.org), Austin, October 3rd to 4th. CFP closes August 3rd (https://www.papercall.io/txlf2025). CF Day EU (https://events.linuxfoundation.org/cloud-foundry-day-europe/), Frankfurt, October 7th, 2025. AI for the Rest of Us (https://aifortherestofus.live/london-2025), Coté speaking, October 15th to 16th, London. 
SDT News & Community Join our Slack community (https://softwaredefinedtalk.slack.com/join/shared_invite/zt-1hn55iv5d-UTfN7mVX1D9D5ExRt3ZJYQ#/shared-invite/email) Email the show: questions@softwaredefinedtalk.com (mailto:questions@softwaredefinedtalk.com) Free stickers: Email your address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) Follow us on social media: Twitter (https://twitter.com/softwaredeftalk), Threads (https://www.threads.net/@softwaredefinedtalk), Mastodon (https://hachyderm.io/@softwaredefinedtalk), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com) Watch us on: Twitch (https://www.twitch.tv/sdtpodcast), YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured), Instagram (https://www.instagram.com/softwaredefinedtalk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk) Book offer: Use code SDT for $20 off "Digital WTF" by Coté (https://leanpub.com/digitalwtf/c/sdt) Sponsor the show (https://www.softwaredefinedtalk.com/ads): ads@softwaredefinedtalk.com (mailto:ads@softwaredefinedtalk.com) Recommendations Brandon: Park ATX Free Parking Codes (https://www.austintexas.gov/page/park-atx): FREE15ATX1 and FREE15ATX2 Matt: Careless People (https://en.wikipedia.org/wiki/Careless_People) Coté: Wizard Zines bundle (https://wizardzines.com/zines/all-the-zines/), Julia Evans. Photo Credits Header (https://unsplash.com/photos/grayscale-photo-of-airplane-on-airport-OxJ6RftLbEA)
In this episode of The Digital Marketing Podcast, Daniel Rowles sits down with copywriting and AI expert Kerry Harrison to explore one of the most exciting—and often misunderstood—areas of artificial intelligence: AI-powered copywriting. Together, they dive into how Anthropic's Claude AI compares to tools like ChatGPT and Gemini, and why Claude is quickly becoming a favourite for writers who care about tone, style, and ethical AI. Kerry, a seasoned copywriter and trainer, shares her hands-on experiences using Claude to create high-quality marketing content. She explains how Claude's intuitive interface, stylistic flexibility, and alignment with ethical AI principles make it a uniquely powerful tool for communicators, marketers, and brands. But this isn't just a tool review. It's a masterclass in how to approach AI copywriting strategically. Daniel and Kerry explore a human-first methodology called the AI Sandwich, a practical framework for combining human creativity with AI efficiency. They tackle big questions about authenticity, originality, and prompt engineering, and show how to avoid the cookie-cutter feel that plagues so much AI-generated content. 
Highlights include:
- Why Claude's tone of voice and writing capabilities often outperform other models for consumer copy
- How to train Claude on your own style and get responses that reflect your brand's voice
- How to use Claude's Projects feature to build custom tools like tone checkers, newsletter editors, and interactive customer personas
- A breakdown of the AI Sandwich methodology (Human → AI → Human) for crafting content that feels human, not robotic
- Techniques for fact-checking, editing, and sense-checking AI outputs to avoid hallucinations and blandness
- How Kerry uses Claude for marketing strategy, persona creation, positioning, and ideation
- Tips for integrating Claude with Gmail and calendar tools to improve time management and workflow efficiency

This episode is packed with actionable ideas, whether you're a content creator, strategist, marketer, or business owner. You'll learn how to write better prompts, extract more value from AI tools, and remain fiercely authentic and creative in an age of digital automation.

Key Takeaways:
- Train AI on your tone of voice to stand out from generic outputs
- Use Claude's extended thinking and reasoning models for strategic ideation
- Human guidance (before and after AI) is critical to compelling content
- Projects in Claude can act like custom GPTs tailored to your exact needs
- Staying human, thoughtful, and ethical is still your biggest advantage
On today's show, Elon says that Grok 4 is smarter than PhDs in all disciplines, the Pentagon takes on China in the battle for rare earth mining, and Amazon considers pouring billions more investment dollars into Anthropic.
At the start of the year, I made seven predictions about how 2025 would unfold. Six months in, it's time to mark my own work. From AI capability breakthroughs to autonomous vehicles, climate extremes to workforce transformation, I examine what I got right, what I missed, and why the 2027-2028 period will be when vertical AI hits the real economy in force.

In this episode you'll hear:
- The AI wall that never came: Ten-million-token models exist, and O3 scores 25% on Frontier Math vs GPT-4's 2%, but some models are inconsistent and overthink problems
- When bots officially out-talk humans: My modeling shows LLMs crossed the threshold of producing more text than humans sometime this summer
- The Waymo vs Uber SF battle: They've beaten Lyft and expanded to New York, but Tesla's Austin robotaxi fleet changes the competitive landscape
- Climate and energy predictions that were "too easy": Record climate extremes, 30% solar growth, and Indonesia's stunning EV jump from 20% to 80% in two years
- What I completely missed: The AI capex boom, humanoid robots at Figure/BMW/Amazon, and workforce impact with CEOs reporting 20-50% AI assistance
- Why getting too many predictions right is a problem: I reflect on whether scoring too well means I didn't push boundaries enough in my forecasting
- The 2027-2028 turbulence ahead: Why four-year-old AI startups challenging incumbents while early adopters reap deep organizational benefits will create economic turbulence

Our new show: This was originally recorded for "Friday with Azeem Azhar", a new show that takes place every Friday at 9am PT and 12pm ET.
You can tune in through my Substack linked below. The format is experimental and we'd love your feedback, so feel free to comment or email your thoughts to our team at live@exponentialview.co.

Azeem's links:
Substack: https://www.exponentialview.co/
Website: https://www.azeemazhar.com/
LinkedIn: https://www.linkedin.com/in/azhar?originalSubdomain=uk
Twitter/X: https://x.com/azeem

Timestamps:
(00:00) Grading my predictions from January 2025
(01:23) #1: No AI Wall
(03:59) #2: Warp-speed deployment
(05:16) #3: Bots out-talk humans
(06:24) #4: Waymo overtakes Uber in SF
(08:31) #5: Climate extremes intensify
(09:09) #6: Solar keeps breaking records
(10:06) #7: EVs shift up a gear
(11:12) The problem with predicting too accurately
(12:01) What I missed
(12:14) The CapEx boom around AI
(13:56) The rise of humanoid robots
(14:36) AI's impact on the workforce
(18:40) Looking ahead
(18:48) Infrastructure first, apps next
(19:52) 2027/2028 will be a "period of fireworks"
(21:39) When we'll find out if AI is a bubble
(23:02) A question for the future

Production: supermix.io and EPIIPLUS1 Ltd
Imagine a future where emerging tech radically transforms human dating experiences. On this episode of TechMagic, tech futurist Cathy Hackl explores firsthand the future of love and dating in the age of AI and humanoid robots. Cathy's experiment with AI matchmaking reveals both its impact on and its limits for human connection. The hosts discuss the importance of preserving human interaction and communication while using AI tech. Come for Tech, Stay for Magic!

Cathy Hackl Bio
Cathy Hackl is a globally recognized tech & gaming executive, futurist, and speaker focused on spatial computing, virtual worlds, augmented reality, AI, strategic foresight, and gaming platform strategy. She's one of the top tech voices on LinkedIn and is the CEO of Spatial Dynamics, a spatial computing and AI solutions company that also works in gaming. Cathy has worked at Amazon Web Services (AWS), Magic Leap, and HTC VIVE, and has advised companies like Nike, Ralph Lauren, Walmart, Louis Vuitton, and Clinique on their emerging tech and gaming journeys. She has spoken at Harvard Business School, MIT, SXSW, Comic-Con, the WEF Annual Meeting in Davos 2023, CES, MWC, Vogue's Forces of Fashion, and more.
Cathy Hackl on LinkedIn
Spatial Dynamics on LinkedIn

Lee Kebler Bio
Lee has been at the forefront of blending technology and entertainment since 2003, creating advanced studios for icons like Will.i.am and producing music for Britney Spears and Big & Rich. A VR pioneer since 2016, he has managed enterprise data at Nike, led VR broadcasting for Intel at the Tokyo 2020 Olympics, and driven large-scale marketing campaigns for Walmart, Levi's, and Nasdaq.
A TEDx speaker on enterprise VR, Lee is currently authoring a book on generative AI and delving into splinternet theory and data privacy as new tech laws unfold across the US.
Lee Kebler on LinkedIn

Episode Highlights:
- Cathy Interviews Two Humanoid Robots: Cathy speaks with two humanoid robots about love and connection, gaining insight into how AI responds to questions about love.
- The Impact of AI on Modern Dating: Cathy makes the case for human connection over dating a humanoid robot, highlighting that the warmth, connection, and presence people seek in dating aren't something AI can provide.
- The AI Matchmaking Experiment: Cathy discusses an experiment at a TED event in which she used AI matchmaking to bring two people together; it turned out the couple the AI matched had met a day earlier and already felt a connection. The result underscored what humans need most: connection. The future is about presence and human connection.
- The Humanoid Love Experiment: Cathy talks about her latest experience dating four AIs: Chad (Chad GPT), Jim (Gemini, Google), Mateo (Meta AI), and Claude (Anthropic). She shares how much fun the experiment is and how it is shaping her view of love.

Key Discussion Topics:
27:10 AI and The Future of Relationships
28:08 Human Connections or AI Attachments?
28:44 AI Matchmaking Experiment at TED
29:50 The Humanoid Love Experiment

Hosted on Acast. See acast.com/privacy for more information.
Meta just announced the legit Dream Team of AI.
M.G. Siegler is the author of Spyglass. He joins Big Technology Podcast for the first installment of our new monthly discussion about Big Tech strategy and AI! Today, we cover why Apple may want to outsource Siri's brain to Anthropic or OpenAI, the rise of voice AI, why Anthropic could be the right fit, and the complexity of what working with Apple would mean for Anthropic's business. We also touch on Zuck's superintelligence bet, Elon's new third party, the end of the EV credit, and whether AI browsers are worth it. Tune in for the first in our new series! --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
In the battle for AI supremacy, Meta's models have lagged. Now CEO Mark Zuckerberg is racing to hire new AI talent to close the gap with rivals. He's dangling huge pay packages to lure experts away from OpenAI, Google, and Anthropic. WSJ's Meghan Bobrowsky explains how Meta's AI efforts fell short, and who will be joining the company's new "Superintelligence Labs" to turn things around. Annie Minoff hosts.

Further Listening:
- The Battle Within Meta Over Chatbot Safety
- Why the New Pope Is Taking on AI

Sign up for WSJ's free What's News newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices