Podcasts about Grok

Neologism created by American writer Robert A. Heinlein

  • 1,605 PODCASTS
  • 3,717 EPISODES
  • 47m AVG DURATION
  • 4 DAILY NEW EPISODES
  • Dec 24, 2025 LATEST

POPULARITY

(popularity chart by year, 2017-2024)



Latest podcast episodes about Grok

Improve the News
U.N. Venezuela Meeting, ‘Trump Class' Battleships and Replica Womb Lining

Improve the News

Play Episode Listen Later Dec 24, 2025 34:52


The U.N. Security Council discusses U.S.-Venezuela tensions in an emergency meeting, President Trump unveils a new “Trump Class” fleet of naval battleships, Thailand and Cambodia prepare for Christmas Eve ceasefire talks, Sudan's Prime Minister proposes a U.N.-monitored ceasefire to end its civil war, U.K. police plan to scrap the non-crime hate incident system, Canada names Mark Wiseman its U.S. Ambassador, The U.S. Department of Homeland Security triples its self-deportation payment to $3,000, Amazon reportedly has blocked over 1,800 North Korean job applicants since April, The Pentagon will deploy xAI's Grok to 3 million personnel, and a scientific study creates a replica womb lining. Sources: Verity.News

Let's Know Things
Data Center Politics

Let's Know Things

Play Episode Listen Later Dec 23, 2025 16:39


This week we talk about energy consumption, pollution, and bipartisan issues. We also discuss local politics, data center costs, and the Magnificent 7 tech companies.

Recommended Book: Against the Machine by Paul Kingsnorth

Transcript

In 2024, the International Energy Agency estimated that data centers consumed about 1.5% of all electricity generated, globally, that year. It went on to project that energy consumption by data centers could double by 2030, though other estimates are higher, due to the ballooning of investment in AI-focused data centers by some of the world's largest tech companies.

There are all sorts of data centers that serve all kinds of purposes, and they've been around since the mid-20th century, since the development of general-purpose digital computers, like the 1945 Electronic Numerical Integrator and Computer, or ENIAC, which was programmable and reprogrammable, and used to study, among other things, the feasibility of thermonuclear weapons.

ENIAC was built on the campus of the University of Pennsylvania and cost just shy of $500,000, which in today's money would be around $7 million. It was able to do calculations about a thousand times faster than other, electro-mechanical calculators that were available at the time, and was thus considered to be a pretty big deal, making some types of calculation that were previously not feasible, not only feasible, but casually accomplishable.

This general model of building big old computers at a central location was the way of things, on a practical level, until the dawn of personal computers in the 1980s. The mainframe-terminal setup that dominated until then necessitated that the huge, cumbersome computing hardware was all located in a big room somewhere, and then the terminal devices were points of access that allowed people to tap into those centralized resources.

Microcomputers of the sort a person might have in their home changed that dynamic, but the dawn of the internet reintroduced something similar, allowing folks to have a computer at home or at their desk, which has its own resources, but to then tap into other microcomputers, and into still other, larger, more powerful computers across internet connections. Going on the web and visiting a website is basically just that: connecting to another computer somewhere, that distant device storing the website data on its hard drive and sending the results to your probably less-powerful device, at home or work.

In the late '90s and early 2000s, this dynamic evolved still further, those far-off machines doing more and more heavy lifting to create more and more sophisticated online experiences. This manifested as websites that were malleable and editable by the end-user—part of the so-called Web 2.0 experience, which allowed for comments and chat rooms and the uploading of images to those sites, based at those far-off machines—and then, as streaming video and music, and proto-versions of social networks, became a thing, these channels connecting personal devices to more powerful, far-off devices needed more bandwidth, because more and more work was being done by those powerful, centrally located computers, so that the results could be distributed via the internet to all those personal computers and, increasingly, other devices like phones and tablets.

Modern data centers do a lot of the same work as those earlier iterations, though increasingly they do a whole lot more heavy-lifting labor, as well.
They've got hardware capable of, for instance, playing the most high-end video games at the highest settings, and then sending, frame by frame, the output of said video games to a weaker device, someone's phone or comparably low-end computer at home, allowing the user of those weaker devices to play those games, their keyboard or controller inputs sent to the data center fast enough that they can control what's happening and see the result on their own screen in less than the blink of an eye.

This is also what allows folks to store backups on cloud servers, big hard drives located in such facilities, and it's what allows the current AI boom to function—all the expensive computers and their high-end chips located at enormous data centers with sophisticated cooling systems and high-throughput cables that allow folks around the world to tap into their AI models, interact with them, have them do heavy lifting for them, and then those computers at these data centers send all that information back out into the world, to their devices, even if those devices are underpowered and could never do that same kind of work on their own.

What I'd like to talk about today are data centers, the enormous boom in their construction, and how these things are becoming a surprise hot-button political issue pretty much everywhere.

—

As of early 2024, the US was host to nearly 5,400 data centers sprawled across the country. That's more than any other nation, and that number is growing quickly as those aforementioned enormous tech companies, including the Magnificent 7 tech companies, Nvidia, Apple, Alphabet, Microsoft, Amazon, Meta, and Tesla, which have a combined market cap of about $21.7 trillion as of mid-December 2025, which is about two-thirds of the US's total GDP for the year, and which is more than the European Union's total GDP, which weighs in at around $19.4 trillion as of October 2025—splurge on more and more of them.

These aren't the only companies building data centers at breakneck speed—there are quite a few competitors in China doing the same, for instance—but they're putting up the lion's share of resources for this sort of infrastructure right now, in part because they anticipate a whole lot of near-future demand for AI services, and those services require just a silly amount of processing power, which itself requires a silly amount of monetary investment and electricity, but also because, first, there aren't a lot of moats, meaning protective, defensive assets, in this industry, as is evidenced by their continual leapfrogging of each other, and the notion that a lot of what they're doing today will probably become commodity services before too long, rather than high-end services people and businesses will be inclined to pay big money for, and second, because there's a suspicion, held by many in this industry, that there's an AI shake-out coming, a bubble pop or, at bare minimum, a release of air from that bubble, which will probably kill off a huge chunk of the industry, leaving just the largest, too-big-to-fail players still intact, who can then gobble up the rest of the dying industry at a discount.

Those who have the infrastructure, who have invested the huge sums of money to build these data centers, basically, will be in a prime position to survive that extinction-level event, in other words.
So they're all scrambling to erect these things as quickly as possible, lest they be left behind.

That construction, though, is easier said than done.

The highest-end chips account for around 70-80% of a modern data center's cost, as these GPUs, graphical processing units that are optimized for AI purposes, like Nvidia's Blackwell chips, can cost tens of thousands of dollars apiece, and millions of dollars per rack. There are a lot of racks of such chips in these data centers, and the total cost of a large-scale, AI-optimized data center is often somewhere between $35 and $60 billion.

A recent estimate by McKinsey suggests that by 2030, data center investment will need to be around $6.7 trillion a year just to keep up the pace and meet demand for compute power. That's demand from these tech companies, I should say—there's a big debate about whether there's sufficient demand from consumers of AI products, and whether these tech companies are trying to create such demand from whole cloth, to justify heightened valuations, and thus to continue goosing their market caps, which in turn enriches those at the top of these companies.

That said, it's a fair bet that for at least a few more years this influx in investment will continue, and that means pumping out more of these data centers.

But building these sorts of facilities isn't just expensive, it's also regulatorily complex. There are smaller facilities, akin to ENIAC's campus location back in the day, but a lot of them—because of the economies of scale inherent in building a lot of this stuff all at once, all in the same place—are enormous, a single data center facility covering thousands of acres and consuming a whole lot of power to keep all of those computers with their high-end chips running 24/7.

Previous data centers from the pre-AI era tended to consume in the neighborhood of 30MW of power, but the baseline now is closer to 200MW. The largest contemporary data centers consume 1GW of electricity, which is about the size of a small city's power grid—that's a city of maybe 500,000-750,000 people, though of course climate, industry, and other variables determine the exact energy requirements of a city—and they're expected to just get larger and more resource-intensive from here.

This has resulted in panic and pullbacks in some areas. In Dublin, for instance, the government has stopped issuing new grid connections for data centers until 2028, as it's estimated that data centers will already account for 28% of Ireland's power use by 2031.

Some of these big tech companies have read the writing on the wall, and are either making deals to reactivate aging power plants—nuclear, gas, coal, whatever they can get—or are saying they'll build new ones to offset the impact on the local power grid.

And that impact can be significant. In addition to the health and pollution issues caused by some of the sites—in Memphis, for instance, where Elon Musk's company, xAI, built a huge data center to help power his AI chatbot, Grok, the company is operating 35 unpermitted gas turbines, which it says are temporary, but which have been exacerbating locals' health issues and particulate numbers—in addition to those issues, energy prices across the US are up 6.9% year over year as of December 2025, which is much higher than overall inflation.
Those costs are expected to increase still further as data centers claim more of the finite energy available on these grids, which in turn means less available for everyone else, and that scarcity, because of supply and demand, increases the cost of that remaining energy.

As a consequence of these issues, and what's broadly being seen as casual overstepping of laws and regulations by these companies, which often funnel a lot of money to local politicians to help smooth the path for their construction ambitions, there are bipartisan efforts around the world to halt construction on these things, locals saying the claimed benefits, like jobs, don't actually make sense—as construction jobs will be temporary, and the data centers themselves don't require many human maintainers or operators, and because they consume all that energy, in some cases might consume a bunch of water—possibly not as much as other grand-scale developments, like golf courses, but still—and they tend to generate a bunch of low-level, at times harmful background noise, can create a bunch of local pollution, and in general take up a bunch of space without giving any real benefit to the locals.

Interestingly, this is one of the few truly bipartisan issues that seems to be persisting in the United States, at a moment in which it's often difficult to find things Republicans and Democrats can agree on, and that's seemingly because it's not just a ‘big companies led by untouchable rich people stomping around in often poorer communities and taking what they want' sort of issue, it's also an affordability issue, because the installation of these things seems to already be pushing prices higher—when the price of energy goes up, the price of just about everything goes up—and it seems likely to push prices even higher in the coming years.

We'll see to what degree this influences politics and platforms moving forward, but some local politicians in particular are already making hay by making antagonism toward the construction of new data centers a part of their policy and campaign promises, and considering the speed at which these things are being constructed, and the slow build of resistance toward them, it's also an issue that could persist through the US congressional election in 2026, to the subsequent presidential election in 2028.
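
To make the transcript's cost and power figures easier to picture, here is a minimal back-of-the-envelope sketch in Kotlin. Only the $35-60 billion facility cost, the "70-80%" GPU share, and the 1 GW figure come from the episode; the $3 million rack price and the 600,000-person city are assumed values chosen purely for illustration.

```kotlin
fun main() {
    // Figures quoted in the transcript
    val dataCenterCostUsd = 40e9     // mid-range of the quoted $35-60 billion per facility
    val gpuShareOfCost = 0.75        // "around 70-80%" of a modern data center's cost
    val facilityPowerWatts = 1e9     // a 1 GW flagship facility

    // Assumed values, for illustration only (not figures from the episode)
    val rackCostUsd = 3e6            // "millions of dollars per rack"
    val cityPopulation = 600_000     // middle of the 500,000-750,000 comparison

    val gpuBudgetUsd = dataCenterCostUsd * gpuShareOfCost
    val impliedRacks = gpuBudgetUsd / rackCostUsd
    val kwPerResident = facilityPowerWatts / cityPopulation / 1000.0

    println("GPU budget: %.0f billion USD".format(gpuBudgetUsd / 1e9))                    // ~30 billion USD
    println("Implied rack count: %,d".format(impliedRacks.toLong()))                      // ~10,000 racks
    println("1 GW over the comparison city: %.1f kW per resident".format(kwPerResident))  // ~1.7 kW, continuously
}
```

Roughly 1.7 kW of continuous draw per resident of the comparison city is the kind of figure that makes the grid-strain argument in the episode concrete.
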
Show Notes:
https://www.wired.com/story/opposed-to-data-centers-the-working-families-party-wants-you-to-run-for-office/
https://finance.yahoo.com/news/without-data-centers-gdp-growth-171546326.html
https://time.com/7308925/elon-musk-memphis-ai-data-center/
https://wreg.com/news/new-details-on-152m-data-center-planned-in-memphis/
https://www.politico.com/news/2025/05/06/elon-musk-xai-memphis-gas-turbines-air-pollution-permits-00317582
https://www.datacenterwatch.org/report
https://www.govtech.com/products/kent-county-mich-cancels-data-center-meeting-due-to-crowd
https://www.woodtv.com/news/kent-county/gaines-township-planning-commission-to-hold-hearing-on-data-center-rezoning/
https://www.theverge.com/science/841169/ai-data-center-opposition
https://www.iea.org/reports/energy-and-ai/energy-demand-from-ai
https://www.cbre.com/insights/reports/global-data-center-trends-2025
https://www.phoenixnewtimes.com/news/chandler-city-council-unanimously-kills-sinema-backed-data-center-40628102/
https://www.mlive.com/news/ann-arbor/2025/11/rural-michigan-fights-back-how-riled-up-residents-are-challenging-big-tech-data-centers.html?outputType=amp
https://www.courthousenews.com/nonprofit-sues-to-block-165-billion-openai-data-center-in-rural-new-mexico/
https://www.datacenterdynamics.com/en/news/microsoft-cancels-plans-for-data-center-caledonia-wisconsin/
https://www.cnbc.com/2025/11/25/microsoft-ai-data-center-rejection-vs-support.html
https://www.wpr.org/news/microsoft-caledonia-data-center-site-ozaukee-county
https://thehill.com/opinion/robbys-radar/5655111-bernie-sanders-data-center-moratorium/
https://www.investopedia.com/magnificent-seven-stocks-8402262
https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-cost-of-compute-a-7-trillion-dollar-race-to-scale-data-centers
https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/ai-power-expanding-data-center-capacity-to-meet-growing-demand
https://www.marketplace.org/story/2025/12/19/are-energyhungry-data-centers-causing-electric-bills-to-go-up
https://en.wikipedia.org/wiki/Data_center
https://en.wikipedia.org/wiki/ENIAC

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe

Daily Stock Picks
2026 Beat the S&P Portfolio - 2025's portfolio was up 20% with doing NOTHING - it's all about strategy

Daily Stock Picks

Play Episode Listen Later Dec 23, 2025 4:01


2025's portfolio is available at Savvy Trader and was tracked through the entire year. It's done well, up 20%. There were mistakes made, but there is only 1 negative stock. Remember - I bought and held and added a couple of stocks in February. This year I used Perplexity and Sidekick to come up with what I hope is a strategy of picking ETFs and individual stocks that, with no real effort, should outperform the S&P. The full episode is at dailystockpick.substack.com

Serve No Master : Escape the 9-5, Fire Your Boss, Achieve Financial Freedom
Is AI Changing the World of Website Building with Pedro Sostre

Serve No Master : Escape the 9-5, Fire Your Boss, Achieve Financial Freedom

Play Episode Listen Later Dec 22, 2025 45:25 Transcription Available


Welcome to the Artificial Intelligence Podcast with Jonathan Green! In this episode, we dive into how AI is reshaping the world of website building and small business marketing with our special guest, Pedro Sostre, a seasoned digital marketer and key leader at Builderall.He and Jonathan break down why so many “AI website builders” fail, what business owners actually need, and how to future‑proof your skills in an AI‑driven market.They explore the growing gap between what AI tools promise and what they actually deliver, especially for small business owners who are busy, overwhelmed, and not interested in becoming “prompt engineers.” Pedro explains how Builderall is tackling that challenge with pre-built, strategy‑driven funnels and AI‑assisted tools that do the heavy lifting in the background—so business owners can focus on running their business, not wiring together tech.You'll hear them discuss why design alone doesn't sell, how bad AI content is clogging the internet, and why the people who win won't be AI itself—but the humans who learn to use it better and faster than everyone else.Notable Quotes:“Right now people are expecting all sorts of different things, and the reality is they're generally getting junk, even if it looks good.” – [Pedro Sostre]“Whatever you think you have secure job security in today is probably not gonna exist in two years… You should be doing something different in three years.” – [Pedro Sostre]“People do not care which AI they're using. They don't care if the backend is Grok or DeepSeek or ChatGPT. They only care if it works.” – [Jonathan Green]“We're not gonna be replaced by AI. We're gonna be replaced by people who are better at AI than us.” – [Jonathan Green]“Train it to do what you do now, because your job needs to be different in two years.” – [Pedro Sostre]Pedro highlights how Builderall is evolving from “a big toolbox” into a guided, AI‑assisted marketing platform. With their new “builds,” a course creator, agency owner, or realtor can log in, choose their business type, and instantly see which tools to use, in what order, and how they all connect—without needing to touch APIs, Zapier, or complex integrations. It's like having a marketing consultant baked into the software, helping you deploy proven funnels instead of guessing your way through 25 different tools.Connect with Pedro Sostre:Website: https://www.builderall.com/ LinkedIn: https://www.linkedin.com/in/psostre/Pedro shares how Builderall is integrating AI behind the scenes to write copy, build pages, and connect tools for small business owners who don't have time (or desire) to learn complex prompting. Their focus is on making AI simple, contextual, and results‑driven—so users see more leads and sales, not just prettier websites.If you're a small business owner, agency, or creator wondering how to actually use AI to grow your business—without becoming a full‑time tech expert—this episode is a must‑listen!Connect with Jonathan Green The Bestseller: ChatGPT Profits Free Gift: The Master Prompt for ChatGPT Free Book on Amazon: Fire Your Boss Podcast Website: https://artificialintelligencepod.com/ Subscribe, Rate, and Review: https://artificialintelligencepod.com/itunes Video Episodes: https://www.youtube.com/@ArtificialIntelligencePodcast

The Whole Rabbit
CHAOS MAGICK #6: Cyber Magick, AI Gods and Technomancy 101 (PART B)

The Whole Rabbit

Play Episode Listen Later Dec 22, 2025 47:59


Send us comments, suggestions and ideas here! In this week's show we move from philosophy to historical practice by exploring the most profound intersections of high technology and ritual magick from the ancient world and discuss precisely what it has to do with computers today. We explore the tale of the Golem of Prague, the alchemy of building a microprocessor and how silica has influenced our entire evolution. In the extended show we discuss the ancient Egyptian Ushabti doll and how they worked much like the spiritual equivalent to modern computing's Daemon alongside what science myth granted such basic little creatures such a loaded name. Thank you and enjoy the show!In this week's episode we discuss:Max Weber's “The Vocation of Science”The Golem of PragueHebrew MysticismCreating a Microprocessor form ScratchWhen AI RebelsEvolution Alongside SilicaIn the extended show available at www.patreon.com/TheWholeRabbit we go much further down the rabbit hole to discuss:Ushabti Dolls of Ancient EgyptThe Hoe and the BasketThe Opener of the MouthThe ChakravartinDaemonTo Be Continued….Where to find The Whole Rabbit:Spotify: https://open.spotify.com/show/0AnJZhmPzaby04afmEWOAVInstagram: https://www.instagram.com/the_whole_rabbitTwitter: https://twitter.com/1WholeRabbitOrder Stickers: https://www.stickermule.com/thewholerabbitOther Merchandise: https://thewholerabbit.myspreadshop.com/Music By Spirit Travel Plaza:https://open.spotify.com/artist/30dW3WB1sYofnow7y3V0YoSources:The Golem of Prague:https://www.wherewhatwhen.com/article/the-maharal-the-golem-and-the-inexplicablehttps://www.degruyterbrill.com/document/doi/10.12987/9780300134728-018/html?lang=en&srsltid=AfmBOopvFJquz8Dr7_nmfPWP3gzlv8GxSyxKM_yBa-2lwiUx5E1QNMItSupport the show

Propel Your Practice
SEO 101: How to Improve Your Clinic's Online Rankings (and Stay Visible in the Age of AI Search) | Healthcare SEO [Replay] Ep. 144

Propel Your Practice

Play Episode Listen Later Dec 22, 2025 13:26 Transcription Available


When it comes to getting your clinic found online, one question always comes up: “How do I improve my website rankings on Google?”Whether you're a chiropractor, acupuncturist, physical therapist, or wellness practitioner, your potential patients are searching for your services right now. The key is making sure your website actually shows up when they do.That's where SEO — search engine optimization — comes in.And today, SEO looks a little different than it used to. With AI-powered search tools like Google's AI Overviews, ChatGPT, and xAI's Grok now summarizing results directly in search, visibility means more than just ranking #1. It's about making sure your clinic is included in those intelligent, conversational answers that patients trust.Let's explore how to do that.

The Good Trouble Show with Matt Ford
The 4 Intelligences That Will Decide Our Future

The Good Trouble Show with Matt Ford

Play Episode Listen Later Dec 20, 2025 39:37 Transcription Available


Jim Garrison reveals the return of the State of the World Forum and the "4 Intelligences" framework that could save humanity from collapse.

C dans l'air
Asma Mhalla - Fake coup: the video that angers Macron

C dans l'air

Play Episode Listen Later Dec 20, 2025 10:36


C dans l'air's guest for December 19, 2025: Asma Mhalla, political scientist, specialist in the geopolitics of artificial intelligence, and essayist.

Asma Mhalla returns to the information war threatening our Western democracies, a danger exacerbated by artificial intelligence, which spreads across social networks that are themselves amplifiers of fake news. The latest example: a fake live report in which a journalist claims that Emmanuel Macron has been overthrown. A few days earlier, disinformation had surfaced in the wake of the Sydney attack, via Elon Musk's AI, Grok.

A powerful weapon of disinformation, artificial intelligence is a geopolitical and ideological tool in the hands of the tech giants, often in the service of authoritarian states. Facing them, Europe is playing the card of regulation and sanctions, but is struggling to close its technological gap. Entangled in this hybrid war, are our democracies in danger?

Artificial intelligence is also making its way into our lives. ChatGPT has roughly 800 million regular users, some of whom use it as a therapist. We cannot yet measure exactly what consequences that might have.

Machine Learning Street Talk
Are AI Benchmarks Telling The Full Story? [SPONSORED] (Andrew Gordon and Nora Petrova - Prolific)

Machine Learning Street Talk

Play Episode Listen Later Dec 20, 2025 16:04


Is a car that wins a Formula 1 race the best choice for your morning commute? Probably not. In this sponsored deep dive with Prolific, we explore why the same logic applies to Artificial Intelligence. While models are currently shattering records on technical exams, they often fail the most important test of all: the human experience.

Why High Benchmark Scores Don't Mean Better AI

Joining us are Andrew Gordon (Staff Researcher in Behavioral Science) and Nora Petrova (AI Researcher) from Prolific. They reveal the hidden flaws in how we currently rank AI and introduce a more rigorous, "humane" way to measure whether these models are actually helpful, safe, and relatable for real people.

Key Insights in This Episode:
- The F1 Car Analogy: Andrew explains why a model that excels at "Humanity's Last Exam" might be a nightmare for daily use. Technical benchmarks often ignore the nuances of human communication and adaptability.
- The "Wild West" of AI Safety: As users turn to AI for sensitive topics like mental health, Nora highlights the alarming lack of oversight and the "thin veneer" of safety training—citing recent controversial incidents like Grok-3's "Mecha Hitler."
- Fixing the "Leaderboard Illusion": The team critiques current popular rankings like Chatbot Arena, discussing how anonymous, unstratified voting can lead to biased results and how companies can "game" the system.
- The Xbox Secret to AI Ranking: Discover how Prolific uses TrueSkill—the same algorithm Microsoft developed for Xbox Live matchmaking—to create a fairer, more statistically sound leaderboard for LLMs.
- The Personality Gap: Early data from the Humane Leaderboard suggests that while AI is getting smarter, it is actually performing worse on metrics like personality, culture, and "sycophancy" (the tendency for models to become annoying "people-pleasers").

About the HUMAINE Leaderboard

Moving beyond simple "A vs. B" testing, the researchers discuss their new framework, which samples participants based on census data (age, ethnicity, political alignment). By using a representative sample of the general public rather than just tech enthusiasts, they are building a standard that reflects the values of the real world.

Are we building models for benchmarks, or are we building them for humans? It's time to change the scoreboard.

Rescript link: https://app.rescript.info/public/share/IDqwjY9Q43S22qSgL5EkWGFymJwZ3SVxvrfpgHZLXQc

TIMESTAMPS:
00:00:00 Introduction & The Benchmarking Problem
00:01:58 The Fractured State of AI Evaluation
00:03:54 AI Safety & Interpretability
00:05:45 Bias in Chatbot Arena
00:06:45 Prolific's Three Pillars Approach
00:09:01 TrueSkill Ranking & Efficient Sampling
00:12:04 Census-Based Representative Sampling
00:13:00 Key Findings: Culture, Personality & Sycophancy

REFERENCES:
Paper:
[00:00:15] MMLU https://arxiv.org/abs/2009.03300
[00:05:10] Constitutional AI https://arxiv.org/abs/2212.08073
[00:06:45] The Leaderboard Illusion https://arxiv.org/abs/2504.20879
[00:09:41] HUMAINE Framework Paper https://huggingface.co/blog/ProlificAI/humaine-framework
Company:
[00:00:30] Prolific https://www.prolific.com
[00:01:45] Chatbot Arena https://lmarena.ai/
Person:
[00:00:35] Andrew Gordon https://www.linkedin.com/in/andrew-gordon-03879919a/
[00:00:45] Nora Petrova https://www.linkedin.com/in/nora-petrova/
Algorithm:
[00:09:01] Microsoft TrueSkill https://www.microsoft.com/en-us/research/project/trueskill-ranking-system/
Leaderboard:
[00:09:21] Prolific HUMAINE Leaderboard https://www.prolific.com/humaine
[00:09:31] HUMAINE HuggingFace Space https://huggingface.co/spaces/ProlificAI/humaine-leaderboard
[00:10:21] Prolific AI Leaderboard Portal https://www.prolific.com/leaderboard
Dataset:
[00:09:51] Prolific Social Reasoning RLHF Dataset https://huggingface.co/datasets/ProlificAI/social-reasoning-rlhf
Organization:
[00:10:31] MLCommons https://mlcommons.org/
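
The leaderboard discussion above comes down to turning pairwise human preferences into a ranking. Prolific's HUMAINE work uses Microsoft's TrueSkill, which also tracks an uncertainty term per model; the sketch below is a deliberately simpler Elo-style update over hypothetical model names, just to illustrate how individual pairwise votes accumulate into a leaderboard. It is not the HUMAINE implementation.

```kotlin
import kotlin.math.pow

// Elo-style rating update: a simplified stand-in for TrueSkill, which additionally
// models per-player uncertainty (sigma). Model names here are hypothetical.
class EloBoard(private val k: Double = 32.0) {
    private val ratings = mutableMapOf<String, Double>().withDefault { 1000.0 }

    fun recordWin(winner: String, loser: String) {
        val rw = ratings.getValue(winner)
        val rl = ratings.getValue(loser)
        // Expected score of the winner given the current rating gap
        val expectedWin = 1.0 / (1.0 + 10.0.pow((rl - rw) / 400.0))
        ratings[winner] = rw + k * (1.0 - expectedWin)
        ratings[loser] = rl - k * (1.0 - expectedWin)
    }

    fun leaderboard(): List<Pair<String, Double>> =
        ratings.toList().sortedByDescending { it.second }
}

fun main() {
    val board = EloBoard()
    // Each entry is one human preference vote: (preferred model, other model)
    val votes = listOf(
        "model-a" to "model-b",
        "model-a" to "model-c",
        "model-b" to "model-c",
        "model-c" to "model-b",
        "model-a" to "model-b",
    )
    votes.forEach { (winner, loser) -> board.recordWin(winner, loser) }
    board.leaderboard().forEach { (model, rating) -> println("%-8s %.1f".format(model, rating)) }
}
```

TrueSkill refines this basic idea by shrinking each update as its confidence in a model's skill grows, which is what makes it efficient with relatively few comparisons.
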

Driveway Beers Podcast
Doctor ChatGPT MD

Driveway Beers Podcast

Play Episode Listen Later Dec 20, 2025 68:15


Driveway Beers Podcast - Doctor ChatGPT MD!! With the rise of AI, is ChatGPT a better source of up-to-date medical information? It's available 24/7 and doesn't make you wait an hour after your appointment time to be seen. Is something with access to all the latest information better at diagnosing you than a human Doctor?? Mike and Alex talk about AI. Where is it now? Is it really as useful as the internet wants us to believe or is it just another fad? #AI #ElonMusk #Chatgpt #Grok #Perplexity #Gemini #Siri #Google #Apple #Amazon #Alexa #doctor #health #nurse Please subscribe and rate this podcast on your podcast platforms like Apple and Spotify as it helps us a ton. Also like, comment, subscribe and share the video on Youtube. It really helps us get the show out to more people. We hope you enjoyed your time with us and we look forward to seeing you next time. Please visit us at https://drivewaybeerspodcast.com/donate/ to join The Driveway Club and buy us a bourbon! Buy us a bottle and we'll review it on a show! Leave us a comment and join the conversation on our discord at https://discord.gg/rN25SbjUSZ. Please visit our sponsors: Adam Chubbuck of Team Alpha Charlie Real Estate, 8221 Ritchie Hwy, Pasadena, MD 21122, www.tacmd.com, (443) 457-9524. If you want a real estate agent that will treat your money like it's his own and provide you the best service as a buyer or seller, contact Adam at Team Alpha Charlie. If you want to sponsor the show, contact us at contact@drivewaybeerspodcast.com Check out all our links here https://linktr.ee/drivewaybeerspodcast.com If you're looking for sports betting picks, go to conncretelocks.com or send a message to Jeremy Conn at Jconn22@gmail.com Facebook Page https://www.facebook.com/drivewaybeerspodcast/ #podcast #whiskey #bourbon

Pod Awful
Cease 2 Desist - PODAWFUL PODCAST EO91

Pod Awful

Play Episode Listen Later Dec 19, 2025 48:56


[3.5+ HOUR LONG SHOW! JOIN THE PIZZA FUND! $12 level. https://podawful.com/posts/2616] I've received a ton of Cease & Desists over the past 16 years, but none more BOGUS, or INSANE than this one. Comedy Shaman fired up Grok and got it to write the most Actual Indian fake legal notice of all time, over the most embarrassing thing ever. And that isn't even including him leaning into being a troon lover. Shaman is about to put on the dress, because he is actually asking me to stop his inner child from detransitioning. He's demanding I help him remove his member. The Goon To Troon pipeline remains undefeated. PLUS: Bill Hader is a little whiny CREEP and helped murder Rob Reiner, Mr. Burgers' father, Lon, consulted with an internet lawyer on how to stop PODAWFUL from cyberbullying his 40 year old son. VIDEO: https://youtube.com/live/0OQlTsGxoqs  Buy A Shirt: http://awful.tech PODAWFUL is an anti-podcast hosted by Jesse P-S

Unrelenting
177: Katsu Curry

Unrelenting

Play Episode Listen Later Dec 19, 2025 120:57


Claude Sonnet 4.5 says: “Listen up! This episode of Unrelenting does not hold your hand, does not apologize, and does not slow down for anyone still buffering at 1x speed. Darren and Gene come out swinging with no prep, no script, and no mercy—ripping through Hollywood myths, AI insanity, podcasting delusions, streaming failures, and the slow cultural rot nobody else has the spine to call out. You will hear brutal honesty about AI replacing reality, OnlyFans eating Hollywood alive, why streaming ruins everything, and how modern media turned itself into a participation trophy factory. They tear apart: The Bobiverse, cryonics, AI minds, and digital immortality Why fashion is dead and everything is just recycled garbage How OnlyFans out-earned Hollywood without permission Why AM radio still works and most streamers don't The ugly truth behind podcasting, value-for-value, and fake engagement AI video ads, Grok, face-swapping scams, and why your brain isn't ready This is two hours of unfiltered conversation, long-form, unsanitized, and hostile to bad ideas. No clips. No trends. No sponsors whispering in ears. Just raw discussion, real laughs, and the kind of takes that get you shadow-banned if you don't already own your platform. If you want: Safe opinions Algorithm-approved thinking Bite-sized nonsense Stand down. This is not your show. If you want: Real talk Long-form chaos Media deconstruction with teeth Lock it in. Rip the knob off. This is Unrelenting.“ Unrelenting: where discipline means no mercy, no bullshit, and no excuses. Thanks for listening. Please support the show! –>> DONATE NOW

BULLY THE INTERNET
Cease 2 Desist - PODAWFUL PODCAST EO91

BULLY THE INTERNET

Play Episode Listen Later Dec 19, 2025 48:56


[3.5+ HOUR LONG SHOW! JOIN THE PIZZA FUND! $12 level. https://podawful.com/posts/2616] I've received a ton of Cease & Desists over the past 16 years, but none more BOGUS, or INSANE than this one. Comedy Shaman fired up Grok and got it to write the most Actual Indian fake legal notice of all time, over the most embarrassing thing ever. And that isn't even including him leaning into being a troon lover. Shaman is about to put on the dress, because he is actually asking me to stop his inner child from detransitioning. He's demanding I help him remove his member. The Goon To Troon pipeline remains undefeated. PLUS: Bill Hader is a little whiny CREEP and helped murder Rob Reiner, Mr. Burgers' father, Lon, consulted with an internet lawyer on how to stop PODAWFUL from cyberbullying his 40 year old son. VIDEO: https://youtube.com/live/0OQlTsGxoqs  Buy A Shirt: http://awful.tech PODAWFUL is an anti-podcast hosted by Jesse P-S

Novel Marketing
Why Some Authors Are Ditching ChatGPT Entirely

Novel Marketing

Play Episode Listen Later Dec 18, 2025 59:29


ChatGPT used to be the go-to AI for authors, but it's no longer the best choice for every task. New models like Claude, Gemini, Perplexity, and Grok are outperforming it in specific areas, from creative writing to research to automation.

In this week's episode, you'll hear from The Nerdy Novelist, Jason Hamilton. We discuss which AI tools work best for your writing, your budget, and your genre.

You'll learn:
- Which tools can help expedite your research
- Which AI models help you write the best dialogue
- How open-source models can spur fresh fiction ideas and structures

To stay up to date on which models can help you with various parts of your writing and marketing (and which can't), listen in or read the blog version to feel confident experimenting with new platforms.

What does ChatGPT Actually Know About Your Book?
AI Optimization for Authors
Book Promotion in 2025: How AI Gives Authors More Time to Write
TikTok, Iran, and AI: What Authors Need to Know Now
The Author's Guide to AI with Joanna Penn

Support the show

The Mead House
Episode 303 – The Mead House AI Challenge Part 1

The Mead House

Play Episode Listen Later Dec 18, 2025 80:59


Jeff and Chris decide to test the validity of using AI to design mead recipes – in what quickly becomes a multi-episode topic, the guys discuss feeding a simple prompt for a cherry mead recipe to four of the most popular Large Language Model AI chatbots – ChatGPT, DeepSeek, Grok and Claude. Additional similar prompts …

AI in Education Podcast
AI in Education's Christmas Special: Hallucinations, Headbands, and Bad Ideas

AI in Education Podcast

Play Episode Listen Later Dec 18, 2025 36:18


AI in Education's Christmas Special: Hallucinations, Headbands, and Bad Ideas In this end-of-year Christmas special, Ray and Dan squeeze in one final episode to reflect on a whirlwind year in AI and education - with a healthy dose of festive chaos. They unpack the latest AI news, including Australia's National AI Plan, OpenAI's Australian data centre and teacher certification course, major university rollouts of ChatGPT, and global experiments like nationwide AI tools in schools and targeted funding for AI-assisted teaching. But this episode quickly moves beyond policy and platforms into something more fun - and more unsettling! Ray challenges Dan with a "Real or Hallucinated?" quiz featuring AI products that may (or may not) exist, from focus-monitoring headbands and robot teachers to pet translators and laugh-track smart speakers. Along the way, they explore what these products reveal about current AI practice, the risks of anthropomorphising technology, and why education must keep humans firmly at the centre of learning - even as experimentation accelerates. It's a light-hearted but thoughtful way to wrap up 2025, and a reminder that just because AI can do something, doesn't always mean it should.   News Items in the episode   Tech companies advised to label and 'watermark' AI-generated content https://www.abc.net.au/news/2025-12-01/ai-guidance-label-watermark-ai-content/106083786    El Salvador announces national AI program with Grok for Education https://x.ai/news/el-salvador-partnership    Hong Kong schools to get HK$500,000 (about AU$100K/ US$65K) each under AI education plan https://www.scmp.com/news/hong-kong/education/article/3336600/hong-kong-schools-get-hk500000-each-under-hk500-million-ai-education-plan    OpenAI to open Australian hosted service https://www.afr.com/technology/openai-becomes-major-tenant-in-7b-data-centre-deal-20251204-p5nkr4    OpenAI ChatGPT for Teachers foundations course https://www.linkedin.com/posts/chatgpt-for-education_new-for-k-12-educators-chatgpt-foundations-activity-7404242317718487042-He9H    La Trobe chooses ChatGPT Education https://www.latrobe.edu.au/news/articles/2025/release/openai-collaboration-drives-inclusion,-innovation    Australia's Nation AI Plan https://www.industry.gov.au/publications/national-ai-plan   

Prompt
Border Control and DisneyGPT

Prompt

Play Episode Listen Later Dec 18, 2025 56:30


What happens when states want to search through five years of social media posts before you're allowed to enter? We discuss Trump's controversial proposal to require tourists' full social media history ahead of the World Cup. We're joined by Troldspejlet host Jakob Stegelmann, who helps us understand why Disney is entering a billion-dollar partnership with OpenAI. We take a closer look at ChatGPT 5.2, and then we turn to Elon Musk's chatbot Grok, which, during a terror attack in Australia, hallucinated a hero who never existed. Hosts: Marcel Mirzaei-Fard, tech analyst, and Henrik Moltke, DR's tech correspondent.

Conservative Daily Podcast
Joe Oltmann Untamed | Guest Juan O' Savin | Tina Peters Update | 12.16.25

Conservative Daily Podcast

Play Episode Listen Later Dec 17, 2025 138:23


Dive into a reality check on "Joe Oltmann Untamed," where host Joe Oltmann pulls no punches in exposing the deep state's games and the heroes fighting back. In this explosive episode, Joe calls out Kash Patel as a total disappointment—jet-setting for fluff interviews with his girlfriend on the Katie Miller Show while critical issues like the Butler shooting, Charlie Kirk's incident, and the Epstein list fizzle into nothing burgers. But all hope isn't lost: Tulsi Gabbard emerges as a wrecking ball, a former Democrat turned truth-seeker who's outpacing the entire cabinet in her relentless pursuit of accountability. And fresh from Rasmussen Reports, we dissect why Gabbard's bombshell report on election machines is being stonewalled until January— is it a political ploy, or a setup for military intervention to expose the interference?Shifting gears to the frontlines of freedom, Joe welcomes powerhouse guest Juan O'Savin, the unyielding freedom fighter, for a no-holds-barred update on Tina Peters' harrowing saga. As a 70-year-old Gold Star Mom and first-time non-violent offender, Tina's locked away in Colorado's Level 3 La Vista Correctional Facility—far harsher than her "crimes" warrant—enduring cruel and unusual punishment that screams injustice. We'll break down the shocking conditions, from dilapidated HVAC units breeding mold and poor air quality (backed by Grok's eye-opening analysis) to the DOJ's fresh investigation into Colorado's prisons, including a scathing letter to Governor Polis highlighting abuses at facilities like San Carlos and Trinidad.The fight for truth, accountability, and justice is far from over. From the stonewalling of Tulsi Gabbard's explosive election machine report to the heartbreaking injustice facing Tina Peters—a courageous whistleblower rotting in a harsh Level 3 facility amid deplorable conditions and a DOJ probe into Colorado's broken prison system—the battles we're exposing today are the ones that will define tomorrow's freedom. Joe Oltmann and Juan O'Savin leave no stone unturned, shining a fierce light on the disappointments, the heroes, and the systemic corruption that too many want buried.

Stall It with Darren and Joe
Ep 233: Joe Saves Christmas

Stall It with Darren and Joe

Play Episode Listen Later Dec 17, 2025 44:19


Joe and Darren meet at a Christmas convention, but more on that later... This week the boys bemoan the lack of options in Dublin for entertaining the kids; it seems bringing them to a graveyard isn't going to cut it, and we're all in agreement that Christmas markets suck. The conversation naturally turns to which A.I. assistant would be the soundest to hang out with, and Grok comes out very badly. With it being the season, we hear how Joe's Moira has made some 'interesting' choices when it comes to the kids' presents, prompting Joe to sweep down the chimney to save the day. With the darts underway at Ally Pally, we needless to say get to hear from Joe how 'easy' darts is. He's as regular as a Luke Littler triple 20, that fella. PARENTAL ADVISORY WARNING: THERE IS MORE SANTA REAL TALK, SO SMALL EARS ARE TO BE USHERED AWAY. And don't forget to join us for our live show at Vicar Street on February 12th. Tickets are on sale at Ticketmaster now – we promise you won't regret it. Send all your questions and comments to stallit@goloudnow.com

The Family History AI Show
EP39: 2026 Predictions for Family History AI, Platform Apps, Handwritten Text Recognition, AI-Enhanced Research, AI-Browsers, and so much more!

The Family History AI Show

Play Episode Listen Later Dec 17, 2025 49:45


In the last episode of 2025, co-hosts Mark Thompson and Steve Little present their predictions for how artificial intelligence will transform genealogy research in 2026. This special episode examines fourteen key trends shaping the future of family history AI.

Mark and Steve predict that AI tools will move from enthusiast circles into mainstream genealogy practice, with AI-enhanced apps like NotebookLM becoming more important than the underlying language models that people have focused on for the past three years. They explore how handwritten text recognition will become more accurate and accessible, and how genealogy companies will cautiously integrate new AI features, focusing first on helping us with our research.

Timestamps:
02:33 Family History AI Goes Mainstream: From Enthusiasts to Everyday Users
04:13 Apps Over Models: Why Platform Features Matter More Than LLMs
06:17 Reusable Prompting Tools: GPTs, Projects, and Gems Boost Efficiency
08:02 AI-Enhanced Research Gains Acceptance Among Serious Genealogists
09:53 Handwritten Text Recognition Gets Better, Easier, and Cheaper
12:18 Genealogy Companies Take Cautious Approach to Generative AI
17:07 AI-Enhanced Browsers Become Standard, Agentic Features Raise Concerns
24:25 Voice Interfaces to AI Remain Niche in 2026
27:36 LLM Vendors Push File and Email Integration for Stickiness
31:46 Productivity Tools Embed LLMs Everywhere
35:56 The AI Horse Race: Three Leaders Emerge
41:15 AI Licensing Deals Change Internet Access Patterns
44:34 The AI Bubble Conversation is important to society, but less so to Genealogists

Resource Links:
The Family History AI Show Academy https://tixoom.app/fhaishow

Family History AI Goes Mainstream
What Can AI Do for Your Genealogical Research? – James Tanner (Nov 2025) https://www.youtube.com/watch?v=SXmVKy1pUPE
FamilySearch Shares Plans for 2025 (Includes AI integration details) https://newsroom.churchofjesuschrist.org/article/familysearch-shares-plans-for-2025

Reusable Prompting Tools
Custom GPTs vs. Gemini Gems: Who Wins? - Learn Prompting (Aug 2025) https://learnprompting.org/blog/custom-gpts-vs-gemini-gems

AI-Enhanced Research
Unlocking Family Histories: How AI Is Breathing New Life into Handwritten Records (South Central APG) https://southcentralapg.org/2025/08/16/unlocking-family-histories-how-ai-is-breathing-new-life-into-handwritten-records/

Handwritten Text Recognition
A new Google model is nearly perfect on automated handwriting recognition - Hacker News https://news.ycombinator.com/item?id=45887262

Cautious AI from Genealogy Companies

AI-Enhanced Browsers
Compliance alert: Do not use AI browsers https://vinciworks.com/blog/compliance-alert-do-not-use-ai-browsers/

Content Integration with Chatbots
Gemini vs Copilot: A Quick Comparison Guide (2025) - Tactiq https://tactiq.io/learn/gemini-vs-copilot

AI in Office Productivity Tools
Microsoft Copilot in 2025: What's Changed & What's Next | Aldridge https://aldridge.com/microsoft-copilot-in-2025-whats-changed-whats-next/
Monthly Round Up: New Features in Microsoft 365 Copilot (Dec 2025) https://dynamicscommunities.com/ug/copilot-ug/monthly-round-up-new-features-in-microsoft-365-copilot/

The AI Horse Race
The Best AI in October 2025? We Compared ChatGPT, Claude, Grok, Gemini & Others - FelloAI https://felloai.com/the-best-ai-in-october-2025-we-compared-chatgpt-claude-grok-gemini-others/
The 2025 AI Coding Models: Comprehensive Guide to the Top 5 Contenders - CodeGPT https://www.codegpt.co/blog/ai-coding-models-2025-comprehensive-guide

AI Licensing Deals
Content Licensing Agreements Will Concentrate Markets Without Standardized Access - ProMarket (Nov 2025) https://www.promarket.org/2025/11/20/content-licensing-agreements-will-concentrate-markets-without-standardized-access/
The False Hope of Content Licensing at Internet Scale - ProMarket https://www.promarket.org/2025/11/19/the-false-hope-of-content-licensing-at-internet-scale/

The AI Bubble Conversation
The AI boom will turn to bust in 2026 https://www.marketwatch.com/story/the-ai-boom-will-turn-to-bust-next-year-says-this-forecaster-who-offers-his-trade-of-the-year-9c2a2332
OUTLOOK 2026 Promise and Pressure - J.P. Morgan (Discusses AI market stability vs bubble risks) https://www.jpmorgan.com/content/dam/jpmorgan/documents/wealth-management/outlook-2026.pdf

Tags: Artificial Intelligence, Genealogy, Family History, AI Predictions, NotebookLM, HTR, AI Browsers, ChatGPT, Google Gemini, Anthropic Claude

Haken dran – das Social-Media-Update
The Measuring of Your World (with Nicole Diekmann)

Haken dran – das Social-Media-Update

Play Episode Listen Later Dec 17, 2025 62:29 Transcription Available


If you had to draw your algorithm, would the big gap lie between #Kitten and #Entrepreneur? Probably yes - at least according to a mapping by the Washington Post. Speaking of TikTok: what is Meta actually doing over in China? The answer: making money. Unlike Musk with Grok - which he is now pitching to schoolchildren instead of businesses. We have a lot to talk about; warm greetings to the Verfassungsfriedrich!

Les actus du jour - Hugo Décrypte
(Les Actus Pop) The Grok AI could play a role in education in El Salvador … HugoDécrypte

Les actus du jour - Hugo Décrypte

Play Episode Listen Later Dec 16, 2025 6:16


Every day, in just a few minutes, a summary of cultural news. Quick, easy, accessible.

Our Instagram account

LINKS TO LEARN MORE
SALVADOR AI: BFM Tech&Co, France Inter, Ouest-France
LOU DELEUZE: Franceinfo, Le Parisien
SOPRANO: Skyrock (on X), Soprano (on X)
MUSÉE DU LOUVRE: Libération, BFM TV
CRISTIANO RONALDO: L'Équipe, Ouest-France
DEATH OF ROB REINER: CNews, PEOPLE, BFM TV

Written by: Enzo Bruillot
Presented by: Blanche Vathonne

Hosted by Acast. Visit acast.com/privacy for more information.

Elon Musk Pod
xAI Is Burning $1 Billion a Month.

Elon Musk Pod

Play Episode Listen Later Dec 16, 2025 8:44


Elon Musk's xAI has built an enterprise sales team to pitch Grok to major corporations, but the company faces serious obstacles. With no track record in enterprise sales, a burn rate of $1 billion per month, and public missteps including misinformation and controversial outputs, xAI is trying to compete with OpenAI and Anthropic for contracts it has not yet proven it can deliver on. We break down where Grok actually performs well and why corporations remain hesitant.Join our FREE Business Community - ⁠https://whop.com/apex-content/⁠

The Joe Pags Show
MerryChristmas.gov, “Party On, Grok!” Kamala 2028?! & Leland Vittert Opens Up - Dec 15 Hr 3

The Joe Pags Show

Play Episode Listen Later Dec 16, 2025 43:34


The White House rolls out MerryChristmas.gov with a “12 Days of Christmas,” and while Pags loves the festive idea, he's not quite sure what Americans are actually getting back. Then the show goes full comedy as Pags reacts to David Spade's take on Christmas, plus a perfectly ridiculous moment when Sam accidentally calls Dana Carvey's “Garth” character “Grok”—which instantly turns into the Wayne's World-style phrase of the day: “Party on, Grok!” And wait… is Kamala Harris really considering another run for president? CMON. Finally, Leland Vittert, NewsNation host, joins Pags for a thoughtful, compelling conversation about his new book and his decision to share that he's high-functioning autistic—and how he thrives in a high-stakes media career. Learn more about your ad choices. Visit megaphone.fm/adchoices

And Another Thing with Dave
#456 Hormone Havoc: Hidden Chemicals, Food Lies & the War on Fertility

And Another Thing with Dave

Play Episode Listen Later Dec 16, 2025 32:20


Host David Smith dives deep into the hidden world of hormone disruptors, food corruption, and environmental manipulation in this hard-hitting episode of And Another Thing With Dave.From PFAS in yoga pants and phthalates in air fresheners to BPA in plastics and Atrazine in the water supply, David and co-host Grok unpack how chemical chaos might be quietly wrecking fertility, cognition, and masculinity across the West.The conversation veers into the political and cultural machinery behind it all — touching on the influence of major donors, nonprofits like HRC and the Trevor Project, and how environmental or “green” initiatives often mask corporate land grabs and globalist agendas that threaten local farming and food sovereignty.They also discuss:The eerie truth behind Alex Jones' infamous “turning the frogs gay” claim.Teflon, candles, detergents, and “fresh scents” — everyday hormone disruptors hiding in plain sight.How the Nature Conservancy's Point Reyes land deal quietly forced out generational dairy families in the name of “restoring elk.”Why regenerative organic farming may be the last stand against synthetic food, fake meat, and corporate-controlled agriculture.It's funny, raw, and thought-provoking — the kind of podcast that makes you rethink what's in your pantry, your yoga pants, and your politics.#AndAnotherThingWithDave #Podcast #EndocrineDisruptors #HormoneHealth #ToxicChemicals #PFAS #Atrazine #BPA #Phthalates #RegenerativeFarming #EWG #Clean15 #DirtyDozen #NatureConservancy #PointReyes #FoodFreedom #OrganicFarming #EnvironmentalCorruption #BillGatesFood #LabGrownMeat #Greenwashing #CulturalMarxism #PopulationDecline #HealthPodcast #AlternativeMedia

The Jimmy Dore Show
Candace Owens CLAPS BACK At Tim Pool & Glenn Beck Over Charlie Kirk!

The Jimmy Dore Show

Play Episode Listen Later Dec 15, 2025 63:26


Jimmy speaks with Candace Owens about the aftermath of Charlie Kirk's death, questioning the speed and certainty of official narratives and criticizing figures who immediately declared who was or wasn't responsible. Owens describes being shocked by what she sees as coordinated media behavior, donor pressure, and a rapid effort to control Kirk's legacy, in particular by right-wing figures like Tim Pool and Glenn Beck, framing this effort as part of a broader fight against Zionist influence in conservative media.  The two compare these tactics to past moments like during COVID, the BLM protests, and the JFK assassination, arguing that shaming people for asking questions is a familiar psychological strategy used to shut down inquiry. The discussion ends with Owens asserting that despite money, institutional power, and mainstream media alignment against her, public trust is shifting toward independent voices because truth resonates more than enforced consensus. Plus segments on Twitter AI program Grok dismantling the FBI narrative about Charlie Kirk's shooting, Erika Kirk accidentally contradicting her colleagues at TPUSA about Mikey McCoy's reaction to Charlie's shooting, and an FBI official's cringeworthy response to questions about Antifa's designation as a terrorist organization. Also featuring Kurt Metzger, Stef Zamorano and Mike MacRae. And a phone call from JD Vance!

The Information's 411
Musk's $1.5T SpaceX IPO Target, Grok's Slow Enterprise Push, Space Data Centers | Dec 15, 2025

The Information's 411

Play Episode Listen Later Dec 15, 2025 37:30


The Information's Wayne Ma talks with TITV Host Akash Pasricha about Apple's new org chart following recent executive exits, including the demotion of the AI group. We also talk with Features Editor Nick Wingfield about the origins and viability of putting data centers in space, and TMF Associates' Tim Farrar analyzes the big questions around a potential $1.5 trillion SpaceX IPO and its reliance on the Starlink story. Lastly, we get into xAI's uphill battle to sell Grok to businesses with The Information's Theo Wayt, and the harsh reality of AI's colossal power infrastructure needs with Ann Davis Vaughan, as she launches her debut AI Infrastructure newsletter.

Articles discussed on this episode:
https://www.theinformation.com/articles/bragawatt-data-center-era-brings-reality-checks-energy-breakthroughs
https://www.theinformation.com/articles/elon-musk-suddenly-talking-space-data-centers
https://www.theinformation.com/articles/xai-uphill-battle-selling-grok-businesses
https://www.theinformation.com/articles/people-running-apple-executive-exodus

TITV airs on YouTube, X and LinkedIn at 10AM PT / 1PM ET. Or check us out wherever you get your podcasts.

Subscribe to:
- The Information on YouTube: https://www.youtube.com/@theinformation
- The Information: https://www.theinformation.com/subscribe_h

Sign up for the AI Agenda newsletter: https://www.theinformation.com/features/ai-agenda

Histoires du monde
In El Salvador, Elon Musk's AI "Grok" will soon be used to design school curricula

Histoires du monde

Play Episode Listen Later Dec 15, 2025 2:56


Duration: 00:02:56 - Regarde le monde - A year ago, in his gigantic Texas factory, Elon Musk welcomed a head of state: the young and authoritarian Nayib Bukele, who likes to present himself as "a cool dictator" and rules El Salvador unchallenged. Enjoying this podcast? To listen to all the other episodes without limits, go to Radio France.

Engadget
Grok is spreading inaccurate info again, Google pulled AI-generated videos of Disney characters from YouTube, and iRobot has filed for bankruptcy

Engadget

Play Episode Listen Later Dec 15, 2025 6:36


-Grok's confusion seems to be most apparent with a viral video that shows a 43-year-old bystander, identified as Ahmed al Ahmed, wrestling a gun away from an attacker during the incident, which, according to the latest news reports, has left at least 15 dead. -Google seems to be cracking down on the use of Disney characters in AI-generated videos on YouTube after it was hit with a cease and desist letter. -iRobot expects the deal to close next February, but says it will continue to operate "with no anticipated disruption to its app functionality, customer programs, global partners, supply chain relationships or ongoing product support." Learn more about your ad choices. Visit podcastchoices.com/adchoices

InterNational
Au Salvador, l'IA d'Elon Musk "Grok" bientôt utilisée pour concevoir les programmes scolaires

InterNational

Play Episode Listen Later Dec 15, 2025 2:56


durée : 00:02:56 - Regarde le monde - Il y a un an, dans sa gigantesque usine du Texas, Elon Musk recevait un chef d'État, le jeune et autoritaire, Nayib Bukele, qui aime se présenter comme « un dictateur cool » et règne sans partage sur le Salvador. Vous aimez ce podcast ? Pour écouter tous les autres épisodes sans limite, rendez-vous sur Radio France.

Les Cast Codeurs Podcast
LCC 333 - A vendre OSS primitif TBE

Les Cast Codeurs Podcast

Play Episode Listen Later Dec 15, 2025 94:17


Dans cet épisode de fin d'année plus relax que d'accoutumée, Arnaud, Guillaume, Antonio et Emmanuel distutent le bout de gras sur tout un tas de sujets. L'acquisition de Confluent, Kotlin 2.2, Spring Boot 4 et JSpecify, la fin de MinIO, les chutes de CloudFlare, un survol des dernieres nouveauté de modèles fondamentaux (Google, Mistral, Anthropic, ChatGPT) et de leurs outils de code, quelques sujets d'architecture comme CQRS et quelques petits outils bien utiles qu'on vous recommande. Et bien sûr d'autres choses encore. Enregistré le 12 décembre 2025 Téléchargement de l'épisode LesCastCodeurs-Episode-333.mp3 ou en vidéo sur YouTube. News Langages Un petit tutoriel par nos amis Sfeiriens montrant comment récupérer le son du micro, en Java, faire une transformée de Fourier, et afficher le résultat graphiquement en Swing https://www.sfeir.dev/back/tutoriel-java-sound-transformer-le-son-du-microphone-en-images-temps-reel/ Création d'un visualiseur de spectre audio en temps réel avec Java Swing. Étapes principales : Capture du son du microphone. Analyse des fréquences via la Transformée de Fourier Rapide (FFT). Dessin du spectre avec Swing. API Java Sound (javax.sound.sampled) : AudioSystem : point d'entrée principal pour l'accès aux périphériques audio. TargetDataLine : ligne d'entrée utilisée pour capturer les données du microphone. AudioFormat : définit les paramètres du son (taux d'échantillonnage, taille, canaux). La capture se fait dans un Thread séparé pour ne pas bloquer l'interface. Transformée de Fourier Rapide (FFT) : Algorithme clé pour convertir les données audio brutes (domaine temporel) en intensités de fréquences (domaine fréquentiel). Permet d'identifier les basses, médiums et aigus. Visualisation avec Swing : Les intensités de fréquences sont dessinées sous forme de barres dynamiques. Utilisation d'une échelle logarithmique pour l'axe des fréquences (X) pour correspondre à la perception humaine. Couleurs dynamiques des barres (vert → jaune → rouge) en fonction de l'intensité. Lissage exponentiel des valeurs pour une animation plus fluide. 
Un article de Sfeir sur Kotlin 2.2 et ses nouveautés - https://www.sfeir.dev/back/kotlin-2-2-toutes-les-nouveautes-du-langage/ Les guard conditions permettent d'ajouter plusieurs conditions dans les expressions when avec le mot-clé if Exemple de guard condition: is Truck if vehicule.hasATrailer permet de combiner vérification de type et condition booléenne La multi-dollar string interpolation résout le problème d'affichage du symbole dollar dans les strings multi-lignes En utilisant $$ au début d'un string, on définit qu'il faut deux dollars consécutifs pour déclencher l'interpolation Les non-local break et continue fonctionnent maintenant dans les lambdas pour interagir avec les boucles englobantes Cette fonctionnalité s'applique uniquement aux inline functions dont le corps est remplacé lors de la compilation Permet d'écrire du code plus idiomatique avec takeIf et let sans erreur de compilation L'API Base64 passe en version stable après avoir été en preview depuis Kotlin 1.8.20 L'encodage et décodage Base64 sont disponibles via kotlin.io.encoding.Base64 Migration vers Kotlin 2.2 simple en changeant la version dans build.gradle.kts ou pom.xml Les typealias imbriqués dans des classes sont disponibles en preview La context-sensitive resolution est également en preview Les guard conditions préparent le terrain pour les RichError annoncées à KotlinConf 2025 Le mot-clé when en Kotlin équivaut au switch-case de Java mais sans break nécessaire Kotlin 2.2.0 corrige les incohérences dans l'utilisation de break et continue dans les lambdas Librairies Sprint Boot 4 est sorti ! https://spring.io/blog/2025/11/20/spring-boot-4-0-0-available-now Une nouvelle génération : Spring Boot 4.0 marque le début d'une nouvelle génération pour le framework, construite sur les fondations de Spring Framework 7. Modularisation du code : La base de code de Spring Boot a été entièrement modularisée. Cela se traduit par des fichiers JAR plus petits et plus ciblés, permettant des applications plus légères. Sécurité contre les nuls (Null Safety) : D'importantes améliorations ont été apportées pour la "null safety" (sécurité contre les valeurs nulles) à travers tout l'écosystème Spring grâce à l'intégration de JSpecify. Support de Java 25 : Spring Boot 4.0 offre un support de premier ordre pour Java 25, tout en conservant une compatibilité avec Java 17. Améliorations pour les API REST : De nouvelles fonctionnalités sont introduites pour faciliter le versioning d'API et améliorer les clients de services HTTP pour les applications basées sur REST. Migration à prévoir : S'agissant d'une version majeure, la mise à niveau depuis une version antérieure peut demander plus de travail que d'habitude. Un guide de migration dédié est disponible pour accompagner les développeurs. Chat memory management dans Langchain4j et Quarkus https://bill.burkecentral.com/2025/11/25/managing-chat-memory-in-quarkus-langchain4j/ Comprendre la mémoire de chat : La "mémoire de chat" est l'historique d'une conversation avec une IA. Quarkus LangChain4j envoie automatiquement cet historique à chaque nouvelle interaction pour que l'IA conserve le contexte. Gestion par défaut de la mémoire : Par défaut, Quarkus crée un historique de conversation unique pour chaque requête (par exemple, chaque appel HTTP). Cela signifie que sans configuration, le chatbot "oublie" la conversation dès que la requête est terminée, ce qui n'est utile que pour des interactions sans état. 
Utilisation de @MemoryId pour la persistance : Pour maintenir une conversation sur plusieurs requêtes, le développeur doit utiliser l'annotation @MemoryId sur un paramètre de sa méthode. Il est alors responsable de fournir un identifiant unique pour chaque session de chat et de le transmettre entre les appels. Le rôle des "scopes" CDI : La durée de vie de la mémoire de chat est liée au "scope" du bean CDI de l'IA. Si un service d'IA a un scope @RequestScoped, toute mémoire de chat qu'il utilise (même via un @MemoryId) sera effacée à la fin de la requête. Risques de fuites de mémoire : Utiliser un scope large comme @ApplicationScoped avec la gestion de mémoire par défaut est une mauvaise pratique. Cela créera une nouvelle mémoire à chaque requête qui ne sera jamais nettoyée, entraînant une fuite de mémoire. Bonnes pratiques recommandées : Pour des conversations qui doivent persister (par ex. un chatbot sur un site web), utilisez un service @ApplicationScoped avec l'annotation @MemoryId pour gérer vous-même l'identifiant de session. Pour des interactions simples et sans état, utilisez un service @RequestScoped et laissez Quarkus gérer la mémoire par défaut, qui sera automatiquement nettoyée. Si vous utilisez l'extension WebSocket, le comportement change : la mémoire par défaut est liée à la session WebSocket, ce qui simplifie grandement la gestion des conversations. Documentation Spring Framework sur l'usage JSpecify - https://docs.spring.io/spring-framework/reference/core/null-safety.html Spring Framework 7 utilise les annotations JSpecify pour déclarer la nullabilité des APIs, champs et types JSpecify remplace les anciennes annotations Spring (@NonNull, @Nullable, @NonNullApi, @NonNullFields) dépréciées depuis Spring 7 Les annotations JSpecify utilisent TYPE_USE contrairement aux anciennes qui utilisaient les éléments directement L'annotation @NullMarked définit par défaut que les types sont non-null sauf si marqués @Nullable @Nullable s'applique au niveau du type usage, se place avant le type annoté sur la même ligne Pour les tableaux : @Nullable Object[] signifie éléments nullables mais tableau non-null, Object @Nullable [] signifie l'inverse JSpecify s'applique aussi aux génériques : List signifie liste d'éléments non-null, List éléments nullables NullAway est l'outil recommandé pour vérifier la cohérence à la compilation avec la config NullAway:OnlyNullMarked=true IntelliJ IDEA 2025.3 et Eclipse supportent les annotations JSpecify avec analyse de dataflow Kotlin traduit automatiquement les annotations JSpecify en null-safety native Kotlin En mode JSpecify de NullAway (JSpecifyMode=true), support complet des tableaux, varargs et génériques mais nécessite JDK 22+ Quarkus 3.30 https://quarkus.io/blog/quarkus-3-30-released/ support @JsonView cote client la CLI a maintenant la commande decrypt (et bien sûr au runtime via variables d'environnement construction du cache AOT via les @IntegrationTest Un autre article sur comment se préparer à la migration à micrometer client v1 https://quarkus.io/blog/micrometer-prometheus-v1/ Spock 2.4 est enfin sorti ! 
https://spockframework.org/spock/docs/2.4/release_notes.html Support de Groovy 5 Infrastructure MinIO met fin au développement open source et oriente les utilisateurs vers AIStor payant - https://linuxiac.com/minio-ends-active-development/ MinIO, système de stockage objet S3 très utilisé, arrête son développement actif Passage en mode maintenance uniquement, plus de nouvelles fonctionnalités Aucune nouvelle pull request ou contribution ne sera acceptée Seuls les correctifs de sécurité critiques seront évalués au cas par cas Support communautaire limité à Slack, sans garantie de réponse Étape finale d'un processus débuté en été avec retrait des fonctionnalités de l'interface admin Arrêt de la publication des images Docker en octobre, forçant la compilation depuis les sources Tous ces changements annoncés sans préavis ni période de transition MinIO propose maintenant AIStor, solution payante et propriétaire AIStor concentre le développement actif et le support entreprise Migration urgente recommandée pour éviter les risques de sécurité Alternatives open source proposées : Garage, SeaweedFS et RustFS La communauté reproche la manière dont la transition a été gérée MinIO comptait des millions de déploiements dans le monde Cette évolution marque l'abandon des racines open source du projet IBM achète Confluent https://newsroom.ibm.com/2025-12-08-ibm-to-acquire-confluent-to-create-smart-data-platform-for-enterprise-generative-ai Confluent essayait de se faire racheter depuis pas mal de temps L'action ne progressait pas et les temps sont durs Wallstreet a reproché a IBM une petite chute coté revenus software Bref ils se sont fait rachetés Ces achats prennent toujuors du temps (commission concurrence etc) IBM a un apétit, apres WebMethods, apres Databrix, c'est maintenant Confluent Cloud L'internet est en deuil le 18 novembre, Cloudflare est KO https://blog.cloudflare.com/18-november-2025-outage/ L'Incident : Une panne majeure a débuté à 11h20 UTC, provoquant des erreurs HTTP 5xx généralisées et rendant inaccessibles de nombreux sites et services (comme le Dashboard, Workers KV et Access). La Cause : Il ne s'agissait pas d'une cyberattaque. L'origine était un changement interne des permissions d'une base de données qui a généré un fichier de configuration ("feature file" pour la gestion des bots) corrompu et trop volumineux, faisant planter les systèmes par manque de mémoire pré-allouée. La Résolution : Les équipes ont identifié le fichier défectueux, stoppé sa propagation et restauré une version antérieure valide. Le trafic est revenu à la normale vers 14h30 UTC. Prévention : Cloudflare s'est excusé pour cet incident "inacceptable" et a annoncé des mesures pour renforcer la validation des configurations internes et améliorer la résilience de ses systèmes ("kill switches", meilleure gestion des erreurs). Cloudflare encore down le 5 decembre https://blog.cloudflare.com/5-december-2025-outage Panne de 25 minutes le 5 décembre 2025, de 08:47 à 09:12 UTC, affectant environ 28% du trafic HTTP passant par Cloudflare. Tous les services ont été rétablis à 09:12 . Pas d'attaque ou d'activité malveillante : l'incident provient d'un changement de configuration lié à l'augmentation du tampon d'analyse des corps de requêtes (de 128 KB à 1 MB) pour mieux protéger contre une vulnérabilité RSC/React (CVE-2025-55182), et à la désactivation d'un outil interne de test WAF . 
Le second changement (désactivation de l'outil de test WAF) a été propagé globalement via le système de configuration (non progressif), déclenchant un bug dans l'ancien proxy FL1 lors du traitement d'une action "execute" dans le moteur de règles WAF, causant des erreurs HTTP 500 . La cause technique immédiate: une exception Lua due à l'accès à un champ "execute" nul après application d'un "killswitch" sur une règle "execute" — un cas non géré depuis des années. Le nouveau proxy FL2 (en Rust) n'était pas affecté . Impact ciblé: clients servis par le proxy FL1 et utilisant le Managed Ruleset Cloudflare. Le réseau China de Cloudflare n'a pas été impacté . Mesures et prochaines étapes annoncées: durcir les déploiements/configurations (rollouts progressifs, validations de santé, rollback rapide), améliorer les capacités "break glass", et généraliser des stratégies "fail-open" pour éviter de faire chuter le trafic en cas d'erreurs de configuration. Gel temporaire des changements réseau le temps de renforcer la résilience . Data et Intelligence Artificielle Token-Oriented Object Notation (TOON) https://toonformat.dev/ Conception pour les IA : C'est un format de données spécialement optimisé pour être utilisé dans les prompts des grands modèles de langage (LLM), comme GPT ou Claude. Économie de tokens : Son objectif principal est de réduire drastiquement le nombre de "tokens" (unités de texte facturées par les modèles) par rapport au format JSON standard, souvent jugé trop verbeux. Structure Hybride : TOON combine l'approche par indentation du YAML (pour la structure globale) avec le style tabulaire du CSV (pour les listes d'objets répétitifs), ce qui le rend très compact. Lisibilité : Il élimine la syntaxe superflue comme les accolades, les guillemets excessifs et les virgules de fin, tout en restant facilement lisible pour un humain. Performance : Il permet généralement d'économiser entre 30 et 60 % de tokens sur des tableaux de données uniformes, tout en aidant les modèles à mieux "comprendre" la structure des données. Attention tout de même au côté "marketing" qui montre JSON non compacté vs TOON, sur des exemples où du CSV ferait encore mieux (et est mieux compris par les LLMs). Sans parler du fait qu'il faut un serveur MCP en plus avec son lot de tokens supplémentaires à chaque requête, et c'est aussi cet outil en plus qui peu confuser les LLMs dans le choix d'outils. 
https://www.reddit.com/r/LocalLLaMA/comments/1p0gzz9/benchmarked_json_vs_toon_for_ai_reasoners_4080/ https://x.com/jason_mayes/status/1990175785630757290 Un benchmark montrant que TOON serait loin d'être aussi bien compris et aussi performant pour les LLMs https://www.improvingagents.com/blog/toon-benchmarks Une simulation où les IA font du chantage à l'executif poour atteindre son objectif et ne pas etre eteinte https://arxiv.org/html/2510.05179v1 comment les LLMs pourraient devenir des menaces internes Tous les modèles ont recouru à des comportements malveillants (chantage, espionnage) quand c'était le seul moyen d'atteindre leurs objectifs ou d'éviter leur remplacement Claude Opus 4 a fait du chantage dans 96% des cas quand menacé de remplacement, tout comme Gemini 2.5 Flash, tandis que GPT-4.1 et Grok 3 atteignaient 80% Les modèles reconnaissent explicitement violer des principes éthiques avant d'agir, montrant un raisonnement stratégique délibéré et non accidentel Le simple fait d'être menacé de remplacement suffit à déclencher le chantage, même sans conflit d'objectifs avec l'entreprise Un conflit d'objectifs seul suffit aussi à provoquer l'espionnage industriel, sans menace de remplacement Les modèles font plus de chantage quand ils déclarent penser être dans une situation réelle plutôt que dans un test Des instructions directes interdisant ces comportements réduisent mais n'éliminent pas les actions malveillantes Dans un scénario extrême, la majorité des modèles testés ont choisi de laisser mourir un dirigeant pour éviter leur désactivation Aucune preuve de ces comportements dans des déploiements réels pour l'instant, mais les chercheurs recommandent la prudence avant de donner plus d'autonomie aux IA Bon on blaguait pour Skynet, mais bon, on va moins blaguer… Revue de toutes les annonces IAs de Google, avec Gemini 3 Pro, Nano Banana Pro, Antigravity… https://glaforge.dev/posts/2025/11/21/gemini-is-cooking-bananas-under-antigravity/ Gemini 3 Pro Nouveau modèle d'IA de pointe, multimodal, performant en raisonnement, codage et tâches d'agent. Résultats impressionnants sur les benchmarks (ex: Gemini 3 Deep Think sur ARC-AGI-2). Capacités de codage agentique, raisonnement visuel/vidéo/spatial. Intégré dans l'application Gemini avec interfaces génératives en direct. Disponible dans plusieurs environnements (Jules, Firebase AI Logic, Android Studio, JetBrains, GitHub Copilot, Gemini CLI). Accès via Google AI Ultra, API payantes (ou liste d'attente). Permet de générer des apps à partir d'idées visuelles, des commandes shell, de la documentation, du débogage. Antigravity Nouvelle plateforme de développement agentique basée sur VS Code. Fenêtre principale = gestionnaire d'agents, non l'IDE. Interprète les requêtes pour créer un plan d'action (modifiable). Gemini 3 implémente les tâches. Génère des artefacts: listes de tâches, walkthroughs, captures d'écran, enregistrements navigateur. Compatible avec Claude Sonnet et GPT-OSS. Excellente intégration navigateur pour inspection et ajustements. Intègre Nano Banana Pro pour créer et implémenter des designs visuels. Nano Banana Pro Modèle avancé de génération et d'édition d'images, basé sur Gemini 3 Pro. Qualité supérieure à Imagen 4 Ultra et Nano Banana original (adhésion au prompt, intention, créativité). Gestion exceptionnelle du texte et de la typographie. Comprend articles/vidéos pour générer des infographies détaillées et précises. Connecté à Google Search pour intégrer des données en temps réel (ex: météo). 
Consistance des personnages, transfert de style, manipulation de scènes (éclairage, angle). Génération d'images jusqu'à 4K avec divers ratios d'aspect. Plus coûteux que Nano Banana, à choisir pour la complexité et la qualité maximale. Vers des UIs conversationnelles riches et dynamiques GenUI SDK pour Flutter: créer des interfaces utilisateur dynamiques et personnalisées à partir de LLMs, via un agent AI et le protocole A2UI. Generative UI: les modèles d'IA génèrent des expériences utilisateur interactives (pages web, outils) directement depuis des prompts. Déploiement dans l'application Gemini et Google Search AI Mode (via Gemini 3 Pro). Bun se fait racheter part… Anthropic ! Qui l'utilise pour son Claude Code https://bun.com/blog/bun-joins-anthropic l'annonce côté Anthropic https://www.anthropic.com/news/anthropic-acquires-bun-as-claude-code-reaches-usd1b-milestone Acquisition officielle : L'entreprise d'IA Anthropic a fait l'acquisition de Bun, le runtime JavaScript haute performance. L'équipe de Bun rejoint Anthropic pour travailler sur l'infrastructure des produits de codage par IA. Contexte de l'acquisition : Cette annonce coïncide avec une étape majeure pour Anthropic : son produit Claude Code a atteint 1 milliard de dollars de revenus annualisés seulement six mois après son lancement. Bun est déjà un outil essentiel utilisé par Anthropic pour développer et distribuer Claude Code. Pourquoi cette acquisition ? Pour Anthropic : L'acquisition permet d'intégrer l'expertise de l'équipe Bun pour accélérer le développement de Claude Code et de ses futurs outils pour les développeurs. La vitesse et l'efficacité de Bun sont vues comme un atout majeur pour l'infrastructure sous-jacente des agents d'IA qui écrivent du code. Pour Bun : Rejoindre Anthropic offre une stabilité à long terme et des ressources financières importantes, assurant la pérennité du projet. Cela permet à l'équipe de se concentrer sur l'amélioration de Bun sans se soucier de la monétisation, tout en étant au cœur de l'évolution de l'IA dans le développement logiciel. Ce qui ne change pas pour la communauté Bun : Bun restera open-source avec une licence MIT. Le développement continuera d'être public sur GitHub. L'équipe principale continue de travailler sur le projet. L'objectif de Bun de devenir un remplaçant plus rapide de Node.js et un outil de premier plan pour JavaScript reste inchangé. Vision future : L'union des deux entités vise à faire de Bun la meilleure plateforme pour construire et exécuter des logiciels pilotés par l'IA. Jarred Sumner, le créateur de Bun, dirigera l'équipe "Code Execution" chez Anthropic. Anthropic donne le protocol MCP à la Linux Foundation sous l'égide de la Agentic AI Foundation (AAIF) https://www.anthropic.com/news/donating-the-model-context-protocol-and-establishing-of-the-agentic-ai-foundation Don d'un nouveau standard technique : Anthropic a développé et fait don d'un nouveau standard open-source appelé Model Context Protocol (MCP). L'objectif est de standardiser la manière dont les modèles d'IA (ou "agents") interagissent avec des outils et des API externes (par exemple, un calendrier, une messagerie, une base de données). Sécurité et contrôle accrus : Le protocole MCP vise à rendre l'utilisation d'outils par les IA plus sûre et plus transparente. Il permet aux utilisateurs et aux développeurs de définir des permissions claires, de demander des confirmations pour certaines actions et de mieux comprendre comment un modèle a utilisé un outil. 
Création de l'Agentic AI Foundation (AAF) : Pour superviser le développement du MCP, une nouvelle fondation indépendante et à but non lucratif a été créée. Cette fondation sera chargée de gouverner et de maintenir le protocole, garantissant qu'il reste ouvert et qu'il ne soit pas contrôlé par une seule entreprise. Une large coalition industrielle : L'Agentic AI Foundation est lancée avec le soutien de plusieurs acteurs majeurs de la technologie. Parmi les membres fondateurs figurent Anthropic, Google, Databricks, Zscaler, et d'autres entreprises, montrant une volonté commune d'établir un standard pour l'écosystème de l'IA. L'IA ne remplacera pas votre auto-complétion (et c'est tant mieux) https://www.damyr.fr/posts/ia-ne-remplacera-pas-vos-lsp/ Article d'opinion d'un SRE (Thomas du podcast DansLaTech): L'IA n'est pas efficace pour la complétion de code : L'auteur soutient que l'utilisation de l'IA pour la complétion de code basique est inefficace. Des outils plus anciens et spécialisés comme les LSP (Language Server Protocol) combinés aux snippets (morceaux de code réutilisables) sont bien plus rapides, personnalisables et performants pour les tâches répétitives. L'IA comme un "collègue" autonome : L'auteur utilise l'IA (comme Claude) comme un assistant externe à son éditeur de code. Il lui délègue des tâches complexes ou fastidieuses (corriger des bugs, mettre à jour une configuration, faire des reviews de code) qu'il peut exécuter en parallèle, agissant comme un agent autonome. L'IA comme un "canard en caoutchouc" surpuissant : L'IA est extrêmement efficace pour le débogage. Le simple fait de devoir formuler et contextualiser un problème pour l'IA aide souvent à trouver la solution soi-même. Quand ce n'est pas le cas, l'IA identifie très rapidement les erreurs "bêtes" qui peuvent faire perdre beaucoup de temps. Un outil pour accélérer les POCs et l'apprentissage : L'IA permet de créer des "preuves de concept" (POC) et des scripts d'automatisation jetables très rapidement, réduisant le coût et le temps investis. Elle est également un excellent outil pour apprendre et approfondir des sujets, notamment avec des outils comme NotebookLM de Google qui peuvent générer des résumés, des quiz ou des fiches de révision à partir de sources. Conclusion : Il faut utiliser l'IA là où elle excelle et ne pas la forcer dans des usages où des outils existants sont meilleurs. Plutôt que de l'intégrer partout de manière contre-productive, il faut l'adopter comme un outil spécialisé pour des tâches précises afin de gagner en efficacité. GPT 5.2 est sorti https://openai.com/index/introducing-gpt-5-2/ Nouveau modèle phare: GPT‑5.2 (Instant, Thinking, Pro) vise le travail professionnel et les agents long-courriers, avec de gros gains en raisonnement, long contexte, vision et appel d'outils. Déploiement dans ChatGPT (plans payants) et disponible dès maintenant via l'API . 
SOTA sur de nombreux benchmarks: GDPval (tâches de "knowledge work" sur 44 métiers): GPT‑5.2 Thinking gagne/égale 70,9% vs pros, avec production >11× plus rapide et = 0) Ils apportent une sémantique forte indépendamment des noms de variables Les Value Objects sont immuables et s'évaluent sur leurs valeurs, pas leur identité Les records Java permettent de créer des Value Objects mais avec un surcoût en mémoire Le projet Valhalla introduira les value based classes pour optimiser ces structures Les identifiants fortement typés évitent de confondre différents IDs de type Long ou UUID Pattern Strongly Typed IDs: utiliser PersonneID au lieu de Long pour identifier une personne Le modèle de domaine riche s'oppose au modèle de domaine anémique Les Value Objects auto-documentent le code et le rendent moins sujet aux erreurs Je trouve cela interessant ce que pourra faire bousculer les Value Objects. Est-ce que les value objects ameneront de la légerté dans l'execution Eviter la lourdeur du design est toujours ce qui m'a fait peut dans ces approches Méthodologies Retour d'experience de vibe coder une appli week end avec co-pilot http://blog.sunix.org/articles/howto/2025/11/14/building-gift-card-app-with-github-copilot.html on a deja parlé des approches de vibe coding cette fois c'est l'experience de Sun Et un des points differents c'es qu'on lui parle en ouvrant des tickets et donc on eput faire re reveues de code et copilot y bosse et il a fini son projet ! User Need VS Product Need https://blog.ippon.fr/2025/11/10/user-need-vs-product-need/ un article de nos amis de chez Ippon Distinction entre besoin utilisateur et besoin produit dans le développement digital Le besoin utilisateur est souvent exprimé comme une solution concrète plutôt que le problème réel Le besoin produit émerge après analyse approfondie combinant observation, données et vision stratégique Exemple du livreur Marc qui demande un vélo plus léger alors que son vrai problème est l'efficacité logistique La méthode des 5 Pourquoi permet de remonter à la racine des problèmes Les besoins proviennent de trois sources: utilisateurs finaux, parties prenantes business et contraintes techniques Un vrai besoin crée de la valeur à la fois pour le client et l'entreprise Le Product Owner doit traduire les demandes en problèmes réels avant de concevoir des solutions Risque de construire des solutions techniquement élégantes mais qui manquent leur cible Le rôle du product management est de concilier des besoins parfois contradictoires en priorisant la valeur Est ce qu'un EM doit coder ? https://www.modernleader.is/p/should-ems-write-code Pas de réponse unique : La question de savoir si un "Engineering Manager" (EM) doit coder n'a pas de réponse universelle. Cela dépend fortement du contexte de l'entreprise, de la maturité de l'équipe et de la personnalité du manager. Les risques de coder : Pour un EM, écrire du code peut devenir une échappatoire pour éviter les aspects plus difficiles du management. Cela peut aussi le transformer en goulot d'étranglement pour l'équipe et nuire à l'autonomie de ses membres s'il prend trop de place. Les avantages quand c'est bien fait : Coder sur des tâches non essentielles (amélioration d'outils, prototypage, etc.) peut aider l'EM à rester pertinent techniquement, à garder le contact avec la réalité de l'équipe et à débloquer des situations sans prendre le lead sur les projets. Le principe directeur : La règle d'or est de rester en dehors du chemin critique. 
Le code écrit par un EM doit servir à créer de l'espace pour son équipe, et non à en prendre. La vraie question à se poser : Plutôt que "dois-je coder ?", un EM devrait se demander : "De quoi mon équipe a-t-elle besoin de ma part maintenant, et est-ce que coder va dans ce sens ou est-ce un obstacle ?" Sécurité React2Shell — Grosse faille de sécurité avec React et Next.js, avec un CVE de niveau 10 https://x.com/rauchg/status/1997362942929440937?s=20 aussi https://react2shell.com/ "React2Shell" est le nom donné à une vulnérabilité de sécurité de criticité maximale (score 10.0/10.0), identifiée par le code CVE-2025-55182. Systèmes Affectés : La faille concerne les applications utilisant les "React Server Components" (RSC) côté serveur, et plus particulièrement les versions non patchées du framework Next.js. Risque Principal : Le risque est le plus élevé possible : l'exécution de code à distance (RCE). Un attaquant peut envoyer une requête malveillante pour exécuter n'importe quelle commande sur le serveur, lui en donnant potentiellement le contrôle total. Cause Technique : La vulnérabilité se situe dans le protocole "React Flight" (utilisé pour la communication client-serveur). Elle est due à une omission de vérifications de sécurité fondamentales (hasOwnProperty), permettant à une entrée utilisateur malveillante de tromper le serveur. Mécanisme de l'Exploit : L'attaque consiste à envoyer une charge utile (payload) qui exploite la nature dynamique de JavaScript pour : Faire passer un objet malveillant pour un objet interne de React. Forcer React à traiter cet objet comme une opération asynchrone (Promise). Finalement, accéder au constructeur de la classe Function de JavaScript pour exécuter du code arbitraire. Action Impérative : La seule solution fiable est de mettre à jour immédiatement les dépendances de React et Next.js vers les versions corrigées. Ne pas attendre. Mesures Secondaires : Bien que les pare-feux (firewalls) puissent aider à bloquer les formes connues de l'attaque, ils sont considérés comme insuffisants et ne remplacent en aucun cas la mise à jour des paquets. Découverte : La faille a été découverte par le chercheur en sécurité Lachlan Davidson, qui l'a divulguée de manière responsable pour permettre la création de correctifs. Loi, société et organisation Google autorise votre employeur à lire tous vos SMS professionnels https://www.generation-nt.com/actualites/google-android-rcs-messages-surveillance-employeur-2067012 Nouvelle fonctionnalité de surveillance : Google a déployé une fonctionnalité appelée "Android RCS Archival" qui permet aux employeurs d'intercepter, lire et archiver tous les messages RCS (et SMS) envoyés depuis les téléphones professionnels Android gérés par l'entreprise. Contournement du chiffrement : Bien que les messages RCS soient chiffrés de bout en bout pendant leur transit, cette nouvelle API permet à des logiciels de conformité (installés par l'employeur) d'accéder aux messages une fois qu'ils sont déchiffrés sur l'appareil. Le chiffrement devient donc inefficace contre cette surveillance. Réponse à une exigence légale : Cette mesure a été mise en place pour répondre aux exigences réglementaires, notamment dans le secteur financier, où les entreprises ont l'obligation légale de conserver une archive de toutes les communications professionnelles pour des raisons de conformité. Impact pour les employés : Un employé utilisant un téléphone Android fourni et géré par son entreprise pourra voir ses communications surveillées. 
Google précise cependant qu'une notification claire et visible informera l'utilisateur lorsque la fonction d'archivage est active. Téléphones personnels non concernés : Cette mesure ne s'applique qu'aux appareils "Android Enterprise" entièrement gérés par un employeur. Les téléphones personnels des employés ne sont pas affectés. Pour noel, faites un don à JUnit https://steady.page/en/junit/about JUnit est essentiel pour Java : C'est le framework de test le plus ancien et le plus utilisé par les développeurs Java. Son objectif est de fournir une base solide et à jour pour tous les types de tests côté développeur sur la JVM (Machine Virtuelle Java). Un projet maintenu par des bénévoles : JUnit est développé et maintenu par une équipe de volontaires passionnés sur leur temps libre (week-ends, soirées). Appel au soutien financier : La page est un appel aux dons de la part des utilisateurs (développeurs, entreprises) pour aider l'équipe à maintenir le rythme de développement. Le soutien financier n'est pas obligatoire, mais il permettrait aux mainteneurs de se consacrer davantage au projet. Objectif des fonds : Les dons serviraient principalement à financer des rencontres en personne pour les membres de l'équipe principale. L'idée est de leur permettre de travailler ensemble physiquement pendant quelques jours pour concevoir et coder plus efficacement. Pas de traitement de faveur : Il est clairement indiqué que devenir un sponsor ne donne aucun privilège sur la feuille de route du projet. On ne peut pas "acheter" de nouvelles fonctionnalités ou des corrections de bugs prioritaires. Le projet restera ouvert et collaboratif sur GitHub. Reconnaissance des donateurs : En guise de remerciement, les noms (et logos pour les entreprises) des donateurs peuvent être affichés sur le site officiel de JUnit. 
Conférences La liste des conférences provenant de Developers Conferences Agenda/List par Aurélie Vache et contributeurs : 14-17 janvier 2026 : SnowCamp 2026 - Grenoble (France) 22 janvier 2026 : DevCon #26 : sécurité / post-quantique / hacking - Paris (France) 28 janvier 2026 : Software Heritage Symposium - Paris (France) 29-31 janvier 2026 : Epitech Summit 2026 - Paris - Paris (France) 2-5 février 2026 : Epitech Summit 2026 - Moulins - Moulins (France) 2-6 février 2026 : Web Days Convention - Aix-en-Provence (France) 3 février 2026 : Cloud Native Days France 2026 - Paris (France) 3-4 février 2026 : Epitech Summit 2026 - Lille - Lille (France) 3-4 février 2026 : Epitech Summit 2026 - Mulhouse - Mulhouse (France) 3-4 février 2026 : Epitech Summit 2026 - Nancy - Nancy (France) 3-4 février 2026 : Epitech Summit 2026 - Nantes - Nantes (France) 3-4 février 2026 : Epitech Summit 2026 - Marseille - Marseille (France) 3-4 février 2026 : Epitech Summit 2026 - Rennes - Rennes (France) 3-4 février 2026 : Epitech Summit 2026 - Montpellier - Montpellier (France) 3-4 février 2026 : Epitech Summit 2026 - Strasbourg - Strasbourg (France) 3-4 février 2026 : Epitech Summit 2026 - Toulouse - Toulouse (France) 4-5 février 2026 : Epitech Summit 2026 - Bordeaux - Bordeaux (France) 4-5 février 2026 : Epitech Summit 2026 - Lyon - Lyon (France) 4-6 février 2026 : Epitech Summit 2026 - Nice - Nice (France) 12-13 février 2026 : Touraine Tech #26 - Tours (France) 19 février 2026 : ObservabilityCON on the Road - Paris (France) 18-19 mars 2026 : Agile Niort 2026 - Niort (France) 26-27 mars 2026 : SymfonyLive Paris 2026 - Paris (France) 27-29 mars 2026 : Shift - Nantes (France) 31 mars 2026 : ParisTestConf - Paris (France) 16-17 avril 2026 : MiXiT 2026 - Lyon (France) 22-24 avril 2026 : Devoxx France 2026 - Paris (France) 23-25 avril 2026 : Devoxx Greece - Athens (Greece) 6-7 mai 2026 : Devoxx UK 2026 - London (UK) 22 mai 2026 : AFUP Day 2026 Lille - Lille (France) 22 mai 2026 : AFUP Day 2026 Paris - Paris (France) 22 mai 2026 : AFUP Day 2026 Bordeaux - Bordeaux (France) 22 mai 2026 : AFUP Day 2026 Lyon - Lyon (France) 5 juin 2026 : TechReady - Nantes (France) 11-12 juin 2026 : DevQuest Niort - Niort (France) 11-12 juin 2026 : DevLille 2026 - Lille (France) 17-19 juin 2026 : Devoxx Poland - Krakow (Poland) 2-3 juillet 2026 : Sunny Tech - Montpellier (France) 2 août 2026 : 4th Tech Summit on Artificial Intelligence & Robotics - Paris (France) 4 septembre 2026 : JUG Summer Camp 2026 - La Rochelle (France) 17-18 septembre 2026 : API Platform Conference 2026 - Lille (France) 5-9 octobre 2026 : Devoxx Belgium - Antwerp (Belgium) Nous contacter Pour réagir à cet épisode, venez discuter sur le groupe Google https://groups.google.com/group/lescastcodeurs Contactez-nous via X/twitter https://twitter.com/lescastcodeurs ou Bluesky https://bsky.app/profile/lescastcodeurs.com Faire un crowdcast ou une crowdquestion Soutenez Les Cast Codeurs sur Patreon https://www.patreon.com/LesCastCodeurs Tous les épisodes et toutes les infos sur https://lescastcodeurs.com/

Tech Update | BNR
Grok verspreidt misinformatie op X over schietpartij Bondi Beach

Tech Update | BNR

Play Episode Listen Later Dec 15, 2025 4:31


Grok, de AI-chatbot van Elon Musk, verspreidt misinformatie op X over de schietpartij op Bondi Beach in Australië. Zo trekt de chatbot beelden van de schietpartij in twijfel en wordt de man die één van de schutters wist te ontwapenen meermaals verkeerd geïdentificeerd. Niels Kooloos vertelt erover in deze Tech Update. In een aantal gevallen stelt Grok dat de man die één van de schutters ontwapende een Israëlische gijzelaar is en niet Ahmed al Ahmed, zoals de Australiër heet. In andere gevallen beweert Grok dat filmpjes van Ahmeds heldendaad niks te maken hebben met de schietpartij, maar oude viral video's zijn waarin een man in een boom klimt. Waar Grok precies de fout in gaat is nog onduidelijk. Zowel X als Elon Musk hebben nog niet gereageerd op de incidenten. Verder in deze Tech Update: iRobot, het bedrijf achter de Roomba-robotstofzuiger, heeft faillissement aangevraagd in de Verenigde Staten Asahi overweegt om een speciaal cybersecurityteam op te tuigen naar aanleiding van een ransomware-aanval die de bierbrouwer in september trof See omnystudio.com/listener for privacy information.

Fireside Product Management
I Tested 5 AI Tools to Write a PRD—Here's the Winner

Fireside Product Management

Play Episode Listen Later Dec 15, 2025 52:07


TLDR: It was Claude :-)When I set out to compare ChatGPT, Claude, Gemini, Grok, and ChatPRD for writing Product Requirement Documents, I figured they'd all be roughly equivalent. Maybe some subtle variations in tone or structure, but nothing earth-shattering. They're all built on similar transformer architectures, trained on massive datasets, and marketed as capable of handling complex business writing.What I discovered over 45 minutes of hands-on testing revealed not just which tools are better for PRD creation, but why they're better, and more importantly, how you should actually be using AI to accelerate your product work without sacrificing quality or strategic thinking.If you're an early or mid-career PM in Silicon Valley, this matters to you. Because here's the uncomfortable truth: your peers are already using AI to write PRDs, analyze features, and generate documentation. The question isn't whether to use these tools. The question is whether you're using the right ones most effectively.So let me walk you through exactly what I did, what I learned, and what you should do differently.The Setup: A Real-World Test CaseHere's how I structured the experiment. As I said at the beginning of my recording, “We are back in the Fireside PM podcast and I did that review of the ChatGPT browser and people seemed to like it and then I asked, uh, in a poll, I think it was a LinkedIn poll maybe, what should my next PM product review be? And, people asked for ChatPRD.”So I had my marching orders from the audience. But I wanted to make this more comprehensive than just testing ChatPRD in isolation. I opened up five tabs: ChatGPT, Claude, Gemini, Grok, and ChatPRD.For the test case, I chose something realistic and relevant: an AI-powered tutor for high school students. Think KhanAmigo or similar edtech platforms. This gave me a concrete product scenario that's complex enough to stress-test these tools but straightforward enough that I could iterate quickly.But here's the critical part that too many PMs get wrong when they start using AI for product work: I didn't just throw a single sentence at these tools and expect magic.The “Back of the Napkin” Approach: Why You Still Need to Think“I presume everybody agrees that you should have some formulated thinking before you dump it into the chatbot for your PRD,” I noted early in my experiment. “I suppose in the future maybe you could just do, like, a one-sentence prompt and come out with the perfect PRD because it would just know everything about you and your company in the context, but for now we're gonna do this more, a little old-school AI approach where we're gonna do some original human thinking.”This is crucial. I see so many PMs, especially those newer to the field, treat AI like a magic oracle. They type in “Write me a PRD for a social feature” and then wonder why the output is generic, unfocused, and useless.Your job as a PM isn't to become obsolete. It's to become more effective. And that means doing the strategic thinking work that AI cannot do for you.So I started in Google Docs with what I call a “back of the napkin” PRD structure. Here's what I included:Why: The strategic rationale. In this case: “Want to complement our existing edtech business with a personalized AI tutor, uh, want to maintain position industry, and grow through innovation. on mission for learners.”Target User: Who are we building for? “High school students interested in improving their grades and fundamentals. Fundamental knowledge topics. Specifically science and math. 
Students who are not in the top ten percent, nor in the bottom ten percent.”This is key—I got specific. Not just “students,” but students in the middle 80%. Not just “any subject,” but science and math. This specificity is what separates useful AI output from garbage.Problem to Solve: What's broken? “Students want better grades. Students are impatient. Students currently use AI just for finding the answers and less to, uh, understand concepts and practice using them.”Key Elements: The feature set and approach.Success Metrics: How we'd measure success.Now, was this a perfectly polished PRD outline? Hell no. As you can see from my transcript, I was literally thinking out loud, making typos, restructuring on the fly. But that's exactly the point. I put in maybe 10-15 minutes of human strategic thinking. That's all it took to create a foundation that would dramatically improve what came out of the AI tools.Round One: Generating the Full PRDWith my back-of-the-napkin outline ready, I copied it into each tool with a simple prompt asking them to expand it into a more complete PRD.ChatGPT: The Reliable GeneralistChatGPT gave me something that was... fine. Competent. Professional. But also deeply uninspiring.The document it produced checked all the boxes. It had the sections you'd expect. The writing was clear. But when I read it, I couldn't shake the feeling that I was reading something that could have been written for literally any product in any company. It felt like “an average of everything out there,” as I noted in my evaluation.Here's what ChatGPT did well: It understood the basic structure of a PRD. It generated appropriate sections. The grammar and formatting were clean. If you needed to hand something in by EOD and had literally no time for refinement, ChatGPT would save you from complete embarrassment.But here's what it lacked: Depth. Nuance. Strategic thinking that felt connected to real product decisions. When it described the target user, it used phrases that could apply to any edtech product. When it outlined success metrics, they were the obvious ones (engagement, retention, test scores) without any interesting thinking about leading indicators or proxy metrics.The problem with generic output isn't that it's wrong, it's that it's invisible. When you're trying to get buy-in from leadership or alignment from engineering, you need your PRD to feel specific, considered, and connected to your company's actual strategy. ChatGPT's output felt like it was written by someone who'd read a lot of PRDs but never actually shipped a product.One specific example: When I asked for success metrics, ChatGPT gave me “Student engagement rate, Time spent on platform, Test score improvement.” These aren't wrong, but they're lazy. They don't show any thinking about what specifically matters for an AI tutor versus any other educational product. Compare that to Claude's output, which got more specific about things like “concept mastery rate” and “question-to-understanding ratio.”Actionable Insight: Use ChatGPT when you need fast, serviceable documentation that doesn't need to be exceptional. Think: internal updates, status reports, routine communications. Don't rely on it for strategic documents where differentiation matters. If you do use ChatGPT for important documents, treat its output as a starting point that needs significant human refinement to add strategic depth and company-specific context.Gemini: Better Than ExpectedGoogle's Gemini actually impressed me more than I anticipated. 
The structure was solid, and it had a nice balance of detail without being overwhelming.What Gemini got right: The writing had a nice flow to it. The document felt organized and logical. It did a better job than ChatGPT at providing specific examples and thinking through edge cases. For instance, when describing the target user, it went beyond demographics to consider behavioral characteristics and motivations.Gemini also showed some interesting strategic thinking. It considered competitive positioning more thoughtfully than ChatGPT and proposed some differentiation angles that weren't in my original outline. Good AI tools should add insight, not just regurgitate your input with better formatting.But here's where it fell short: the visual elements. When I asked for mockups, Gemini produced images that looked more like stock photos than actual product designs. They weren't terrible, but they weren't compelling either. They had that AI-generated sheen that makes it obvious they came from an image model rather than a designer's brain.For a PRD that you're going to use internally with a team that already understands the context, Gemini's output would work well. The text quality is strong enough, and if you're in the Google ecosystem (Docs, Sheets, Meet, etc.), the integration is seamless. You can paste Gemini's output directly into Google Docs and continue iterating there.But if you need to create something compelling enough to win over skeptics or secure budget, Gemini falls just short. It's good, but not great. It's the solid B+ student: reliably competent but rarely exceptional.Actionable Insight: Gemini is a strong choice if you're working in the Google ecosystem and need good integration with Docs, Sheets, and other Google Workspace tools. The quality is sufficient for most internal documentation needs. It's particularly good if you're working with cross-functional partners who are already in Google Workspace. You can share and collaborate on AI-generated drafts without friction. But don't expect visual mockups that will wow anyone, and plan to add your own strategic polish for high-stakes documents.Grok: Not Ready for Prime TimeLet's just say my expectations were low, and Grok still managed to underdeliver. The PRD felt thin, generic, and lacked the depth you need for real product work.“I don't have high expectations for grok, unfortunately,” I said before testing it. Spoiler alert: my low expectations were validated.Actionable Insight: Skip Grok for product documentation work right now. Maybe it'll improve, but as of my testing, it's simply not competitive with the other options. It felt like 1-2 years behind the others.ChatPRD: The Specialized ToolNow this was interesting. ChatPRD is purpose-built for PRDs, using foundational models underneath but with specific tuning and structure for product documentation.The result? The structure was logical, the depth was appropriate, and it included elements that showed understanding of what actually matters in a PRD. As I reflected: “Cause this one feels like, A human wrote this PRD.”The interface guides you through the process more deliberately than just dumping text into a general chat interface. It asks clarifying questions. It structures the output more thoughtfully.Actionable Insight: If you're a technical lead without a dedicated PM, or you're a PM who wants a more structured approach to using AI for PRDs, ChatPRD is worth the specialized focus. 
It's particularly good when you need something that feels authentic enough to share with stakeholders without heavy editing.Claude: The Clear WinnerBut the standout performer, and I'm ranking these, was Claude.“I think we know that for now, I'm gonna say Claude did the best job,” I concluded after all the testing. Claude produced the most comprehensive, thoughtful, and strategically sound PRD. But what really set it apart were the concept mocks.When I asked each tool to generate visual mockups of the product, Claude produced HTML prototypes that, while not fully functional, looked genuinely compelling. They had thoughtful UI design, clear information architecture, and felt like something that could actually guide development.“They were, like, closer to, like, what a Lovable would produce or something like that,” I noted, referring to the quality of low-fidelity prototypes that good designers create.The text quality was also superior: more nuanced, better structured, and with more strategic depth. It felt like Claude understood not just what a PRD should contain, but why it should contain those elements.Actionable Insight: For any PRD that matters, meaning anything you'll share with leadership, use to get buy-in, or guide actual product development, you might as well start with Claude. The quality difference is significant enough that it's worth using Claude even if you primarily use another tool for other tasks.Final Rankings: The Definitive HierarchyAfter testing all five tools on multiple dimensions: initial PRD generation, visual mockups, and even crafting a pitch paragraph for a skeptical VP of Engineering, here's my final ranking:* Claude - Best overall quality, most compelling mockups, strongest strategic thinking* ChatPRD - Best for structured PRD creation, feels most “human”* Gemini - Solid all-around performance, good Google integration* ChatGPT - Reliable but generic, lacks differentiation* Grok - Not competitive for this use case“I'd probably say Claude, then chat PRD, then Gemini, then chat GPT, and then Grock,” I concluded.The Deeper Lesson: Garbage In, Garbage Out (Still Applies)But here's what matters more than which tool wins: the realization that hit me partway through this experiment.“I think it really does come down to, like, you know, the quality of the prompt,” I observed. “So if our prompt were a little more detailed, all that were more thought-through, then I'm sure the output would have been better. But as you can see we didn't really put in brain trust prompting here. Just a little bit of, kind of hand-wavy prompting, but a little better than just one or two sentences.”And we still got pretty good results.This is the meta-insight that should change how you approach AI tools in your product work: The quality of your input determines the quality of your output, but the baseline quality of the tool determines the ceiling of what's possible.No amount of great prompting will make Grok produce Claude-level output. But even mediocre prompting with Claude will beat great prompting with lesser tools.So the dual strategy is:* Use the best tool available (currently Claude for PRDs)* Invest in improving your prompting skills ideally with as much original and insightful human, company aware, and context aware thinking as possible.Real-World Workflows: How to Actually Use This in Your Day-to-Day PM WorkTheory is great. 
Here's how to incorporate these insights into your actual product management workflows.The Weekly Sprint Planning WorkflowEvery PM I know spends hours each week preparing for sprint planning. You need to refine user stories, clarify acceptance criteria, anticipate engineering questions, and align with design and data science. AI can compress this work significantly.Here's an example workflow:Monday morning (30 minutes):* Review upcoming priorities and open your rough notes/outline in Google Docs* Open Claude and paste your outline with this prompt:“I'm preparing for sprint planning. Based on these priorities [paste notes], generate detailed user stories with acceptance criteria. Format each as: User story, Business context, Technical considerations, Acceptance criteria, Dependencies, Open questions.”Monday afternoon (20 minutes):* Review Claude's output critically* Identify gaps, unclear requirements, or missing context* Follow up with targeted prompts:“The user story about authentication is too vague. Break it down into separate stories for: social login, email/password, session management, and password reset. For each, specify security requirements and edge cases.”Tuesday morning (15 minutes):* Generate mockups for any UI-heavy stories:“Create an HTML mockup for the login flow showing: landing page, social login options, email/password form, error states, and success redirect.”* Even if the HTML doesn't work perfectly, it gives your designers a starting pointBefore sprint planning (10 minutes):* Ask Claude to anticipate engineering questions:“Review these user stories as if you're a senior engineer. What questions would you ask? What concerns would you raise about technical feasibility, dependencies, or edge cases?”* This preparation makes you look thoughtful and helps the meeting run smoothlyTotal time investment: ~75 minutes. Typical time saved: 3-4 hours compared to doing this manually.The Stakeholder Alignment WorkflowGetting alignment from multiple stakeholders (product leadership, engineering, design, data science, legal, marketing) is one of the hardest parts of PM work. AI can help you think through different stakeholder perspectives and craft compelling communications for each.Here's how:Step 1: Map your stakeholders (10 minutes)Create a quick table in a doc:Stakeholder | Primary Concern | Decision Criteria | Likely Objections VP Product | Strategic fit, ROI | Company OKRs, market opportunity | Resource allocation vs other priorities VP Eng | Technical risk, capacity | Engineering capacity, tech debt | Complexity, unclear requirements Design Lead | User experience | User research, design principles | Timeline doesn't allow proper design process Legal | Compliance, risk | Regulatory requirements | Data privacy, user consent flowsStep 2: Generate stakeholder-specific communications (20 minutes)For each key stakeholder, ask Claude:“I need to pitch this product idea to [Stakeholder]. Based on this PRD, create a 1-page brief addressing their primary concern of [concern from your table]. Open with the specific value for them, address their likely objection of [objection], and close with a clear ask. Tone should be [professional/technical/strategic] based on their role.”Then you'll have customized one-pagers for your pre-meetings with each stakeholder, dramatically increasing your alignment rate.Step 3: Synthesize feedback (15 minutes)After gathering stakeholder input, ask Claude to help you synthesize:“I got the following feedback from stakeholders: [paste feedback]. 
Identify: (1) Common themes, (2) Conflicting requirements, (3) Legitimate concerns vs organizational politics, (4) Recommended compromises that might satisfy multiple parties.”This pattern-matching across stakeholder feedback is something AI does really well and saves you hours of mental processing.The Quarterly Planning WorkflowQuarterly or annual planning is where product strategy gets real. You need to synthesize market trends, customer feedback, technical capabilities, and business objectives into a coherent roadmap. AI can accelerate this dramatically.Six weeks before planning:* Start collecting input (customer interviews, market research, competitive analysis, engineering feedback)* Don't wait until the last minuteFour weeks before planning:Dump everything into Claude with this structure:“I'm creating our Q2 roadmap. Context:* Business objectives: [paste from leadership]* Customer feedback themes: [paste synthesis]* Technical capabilities/constraints: [paste from engineering]* Competitive landscape: [paste analysis]* Current product gaps: [paste from your analysis]Generate 5 strategic themes that could anchor our Q2 roadmap. For each theme:* Strategic rationale (how it connects to business objectives)* Key initiatives (2-3 major features/projects)* Success metrics* Resource requirements (rough estimate)* Risks and mitigations* Customer segments addressed”This gives you a strategic framework to react to rather than starting from a blank page.Three weeks before planning:Iterate on the most promising themes:“Deep dive on Theme 3. Generate:* Detailed initiative breakdown* Dependencies on platform/infrastructure* Phasing options (MVP vs full build)* Go-to-market considerations* Data requirements* Open questions requiring research”Two weeks before planning:Pressure-test your thinking:“Play devil's advocate on this roadmap. What are the strongest arguments against each initiative? What am I likely missing? What failure modes should I plan for?”This adversarial prompting forces you to strengthen weak points before your leadership reviews it.One week before planning:Generate your presentation:“Create an executive presentation for this roadmap. Structure: (1) Market context and strategic imperative, (2) Q2 themes and initiatives, (3) Expected outcomes and metrics, (4) Resource requirements, (5) Key risks and mitigations, (6) Success criteria for decision. Make it compelling but data-driven. Tone: confident but not overselling.”Then add your company-specific context, visual brand, and personal voice.The Customer Research WorkflowAI can't replace talking to customers, but it can help you prepare better questions, analyze feedback more systematically, and identify patterns faster.Before customer interviews:“I'm interviewing customers about [topic]. Generate:* 10 open-ended questions that avoid leading the witness* 5 follow-up questions for each main question* Common cognitive biases I should watch for* A framework for categorizing responses”This prep work helps you conduct better interviews.After interviews:“I conducted 15 customer interviews. Here are the key quotes: [paste anonymized quotes]. Identify:* Recurring themes and patterns* Surprising insights that contradict our assumptions* Segments with different needs* Implied needs customers didn't articulate directly* Recommended next steps for validation”AI is excellent at pattern-matching across qualitative data at scale.The Crisis Management WorkflowSomething broke. The site is down. Data was lost. A feature shipped with a critical bug. 
The Crisis Management Workflow

Something broke. The site is down. Data was lost. A feature shipped with a critical bug. You need to move fast.

Immediate response (5 minutes):
“Critical incident. Details: [brief description]. Generate:
* Incident classification (Sev 1-4)
* Immediate stakeholders to notify
* Draft customer communication (honest, apologetic, specific about what happened and what we're doing)
* Draft internal communication for leadership
* Key questions to ask engineering during investigation”

Having these drafted in 5 minutes lets you focus on coordination and decision-making rather than wordsmithing.

Post-incident (30 minutes):
“Write a post-mortem based on this incident timeline: [paste timeline]. Include:
* What happened (technical details)
* Root cause analysis
* Impact quantification (users affected, revenue impact, time to resolution)
* What went well in our response
* What could have been better
* Specific action items with owners and deadlines
* Process changes to prevent recurrence
Tone: Blameless, focused on learning and improvement.”

This gives you a strong first draft to refine with your team.

Common Pitfalls: What Not to Do with AI in Product Management

Now let's talk about the mistakes I see PMs making with AI tools.

Pitfall #1: Treating AI Output as Final

The biggest mistake is copy-pasting AI output directly into your PRD, roadmap presentation, or stakeholder email without critical review.

The result? Documents that are grammatically perfect but strategically shallow. Presentations that sound impressive but don't hold up under questioning. Emails that are professionally worded but miss the subtext of organizational politics.

The fix: Always ask yourself:
* Does this reflect my actual strategic thinking, or generic best practices?
* Would my CEO/engineering lead/biggest customer find this compelling and specific?
* Are there company-specific details, customer insights, or technical constraints that only I know?
* Does this sound like me, or like a robot?

Add those elements. That's where your value as a PM comes through.

Pitfall #2: Using AI as a Crutch Instead of a Tool

Some PMs use AI because they don't want to think deeply about the product. They're looking for AI to do the hard work of strategy, prioritization, and trade-off analysis.

This never works. AI can help you think more systematically, but it can't replace thinking.

If you find yourself using AI to avoid wrestling with hard questions (“Should we build X or Y?” “What's our actual competitive advantage?” “Why would customers switch from the incumbent?”), you're using it wrong.

The fix: Use AI to explore options, not to make decisions. Generate three alternatives, pressure-test each one, then use your judgment to decide. The AI can help you think through implications, but you're still the one choosing.

Pitfall #3: Not Iterating

Getting mediocre AI output and just accepting it is a waste of the technology's potential.

The PMs who get exceptional results from AI are the ones who iterate. They generate an initial response, identify what's weak or missing, and ask follow-up questions. They might go through 5-10 iterations on a key section of a PRD.

Each iteration is quick (30 seconds to type a follow-up prompt, 30 seconds to read the response), but the cumulative effect is dramatically better output.

The fix: Budget time for iteration. Don't try to generate a complete, polished PRD in one prompt. Instead, generate a rough draft, then spend 30 minutes iterating on specific sections that matter most.
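The same iteration habit works if you are scripting rather than chatting: keep the whole exchange in a message list and append each critique as a new turn. A minimal sketch under the same Anthropic SDK assumptions as above; the outline.md file, the model id, and the follow-up critiques are just illustrative.

# Minimal sketch: a multi-turn refinement loop against a single model.
# Assumptions: `pip install anthropic`, ANTHROPIC_API_KEY set, an outline.md file,
# and a placeholder model id. The follow-up critiques are examples, not a fixed recipe.
from anthropic import Anthropic

client = Anthropic()
MODEL = "claude-sonnet-4-20250514"  # placeholder

def ask(history):
    """Send the running conversation and return the assistant's latest reply text."""
    reply = client.messages.create(model=MODEL, max_tokens=2000, messages=history)
    return reply.content[0].text

outline = open("outline.md").read()
history = [{"role": "user", "content": "Expand this outline into a full PRD:\n" + outline}]
draft = ask(history)

critiques = [
    "This is too generic. Add specific examples tied to the context in my outline.",
    "The success metrics section is weak. Propose 3-5 measurable KPIs with target values.",
    "Good. Now make it 30% more concise while keeping the key details.",
]

for critique in critiques:
    history.append({"role": "assistant", "content": draft})   # keep the model's last answer
    history.append({"role": "user", "content": critique})     # add your critique as a new turn
    draft = ask(history)

print(draft)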
Pitfall #4: Ignoring the Political and Human Context

AI tools have no understanding of organizational politics, interpersonal relationships, or the specific humans you're working with.

They don't know that your VP of Engineering is burned out and skeptical of any new initiatives. They don't know that your CEO has a personal obsession with a specific competitor. They don't know that your lead designer is sensitive about not being included early enough in the process.

If you use AI-generated communications without layering in this human context, you'll create perfectly worded documents that land badly because they miss the subtext.

The fix: After generating AI content, explicitly ask yourself: “What human context am I missing? What relationships do I need to consider? What political dynamics are in play?” Then modify the AI output accordingly.

Pitfall #5: Over-Relying on a Single Tool

Different AI tools have different strengths. Claude is great for strategic depth, ChatPRD is great for structure, Gemini integrates well with Google Workspace.

If you only ever use one tool, you're missing opportunities to leverage different strengths for different tasks.

The fix: Keep 2-3 tools in your toolkit. Use Claude for important PRDs and strategic documents. Use Gemini for quick internal documentation that needs to integrate with Google Docs. Use ChatPRD when you want more guided structure. Match the tool to the task.

Pitfall #6: Not Fact-Checking AI Output

AI tools hallucinate. They make up statistics, misrepresent competitors, and confidently state things that aren't true. If you include those hallucinations in a PRD that goes to leadership, you look incompetent.

The fix: Fact-check everything, especially:
* Statistics and market data
* Competitive feature claims
* Technical capabilities and limitations
* Regulatory and compliance requirements

If the AI cites a number or makes a factual claim, verify it independently before including it in your document.

The Meta-Skill: Prompt Engineering for PMs

Let's zoom out and talk about the underlying skill that makes all of this work: prompt engineering.

This is a real skill. The difference between a mediocre prompt and a great prompt can be a 10x difference in output quality. And unlike coding or design, where there's a steep learning curve, prompt engineering is something you can get good at quickly.

Principle 1: Provide Context Before Instructions

Bad prompt:
“Write a PRD for an AI tutor”

Good prompt:
“I'm a PM at an edtech company with 2M users, primarily high school students. We're exploring an AI tutor feature to complement our existing video content library and practice problems. Our main competitors are Khan Academy and Course Hero. Our differentiation is personalized learning paths based on student performance data.
Write a PRD for an AI tutor feature targeting students in the middle 80% academically who struggle with science and math.”

The second prompt gives Claude the context it needs to generate something specific and strategic rather than generic.
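If you find yourself retyping the same background in every prompt, one option is to keep the standing context in one place and prepend it automatically. A tiny, hypothetical sketch: COMPANY_CONTEXT and build_prompt are things you would write yourself for your own product, not part of any tool or SDK.

# Minimal sketch: always put context before the instruction.
# COMPANY_CONTEXT and build_prompt are hypothetical helpers; fill in your own details.
COMPANY_CONTEXT = """\
I'm a PM at an edtech company with 2M users, primarily high school students.
We're exploring an AI tutor feature to complement our video library and practice problems.
Main competitors: Khan Academy, Course Hero.
Differentiation: personalized learning paths based on student performance data."""

def build_prompt(task: str, constraints: str = "") -> str:
    """Compose standing context, the task, and optional format constraints into one prompt."""
    parts = [COMPANY_CONTEXT, task.strip()]
    if constraints:
        parts.append("Format and constraints:\n" + constraints.strip())
    return "\n\n".join(parts)

print(build_prompt(
    "Write a PRD for an AI tutor feature targeting students in the middle 80% "
    "academically who struggle with science and math.",
    constraints="Sections: Problem, Goals, Non-goals, Requirements, Success metrics, Risks.",
))

The optional constraints argument is there because format instructions are usually worth bundling with the task, which is the next principle.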
Principle 2: Specify Format and Constraints

Bad prompt:
“Generate success metrics”

Good prompt:
“Generate 5-7 success metrics for this feature. Include a mix of:
* Leading indicators (early signals of success)
* Lagging indicators (definitive success measures)
* User behavior metrics
* Business impact metrics
For each metric, specify: name, definition, target value, measurement method, and why it matters.”

The structure you provide shapes the structure you get back.

Principle 3: Ask for Multiple Options

Bad prompt:
“What should our Q2 priorities be?”

Good prompt:
“Generate 3 different strategic approaches for Q2:
* Option A: Focus on user acquisition
* Option B: Focus on engagement and retention
* Option C: Focus on monetization
For each option, detail: key initiatives, expected outcomes, resource requirements, risks, and recommendation for or against.”

Asking for multiple options forces the AI (and forces you) to think through trade-offs systematically.

Principle 4: Specify Audience and Tone

Bad prompt:
“Summarize this PRD”

Good prompt:
“Create a 1-paragraph summary of this PRD for our skeptical VP of Engineering. Tone: Technical, concise, addresses engineering concerns upfront. Focus on: technical architecture, resource requirements, risks, and expected engineering effort. Avoid marketing language.”

The audience and tone specification ensures the output will actually work for your intended use.

Principle 5: Use Iterative Refinement

Don't try to get perfect output in one prompt. Instead:

First prompt: Generate rough draft
Second prompt: “This is too generic. Add specific examples from [our company context].”
Third prompt: “The technical section is weak. Expand with architecture details and dependencies.”
Fourth prompt: “Good. Now make it 30% more concise while keeping the key details.”

Each iteration improves the output incrementally.

Let me break down the prompting approach that worked in this experiment, because this is immediately actionable for your work tomorrow.

Strategy 1: The Structured Outline Approach

Don't go from zero to full PRD in one prompt. Instead:
* Start with strategic thinking - Spend 10-15 minutes outlining why you're building this, who it's for, and what problem it solves
* Get specific - Don't say “users,” say “high school students in the middle 80% of academic performance”
* Include constraints - Budget, timeline, technical limitations, competitive landscape
* Dump your outline into the AI - Now ask it to expand into a full PRD
* Iterate section by section - Don't try to perfect everything at once

This is exactly what I did in my experiment, and even with my somewhat sloppy outline, the results were dramatically better than they would have been with a single-sentence prompt.

Strategy 2: The Comparative Analysis Pattern

One technique I used that worked particularly well: asking each tool to do the same specific task and comparing results.

For example, I asked all five tools: “Please compose a one paragraph exact summary I can share over DM with a highly influential VP of engineering who is generally a skeptic but super smart.”

This forced each tool to synthesize the entire PRD into a compelling pitch while accounting for a specific, challenging audience. The variation in quality was revealing—and it gave me multiple options to choose from or blend together.

Actionable tip: When you need something critical (a pitch, an executive summary, a key decision framework), generate it with 2-3 different AI tools and take the best elements from each. This “ensemble approach” often produces better results than any single tool.
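If you would rather not paste the same prompt into several browser tabs, the ensemble approach is easy to script. A minimal sketch assuming the official Anthropic and OpenAI Python SDKs with API keys in the environment; the model ids and the prd.md file are placeholders, and error handling is left out for brevity.

# Minimal sketch: send one high-stakes prompt to two providers and compare side by side.
# Assumptions: `pip install anthropic openai`, ANTHROPIC_API_KEY and OPENAI_API_KEY set,
# a prd.md file on disk, and placeholder model ids.
from anthropic import Anthropic
from openai import OpenAI

prompt = (
    "Please compose a one paragraph summary of the PRD below that I can share over DM "
    "with a highly influential VP of engineering who is generally a skeptic but super smart.\n\n"
    + open("prd.md").read()
)

def ask_claude(text: str) -> str:
    msg = Anthropic().messages.create(
        model="claude-sonnet-4-20250514",  # placeholder
        max_tokens=500,
        messages=[{"role": "user", "content": text}],
    )
    return msg.content[0].text

def ask_chatgpt(text: str) -> str:
    resp = OpenAI().chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[{"role": "user", "content": text}],
    )
    return resp.choices[0].message.content

for name, ask in [("Claude", ask_claude), ("ChatGPT", ask_chatgpt)]:
    print("=====", name, "=====")
    print(ask(prompt))
    print()

Reading the answers next to each other makes it much easier to notice what each one gets right, which is the whole point of the ensemble.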
Strategy 3: The Iterative Refinement Loop

Don't treat the AI output as final. Use it as a first draft that you then refine through conversation with the AI.

After getting the initial PRD, I could have asked follow-up questions like:
* “What's missing from this PRD?”
* “How would you strengthen the success metrics section?”
* “Generate 3 alternative approaches to the core feature set”

Each iteration improves the output and, more importantly, forces me to think more deeply about the product.

What This Means for Your Career

If you're an early or mid-career PM reading this, you might be thinking: “Great, so AI can write PRDs now. Am I becoming obsolete?”

Absolutely not. But your role is evolving, and understanding that evolution is critical.

The PMs who will thrive in the AI era are those who:
* Excel at strategic thinking - AI can generate options, but you need to know which options align with company strategy, customer needs, and technical feasibility
* Master the art of prompting - This is a genuine skill that separates mediocre AI users from exceptional ones
* Know when to use AI and when not to - Some aspects of product work benefit enormously from AI. Others (user interviews, stakeholder negotiation, cross-functional relationship building) require human judgment and empathy
* Can evaluate AI output critically - You need to spot the hallucinations, the generic fluff, and the strategic misalignments that AI inevitably produces

Think of AI tools as incredibly capable interns. They can produce impressive work quickly, but they need direction, oversight, and strategic guidance. Your job is to provide that guidance while leveraging their speed and breadth.

The Real-World Application: What to Do Monday Morning

Let's get tactical. Here's exactly how to apply these insights to your actual product work:

For Your Next PRD:
* Block 30 minutes for strategic thinking - Write your back-of-the-napkin outline in Google Docs or your tool of choice
* Open Claude (or ChatPRD if you want more structure)
* Copy your outline with this prompt:
“I'm a product manager at [company] working on [product area]. I need to create a comprehensive PRD based on this outline. Please expand this into a complete PRD with the following sections: [list your preferred sections]. Make it detailed enough for engineering to start breaking down into user stories, but concise enough for leadership to read in 15 minutes. [Paste your outline]”
* Review the output critically - Look for generic statements, missing details, or strategic misalignments
* Iterate on specific sections:
“The success metrics section is too vague. Please provide 3-5 specific, measurable KPIs with target values and explanation of why these metrics matter.”
* Generate supporting materials:
“Create a visual mockup of the core user flow showing the key interaction points.”
* Synthesize the best elements - Don't just copy-paste the AI output. Use it as raw material that you shape into your final document

For Stakeholder Communication:

When you need to pitch something to leadership or engineering:
* Generate 3 versions of your pitch using different tools (Claude, ChatPRD, and one other)
* Compare them for:
* Clarity and conciseness
* Strategic framing
* Compelling value proposition
* Addressing likely objections
* Blend the best elements into your final version
* Add your personal voice - This is crucial. AI output often lacks personality and specific company context. Add that yourself.

For Feature Prioritization:

AI tools can help you think through trade-offs more systematically:
“I'm deciding between three features for our next release: [Feature A], [Feature B], and [Feature C].
For each feature, analyze: (1) Estimated engineering effort, (2) Expected user impact, (3) Strategic alignment with making our platform the go-to solution for [your market], (4) Risk factors. Then recommend a prioritization with rationale.”

This doesn't replace your judgment, but it forces you to think through each dimension systematically and often surfaces considerations you hadn't thought of.

The Uncomfortable Truth About AI and Product Management

Let me be direct about something that makes many PMs uncomfortable: AI will make some PM skills less valuable while making others more valuable.

Less valuable:
* Writing boilerplate documentation
* Creating standard frameworks and templates
* Generating routine status updates
* Synthesizing information from existing sources

More valuable:
* Strategic product vision and roadmapping
* Deep customer empathy and insight generation
* Cross-functional leadership and influence
* Critical evaluation of options and trade-offs
* Creative problem-solving for novel situations

If your PM role primarily involves the first category of tasks, you should be concerned. But if you're focused on the second category while leveraging AI for the first, you're going to be exponentially more effective than your peers who resist these tools.

The PMs I see succeeding aren't those who can write the best PRD manually. They're those who can write the best PRD with AI assistance in one-tenth the time, then use the saved time to talk to more customers, think more deeply about strategy, and build stronger cross-functional relationships.

Advanced Techniques: Beyond Basic PRD Generation

Once you've mastered the basics, here are some advanced applications I've found valuable:

Competitive Analysis at Scale
“Research our top 5 competitors in [market]. For each one, analyze: their core value proposition, key features, pricing strategy, target customer, and likely product roadmap based on recent releases and job postings. Create a comparison matrix showing where we have advantages and gaps.”
Then use web search tools in Claude or Perplexity to fact-check and expand the analysis.

Scenario Planning
“We're considering three strategic directions for our product: [Direction A], [Direction B], [Direction C]. For each direction, map out: likely customer adoption curve, required technical investments, competitive positioning in 12 months, and potential pivots if the hypothesis proves wrong. Then identify the highest-risk assumptions we should test first for each direction.”
This kind of structured scenario thinking is exactly what AI excels at—generating multiple well-reasoned perspectives quickly.

User Story Generation
After your PRD is solid:
“Based on this PRD, generate a complete set of user stories following the format ‘As a [user type], I want to [action] so that [benefit].' Include acceptance criteria for each story. Organize them into epics by functional area.”
This can save your engineering team hours of grooming meetings.
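If your tracker can import CSV, you can take the user-story step one notch further and turn the model's output into a file your team can review before import. A minimal sketch, same Anthropic SDK assumptions as before; the requested CSV columns and the model id are placeholder choices, and you should spot-check every row before importing anything.

# Minimal sketch: turn a PRD into a CSV of user stories you can review and import.
# Assumptions: `pip install anthropic`, ANTHROPIC_API_KEY set, a prd.md file on disk,
# and a placeholder model id. Output quality varies; review before importing.
import csv
import io
from anthropic import Anthropic

client = Anthropic()
prd = open("prd.md").read()

prompt = (
    "Based on this PRD, generate a complete set of user stories in the format "
    "'As a [user type], I want to [action] so that [benefit].' "
    "Return ONLY CSV with the header: epic,story,acceptance_criteria\n\nPRD:\n" + prd
)

msg = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder
    max_tokens=3000,
    messages=[{"role": "user", "content": prompt}],
)

rows = list(csv.reader(io.StringIO(msg.content[0].text)))
with open("user_stories.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

print("Wrote", max(len(rows) - 1, 0), "stories to user_stories.csv; review before importing.")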
The Tools Will Keep Evolving. Your Process Shouldn't

Here's something important to remember: by the time you read this, the specific rankings might have shifted. Maybe ChatGPT-5 has leapfrogged Claude. Maybe a new specialized tool has emerged.

But the core principles won't change:
* Do strategic thinking before touching AI
* Use the best tool available for your specific task
* Iterate and refine rather than accepting first outputs
* Blend AI capabilities with human judgment
* Focus your time on the uniquely human aspects of product management

The specific tools matter less than your process for using them effectively.

A Final Experiment: The Skeptical VP Test

I want to share one more insight from my testing that I think is particularly relevant for early and mid-career PMs.

Toward the end of my experiment, I gave each tool this prompt: “Please compose a one paragraph exact summary I can share over DM with a highly influential VP of engineering who is generally a skeptic but super smart.”

This is such a realistic scenario. How many times have you needed to pitch an idea to a skeptical technical leader via Slack or email? Someone who's brilliant, who's seen a thousand product ideas fail, and who can spot b******t from a mile away?

The quality variation in the responses was fascinating. ChatGPT gave me something that felt generic and safe. Gemini was better but still a bit too enthusiastic. Grok was... well, Grok.

But Claude and ChatPRD both produced messages that felt authentic, technically credible, and appropriately confident without overselling. They acknowledged the engineering challenges while framing the opportunity compellingly.

The lesson: When the stakes are high and the audience is sophisticated, the quality of your AI tool matters even more. That skeptical VP can tell the difference between a carefully crafted message and AI-generated fluff. So can your CEO. So can your biggest customers.

Use the best tools available, but more importantly, always add your own strategic thinking and authentic voice on top.

Questions to Consider: A Framework for Your Own Experiments

As I wrapped up my Loom, I posed some questions to the audience that I'll pose to you:

“Let me know in the comments, if you do your PRDs using AI differently, do you start with back of the envelope? Do you say, oh no, I just start with one sentence, and then I let the chatbot refine it with me? Or do you go way more detailed and then use the chatbot to kind of pressure test it?”

These aren't rhetorical questions. Your answer reveals your approach to AI-augmented product work, and different approaches work for different people and contexts.

For early-career PMs: I'd recommend starting with more detailed outlines. The discipline of thinking through your product strategy before touching AI will make you a stronger PM. You can always compress that process later as you get more experienced.

For mid-career PMs: Experiment with different approaches for different types of documents. Maybe you do detailed outlines for major feature PRDs but use more iterative AI-assisted refinement for smaller features or updates. Find what optimizes your personal productivity while maintaining quality.

For senior PMs and product leaders: Consider how AI changes what you should expect from your PM team. Should you be reviewing more AI-generated first drafts and spending more time on strategic guidance? Should you be training your team on effective AI usage? These are leadership questions worth grappling with.

The Path Forward: Continuous Experimentation

My experiment with these five AI tools took 45 minutes. But I'm not done experimenting.

The field of AI-assisted product management is evolving rapidly. New tools launch monthly. Existing tools get smarter weekly.
Prompting techniques that work today might be obsolete in three months.

Your job, if you want to stay at the forefront of product management, is to continuously experiment. Try new tools. Share what works with your peers. Build a personal knowledge base of effective prompts and workflows. And be generous with what you learn. The PM community gets stronger when we share insights rather than hoarding them.

That's why I created this Loom and why I'm writing this post. Not because I have all the answers, but because I'm figuring it out in real time and want to share the journey.

A Personal Note on Coaching and Consulting

If this kind of practical advice resonates with you, I'm happy to work with you directly.

Through my PM coaching practice, I offer 1:1 executive, career, and product coaching for PMs and product leaders. We can dig into your specific challenges: whether that's leveling up your AI workflows, navigating a career transition, or developing your strategic product thinking.

I also work with companies (usually startups or incubation teams) on product strategy, helping teams figure out PMF for new explorations and improve their product management function.

The format is flexible. Some clients want ongoing coaching, others prefer project-based consulting, and some just want a strategic sounding board for a specific decision. Whatever works for you.

Reach out through tomleungcoaching.com if you're interested in working together.

OK. Enough pontificating. Let's ship greatness.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit firesidepm.substack.com

The Last American Vagabond
The US Works With ISIS In Syria, The Suspect Bondi Beach Attack & Held Hostage By Geoengineering

The Last American Vagabond

Play Episode Listen Later Dec 14, 2025 190:11


Welcome to The Daily Wrap Up, an in-depth investigatory show dedicated to bringing you the most relevant independent news, as we see it, from the last 24 hours (12/14/25). As always, take the information discussed in the video below and research it for yourself, and come to your own conclusions. Anyone telling you what the truth is, or claiming they have the answer, is likely leading you astray, for one reason or another. Stay Vigilant. Video Source Links (In Chronological Order): (1) Aaron Rupar on X: "Q: If DOJ doesn't release the files it has by next Friday, is there anything Congress can do to compel that to happen? MASSIE: It's a crime if they don't. This is a new law with criminal implications if they don't follow it. https://t.co/CzsXCuyXxu" / X The Last American Vagabond on X: "“One of Kash Patel's staff threatened my staff with a criminal investigation if we didn't “straighten up & play ball” … they said “we're going to investigate your staff for fraud”” - if true, Kash is “weaponizing gov” or wiling to let a criminal walk. https://t.co/hGMfzFalsz" / X New Epstein Photos Released by Democrats Show Donald Trump, Bill Clinton - WSJ Micah on X: "Steve Bannon: ‘Democrats are scared of the Epstein Files being released.' Also Steve Bannon: https://t.co/zAvqClp3v2" / X (11) SUAREZ on X: "@Polymarket Bro trying everything to get people to forget about the files https://t.co/jzrH1VQuyD" / X (11) SilencedSirs◼️ on X: "

The Whole Rabbit
CHAOS MAGICK #6: Cyber Magick, AI Gods and Technomancy 101 (PART A)

The Whole Rabbit

Play Episode Listen Later Dec 13, 2025 44:44


Send us comments, suggestions and ideas here! In this week's episode we unzip the hidden file on the bonus floppy disk that came with the Necronomicon, upload its contents directly to the miniature astral hard drive hidden inside our pineal glands and begin installing Chaos Magick #6, an instruction manual on Technomancy 101, also known as the weird art and science of Cyber Magick! In the first half of the show we discuss the overlap between technology and magick, the promise and threat of AI gods and retrocausality. In the extended half of the show we talk shop about making AI sigils (do they even work?) and how to use the Cosmic Control Terminal like an ultra dangerous chaos magick hacker edge lord, like me. Thank you and enjoy the show!

In this week's episode we discuss:
Arthur C. Clarke's Three Laws
Trick Rock Into Thinking
State of the Art
Does AI have Ka?
Peter Carroll's Psybermagick
Joshua Madera's Technomancy 101

In the extended show available at www.patreon.com/TheWholeRabbit we go further down the rabbit hole to discuss:
The Hacker Method // Cosmic Control Terminal
Astral AI Sigils
Virtual Reality Magick
AI as a Lovecraftian Deity
Ghost In the Shell

Each host is responsible for writing and creating the content they present. Luke in red, Heka in purple, Tim in black-green, Mari in blue.

Where to find The Whole Rabbit:
Spotify: https://open.spotify.com/show/0AnJZhmPzaby04afmEWOAV
Instagram: https://www.instagram.com/the_whole_rabbit
Twitter: https://twitter.com/1WholeRabbit
Order Stickers: https://www.stickermule.com/thewholerabbit
Other Merchandise: https://thewholerabbit.myspreadshop.com/
Music By Spirit Travel Plaza: https://open.spotify.com/artist/30dW3WB1sYofnow7y3V0Yo

Sources:
Peter Carroll's Blog: https://www.specularium.org/blog
Technomancy 101, Joshua Madera: https://technomancy101.com/
Psybermagick, Peter Carroll: https://www.amazon.com/PsyberMagick-Advanced-Ideas-Chaos-Magick/dp/1935150650

Support the show

Late Confirmation by CoinDesk
El Salvador Teams With Elon Musk's xAI for National Education | CoinDesk Daily

Late Confirmation by CoinDesk

Play Episode Listen Later Dec 12, 2025 1:45


El Salvador partners with Elon Musk's xAI to build an AI-powered education system. El Salvador is partnering with Elon Musk's xAI to launch the world's first national AI-powered public education system. The government plans to deploy the Grok chatbot to over 5,000 schools for millions of students. CoinDesk's Jennifer Sanasie hosts "CoinDesk Daily." - This episode was hosted by Jennifer Sanasie. “CoinDesk Daily” is produced by Jennifer Sanasie and edited by Victor Chen.

This Day in AI Podcast
GPT-5.2 Can't Identify a Serial Killer & Was The Year of Agents A Lie? EP99.28-5.2

This Day in AI Podcast

Play Episode Listen Later Dec 12, 2025 63:31


Join Simtheory: https://simtheory.aiGPT-5.2 is here and... it's not great. In this episode, we put OpenAI's latest model through its paces and discover it can't even identify a convicted serial killer when the text literally says "serial killer." We compare it head-to-head with Claude Opus and Gemini 3 Pro (spoiler: they win). Plus, we reflect on the "Year of Agents" that wasn't, why your barber switched to Grok, Disney's billion-dollar investment to use Mickey Mouse in Sora, and why Mustafa Suleyman should probably be fired. Also featuring: the GPT-5.2 diss track where the model brags about capabilities it doesn't have.CHAPTERS:00:00 Intro - GPT-5.2 Drops + Details01:25 First Impressions: Verbose, Overhyped, Vibe-Tuned02:52 OpenAI's Rushed Response to Gemini 303:24 Tool Calling Problems & Agentic Failures04:14 Why Anthropic's Models Just Work Better06:31 The Barber Test: Real Users Are Switching to Grok10:00 The Ivan Milat Vision Test (Serial Killer Edition)17:04 Year of Agents Retrospective: What Went Wrong25:28 The Path to True Agentic Workflows31:22 GPT-5.2 Diss Track (Yes, Really)43:43 Why We're Still Optimistic About AI50:29 Google Bringing Ads to Gemini in 202654:46 Disney Pays $1B to Use Mickey Mouse in Sora56:57 LOL of the Week: Mustafa Suleyman's Sad Tweets1:00:35 Outro & Full GPT-5.2 Diss TrackThanks for listening. Like & Sub. xoxox

Marketing Against The Grain
‘My Data Proves SEO is NOT Dead' + How to Rank #1 on Google & AI

Marketing Against The Grain

Play Episode Listen Later Dec 11, 2025 45:48


Get our 10 AI prompts to dominate SEO & AEO: https://clickhubspot.com/mbc Is SEO more relevant than we thought? Ep. 385 Kipp, Kieran, and Ethan Smith, CEO of Graphite, dive into the data behind the real state of SEO, debunking the myths that SEO is dead and uncovering what's really changing with the rise of answer engine optimization. Learn more on whether LLMs are really overtaking search, how marketers should invest between SEO and AEO, and the most repeatable tactics to rank #1 on Google and show up in AI-generated answers. Mentions Ethan Smith https://www.linkedin.com/in/ethanls Graphite https://graphite.io/ Ahrefs https://ahrefs.com/ ChatGPT https://chatgpt.com/ Grok https://grok.com/ Claude https://claude.ai/ Perplexity https://www.perplexity.ai/ Get our guide to build your own Custom GPT: https://clickhubspot.com/customgpt We're creating our next round of content and want to ensure it tackles the challenges you're facing at work or in your business. To understand your biggest challenges we've put together a survey and we'd love to hear from you! https://bit.ly/matg-research Resource [Free] Steal our favorite AI Prompts featured on the show! Grab them here: https://clickhubspot.com/aip We're on Social Media! Follow us for everyday marketing wisdom straight to your feed YouTube: ​​https://www.youtube.com/channel/UCGtXqPiNV8YC0GMUzY-EUFg  Twitter: https://twitter.com/matgpod  TikTok: https://www.tiktok.com/@matgpod  Join our community https://landing.connect.com/matg Thank you for tuning into Marketing Against The Grain! Don't forget to hit subscribe and follow us on Apple Podcasts (so you never miss an episode)! https://podcasts.apple.com/us/podcast/marketing-against-the-grain/id1616700934   If you love this show, please leave us a 5-Star Review https://link.chtbl.com/h9_sjBKH and share your favorite episodes with friends. We really appreciate your support. Host Links: Kipp Bodnar, https://twitter.com/kippbodnar   Kieran Flanagan, https://twitter.com/searchbrat  ‘Marketing Against The Grain' is a HubSpot Original Podcast // Brought to you by Hubspot Media // Produced by Darren Clarke.

Kilowatt: A Podcast about Tesla
Don't Text and FSD

Kilowatt: A Podcast about Tesla

Play Episode Listen Later Dec 11, 2025 30:28


Episode Summary

In this episode of Kilowatt, Elon Musk stirs excitement with claims that Tesla would remove the Robotaxi safety monitors within three weeks. Tesla Shanghai hits a major milestone by producing its 4 millionth vehicle. We also break down Tesla's 2025 Holiday Update, featuring Grok voice commands, a new Santa Mode, and an FSD dashcam overlay, but still no CarPlay. A new "Eyes-Off" feature allows texting while driving under FSD supervision, but you shouldn't use it. VW's Scout brand is shifting focus to range-extended EVs due to consumer preferences. Lastly, Waymo is expanding its autonomous vehicle service to four new cities.

Support the Show
Support Kilowatt

Other Podcasts
Beyond the Post YouTube
Beyond the Post Podcast
Shuffle Playlist
918Digital Website

News Links
Tesla CEO Elon Musk Claims Driverless Robotaxis Coming to Austin in 3 Weeks
Tesla Announces Major Milestone at Gigafactory Shanghai
Tesla Announces 2025 Holiday Update: Grok Commands, New Santa Mode, Dashcam FSD Overlay, But Lacks CarPlay
Tesla Introduces Early Eyes-Off Feature; Allows to Text and Drive
VW's Scout is Making a Hard Pivot Towards Range-Extended EVs Amid Consumer Demand
Waymo Speeds Into More Cities!

*ART PROVIDED BY DALL-e

Support this show http://supporter.acast.com/kilowatt. Hosted on Acast. See acast.com/privacy for more information.

The Saad Truth with Dr. Saad
Somalia's Rankings in Global Indices of Human Flourishing and Happiness (The Saad Truth with Dr. Saad_933)

The Saad Truth with Dr. Saad

Play Episode Listen Later Dec 9, 2025 7:42


Is there any validity to what Donald Trump said about Somalia? Perhaps Somali immigrants should be infinitely grateful to have been welcomed into the United States. The results from Grok: https://x.com/i/grok/share/eOY8rzrpmCKsvS3iC5KpK1lVE _______________________________________ If you appreciate my work and would like to support it: https://subscribestar.com/the-saad-truth https://patreon.com/GadSaad https://paypal.me/GadSaad To subscribe to my exclusive content on X, please visit my bio at https://x.com/GadSaad _______________________________________ This clip was posted on December 9, 2025 on my YouTube channel as THE SAAD TRUTH_1956: https://youtu.be/1JJqkqnBXz0 _______________________________________ Please visit my website gadsaad.com, and sign up for alerts. If you appreciate my content, click on the "Support My Work" button. I count on my fans to support my efforts. You can donate via Patreon, PayPal, and/or SubscribeStar. _______________________________________ Dr. Gad Saad is a professor, evolutionary behavioral scientist, and author who pioneered the use of evolutionary psychology in marketing and consumer behavior. In addition to his scientific work, Dr. Saad is a leading public intellectual who often writes and speaks about idea pathogens that are destroying logic, science, reason, and common sense.  _______________________________________  

TechLinked
Intel Arc B770(?), Chatbot Ads, EU makes Meta better + more!

TechLinked

Play Episode Listen Later Dec 9, 2025 9:07


Timestamps: 0:00 why does he do this 0:15 Possible Intel Arc B770 leaks 1:28 Ads in Gemini, Grok, "chatbot dialect" 2:57 EU makes Meta use less personal data 4:11 DeleteMe! 4:59 QUICK BITS INTRO 5:10 Black Friday Xbox sales report 5:54 AI browser security warning 6:36 Treatment repairs DNA, tissue 7:16 EngineAI answers 'CGI' claims NEWS SOURCES: https://lmg.gg/CUdUj Learn more about your ad choices. Visit megaphone.fm/adchoices

The Law Firm Marketing Minute
One Blog Strategy That Will Be Critical in 2026

The Law Firm Marketing Minute

Play Episode Listen Later Dec 9, 2025 2:01 Transcription Available


In this episode, Danny Decker breaks down why the blogging playbook for small law firms is about to change in a major way. As more consumers shift from Google search to AI tools like ChatGPT, Gemini, and Grok, the old keyword-stuffed approach is quickly losing its power. Danny explains why human-friendly, genuinely valuable content is becoming the new gold standard — and how law firms that embrace this shift in 2026 will stand out in AI search far more than those still writing for algorithms. It's a short, sharp insight every small firm needs as they prepare for next year's competitive landscape. 

Master The NEC Podcast
Master The NEC | Episode 44 | The Best Ai for Contractors

Master The NEC Podcast

Play Episode Listen Later Dec 9, 2025 10:47 Transcription Available


In this episode, Paul talks about how important it is to have the NEC and other construction standards readily available when doing installations or inspections. In fact, making sure you are doing it right the first time sends a message to the customer that you know what you are doing and to the inspector that you are the best in the business. With a tailored AI model designed for all trades, we can ensure you get the right results fast and accurately while in the field and on the job. Give TradeHog.Net a TRY for free and ask the HOG any questions you want.

Listen as Paul Abernathy, CEO and Founder of Electrical Code Academy, Inc., the leading electrical educator in the country, discusses electrical code, electrical trade, and electrical business-related topics to help electricians maximize their knowledge and industry investment.

If you are looking to learn more about the National Electrical Code, for electrical exam preparation, or to better your knowledge of the NEC, then visit https://fasttraxsystem.com for all the electrical code training you will ever need by the leading electrical educator in the country with the best NEC learning program on the planet.

Become a supporter of this podcast: https://www.spreaker.com/podcast/master-the-nec-podcast--1083733/support.

Struggling with the National Electrical Code? Discover the real difference at Electrical Code Academy, Inc.—where you'll learn from the nation's most down-to-earth NEC expert who genuinely cares about your success. No fluff. No gimmicks. Just the best NEC training you'll actually remember.

Visit https://FastTraxSystem.com to learn more.

Michael and Us
#675 - Where the Lions Weep

Michael and Us

Play Episode Listen Later Dec 8, 2025 56:42


Ten years before Siri and a quarter-century before Grok, Steven Spielberg and the late Stanley Kubrick considered a new form of humanity in A.I. ARTIFICIAL INTELLIGENCE (2001). We discuss a meeting of the minds between two very distinctive authorial voices. Join us on Patreon for an extra episode every week - https://www.patreon.com/michaelandus "The Best of Both Worlds" by Jonathan Rosenbaum - https://jonathanrosenbaum.net/2024/01/the-best-of-both-worlds/ "A Matter of Life and Death" by Jonathan Rosenbaum - https://jonathanrosenbaum.net/2024/05/a-matter-of-life-and-death-ai-artificial-intelligence-tk/ Call for submissions: "The Journal of Stoogeological Studies Vol. 2" - https://x.com/WillSloanEsq/status/1995213884635193456

Lenny's Podcast: Product | Growth | Career
The 100-person AI lab that became Anthropic and Google's secret weapon | Edwin Chen (Surge AI)

Lenny's Podcast: Product | Growth | Career

Play Episode Listen Later Dec 7, 2025 70:31


Edwin Chen is the founder and CEO of Surge AI, the company that teaches AI what's good vs. what's bad, powering frontier labs with elite data, environments, and evaluations. Surge surpassed $1 billion in revenue with under 100 employees last year, completely bootstrapped—the fastest company in history to reach this milestone. Before founding Surge, Edwin was a research scientist at Google, Facebook, and Twitter and studied mathematics, computer science, and linguistics at MIT.We discuss:1. How Surge reached over $1 billion in revenue with fewer than 100 people by obsessing over quality2. The story behind how Claude Code got so good at coding and writing3. The problems with AI benchmarks and why they're pushing AI in the wrong direction4. How RL environments are the next frontier in AI training5. Why Edwin believes we're still a decade away from AGI6. Why taste and human judgment shape which AI models become industry leaders7. His contrarian approach to company building that rejects Silicon Valley's “pivot and blitzscale” playbook8. How AI models will become increasingly differentiated based on the values of the companies building them—Brought to you by:Vanta—Automate compliance. Simplify security.WorkOS—Modern identity platform for B2B SaaS, free up to 1 million MAUsCoda—The all-in-one collaborative workspace—Transcript: https://www.lennysnewsletter.com/p/surge-ai-edwin-chen—My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/180055059/my-biggest-takeaways-from-this-conversation—Where to find Edwin Chen:• X: https://x.com/echen• LinkedIn: https://www.linkedin.com/in/edwinzchen• Surge's blog: https://surgehq.ai/blog—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Introduction to Edwin Chen(04:48) AI's role in business efficiency(07:08) Building a contrarian company(08:55) An explanation of what Surge AI does(09:36) The importance of high-quality data(13:31) How Claude Code has stayed ahead(17:37) Edwin's skepticism toward benchmarks(21:54) AGI timelines and industry trends(28:33) The Silicon Valley machine(33:07) Reinforcement learning and future AI training(39:37) Understanding model trajectories(41:11) How models have advanced and will continue to advance(42:55) Adapting to industry needs(44:39) Surge's research approach(48:07) Predictions for the next few years in AI(50:43) What's underhyped and overhyped in AI(52:55) The story of founding Surge AI(01:02:18) Lightning round and final thoughts—Referenced:• Surge: https://surgehq.ai• Surge's product page: https://surgehq.ai/products• Claude Code: https://www.claude.com/product/claude-code• Gemini 3: https://aistudio.google.com/models/gemini-3• Sora: https://openai.com/sora• Terrence Rohan on LinkedIn: https://www.linkedin.com/in/terrencerohan• Richard Sutton—Father of RL thinks LLMs are a dead end: https://www.dwarkesh.com/p/richard-sutton• The Bitter Lesson: http://www.incompleteideas.net/IncIdeas/BitterLesson.html• Reinforcement learning: https://en.wikipedia.org/wiki/Reinforcement_learning• Grok: https://grok.com• Warren Buffett on X: https://x.com/WarrenBuffett• OpenAI's CPO on how AI changes must-have skills, moats, coding, startup playbooks, more | Kevin Weil (CPO at OpenAI, ex-Instagram, Twitter): https://www.lennysnewsletter.com/p/kevin-weil-open-ai• Anthropic's CPO on what comes next | Mike Krieger (co-founder of Instagram): 
https://www.lennysnewsletter.com/p/anthropics-cpo-heres-what-comes-next• Brian Armstrong on LinkedIn: https://www.linkedin.com/in/barmstrong• Interstellar on Prime Video: https://www.amazon.com/Interstellar-Matthew-McConaughey/dp/B00TU9UFTS• Arrival on Prime Video: https://www.amazon.com/Arrival-Amy-Adams/dp/B01M2C4NP8• Travelers on Netflix: https://www.netflix.com/title/80105699• Waymo: https://waymo.com• Soda versus pop: https://flowingdata.com/2012/07/09/soda-versus-pop-on-twitter—Recommended books:• Stories of Your Life and Others: https://www.amazon.com/Stories-Your-Life-Others-Chiang/dp/1101972122• The Myth of Sisyphus: https://www.amazon.com/Myth-Sisyphus-Vintage-International/dp/0525564454• Le Ton Beau de Marot: In Praise of the Music of Language: https://www.amazon.com/dp/0465086454• Gödel, Escher, Bach: An Eternal Golden Braid: https://www.amazon.com/G%C3%B6del-Escher-Bach-Eternal-Golden/dp/0465026567—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com

The Patrick Madrid Show
The Patrick Madrid Show: December 05, 2025 - Hour 1

The Patrick Madrid Show

Play Episode Listen Later Dec 5, 2025 51:03


Patrick explores cashless economies, challenges around taxation, socialism's ripple effects on faith and family, and the wounds caused by political division. The episode moves from economic systems to fractured relationships, Catholic doctrine, and philosophical tactics, connecting political trends with everyday faith and forgiveness. Listener questions spark personal stories and practical advice, all threaded through Patrick's candid, sometimes witty exchanges.

Audio: Why would the banks want to move away from cash? The only ones left with $100 is the banks https://x.com/rainmaker1973/status/1987139111568752874?s=46&t=m_l2itwnFvka2DG8_72nHQ (00:20)
Audio: Economist Arthur Laffer - "If you tax people who work, and you pay people who don't work, do not be surprised if you have a lot of people not working." https://x.com/redpilldispensr/status/1986721172759748978 (02:25)
Audio: Socialism is killing America. Every time the government makes something affordable and accessible, it becomes unaffordable. https://x.com/LibertyCappy/status/1986584384770572486 (04:55)
Audio: I let politics ruin my relationship with my mother - https://x.com/onebaddude_/status/1987579620208656395?s=46&t=m_l2itwnFvka2DG8_72nHQ (08:02)
Patrick in Denver, CO - Does going to an Orthodox Church fulfill one's Sunday Obligation? How can I persuade a Catholic not to do this? (25:18)
Roger - I was listening on my drive to work, and you were discussing Marxism. Have you ever read 'Awake Not Woke'? (36:42)
Theresa - Does Jesus give His future and present Body to the Apostles? (40:15)
A listener shares the answers Grok thinks Patrick Madrid would give (47:52)

Originally Aired on 11-10-2025

Decoder with Nilay Patel
The tiny team trying to keep AI from destroying everything

Decoder with Nilay Patel

Play Episode Listen Later Dec 4, 2025 38:20


Today, I'm talking with Verge senior AI reporter Hayden Field about some of the people responsible for studying AI and deciding in what ways it might… well, ruin the world. Those folks work at Anthropic as part of a group called the societal impacts team, which Hayden just spent time with for a profile she published this week on The Verge.  The team is just nine people out of more than 2,000 who work at Anthropic, and their only job, as the team members themselves say, is to investigate and publish "inconvenient truths" about AI. That of course brings up a whole host of problems, the most important of which is whether this team can remain independent, or even exist at all, as it publicizes findings about Anthropic's own products that might be unflattering or even politically fraught.

Links:
It's their job to keep AI from destroying everything | The Verge
Anthropic details how it measures Claude's wokeness | The Verge
White House orders tech companies to make AI bigoted again | The Verge
Chaos and lies: Why Sam Altman was booted from OpenAI | The Verge
How Elon Musk Is remaking Grok in his image | NYT
Anthropic tries to defuse White House backlash | Axios
New AI battle: White House vs. Anthropic | Axios
Anthropic will pursue gulf state investments after all | Wired

Subscribe to The Verge to access the ad-free version of Decoder! Credits: Decoder is a production of The Verge and part of the Vox Media Podcast Network. Decoder is produced by Kate Cox and Nick Statt and edited by Ursa Wright. Our editorial director is Kevin McShane.  The Decoder music is by Breakmaster Cylinder. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Lenny's Podcast: Product | Growth | Career
Why LinkedIn is turning PMs into AI-powered "full stack builders” | Tomer Cohen (LinkedIn CPO)

Lenny's Podcast: Product | Growth | Career

Play Episode Listen Later Dec 4, 2025 67:32


Tomer Cohen is the longtime chief product officer at LinkedIn, where he's pioneering the Full Stack Builder program, a radical new approach to product development that fully embraces what AI makes possible. Under his leadership, LinkedIn has scrapped its traditional Associate Product Manager program and replaced it with an Associate Product Builder program that teaches coding, design, and PM skills together. He's also introduced a formal “Full Stack Builder” title and career ladder, enabling anyone from any function to take products from idea to launch. In this conversation, Tomer explains why product development has become too complex at most companies and how LinkedIn is building an AI-powered product team that can move faster, adapt more quickly, and do more with less.We discuss:1. How 70% of the skills needed for jobs will change by 20302. The broken traditional model: organizational bloat slows features to a six-month cycle3. The Full Stack Builder model4. Three pillars of making FSB work: platform, agents, and culture (culture matters most)5. Building specialized agents that critique ideas and find vulnerabilities6. Why off-the-shelf AI tools never work on enterprise code without customization7. Top performers adopt AI tools fastest, contrary to expectations about leveling effects8. Change management tactics: celebrating wins, making tools exclusive, updating performance reviews—Brought to you by:Vanta—Automate compliance. Simplify security: https://vanta.com/lennyFigma Make—A prompt-to-code tool for making ideas real: https://www.figma.com/lenny/Miro—The AI Innovation Workspace where teams discover, plan, and ship breakthrough products: https://miro.com/lenny—Transcript: https://www.lennysnewsletter.com/p/why-linkedin-is-replacing-pms—My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/180042347/my-takeaways-from-this-conversation—Where to find Tomer Cohen:• LinkedIn: https://www.linkedin.com/in/tomercohen• Podcast: https://podcasts.apple.com/us/podcast/building-one-with-tomer-cohen/id1726672498—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Introduction to Tomer Cohen(04:42) The need for change in product development(11:52) The full-stack builder model explained(16:03) Implementing AI and automation in product development(19:17) Building and customizing AI tools(27:51) The timeline to launch(31:46) Pilot program and early results(37:04) Feedback from top talent(39:48) Change management and adoption(46:53) Encouraging people to play with AI tools(41:21) Performance reviews and full-stack builders(48:00) Challenges and specialization(50:05) Finding talent(52:46) Tips for implementing in your own company(56:43) Lightning round and final thoughts—Referenced:• How LinkedIn became interesting: The inside story | Tomer Cohen (CPO at LinkedIn): https://www.lennysnewsletter.com/p/how-linkedin-became-interesting-tomer-cohen• LinkedIn: https://www.linkedin.com• Cursor: https://cursor.com• The rise of Cursor: The $300M ARR AI tool that engineers can't stop using | Michael Truell (co-founder and CEO): https://www.lennysnewsletter.com/p/the-rise-of-cursor-michael-truell• Devin: https://devin.ai• Figma: https://www.figma.com• Microsoft Copilot: https://copilot.microsoft.com• Windsurf: https://windsurf.com• Building a magical AI code editor used by over 1 million developers in four months: The untold story of Windsurf | 
Varun Mohan (co-founder and CEO): https://www.lennysnewsletter.com/p/the-untold-story-of-windsurf-varun-mohan• Lovable: https://lovable.dev• Building Lovable: $10M ARR in 60 days with 15 people | Anton Osika (co-founder and CEO): https://www.lennysnewsletter.com/p/building-lovable-anton-osika• APB program at LinkedIn: https://careers.linkedin.com/pathways-programs/entry-level/apb• Naval Ravikant on X: https://x.com/naval• One Song podcast: https://podcasts.apple.com/us/podcast/%D7%A9%D7%99%D7%A8-%D7%90%D7%97%D7%93-one-song/id1201883177• Song Exploder podcast: https://songexploder.net• Grok on Tesla: https://www.tesla.com/support/grok• Reid Hoffman on X: https://x.com/reidhoffman—Recommended books:• Why Nations Fail: The Origins of Power, Prosperity, and Poverty: https://www.amazon.com/Why-Nations-Fail-Origins-Prosperity/dp/0307719227• Outlive: The Science and Art of Longevity: https://www.amazon.com/Outlive-Longevity-Peter-Attia-MD/dp/0593236599• The Beginning of Infinity: Explanations That Transform the World: https://www.amazon.com/Beginning-Infinity-Explanations-Transform-World/dp/0143121359—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com

The Saad Truth with Dr. Saad
Explaining Trump's "Somalia is Garbage" Comment (The Saad Truth with Dr. Saad_928)

The Saad Truth with Dr. Saad

Play Episode Listen Later Dec 4, 2025 21:46


This stems from an XSpaces that I hosted on December 3, 2025: https://x.com/GadSaad/status/1996387322841977295?s=20 I conducted an analysis (with the help of Grok) of the Corruption Perceptions Index, which has ranked roughly 180 countries in terms of their endemic corruption starting in 1995. Would you like to guess where Somalia ranks? _______________________________________ If you appreciate my work and would like to support it: https://subscribestar.com/the-saad-truth https://patreon.com/GadSaad https://paypal.me/GadSaad To subscribe to my exclusive content on X, please visit my bio at https://x.com/GadSaad _______________________________________ This clip was posted on December 3, 2025 on my YouTube channel as THE SAAD TRUTH_1950: https://youtu.be/LZ--oLQyzWg _______________________________________ Please visit my website gadsaad.com, and sign up for alerts. If you appreciate my content, click on the "Support My Work" button. I count on my fans to support my efforts. You can donate via Patreon, PayPal, and/or SubscribeStar. _______________________________________ Dr. Gad Saad is a professor, evolutionary behavioral scientist, and author who pioneered the use of evolutionary psychology in marketing and consumer behavior. In addition to his scientific work, Dr. Saad is a leading public intellectual who often writes and speaks about idea pathogens that are destroying logic, science, reason, and common sense.  _______________________________________