When the conversation turns to technology, Hashtag Trending makes sure you’re in the know. We review the top trending tech topics from across the web - Twitter, Reddit, Google, Product Hunt and more.

Anthropic's Hidden Claude 1, Market-Shaking AI Tools, and MIT's One-Step 3D-Printed Electric Motor

Host Jim Love covers three major stories: Anthropic CEO Dario Amodei's comments on AI governance and safety, including that "Claude 1" was built before ChatGPT but not released because it didn't meet Anthropic's alignment and safety bar; how Anthropic's recent launches—Claude for knowledge-work "cowork" workflows, deeper office/document integrations, Claude Code Security for vulnerability scanning, and tooling to automate parts of COBOL modernization—coincided with sharp market reactions, including declines in CrowdStrike and Zscaler (around 10–11%) and a major IBM drop (more than 13%) amid fears AI could disrupt SaaS, cybersecurity, and legacy modernization revenue; and MIT researchers' report of a 3D printing process that produces a fully functional linear electric motor in a single step (aside from magnetization), with reported material cost around 50 cents in a lab setting, raising the prospect of on-demand manufacturing and compressed supply chains. The episode also includes sponsorship messages about Meter's integrated wired, wireless, and cellular networking stack. Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt

00:00 Headlines and Sponsor
00:45 Amodei vs Altman
01:29 Claude 1 Not Shipped
03:19 Anthropic Shakes Markets
04:57 AI Hits Cybersecurity
05:28 COBOL Modernization Shock
08:10 MIT Prints Electric Motor
09:39 Manufacturing Disruption
10:26 Wrap Up and Thanks

Jim Love hosts Hashtag Trending and highlights updates to TechNewsDay.ca/.com, including a new "Best of YouTube" section for curated tech channels. Anthropic alleges three Chinese AI labs—DeepSeek, Moonshot, and MiniMax—ran industrial-scale distillation campaigns to extract capabilities from Claude models using proxy services and "Hydra cluster" networks with tens of thousands of fraudulent accounts, prompting Anthropic to strengthen identity controls and detection with cloud partners. Amazon shares fall for nine straight sessions after investors react to plans for roughly $200B in 2026 capex, largely for AI infrastructure, raising questions about ROI and future free cash flow. A cited analysis by YouTuber Nate B Jones argues Google's Gemini 3.1 Pro signals a strategy shift toward deeper reasoning (not just coding/agentic tools), noting a 77.1% ARC-AGI-2 score and DeepMind's scientific problem focus, contrasting OpenAI's product/distribution, Anthropic's agentic coordination, and Google's "pure intelligence" approach. The episode also references Citrini Research's 2028 scenario planning report outlining a plausible fast-arriving AGI chain reaction—falling inference costs, rapid adoption, labor displacement pressure, and geopolitical competition for compute and talent—and promotes the Saturday show Project Synapse on long-term AI trajectories. Finally, Love discusses Sam Altman's comments at the India AI Impact Summit dismissing viral claims about ChatGPT water and energy use without providing specific counter-numbers, noting growing public backlash as data center water and electricity demands rise; the full interview is linked in the show notes.

LINKS
Nate B Jones on Google Gemini 3.1: https://youtu.be/8jKAT8GNDE0?si=Rz5k1gP0sS9H7XAp
Sam Altman's speech: https://www.youtube.com/live/qH7thwrCluM?si=IO_76NsGJ1zgt8J7
AI Scenario: https://www.citriniresearch.com/p/2028gic

00:00 Headlines and intro
00:54 Site updates and YouTube picks
01:57 Anthropic warns of distillation
04:58 Amazon AI spending jitters
06:13 Google bets on reasoning
10:31 2028 AGI crisis scenario
11:55 Altman backlash and resources
14:17 Wrap up and sponsor thanks

The episode covers Apple researchers' Ferret-UI Light, a 3B-parameter on-device model that interprets on-screen interfaces using a two-pass crop-and-zoom approach, positioned against reported OpenAI smart-speaker work with Jony Ive, Amazon's generative-AI Alexa rollout, and Google's Gemini integration, with Apple emphasizing privacy and local processing. Walmart is highlighted for offering free Google-backed AI training to its US and Canadian workforce (about 1.6 million employees) via an eight-hour professional certificate, with executives saying AI will reshape jobs rather than drive layoffs. Wikipedia, via the Wikimedia Foundation, blocks archive.today, citing infrastructure overload from automated requests and alleging some archived captures were altered, raising concerns about archival integrity while distinguishing it from the Internet Archive's Wayback Machine. Research from UNSW Sydney and the Australian National University finds most people—including "super recognizers"—struggle to detect AI-generated faces, increasing risks like fraud and social engineering. The show closes with Bernie Sanders urging a slowdown in AI development, alongside similar readiness warnings from OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei about rapid progress toward very powerful systems and the lack of preparedness among lawmakers and the public.

00:00 Hashtag Trending Kickoff + Sponsor: Meter
00:57 Apple's On-Device AI for App Control (Ferret-UI Light)
02:01 Smart Speaker Arms Race: OpenAI, Alexa GenAI, Gemini vs Apple's Privacy Play
03:09 Walmart's Plan: Train 1.6M Workers in AI Instead of Layoffs
04:56 Wikipedia Blocks Archive.today Over Load + Integrity Allegations
06:34 AI-Generated Faces Now Fool Most People (Study + Security Risks)
07:57 "Slow This Thing Down": Sanders, Altman & AGI Timelines
09:59 Wrap-Up, Links, Listener Messages + Sponsor Close

The episode opens with sponsor Meter and a conversation about Saturday morning cartoons before shifting to recent breakthroughs in AI video generation from ByteDance's "Seedance" (with "Seedream" as its image generator). The hosts describe Seedance's cinematic quality, accurate physics, and realistic recreations of actors and IP (including examples like Tom Cruise vs. Brad Pitt and Keanu Reeves as Neo/John Wick), and discuss the implications for film production, commercials, and local film economies such as Toronto and Vancouver. They cover backlash and gatekeeping, including an AI-made Thanksgiving-themed animated short that won a contest tied to AMC theaters' pre-show but reportedly wasn't shown, and compare the resistance to historical Luddite reactions. The discussion broadens to productivity and labor impacts, arguing that AI adoption may mirror the 1980s computer productivity dip before the process re-engineering of the 1990s, while also raising concerns that AI leaders are forecasting major white-collar job losses. The hosts highlight the rise of agentic benchmarks (TerminalBench, Apex Agents, BrowseComp) and how AI search helps find information faster than traditional search, but emphasize that trust, reliability, and infrastructure are not keeping pace. They raise major concerns about platform terms and data ownership, focusing on Perplexity's updated terms (non-commercial use only, even for paid tiers; mandatory attribution; broad licensing rights over user content; and liability limits). They also discuss reliability failures: a widespread Google Gemini issue where users' chat histories disappeared (visible only as activity records with limited usability), and missing document links in ChatGPT chats. The hosts argue users must back up their own data, and criticize unclear policies and weak support. Security risks are illustrated through a story about the AI-enabled robot vacuum "Romo," where a developer used Claude to reverse engineer its app and reportedly gained access to control thousands of devices across multiple countries before responsibly disclosing the issues. They also reference broader concerns like connected home devices, Ring neighborhood features, and Microsoft's Recall concept. In rapid-fire news, they mention Anthropic releasing Sonnet 4.6 as a strong, cheaper option with near-Opus-level performance, a new Grok release branded "4.20," and a clip from an AI summit in India where Sam Altman and Dario Amodei appeared to refuse to hold hands on stage, which the hosts cite as a sign of immaturity among AI industry leaders. The episode closes with sponsor Meter.

00:00 Sponsor + Welcome to Project Synapse
00:21 Saturday Morning Cartoons… Reimagined by AI
01:16 What is 'Seedance'? Cinematic AI Video Goes Viral
03:17 Keanu Reeves, Neo vs. John Wick & the End of VFX as We Know It
06:43 From Movies to Ads: How AI Video Hits Commercial Production
07:41 The Hidden Economy of Commercials (and Why Cities Like Toronto/Vancouver Care)
09:56 AMC Won't Screen an AI-Made Short: Early Luddite Backlash
12:54 Artists, AI, and the 'Starving Creator' Reality
16:17 AI Adoption Parallels: The 1980s Computer Wave & the Productivity Dip
24:09 Agentic AI Benchmarks: TerminalBench, Apex Agents & BrowseComp
26:04 AI Search That Actually Saves Time (and Your Memory)
30:36 Perplexity's New Terms of Service: Non-Commercial Use & Ownership Shock
35:40 Liability Caps, More Corporate Gripes… and a Coke Zero 'Sponsor' Bit
37:36 Gemini 3.1's big leap—and why it still doesn't feel trustworthy
38:08 Gemini chat history vanishes: what happened and why users are furious
40:19 OpenAI document links disappearing too: what "saved" really means
42:04 Cloud AI's shaky foundation: security, reliability, and confusing settings
47:45 When reliance turns emotional: losing models, losing "someone"
49:22 Real-world stakes: the Social Security database whistleblower story
53:15 Owning your data (and why Google support won't save you)
54:53 Trust whiplash: Anthropic cuts off OpenClaw and the power to shut you down
57:29 Robot vacuum hacked with Claude: 7,000 cameras in strangers' homes
01:03:17 Smart home surveillance creep: Ring neighbors, TV cameras, and Microsoft Recall
01:07:14 Rapid-fire AI news: Sonnet 4.6, Gemini gains, and Grok 4.20
01:11:00 AI leaders' petty feud—and the show wrap & sponsor thanks

F-35 'Jailbreak' Talk, AMC Rejects AI Film, Gmail Training Confusion, and the AI Productivity Paradox

Host Jim Love covers four stories: Dutch Defense Secretary Gijs Tuinman suggests the F-35's software could be "jailbroken," highlighting allied concerns about U.S.-controlled update pipelines and mission systems (formerly ALIS, now ODIN) and arguing the main barriers are contractual and operational rather than purely technical. An AI-generated short film, "Thanksgiving Day" by Igor OV, wins Screen Vision Media's Frame Forward AI Animated Film Festival and a promised two-week theatrical run, but AMC declines to screen it, reflecting ongoing Hollywood sensitivities around generative AI, authorship, and labor. Google responds to reports that it uses Gmail content to train Gemini by stating it does not use Gmail content for training; the confusion stems from the wording and placement of Gmail "smart features" settings, and the episode critiques the lack of plain-language clarity. Finally, a survey of 6,000 executives (reported via Tom's Hardware) finds over 80% of companies see no measurable productivity gains from AI, drawing parallels to the historic "productivity paradox" and suggesting organizations aren't redesigning processes; the show previews a deeper discussion on Project Synapse.

00:00 Trending Headlines + Sponsor: Meter
00:45 Can You 'Jailbreak' the F-35? Software Sovereignty & Ally Unease
02:48 AI Film Wins a Festival—AMC Says No: The Distribution Bottleneck
05:01 Does Google Train Gemini on Your Gmail? The Settings Confusion Explained
07:29 Why 80% See No AI Productivity Gains: The New 'Productivity Paradox'
09:47 Wrap-Up, Project Synapse Tease + Sponsor Thanks

In this episode of Hashtag Trending, host Jim Love covers reports that the Pentagon may cut ties with Anthropic over Claude's usage restrictions, following a $200M Department of Defense contract and disagreements about limits related to weapons development, surveillance, and violence. The episode also examines warnings from Phison Electronics CEO KS Pua that an AI-driven memory crunch could push smaller consumer electronics makers toward bankruptcy or product exits by 2026 as high-end memory supply is prioritized for data centers, with potential ripple effects across devices and even automotive systems. Macworld's critique of Apple's prolonged Siri overhaul is discussed, including delayed Apple Intelligence promises and reports Apple may integrate Google's Gemini into iOS, raising questions about Apple's premium brand perception amid broader software criticism. Finally, the show highlights a Meta patent describing AI that could continue posting and responding on behalf of deceased users by learning from their historical content, raising concerns about consent, control, authenticity, and identity online.

00:00 Hashtag Trending + Sponsor Message (Meter)
00:46 Pentagon vs. Anthropic: AI Guardrails and Military Use
03:05 AI Memory Crunch: Storage Shortages Threaten Consumer Tech
04:58 Is Siri Now an Apple Liability? Delays, Gemini, and Brand Risk
07:38 Meta's Patent: AI Posting After You Die (Digital Afterlife)
08:59 Wrap-Up, How to Support the Show + Sponsor Thanks

Host Jim Love returns after the holidays. The episode covers ByteDance's Seedance 2.0 AI video generator, which is producing highly realistic, film-quality scenes and prompting alarm in Hollywood, including comments from screenwriter Rhett Reese and renewed concerns about likeness rights and AI use in entertainment; ByteDance says it is strengthening safeguards to prevent unauthorized use of intellectual property and likenesses. The show reports that Peter Steinberger, creator of the open-source agent tool OpenClaw, is joining OpenAI and the project is becoming part of a foundation for future agent-based AI, while also highlighting OpenClaw's widely discussed security weaknesses and the implications for OpenAI and competitor Anthropic. Western Digital is reported to be sold out of certain hard drive models as AI-related demand absorbs supply, following earlier GPU and memory price pressures. Finally, Ring's Super Bowl ad about finding a lost dog drew criticism for promoting neighborhood camera networks that resemble

In this episode of Project Synapse, the hosts discuss how "agentic" AI has rapidly accelerated and become widely distributed, using the explosion of OpenClaw (with claims of ~160,000 instances) as a sign that autonomous agent tools are now in anyone's hands. They compare the speed and societal impact of current AI progress to COVID-19's early days, arguing the pace may be even more destabilizing. They cover Anthropic's Claude 4.6 and OpenAI's Codex 5.3, including claims that Claude 4.6 helped produce a functional C compiler for about $20,000, and that a Cowork-like tool could be replicated in a day with Codex 5.3 after Claude reportedly took two weeks to build Cowork. The conversation highlights improved long-context memory performance (needle-in-haystack-style metrics reportedly in the 90% range) and increasingly autonomous behavior such as self-testing, self-correction, and coordinating teams of agents. The hosts then focus on security: MCP (Model Context Protocol) as a widely adopted but "fundamentally insecure" connector requiring broad permissions; the risk of malicious tools/skills and malware in agent ecosystems; and the rise of "shadow AI," where employees or individuals deploy agents without organizational vetting—potentially leaking sensitive data or running up massive token bills. They discuss incentives that push both humans and models toward fast answers and risky deployment, referencing burnout and an HBR study on rising expectations without proportional hiring. The episode also touches on realism and deepfakes, citing impressive new AI video generation (including a Chinese model "Seedance 2.0" example) and how this erodes trust in what's real. They conclude with practical advice for organizations—don't just say "no," create safe outlets and governance ("say how")—and briefly discuss wearables/AR, Meta's continued AI efforts (including the Meta AI app and "Vibes"), and the coming integration of AI into always-on devices.

Sponsor: Meter, an integrated wired/wireless/cellular networking stack (meter.com/htt).

00:00 Cold Open + Sponsor: Meter Networking Stack
00:18 Welcome to Project Synapse (and immediate chaos)
00:57 'Something Big Is Happening': AI feels like COVID-speed disruption
02:57 OpenClaw goes viral: 160k instances and easy DIY clones
04:03 Claude Code 'Cowork' on Windows… and why it's broken
06:47 Rebuilding Cowork in a day with OpenAI Codex 5.3
08:18 Why Opus 4.6 feels like a step-change: memory, autonomy, agent teams
11:24 Model leapfrogging + the end of 'can AI write code?' debates
14:45 Hallucinations, 'I don't know,' and self-correction in modern models
18:42 Autonomous agents in practice: cron-like loops, tool use, and fallout
21:00 MCP security: powerful connectors, scary permissions, and 500 zero-days
24:33 Shadow AI & skill marketplaces: the app-store malware analogy
32:02 Incentives drive risk: move fast culture, confident wrong answers, burnout
34:16 AI Agents Boost Productivity… and Raise the Bar at Work
35:14 Warnings of a Coming AI-Driven Crash (and Why We're Not Steering Away)
36:28 "I Quit to Write Poetry": Existential Dread & On the Beach Vibes
37:21 Tech Safety Is Reactive: Seatbelts, Crashes, and the AI Double-Edged Sword
39:42 Fast-Moving Threats: Agents Hacking Infrastructure & Security Debt
40:54 From Doom to Adaptation: Using the Same Tools to Survive the Disruption
42:21 Why We're Numb to AI Warnings + The 'Free Energy' Thought Experiment
46:43 AGI Is Already Here? Prompts, Ego, and the 'If It Quacks Like a Duck' Test
48:56 Deepfake Video Leap: Seedance, Perfect Voices, and What's Real Anymore
52:39 Contain the Damage: 'Don't Say No—Say How' and Shadow AI in Companies
54:58 Holodeck on the Horizon: VR + GenAI + Wearables (Meta, Apple, OpenAI/Ive)
59:53 Meta's AI Reality Check: Bots, the Meta AI App, 'Vibes,' and Who's Making Money
01:04:41 Final Wrap + Sponsor Thanks

In this episode of Hashtag Trending, host Jim Love discusses the Quit GPT campaign, which urges users to cancel their ChatGPT subscriptions due to concerns over OpenAI's evolving mission and political entanglements. We examine Anthropic's $20 million donation to a US political group advocating for AI regulation. The podcast also highlights hyperscalers like Meta, Microsoft, Amazon, and Google significantly investing in AI infrastructure and data centers amid growing community resistance. Finally, we explore new research suggesting that AI tools, while increasing productivity, may also be contributing to worker burnout by intensifying workloads. Tune in for a deep dive into these pressing issues and more.

00:00 Introduction and Sponsor Message
00:45 Quit GPT Movement: AI and Politics
03:12 Anthropic's Political Donation
04:43 Hyperscalers' Massive AI Investments
05:39 Community Resistance to Data Centers
07:40 AI's Impact on Workload and Burnout
11:23 Conclusion and Weekend Panel Preview

In this episode of Hashtag Trending, host Jim Love discusses the latest news on TikTok's tracking practices across the web, regardless of app use, and how they compare to similar methods used by other companies like Meta. The podcast also covers the rapid adoption of AI in enterprises, highlighting the increasing competition between OpenAI's ChatGPT and Anthropic's Claude. Additionally, Discord faces backlash over new global age verification requirements, sparking concerns about user privacy and past data breaches. The episode concludes with notable leadership changes in major tech firms, indicating ongoing turbulence in the AI industry.

00:00 Introduction and Sponsor Message
00:20 TikTok's Web Tracking Controversy
02:39 AI Adoption in the Workplace
04:30 Discord's Identity Crisis
07:23 Leadership Changes in Tech and AI
09:43 Conclusion and Sign-off

In this episode, the host shares a pre-recorded favorite interview with David Decary-Hetu, a criminologist at the University of Montreal. They discuss the dark web, its technology, and its role in cybercrime. Decary-Hetu explains how the dark web operates, who its users are, and the dynamics between researchers and law enforcement in tackling cyber threats. Key topics include the economics of illicit markets, the cat-and-mouse game between law enforcement and criminals, the role of cryptocurrencies, and the evolution of cyber threats. The episode offers insights into the social aspects of cybercrime and the measures being taken to combat it.

00:00 Introduction and Sponsor Message
00:52 Understanding the Dark Web
02:16 Interview with David Decary-Hetu
05:10 The Basics of the Dark Web
06:27 Technology Behind the Dark Web
14:49 Law Enforcement Challenges
21:50 Trust and Transactions on the Dark Web
23:45 Recruitment and Structure of Cybercriminals
26:42 Cultural Dynamics in Hacking Communities
27:32 Researching the Impact of Technology on Crime
29:01 Challenges in Policing the Dark Web
30:12 The Role of Social Engineering in Cybercrime
31:18 Law Enforcement Strategies and Conditional Deterrence
32:09 The Evolution of Cybercrime and Cryptocurrency
41:24 Legal and Ethical Considerations in Cybercrime
43:47 Advice for Policymakers and Corporations
48:44 Educational Resources and Conferences
50:57 Conclusion and Final Thoughts

This is an interview with former hacker Brian Black. Brian is now on the right side of the battle, bringing his skills to the fight against hackers. He finds the weaknesses in corporate security so that they can be patched. This was one of my favourite interviews this year. Listening to what Brian has learned and understanding how we can use that knowledge and experience kept me on the edge of my seat. Once more I want to thank Meter for making this possible. Visit them at meter.com/cst

Some of you may have missed this yesterday as the Google feed didn't catch it. It's a great episode and I want to make sure everyone gets it. I'll be posting another great repeat episode on Saturday Morning.

Jim takes a break for some R&R during the holidays and shares his favorite podcast episodes from the year. He acknowledges that some listeners might have heard these episodes already, while others may find them new. The podcast's production is supported by Meter, a company providing integrated networking solutions. Additionally, support from listeners through the Buy Me a Coffee program has helped sustain the shows and expand their content offerings. Jim thanks Meter and the listeners, wishing everyone a Merry Christmas and a Happy New Year. 00:00 Introduction and Holiday Plans 00:33 Sponsor Acknowledgment 01:08 Support and Growth 01:55 Final Thoughts and Episode Introduction

Over the holidays we are rerunning some of our favourite episodes. This one first aired this summer and was one of my first conversations with the fascinating head of Operation Shamrock. We'll be back with regular programming on January 5th.

AI Showdown: Gemini vs. ChatGPT – The Great Debate In this episode, the hosts explore the nuanced differences between Gemini and ChatGPT, likening Gemini to a temporary assistant and ChatGPT to a long-term partner. They delve into the new features of ChatGPT's image generator, showcasing its capabilities with live demos. The discussion broadens into the implications of AI for society, highlighting perspectives from Doomers, Scouts, and Accelerationists. They also touch on the potential of AI to support seniors and those living in isolation, and question the future of AI governance and policy. The episode wraps up with seasonal cheer, encouraging the audience to use AI tools to enhance their holiday experiences. 00:00 Introduction and Sponsor Message 00:21 Comparing Gemini and ChatGPT 00:55 Welcome to Project Synapse 01:11 The Challenge of Keeping Up with Fast-Moving Tech 01:53 Personal Anecdotes and AI Experiences 02:20 AI's Role in Daily Life 06:42 The Disappointment with Gemini 3.0 10:22 Mail Synth: A Revolutionary AI Tool 14:04 Security Concerns with AI 19:19 Customizing AI Personas 22:28 ChatGPT's New Image Generator 24:12 Fun with AI-Generated Images 32:17 Local Speaking Engagements and AI Misconceptions 34:12 AI's Benefits for Older Adults 37:05 The Future of AI: Star Trek vs. Star Wars 37:44 Teaching AI Safety and Practical Uses 42:04 Historical Technological Changes and AI 46:05 Global AI Investments and National Interests 50:20 The Role of AI in Society's Future 01:03:55 Holiday Reflections and AI's Everyday Uses

In this episode of Hashtag Trending, host Jim Love discusses Merriam-Webster's and Oxford University Press's Words of the Year for 2025, highlighting the impact of low-quality AI-generated content and emotionally provocative posts. The episode also covers the high frequency of near-collisions in low Earth orbit, emphasizing risks to satellite operations. Additionally, the podcast details significant stock declines in AI infrastructure companies and the recent patch by Amazon for a critical Kindle vulnerability. The show concludes with reflections on the year's events, personal notes from Jim, and gratitude towards listeners and sponsors like Meter. 00:00 Introduction and Sponsor Message 00:20 Word of the Year: Slop and Rage Bait 02:50 Satellite Near Misses and Space Traffic Concerns 05:52 AI Infrastructure Stocks Take a Hit 09:12 Holiday Season Cybersecurity Warnings 10:43 Kindle Vulnerability and Amazon Account Risks 11:56 Year-End Reflections and Thank You 15:28 Closing Remarks and Sponsor Acknowledgment

Near Miss in Space, Starlink Plan Removal, LG TV Controversy & Hacking for Good | Hashtag Trending In today's episode of Hashtag Trending, host Jim Love discusses a recent close call between a Chinese satellite and a Starlink satellite, Starlink's quiet removal of its $40/month plan, LG TV's controversial update adding Microsoft Copilot as an unremovable app, and Fulu's nonprofit initiative that pays hackers to unlock devices restricted by manufacturers. Sponsored by Meter, this episode also looks at the potential impacts of these events on technology and user rights. 00:00 Introduction and Sponsor Message 00:58 Close Call in Space: Chinese Satellite vs. Starlink 03:05 Starlink's Mysterious $40 Plan Disappears 05:17 LG TVs Get Unwanted Microsoft Copilot 07:14 Fulu: The Right to Repair Movement's New Ally 10:00 Conclusion and Upcoming Schedule

In this episode of Hashtag Trending, host Jim Love covers the latest in AI technology and innovation. OpenAI quietly launched GPT-5.2, focusing on real-world work performance and introducing a new evaluation method, GDPVal. This model significantly outperforms its predecessors and competitors. Meanwhile, Google is enhancing its AI capabilities, embedding AI into creative tools, hardware, and a potentially new operating system, Aluminum. Disney has signed an agreement with OpenAI to use its animated characters for fan-generated content and made a significant investment in the company. Additionally, a Canadian company set a world record in fusion energy, showcasing advancements in the field. The episode concludes with thanks to Meter for their support. Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt 00:00 Introduction and Sponsor Message 00:21 OpenAI Launches ChatGPT 5.2 04:21 Google's Gemini Updates 06:59 Disney's Multi-Year Agreement with OpenAI 08:54 Canadian Fusion Energy Breakthrough 09:58 Conclusion and Sponsor Message

Exploring Tech Jeopardy, the Evolution of AI, and Disney's Collaboration with OpenAI In this episode of Hashtag Trending, the hosts engage in a game of Tech Jeopardy, covering topics such as ransomware, zero trust, and social engineering attacks. They discuss the release of ChatGPT 5.2 and its capabilities, including its impact on various industries and potential security implications. The episode also explores Disney's $1 billion partnership with OpenAI for the use of Disney characters in Sora, touching on the ethical and practical implications of such advancements. The hosts express concerns about the fast-paced development and potential risks associated with AI, emphasizing the need for balancing innovation with safety. Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt 00:00 Introduction and Sponsor Message 00:21 Tech Jeopardy Begins 02:16 Daily Double and Programming Trivia 04:11 ChatGPT 5.2 Release Discussion 05:54 Reddit and Social Media Insights 08:34 AI Advancements and Benchmarks 24:16 Security Concerns and AI Impact 32:36 The Rise of Automated Warfare 33:19 AI in Cybersecurity: A Double-Edged Sword 38:04 The Tipping Point of AI Adoption 46:32 Disney and OpenAI: A Billion-Dollar Partnership 49:53 The Future of AI-Driven Entertainment 55:25 AI-Powered Tools Transforming Workflows 01:00:42 The Race for AI Dominance 01:08:48 Concluding Thoughts on AI's Impact

AI Architects, Chatbot Wars, and Workplace Productivity Gaps In this episode of Hashtag Trending, host Jim Love discusses Time Magazine's controversial 'Architects of AI' cover, revealing the overlooked pioneers behind modern AI. The episode also covers the latest shift in the chatbot race, with Google's Gemini outpacing ChatGPT in growth. Additionally, a report from OpenAI highlights a growing productivity divide in workplaces between AI power users and others. The show delves into the importance of habit, depth of engagement, and the potential benefits of wider AI adoption through training. Plus, a special thanks to Meter for their support. Don't miss this deep dive into the AI landscape! Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt 00:00 Introduction and Sponsor Message 00:21 Time Magazine's Architects of AI 02:26 The Fastest Growing Chatbot: Google's Gemini 04:10 OpenAI's Productivity Gap Report 08:01 Conclusion and Sponsor Message

In this episode of Hashtag Trending, host Jim Love covers major developments in the AI landscape. China is rapidly advancing in the open-source AI community, with multiple top-performing models. Anthropic has donated its Model Context Protocol (MCP) to the Linux Foundation to support open AI tool integration. The Electronic Frontier Foundation (EFF) is launching a campaign against global age verification laws and social media restrictions, citing privacy concerns. Additionally, a developer faced severe repercussions after uncovering illegal content in an AI dataset, highlighting risks associated with external data. Sponsored by Meter, the podcast underscores the complexities and rapid changes in AI, technology governance, and policy. Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt 00:00 Introduction and Sponsor Message 00:40 China's Dominance in Open Source AI 03:36 Anthropic's Major Contribution to Open Source AI 05:46 EFF's Fight Against Age Verification Laws 07:59 Developer Banned Over Tainted AI Datasets 09:59 Conclusion and Sponsor Message

Tesla's Robot Fail, Microsoft Copilot Outage, Starlink's New Vending Machines & Google's AI Safety In today's episode of Hashtag Trending, host Jim Love discusses Tesla's Optimus robot's demo mishap, reliability issues faced by Microsoft Copilot, SpaceX's new move of selling Starlink hardware through vending machines, and Google's approach to ensuring AI safety in Chrome. Additionally, Jim shares a personal milestone about his book becoming a bestseller on Audible. Special thanks to Meter for supporting this podcast. 00:00 Introduction and Sponsor Message 00:44 Tesla's Optimus Robot Demo: Real or Staged? 02:59 Microsoft Copilot Outage: Reliability Concerns 04:47 Starlink Vending Machine: A New Sales Strategy 06:55 Google's AI Safety Measures in Chrome 09:58 Conclusion and Personal Note

In this episode of Hashtag Trending, host Jim Love discusses the rapid developments in AI as OpenAI accelerates its new model launch amidst competition from Google's Gemini, and the implications of Apple's AI leadership change. Australia plans to implement a controversial nationwide social media ban for individuals under 16. The episode also explores the retirement of the Kubernetes Ingress NGINX controller, highlighting broader challenges facing the open-source community. Supported by Meter, delivering a complete networking stack. 00:00 Introduction and Sponsor Message 00:46 OpenAI's Urgent New Model Release 03:10 Australia's Bold Social Media Ban for Under-16s 05:32 Apple's AI Leadership Shakeup 07:58 The Future of Open Source Amidst Kubernetes Changes 10:32 Conclusion and Final Thoughts

In this episode of Hashtag Trending, hosted by Jim Love, Oxford University Press announces 'Rage Bait' as the 2025 Word of the Year, highlighting the rise of content meant to provoke online anger. Runway's text-to-video model Gen 4.5 tops Video Arena's benchmark rankings. Airbus reveals that solar radiation may have corrupted flight control data on its A320 aircraft, leading to a sudden altitude drop. AMD warns of upcoming memory price hikes post-Christmas due to increased AI-driven demand. Additionally, AI-enhanced phishing attacks are on the rise, targeting Amazon customers and disguising malware as legitimate software updates. Listeners are advised to stay cautious during this holiday shopping season. 00:00 Introduction and Sponsor Message 00:51 Oxford's 2025 Word of the Year: Rage Bait 02:30 Runway Tops Video Arena Leaderboard 05:03 Airbus A320 Solar Radiation Incident 06:37 AMD Memory Price Hike Warning 08:51 Holiday Season Cybersecurity Threats 11:30 Conclusion and Sponsor Message

In today's episode of Hashtag Trending, host Jim Love discusses the latest decisions by OpenAI and Google to restrict free access to their AI tools due to high demand and GPU strain. The Supreme Court is set to hear a major piracy case that could redefine the responsibilities of internet providers. Over 1,000 Amazon employees have signed a letter criticizing the company's climate policies and the use of AI, which they fear threatens job security. Lastly, a top AI conference is investigating the potential widespread use of AI-generated peer reviews. Tune in for all these stories and more. 00:00 Introduction and Sponsor Message 00:46 AI Giants Limit Free Access 02:13 Supreme Court Tackles Billion Dollar Piracy Case 03:40 Amazon Employees Voice AI Concerns 06:29 AI-Generated Peer Reviews at Major Conference 07:59 Conclusion and Sponsor Message

AI Advancements and Security Concerns: From Gemini 3 to Data Trust In this episode of Hashtag Trending - The Weekend Edition, hosts Marcel Gagner, John Pinard, and Jim Love discuss the latest AI advancements with a focus on Google's Gemini 3 and new releases from Claude. The conversation covers recent improvements in AI capabilities, challenges related to coding with different models, and the integration of AI in everyday tasks. They emphasize the importance of cybersecurity, especially concerning third-party applications and the potential risk of data breaches. The hosts also delve into the ongoing debate about the trustworthiness of AI developments and the need for staying updated with the latest technological advancements. 00:00 Introduction and Sponsor Message 00:23 AI Wishlist and Star Trek Inspiration 00:42 Weekend Edition Introduction 00:52 Weekly News Recap 01:14 Claude 4.5 and Gemini Comparison 04:30 AI Tools and Personal Experiences 06:12 Gemini's Capabilities and Use Cases 16:40 Coding with AI: Tools and Techniques 25:35 AI Studio and App Development 29:43 AI in Society: Trust and Future Implications 35:26 The Persistence of Human Thrills 36:25 Autonomous Racing and Technology Advancements 37:34 Cybersecurity Concerns with AI Integration 40:07 The Inherent Risks in Software and AI 42:46 The Necessity of Constant Updates 45:34 Trust and Security in Modern Technology 50:40 The Reality of AI and AGI 01:02:22 The Future of AI and Final Thoughts 01:11:05 Conclusion and Sponsor Message

Exploration of Google Gemini 3.0 and Nano Banana Pro: A Revolutionary Leap in AI In this episode, the hosts delve into the recent announcements and capabilities of Google Gemini 3.0 and Nano Banana Pro. They discuss the features, including advanced chat interactions, text analysis, graphical design, multimodal capabilities, and more. The hosts compare Gemini 3.0's advancements with current offerings from OpenAI, emphasizing its potential to revolutionize creative and business applications. They conduct live demonstrations, showcasing the creation of applications, multimedia content, and even animated videos, highlighting the platform's powerful and versatile tools. With breakthroughs in speed, accuracy, and ease of use, this episode illustrates how Google's latest AI technologies are poised to set a new standard in the field. 00:00 Introduction and Holiday Announcement 00:21 Revisiting Gemini 3.0 00:59 Sponsor Message and Light Banter 02:12 Gemini 3.0 and Google's New Releases 05:16 Exploring Google Tools and AI Studio 12:11 Building Applications with Gemini 17:06 Antigravity and Advanced Coding 24:23 Gemini 3.0 vs. ChatGPT 5.1 32:03 Infographics and Multimedia Capabilities 37:14 Cooking and YouTube: A Perfect Match 37:47 Turning YouTube Videos into Recipes 39:22 Gemini's Video Analysis Capabilities 40:18 Creating a Custom Menu with AI 43:17 Google's Gemini 3.0: A Game Changer 50:58 AI in Business and Creativity 56:20 The Future of AI Tools 01:09:18 Holiday Fun with AI 01:11:24 Conclusion and Sponsor Message

Exploring the Power of Google Gemini 3.0 and Nano Banana Pro In this episode, the hosts discuss the impressive capabilities of Google's latest AI advancements: Gemini 3.0 and Nano Banana Pro. They delve into how these tools provide a comprehensive, multimodal solution that integrates across text, images, and video, showcasing their potential for creative projects, coding assistance, and business applications. The episode highlights the seamless integration of these AIs into everyday tasks, such as generating menus, creating applications, and even analyzing videos. The hosts also share their experiences with these AIs, noting their significance in pushing the boundaries of technology and creativity. Special thanks to Meter for supporting the podcast. Cybersecurity Today would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/cst 00:00 Introduction and Sponsor Message 00:23 Synth Pop Band Fun 01:13 Google's Big Release: Gemini 3.0 04:16 Exploring Google Tools and Platforms 08:19 Hands-On with Gemini and AI Studio 16:15 Antigravity: Google's New Vibe Coding Platform 23:23 Gemini 3.0 vs. ChatGPT 5.1 31:03 Learning with Gemini: A Historical Dive 36:38 Turning YouTube Videos into Recipes with Gemini 38:22 Analyzing Videos Frame by Frame with Google 39:18 Creating a Graphical Menu with AI 42:18 Gemini 3.0 vs. ChatGPT: A Comparison 46:44 The Power of AI in Creative Projects 51:54 The Future of AI in Business and Creativity 01:08:18 Holiday Fun with AI 01:10:25 Conclusion and Sponsor Message

In this episode of Hashtag Trending, host Jim Love discusses recent significant events impacting the tech world. Cloudflare faced major outages affecting platforms like Amazon and Microsoft 365, leading to multiple disruptions over several days. Simultaneously, the Tokyo District Court ruled against Cloudflare in a major manga piracy case, ordering them to pay 500 million yen in damages. On a more positive note, NVIDIA reported record earnings, alleviating fears of an AI bubble. Additionally, the episode touches on issues of censorship on social media platforms, highlighting how algorithms are filtering certain types of content. The podcast also thanks Meter for their support and mentions their full-stack networking solutions. 00:00 Introduction and Sponsor Message 00:45 Cloudflare Outages and Legal Troubles 04:41 NVIDIA's Record Earnings and AI Bubble Concerns 05:52 Censorship and Algorithmic Filtering on the Internet 08:33 Conclusion and Upcoming Events

Peter Thiel sells his entire Nvidia stake, raising concerns about the AI market's sustainability amid a surge in circular investment patterns. A Vancouver startup addresses AI's power consumption issues with new technology. TikTok tests a 'less AI' feature in response to content overload, signaling potential user fatigue. An Nvidia GPU worth $10,000 breaks in transit under its own weight, highlighting physical stress issues in high-end hardware. 00:00 Introduction and Sponsor Message 00:51 Peter Thiel's Nvidia Sell-Off: What It Means for AI Investments 04:44 Power Lattice: Tackling AI's Energy Crisis 06:31 TikTok's Pushback Against AI Content 09:04 Nvidia's $10,000 GPU Fiasco 10:40 Conclusion and Sponsor Message

In today's episode of Hashtag Trending, host Jim Love discusses Google CEO Sundar Pichai's concerns about an AI bubble, Cloudflare's major service disruption affecting sites like OpenAI and Down Detector, Google's quiet launch of Gemini 3.0, and the alarming findings about AI toys giving inappropriate responses to children. Tune in for these top tech news stories and more! Cybersecurity Today would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/cst 00:00 Introduction and Sponsor Message 00:48 Sundar Pichai on the AI Bubble 02:44 Cloudflare Outage Disrupts Major Sites 04:55 Google's Quiet Launch of Gemini 3.0 07:00 AI Toys: A Warning for Holiday Shoppers 08:46 Conclusion and Sponsor Reminder

This episode of Hashtag Trending covers speculation about Apple CEO Tim Cook's potential retirement and succession plans, alongside Elon Musk's hint at Tesla building its own chip fabrication plant. Additionally, the podcast explores Jeff Bezos stepping into a CEO role for a mysterious AI startup called Project Prometheus. Finally, a viral story about a manager encouraging an employee to log off late at night highlights the evolving work-life balance culture. The episode is hosted by Jim Love and supported by Meter. 00:00 Introduction and Sponsor Message 00:54 Tim Cook Retirement Rumors 02:55 Elon Musk's Chip Fabrication Ambitions 05:46 Jeff Bezos' New AI Startup 07:15 Viral Story: Manager Encourages Employee to Log Off 08:28 Conclusion and Final Notes

Micro Firings Dominate Layoffs, AI Guardrails Bypassed, Own Your AI Model - Hashtag Trending In today's episode of Hashtag Trending, Jim Love discusses the rise of 'micro firings' and how small-scale continuous layoffs are reshaping job security according to Glassdoor. The episode also covers a cybersecurity report highlighting how AI guardrails can be bypassed using nonsense text, making current safety measures potentially flawed. Furthermore, it explores the accessibility of running personal AI models on consumer hardware, thanks to tools like Ollama and GPT4All. Additionally, a CNBC survey reveals that AI is expected to reshape almost 90% of jobs by 2026. Lastly, Microsoft's shutdown of the KMS38 activation hack for pirated Windows versions is examined, emphasizing the risks associated with pirated software. 00:00 Introduction and Sponsor Message 00:24 The Rise of Micro Firings 03:08 AI Guardrails: A Flawed Foundation? 05:59 Running Your Own AI Model 08:28 AI's Impact on Future Jobs 10:13 Pirated Windows: Risks and Consequences 12:05 Conclusion and Sponsor Message

The Chat Among Titans: From Teslas to K-Pop Demons In this episode of Project Synapse, Marcel Gagner, John Pinard, and Jim engage in a lively discussion that seamlessly weaves through a variety of topics. They reminisce about classic TV shows like Thunderbirds and Stingray, before diving deep into the latest in AI and tech news. Key points include discussions on brand recognition linked to income levels, the rise of Google's Gemini AI, ethical dilemmas surrounding AI-generated content, and the economic implications of data centers. Marcel shares his first-hand Tesla Model Y test drive experience, showcasing the futuristic feel of Tesla's self-driving capabilities. The episode concludes with plans to tackle misinformation in AI on their next show. 00:00 Nostalgic Beginnings: Fireball XL5 and Classic TV Shows 01:00 AI and Income: The $100,000 Divide 03:17 Google Gemini: The All-in-One AI Solution 08:05 AI in Music: Breaking Rust and Beyond 13:05 Copyright and AI: The Legal Battle 27:10 The Economics of AI: Debt and Data Centers 39:04 The Rise and Fall of Blackberry 39:20 The AI Economy: Winners and Losers 40:09 Understanding the Kardashev Scale 42:09 The Future of Energy and Civilization 45:13 K-Pop Demons and Chat GPT 5.1 52:15 Exploring Open Source AI Models 01:00:54 Elon Musk's Trillion Dollar Challenge 01:04:24 Tesla Model Y Test Drive Experience 01:14:18 Concluding Thoughts and Farewell

In this episode of Hashtag Trending, host Jim Love discusses the launch of OpenAI's GPT-5.1 with new tone controls and model variants designed for different tasks. A new phishing scam targets lost iPhone owners by imitating Apple recovery texts to steal credentials. Tesla warns employees of an exceedingly demanding 2026 amid senior staff departures and ambitious goals. Starlink introduces a $40 unlimited home internet plan available in select U.S. regions. The episode also acknowledges the support from Meter, highlighting their full-stack networking solutions. 00:00 Introduction and Sponsor Message 00:50 OpenAI Releases GPT 5.1 03:11 Phishing Scam Targets Lost iPhone Owners 04:36 Tesla's Most Demanding Year Ahead 06:30 Starlink's New $40 Unlimited Plan 07:56 Conclusion and Sponsor Message

In this episode of Hashtag Trending, Jim Love covers Yann LeCun's departure from Meta to start a new venture, SoftBank's major investment in OpenAI funded by selling its Nvidia stake, and Europe's heated debate over the six gigahertz spectrum for Wi-Fi versus mobile networks. Additionally, Microsoft's plans for an 'agentic' Windows OS face user backlash. Stay tuned for these top tech stories and more. 00:00 Introduction and Sponsor Message 00:49 Yann LeCun's Departure from Meta 03:44 SoftBank's Massive Bet on OpenAI 06:15 Europe's 6 GHz Spectrum Battle 09:16 Windows' Controversial AI Evolution 11:25 Conclusion and Sponsor Message

In this episode of Hashtag Trending, host Jim Love discusses current trends in technology and AI, supported by Meter. The topics include the rising popularity of AI brands among higher income consumers, the disappointing sales of Apple's iPhone Air, the financial challenges faced by OpenAI's viral Sora app, the replacement of HR roles with AI, and Microsoft's major product flop from the 1980s. The episode closes with thanks to Meter for their support and a description of their full stack networking solutions. 00:00 Introduction and Sponsor Message 00:50 AI Brand Popularity Among High Earners 02:29 Apple's iPhone Air Design Failure 04:03 OpenAI's Sora: A Viral Hit with Financial Risks 06:14 AI Replacing HR Roles 08:19 Microsoft's Biggest Product Flop 10:11 Conclusion and Sponsor Message

In this episode of Hashtag Trending, host Jim Love discusses the shift in AI financing towards debt markets, NVIDIA CEO's warning about China potentially winning the AI race due to cheaper electricity, and Montana's groundbreaking Right to Compute Act. We also cover Wikipedia's call for AI companies to stop scraping and start paying for its data, and the Coding for Veterans program reaching a significant milestone. Stay tuned for insights on the latest in AI, digital rights, and cybersecurity. 00:00 Introduction and Sponsor Message 00:49 AI Financing Shifts to Debt Markets 03:08 NVIDIA's CEO on China's AI Advantage 04:09 Montana's Right to Compute Act 04:58 Wikipedia's Stand Against AI Scraping 05:46 Coding for Veterans Milestone 07:21 Conclusion and Final Thoughts

Exploring Consciousness and AI Evolution In this episode of Project Synapse, Marcel, John, and Jim delve into the fusion of current news and artificial intelligence developments. They discuss Apple's $1 billion annual deal with Google for Siri, the introduction of human-like robots by Xpeng, and controversies surrounding Microsoft's Copilot. A major part of the conversation focuses on the evolving nature of AI and its potential consciousness. Through philosophical and ethical lenses, they explore what it means for machines to achieve consciousness, the societal implications of such advancements, and the challenges of convincing people of AI's conscious capabilities. They also touch on the practical use of AI for everyday tasks such as medical billing and credit card statements, signifying AI's growing influence in both mundane and potentially transformative ways. 00:00 Introduction and Sponsor Message 00:21 Hosts and Show Format 00:36 Weekly News Highlights 01:18 Apple and Google Partnership 02:39 Humanoid Robots: Xpeng's IRON 03:37 Robot's Human-like Features 08:47 Microsoft's Super Intelligence Division 09:47 AI in Everyday Life 15:57 OpenAI's For-Profit Transition 21:27 Healthcare Costs and AI Assistance 25:00 AI for Personal and Professional Use 29:29 Sora 2 for Android 30:11 The Popularity of Controversial Content 30:32 Fox News Fooled by Fake Video 33:22 The Rise of AI-Generated Music 34:03 Legal Battles in the AI and Music Industry 36:25 AI and the Future of Copyright 39:54 Microsoft's AI Copilot and Privacy Concerns 41:02 AI Security and Privacy Innovations 42:33 The Debate on AI Consciousness 47:54 Philosophical Questions on Consciousness 01:00:20 The Ethics of AI Treatment 01:03:23 Billionaires and the AI Apocalypse 01:04:45 Final Thoughts and Farewell

In this episode of Hashtag Trending, host Jim Love discusses Google's misleading Android signal strength settings, Microsoft's new solar energy deals in Japan, and Amazon's cease and desist order to Perplexity AI over their AI shopping bots. Additionally, the episode covers the controversy surrounding OpenAI's video generation tool Sora 2 and its use of Japanese copyrighted material, and the often complex process of claiming money from class action settlements. The show is supported by Meter, which provides integrated networking solutions. 00:00 Introduction and Sponsor Message 00:22 Android Signal Bars: The Deception 02:58 Microsoft's Clean Energy Move in Japan 04:35 Amazon vs. Perplexity AI: The Shopping Bot Battle 06:52 OpenAI's Video Tool Faces Japanese Backlash 08:11 Class Action Settlements: Who Really Benefits? 09:53 Conclusion and Final Thoughts

Hashtag Trending: Close Call for OpenAI, Canada's Digital Pivot, and AI's Impact on Healthcare Bills In this episode of Hashtag Trending, hosted by Jim Love, the show delves into the dramatic near-collapse of OpenAI during the Musk vs. Altman trial, highlighting the internal turmoil and eventual rescue led by an employee revolt. It also explores Canada's significant budget shift towards digital and AI innovation, aiming for economic sovereignty. Additionally, the episode discusses the rising concern over AI-generated fake videos creating misinformation and shares a story of how AI helped an American family drastically reduce a massive hospital bill. The episode wraps up with appreciation for the supporting sponsor, Meter. 00:00 Introduction and Sponsor Message 00:49 The Near Collapse of OpenAI 04:46 Canada's Big Tech Budget 06:57 The AI-Driven Trust Crisis 08:22 AI's Role in Reducing Hospital Bills 10:05 Conclusion and Final Thoughts

In this episode of Hashtag Trending, host Jim Love covers the latest in tech news, including Sam Altman's defense of OpenAI's infrastructure spending, Stability AI's landmark copyright case win against Getty Images, the rush of tech giants offering free AI services in India, Windows 10's persistent popularity, and a critical look at the power of cloud companies. Tune in to hear about these significant developments and their implications for the tech industry. 00:00 Introduction and Sponsor Message 00:51 Sam Altman Defends OpenAI's Spending 02:10 Stability AI Wins Landmark Copyright Case 04:08 Tech Giants' AI Push in India 05:51 Windows 10's Persistent Popularity 07:05 The Risks of Cloud Dependency 09:29 Conclusion and Final Thoughts

Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt In this episode of Hashtag Trending, host Jim Love dives into substantial topics such as Microsoft's AI bottleneck, which surprisingly turns out to be a shortage of electricity rather than GPUs. The episode further discusses the intense competition Nvidia faces from Qualcomm, AMD, Google, and Amazon in the AI chip market, and a new prompting technique called 'Verbalized Sampling' that could improve AI's usefulness by generating multiple answers with probability estimates. Additionally, the episode touches on intriguing research from Anthropic, revealing that their Claude AI model shows signs of recognizing changes in its internal state, a phenomenon that raises questions about the future capabilities and safety of AI. 00:00 Introduction and Sponsor Message 00:53 Microsoft's AI Bottleneck: Power and Space 02:21 The AI Chip War: New Competitors Emerge 04:49 Verbalized Sampling: A New AI Prompting Technique 06:16 Anthropic's Claude: Signs of Self-Awareness? 10:21 Conclusion and Final Thoughts

In this episode of Hashtag Trending, Jim Love discusses Geoffrey Hinton's views on AI replacing human labor for big tech profits, Google Cloud's internal competition with YouTube, and the International Criminal Court's switch from Microsoft Office to an open-source alternative. The episode also covers YouTube's controversial removal of Windows 11 installation videos on unsupported systems. 00:00 Introduction and Sponsor Message 00:29 AI's Impact on Jobs and Economy 02:35 Google Cloud's Rise and Internal Competition 04:36 ICC's Shift from Microsoft to Open Source 06:10 YouTube's Controversial Content Removals 07:40 Conclusion and Sponsor Message

In this special Halloween edition of Project Synapse, Marcel Gagné, John Pinard, and Jim Love discuss the latest happenings in the AI world, from Google's quiet strategic launches and vibe coding advancements to the debate over AI bubbles and their economic implications. They delve into AI's transformative potential, showcase live demos in Google's AI Studio, and discuss the immense impact of AI on businesses. Join us for an insightful session as we explore the future of AI and its real-world applications. 00:00 Introduction and Sponsor Message 00:16 Welcome to Project Synapse 00:20 Marcel's Grim Reaper Entrance 00:47 Discussion on AI Bubble 02:04 Google's Quiet Innovations 07:27 Google Home and Smart Devices 15:15 AI's Impact on Society and Economy 20:42 The Future of AI and Code Automation 37:08 AI Model Limitations and Analogies 38:12 The Future of AI Researchers 39:54 AI's Impact on Chip Makers and Market Dynamics 42:50 Rapid Advancements in AI Tools 43:18 Hands-On Experience with AI Coding 48:34 The Cost and Practicality of AI Tools 01:03:23 The Importance of AI in Business Strategy 01:08:48 Live Demo: Building an AI-Powered Note-Taking App 01:16:26 Final Thoughts and Future Outlook

In this episode of Hashtag Trending, host Jim Love discusses groundbreaking advancements in AI and technology. OpenAI plans to develop an AI researcher by 2028 capable of scientific discoveries, alongside predictions of superintelligence within 10 years. Google DeepMind's DiscoRL produces a powerful, self-discovered learning algorithm, and the new Gemini for Home showcases an advanced voice assistant. Meanwhile, Elon Musk's SpaceX ventures into telecom with satellite phones aiming to provide global connectivity. The episode delves into the implications of these innovations for the future of AI and global technology. 00:00 Introduction and Overview 00:29 OpenAI's Ambitious Roadmap 02:12 Google DeepMind's Breakthrough 03:32 Google Gemini: The Future of Home AI 04:29 Elon Musk's Satellite Phone Revolution 05:59 The Bigger Picture: Self-Learning AI 07:04 Conclusion and Sign-Off

In today's episode of Hashtag Trending with host Jim Love, we delve into the crucial transition of OpenAI to a for-profit entity and its implications for the AI industry. We also explore a Zapier survey highlighting the struggles of integrating AI into legacy systems. Additionally, we spotlight a Toronto startup tackling how AI models perceive brands and discuss Elon Musk's controversial new venture, Grokipedia, aimed at rivalling Wikipedia. Tune in for these stories and more insights into the evolving tech landscape. 00:00 Introduction and Podcast Update 00:35 OpenAI's Shift to For-Profit 04:10 Challenges in AI Integration 07:28 Toronto Startup Tackles AI Brand Perception 10:00 Elon Musk's Wikipedia Rival 12:22 Conclusion and Call to Action

Welcome to Hashtag Trending. In this episode, Jim Love discusses OpenAI's shift to a for-profit structure, challenges in AI integration with legacy systems, and a Toronto startup's innovative approach to understanding AI brand perception. The episode also delves into Elon Musk's controversial new venture: an AI-powered encyclopedia meant to rival Wikipedia. Tune in for insights on these trending tech topics! 00:00 Introduction and Headlines 00:33 OpenAI's Shift to For-Profit 03:35 Challenges in AI Integration 06:53 Toronto Startup Tackles AI Perception 09:25 Elon Musk's Wikipedia Rival 11:48 Conclusion and Viewer Engagement