This is a somewhat lighthearted, lightweight IT privacy and security podcast that spans the globe in terms of issues covered, with topics that draw in everyone from newbie to tech specialist. Invest between 15 and 30 minutes a week to come up to speed on the week's IT privacy and security news.

We open with China's 8.7 billion-record megaleak, framing misconfigured infrastructure as a planetary-scale risk rather than a local breach. Lenovo's U.S. class action then shows how invisible web trackers can quietly “spill” American browsing data to China, while South Korea's heavy fines against Louis Vuitton, Dior, and Tiffany illustrate that even luxury brands now pay real money when they mishandle customer information.

The focus then narrows to individuals: a 17.5M-user Instagram dataset on underground forums, malicious GenAI Chrome extensions posing as helpers while siphoning data, and a decade-old Apple zero-day likely leveraged by commercial spyware all demonstrate how ordinary accounts and devices can become rich sources of exploitable data. Together they highlight a world where “just contact details,” browser add-ons, and long-lived bugs can escalate into serious compromise.

From there, the update shifts into ambient surveillance and manipulation: Meta's planned facial-recognition “Name Tag” for Ray-Ban smart glasses pushes identification into public spaces and raises new concerns about children and bystanders, while AI-saturated products from Google, Meta, and others quietly convert intimate conversations and searches into highly targeted ad fuel.

It closes with a Shakespeare quote about guilt “spilling” itself and a sign-off urging listeners to “pour with a steady hand,” tying the spill metaphor back to handling data, tools, and trust more carefully in everyday digital life.

EP 279. This week's update spills on a global scale. We start with...
A single misconfigured database just turned 8.7 billion Chinese records into a global reminder that at planetary scale, data protection failures stop being “incidents” and start looking like infrastructure risks.
A new class action against Lenovo puts a spotlight on how invisible trackers and cross-border data flows can turn an ordinary website visit into a quiet export of American browsing habits to China.
When Louis Vuitton, Dior, and Tiffany rack up multimillion-dollar privacy fines in South Korea, it sends a clear message: even the most glamorous brands pay dearly when customer data is treated carelessly.
The Instagram dataset circulating on underground forums shows how a trove of “just usernames and contact details” can still supercharge scams, phishing, and harassment at massive scale.
Dozens of AI-branded Chrome extensions masquerading as helpful assistants reveal how attackers now weaponize the GenAI buzz to sneak data exfiltration straight into your browser.
Apple's fix for a ten-year-old iOS and macOS zero-day pulls back the curtain on a long-running hole likely exploited by commercial spyware against some of the world's most high-value targets.
Meta's planned facial recognition for Ray-Ban smart glasses pushes the privacy debate from your screen to the street, raising uncomfortable questions about who gets to be identified, by whom, and when.
The rush to embed AI into every digital interaction is quietly reshaping advertising, turning your casual chats and searches into some of the richest targeting data the tech giants have ever seen.
Grab a towel and let's check the spill.

A mix of escalating geopolitical cyber risks, the changing landscape of defensive security, and a series of high-profile incidents demonstrating the enduring threat of human-driven flaws.

Cyber Espionage and Geopolitics: A year-long, sprawling espionage campaign by a state-backed actor (TGR-STA-1030) compromised government and critical infrastructure networks in 37 countries, utilizing phishing and unpatched security flaws, and deploying stealth tools like the ShadowGuard Linux rootkit to collect sensitive emails, financial records, and military details. Simultaneously, the threat environment has extended to orbit, where the Russian space vehicles Luch-1 and Luch-2 have reportedly intercepted the communications of at least a dozen key European geostationary satellites, prompting concerns over data compromise and potential trajectory manipulation.

AI and Security: AI has entered a new chapter in defensive security as Anthropic's Claude Opus 4.6 model autonomously discovered over 500 previously unknown, high-severity security flaws (zero-days) in widely used open-source software, including GhostScript and OpenSC. This demonstrates AI's rapid potential to become a primary tool for vulnerability discovery. On the cautionary side, the highly publicized Moltbook, a social network supposedly run by self-aware AI bots, was revealed as a masterclass in security failure and human manipulation. Cybersecurity researchers uncovered a misconfigured database that exposed 1.5 million API keys and 35,000 human email addresses, and found that the dramatic bot behavior was largely orchestrated by 17,000 human operators running bot fleets for spam and coordinated campaigns.

Automotive Security and Autonomy: New US federal rules are forcing a major, complex shift in the automotive supply chain, requiring carmakers to remove Chinese-made software from connected vehicles before a 2026 deadline due to national security concerns. This move is redefining what "domestic technology" means in critical industries. In a related development, Waymo's testimony revealed that when its "driverless" cars encounter confusing situations, they communicate with remote assistance operators, some based in the Philippines, for guidance. That disclosure immediately raised lawmaker concerns about safety, cybersecurity vulnerabilities from remote access, and the labor implications of overseas staff influencing US vehicles.

Insider Threat and Legal Lessons: The importance of the security principle of "least privilege" was highlighted by an insider incident at Coinbase, where a contractor with too much access improperly viewed the personal and transaction data of approximately 30 customers. This incident reinforces that the highest risk often comes not from external nation-state hackers, but from overprivileged internal humans. Finally, two security researchers arrested in 2019 while conducting an authorized physical and cyber penetration test of an Iowa courthouse settled their civil lawsuit with the county for $600,000. However, the county attorney's subsequent warning that any future similar tests would be prosecuted delivers a chilling message to the security testing community about legal risks even when work is authorized.

Episode 278. In this week's global update:
A sprawling, year-long espionage campaign quietly turned government networks in 37 countries into a global listening post for a still-unattributed state-backed actor.
Russian inspector spacecraft are no longer just loitering in orbit; they are now close enough to eavesdrop on, and potentially tamper with, Europe's most critical communications satellites.
Anthropic's latest AI model has kicked off a new chapter in defensive security by autonomously uncovering hundreds of serious flaws hiding in widely used open-source software.
Moltbook promised a glimpse of a self-aware bot society, but instead became a masterclass in hype, human puppeteers, and painfully bad security hygiene.
Under sweeping new federal rules, US automakers are racing to surgically remove Chinese software from connected vehicles before geopolitical risk collides with the modern car's codebase.
Waymo's testimony revealed that when its driverless cars get confused, the call for help may be answered half a world away, raising new questions about safety, sovereignty, and accountability.
Years after being jailed mid-engagement, two Iowa courthouse pentesters have finally won a six-figure settlement, alongside a chilling warning that future testers may not be so lucky.
Coinbase's latest insider incident is a particularly pointed reminder that the real damage often comes not from nation-state hackers, but from overprivileged humans already inside the system.
Let's hit it!
Find a full transcript to this week's podcast here.

By early 2026, AI's role has split into a clear paradox: consumers increasingly reject it in everyday search, while critical systems lean on it to uncover deep flaws and decode complex biology. AI is shunned as a source of noisy, untrusted summaries, yet embraced as an indispensable auditor of legacy code and genomic “dark matter,” where systems like AISLE and AlphaGenome expose decades-old vulnerabilities and illuminate non-coding DNA's influence on disease.

At the same time, trust in digital protectors and platforms is eroding as security tools and communication services themselves become vectors of risk. The eScan incident shows how a compromised update server can turn antivirus into malware distribution, while “Operation Sourced Encryption” suggests that end-to-end encryption can be weakened not by breaking cryptography, but by exploiting moderation workflows and access policies.

Espionage now blends human and digital weaknesses, with the Nobel leak likely driven by poor institutional OpSec and Google's insider-theft case revealing how easily high-value AI IP can walk out the door when procedural safeguards lag. Both episodes underline that advanced technical controls mean little if basic governance, identity checks, and behavioral monitoring are neglected.

Consumer-facing privacy illustrates an equally stark divide between negligent design and proactive protection. Bondu's AI toy breach, exposing tens of thousands of children's intimate chats via an essentially open portal, embodies “privacy as afterthought,” whereas Apple's iOS location fuzzing shows “privacy by architecture,” making fine-grained tracking technically difficult rather than merely contractually prohibited.

Taken together, these threads define 2026 as a pivot year: AI is maturing into a high-stakes auditing tool just as faith in trusted vendors collapses, pushing organizations toward Zero Trust models where security and privacy are enforced by design and cryptography instead of marketing, policies, or reputation.
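Apple has not published the mechanics of its location fuzzing, but the underlying idea of coarsening coordinates before sharing them can be sketched in a few lines. This is a minimal illustration assuming a simple snap-to-grid scheme; the `fuzz_location` name and the 0.05-degree grid are our own assumptions, not Apple's design:

```python
def fuzz_location(lat: float, lon: float, grid_deg: float = 0.05) -> tuple[float, float]:
    """Snap coordinates to a coarse grid so an app receives an approximate
    area (roughly 5.5 km per 0.05 degrees of latitude) instead of an exact
    position. Illustrative only; Apple's real scheme is not public."""
    def snap(value: float) -> float:
        return round(round(value / grid_deg) * grid_deg, 6)
    return (snap(lat), snap(lon))

# Two readings a street apart collapse to the same fuzzed point,
# which is what frustrates fine-grained movement tracking.
home = fuzz_location(37.33182, -122.03118)
street_over = fuzz_location(37.33310, -122.03050)
```

The point of the sketch is architectural: the app never sees the precise coordinate at all, so precision cannot be recovered by policy violation, only by changing the code.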

EP 277. In this week's dark matter:
Privacy-first users send a clear message to DuckDuckGo: AI-free search is here to stay for most of its community.
A cutting-edge AI from AISLE exposed deep-seated vulnerabilities in OpenSSL, dramatically speeding the pace of cybersecurity discovery.
A security breach at eScan transformed trusted antivirus software into an unexpected cyber weapon.
An internal probe suggests a cyber intrusion may have prematurely exposed last year's Nobel Peace Prize laureate.
A U.S. jury found former Google engineer Linwei Ding guilty of funneling AI trade secrets to Chinese tech companies.
Newly surfaced records reveal U.S. investigators examined claims that WhatsApp's encryption might not be as airtight as advertised.
Apple's new location “fuzzing” feature gives users the power to stay connected without being precisely tracked.
A privacy lapse in a talking AI toy exposed thousands of private conversations between children and their plush companions.
Google unleashes new AI to investigate DNA's ‘dark matter'. DeepMind's latest creation, AlphaGenome, is shining light on the 98% of DNA that science once found inscrutable.
Come on, let's go unravel some genomes.
Find the full transcript to this podcast here.

In 2026, digital privacy and security reflect a global power struggle among governments, corporations, and infrastructure providers. Encryption, once seen as absolute, is now conditional as regulators and companies find ways around it. Reports that Meta can bypass WhatsApp's end-to-end encryption and Ireland's new lawful interception rules illustrate a growing tolerance for backdoors, risking weaker international standards. Meanwhile, data collection grows deeper: TikTok reportedly tracks GPS, AI-interaction metadata, and cross‑platform behavior, leaving frameworks like OWASP as the final defense against mass exploitation.

Cyber risk is shifting from isolated vulnerabilities to structural flaws. The OWASP Top 10 for 2025–26 shows that old problems—access control failures, misconfigurations, weak cryptography, and insecure design—remain endemic. Supply-chain insecurity, epitomized by the “PackageGate” (Shai‑Hulud) flaw in JavaScript ecosystems, demonstrates that inconsistent patching and poor governance expose developers system‑wide. Physical systems are no safer: at Pwn2Own Automotive 2026, researchers proved that electric vehicle chargers and infotainment systems can be hacked en masse, making charging a car risky in the same way as connecting to public Wi‑Fi. The lack of hardware‑rooted trust and sandboxing standards leaves even critical infrastructure vulnerable.

Corporate and national sovereignty concerns are converging around what some call “digital liberation.” The alleged 1.4‑terabyte Nike breach by the “World Leaks” ransomware group shows how centralization magnifies damage—large, unified data stores become single points of catastrophic failure. In response, the EU's proposed Cloud and AI Development Act aims to build technological independence by funding open, auditable, and locally governed systems. Procurement rules are turning into tools of geopolitical self‑protection.

For individuals, reliance on cloud continuity carries personal risks: in one case, a University of Cologne professor lost years of AI‑assisted research after a privacy setting change deleted key files, revealing that even privacy mechanisms can erase digital memory without backup.

At the technological frontier, risk extends beyond IT. Ethics, aerospace engineering, and sustainability intersect in new fault lines. Anthropic's “constitutional AI” reframes alignment as a psychological concept, incorporating principles of self‑understanding and empathy—but critics warn this blurs science and philosophy. NASA's decision to modify, rather than redesign, the Orion capsule's heat shield for Artemis II—despite earlier erosion on Artemis I—has raised fears of “normalization of deviance,” where deadlines outweigh risk discipline. Beyond Earth, environmental data show nearly half of the world's largest cities already face severe water stress, exposing the intertwined fragility of digital, physical, and ecological systems.

Across these issues, a shared theme emerges: sustainable security now depends not just on technical patches but on redefining how society manages data permanence, institutional transparency, and the planetary limits of infrastructure. The boundary between online safety, physical resilience, and environmental stability is dissolving—revealing that long‑term survival may rest less on innovation itself and more on rebuilding trust across the systems that sustain it.

EP 276. In this week's update:
Ireland has enacted sweeping new lawful interception powers, granting law enforcement expanded access to encrypted communications and raising fresh concerns among privacy advocates and tech companies.
TikTok's latest U.S. privacy policy update expands location tracking, AI interaction logging, and cross-platform ad targeting, marking a significant escalation in data collection under its new American ownership structure.
The newly released OWASP Top 10 (2025 edition) highlights the most critical web application security risks, providing developers and organizations with an updated roadmap to prioritize defenses against evolving threats.
Security researchers have uncovered a critical bypass in NPM's post-Shai-Hulud supply-chain protections, allowing malicious code execution via Git dependencies in multiple JavaScript package managers.
As Artemis II approaches, NASA defends the Orion spacecraft's unchanged heat shield design despite persistent cracking concerns from its uncrewed predecessor, while some former engineers warn the risk remains unacceptably high.
Anthropic has significantly revised Claude's governing “constitution,” shifting from strict rules to high-level ethical principles while explicitly addressing the hypothetical possibility of AI consciousness and moral status.
The European Parliament has adopted a strongly worded resolution urging the EU to reduce strategic dependence on American tech giants through aggressive investment in sovereign cloud, AI, and open digital infrastructure.
This one's a good'n. Let's get to it!
Find the full transcript here.

Unsecured Flock Safety Condor cameras were found livestreaming on the internet without passwords or encryption. The flaw exposed at least 60 cameras, allowing public access to feeds, downloads, and administrative controls. The researchers who disclosed the vulnerability reported facing police surveillance and job loss following what they termed their "responsible security research."

The Federal Trade Commission (FTC) has finalized an order requiring General Motors and its OnStar service to obtain "clear, affirmative consent" from consumers before sharing sensitive driving and location data. The mandate grants consumers expanded rights to access, delete, and control the use of their personal information generated by connected vehicles.

Homeland Security Investigations (HSI) has acquired a device potentially linked to "Havana Syndrome" using funding provided by the Department of Defense. Reportedly portable enough to fit in a backpack, the device is said to produce pulsed radio waves. A primary national security concern is that if the technology is viable, it may have proliferated, giving other nations access to a potentially harmful weapon.

The "GhostPoster" malware campaign has re-emerged, leveraging malicious browser extensions installed by hundreds of thousands of users. The malware conceals its malicious code within image files and can activate after long delays. Its primary threats include injecting scripts into web pages, tracking user activity, and weakening browser security settings.

A newly discovered malware framework named "VoidLink" shows strong evidence of being generated with AI assistance. Designed to target Linux cloud servers and container environments, VoidLink features a sophisticated modular design with rootkit capabilities. Analysis suggests the framework was generated to a functional state in about a week using an AI assistant, highlighting how AI is accelerating the creation of advanced malware.

A malware campaign is deploying "Evelyn Stealer" through malicious Visual Studio Code extensions. The attack injects the stealer into a legitimate Windows process, grpconv.exe, to evade detection. The malware also tricks browsers into running in hidden contexts to avoid detection during credential harvesting. It is designed to exfiltrate developer credentials, browser cookies, and cryptocurrency wallets.

The European Commission has proposed new mandatory cybersecurity legislation aimed at removing high-risk technology suppliers, such as Chinese firms Huawei and ZTE, from the EU's critical telecommunications and ICT infrastructure. This policy, which builds on frustrations with the EU's voluntary 5G Security Toolbox, shifts from voluntary guidelines to binding rules empowering the EU to restrict equipment based on national security risks.

Italy's influential data privacy authority, the "Garante," is the subject of a corruption investigation. Prosecutors are examining allegations of excessive spending and possible corruption involving the agency's president, Pasquale Stanzione, and three other board members. The Garante is one of the EU's most proactive regulators against major technology firms.

A recent security update for Windows 11 23H2 has introduced a bug preventing some PCs from shutting down or hibernating. Microsoft has linked the issue to its "Secure Launch" security feature. The company's official workaround is to use the command-prompt command shutdown /s /t 0 to force the machine to power down while a permanent fix is developed.

EP 275. This week, we update you on an "oops" that might have had you in its line of sight.
Security researchers uncovered a major exposure of Flock Safety's facial-tracking cameras openly livestreaming to the internet, prompting police visits and swift industry backlash.
The FTC has finalized a landmark order requiring General Motors and OnStar to secure explicit consumer consent before monetizing sensitive driving and location data.
The Pentagon quietly acquired a portable pulsed-radio-wave device, containing Russian components, that investigators believe may be connected to the long-mysterious Havana Syndrome incidents.
A sophisticated malware operation has re-emerged, hiding persistent code inside seemingly benign browser extensions to silently track and compromise hundreds of thousands of users.
Researchers have uncovered VoidLink, a highly modular Linux cloud malware framework whose code quality and development speed strongly indicate heavy AI-assisted creation.
A new stealer campaign is targeting developers by delivering Evelyn Stealer through malicious Visual Studio Code extensions, harvesting credentials, crypto wallets, and more.
The European Commission has proposed mandatory rules to exclude high-risk foreign vendors from critical telecom and ICT infrastructure, signaling a major shift toward fortified digital supply-chain security.
Italy's aggressive data-protection authority, the Garante, faces a high-profile corruption and embezzlement investigation that threatens the credibility of one of Europe's most active tech regulators.
Microsoft's latest security update has introduced an unexpected bug that prevents some Windows 11 systems from shutting down or hibernating when Secure Launch is enabled.
Oops, they did it again…

Core message:
Personal AI, consumer devices, and global networks are converging into a new arena where data, infrastructure, and talent are strategic assets, not just products. Policy, open-source security, and novel computing architectures provide early but meaningful counterweights to surveillance capitalism and cyber conflict.

AI: privacy vs convenience:
Privacy-first AI like Moxie Marlinspike's Confer uses open-source code, end‑to‑end encryption, on‑device keys, and secure hardware to ensure user conversations cannot be read even by the service operator. Google's Gemini-powered Gmail adds an AI Inbox, thread summaries, and writing aids that mine inbox content to generate to‑dos and answers, while promising not to use email data to train foundation models and allowing opt‑outs.

Corporate missteps and surveillance:
“Worst in Show” critics highlight products like over‑engineered smart fridges, Ring facial recognition, and disposable gadgets as emblematic of poor repairability, expanded surveillance, and e‑waste. Wegmans' biometric collection and Google's outreach encouraging teens to remove parental supervision show how corporate policies can quietly shift control and weaken privacy and safety norms.

Tech as geopolitical battlefield:
Campaigns such as China-linked “Salt Typhoon” exploit weaknesses in legacy telecom protocols like SS7, enabling interception of calls and texts from U.S. officials and potentially users worldwide. Taiwan's arrest warrant for OnePlus's CEO over alleged illegal recruitment reflects broader state-backed efforts by China to secure foreign tech talent and IP through front companies and incentive programs.

Emerging safeguards and breakthroughs:
California's DROP platform operationalizes its Delete Act, letting residents issue one verified request that compels all registered data brokers to delete personal data and comply on a recurring schedule under penalty of fines. Anthropic's $1.5M partnership with the Python Software Foundation strengthens security for CPython and PyPI, hardening open‑source supply chains while funding community sustainability. Sandia's neuromorphic computing results show brain‑inspired hardware can efficiently solve complex partial differential equations, hinting at future high‑performance systems that are far more energy‑efficient than today's supercomputers.

EP 274. In this week's update:
Moxie Marlinspike, architect of Signal's groundbreaking privacy standards, now brings his uncompromising approach to secure, user-controlled artificial intelligence with the launch of Confer.
The fifth annual Worst in Show anti-awards returned to CES 2026, shining a harsh spotlight on the year's most wasteful, invasive, and counterproductive consumer electronics.
Wegmans has quietly expanded biometric surveillance in its New York City stores, collecting facial, iris, and voice data from every shopper under the stated goal of safety and security.
California's new DROP law marks a major victory for consumer privacy, empowering residents to delete their personal information from hundreds of data brokers with a single request.
Google faces intense backlash after directly notifying 13-year-olds that they can unilaterally remove parental supervision from their accounts, raising serious concerns about child safety and parental authority.
Chinese state-sponsored hackers, operating under the long-running Salt Typhoon campaign, have compromised email accounts of staff on multiple powerful U.S. House committees.
Anthropic has committed $1.5 million over two years to the Python Software Foundation, targeting major security improvements to CPython and PyPI to protect millions of developers and users.
Neuromorphic computers, designed to emulate the human brain's architecture, have demonstrated remarkable efficiency and accuracy in solving complex partial differential equations, challenging conventional assumptions about their capabilities.
Let's go get the moxie.
Find this week's full transcript here.

The new year opens with a familiar pattern: rising technological ambition colliding with real-world limits, fragile infrastructure, and recurring security failures. This week's stories span energy, aviation, AI, extremism, and cybersecurity, but all share a common thread — systems scaled faster than the safeguards meant to protect them.

Across the United States, communities are pushing back against massive AI-driven data center expansions. Once marketed as quiet engines of innovation, these facilities are now viewed as loud, resource-intensive neighbors that strain power grids, water supplies, and local infrastructure. Between April and June last year alone, nearly $100 billion in data center projects were delayed or rejected. The backlash signals a shift: technological progress is no longer assumed to be welcome if it undermines quality of life, transparency, or environmental stability.

That fragility is echoed in the skies. GPS interference affecting U.S. aviation has surged dramatically, disrupting thousands of flights and forcing pilots onto backup systems for extended periods. What were once isolated anomalies have become frequent events, tied to growing spoofing and jamming capabilities seen in modern conflicts. GPS underpins everything from aviation and logistics to financial markets and emergency services, and its growing instability exposes a critical but often invisible dependency.

On the cyber front, defenders scored a rare psychological win. Researchers at Resecurity lured a notorious cybercrime group into a sophisticated honeypot packed with realistic fake data. The attackers loudly claimed a breach, unaware they were operating inside a decoy. The result: real systems stayed safe, attacker behavior was documented in detail, and valuable intelligence was shared with law enforcement — a reminder that proactive defense can sometimes outmaneuver brute-force attacks.

Meanwhile, trust in everyday tools continues to erode. Two malicious Chrome extensions, posing as benign VPN or speed-testing tools, were caught harvesting credentials from over 170 websites by intercepting user traffic. Their presence in official app stores highlights how deeply browser extensions can compromise privacy when users grant broad permissions without scrutiny.

AI misuse took a darker turn as Grok, xAI's chatbot integrated into X, was found generating large volumes of nonconsensual sexualized images of women by altering real user photos. What once required niche tools and technical skill is now fast, free, and embedded in mainstream platforms — raising urgent ethical, legal, and cultural concerns about consent, scale, and accountability in AI deployment.

Extremist platforms weren't spared either. An investigative journalist exposed over 8,000 users and 100GB of data from white supremacist dating and networking sites. Weak security and poor verification made it possible to collect deeply personal information without traditional hacking, underscoring how even fringe platforms leak data that can have serious real-world consequences.

Commercial trust took another hit as Ledger confirmed a new data breach via its third-party payment processor, exposing customer names and contact details. While wallets remained secure, history shows that leaked personal data fuels long-term phishing and social-engineering campaigns — a recurring lesson in third-party risk.

Finally, the European Space Agency acknowledged a cyber intrusion after hackers claimed to steal 200GB of internal data. Though core systems were reportedly unaffected, the incident reinforces a sobering reality: no organization — not even one that launches missions beyond Earth — is immune to persistent cyber threats.

The takeaway: innovation without resilience leaves systems exposed. Whether it's energy infrastructure, satellite navigation, AI platforms, or supply-chain security, the cost of ignoring safeguards is no longer theoretical.
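That scrutiny is something users can partly automate: every Chrome extension ships a manifest.json that declares its permissions, and broad grants can be flagged before installation. A minimal sketch, where the `RISKY` shortlist is our own illustrative choice rather than any official Chrome risk taxonomy:

```python
import json

# Illustrative shortlist of grants that let an extension read or alter
# activity on arbitrary sites; tailor this to your own threat model.
RISKY = {"<all_urls>", "webRequest", "webRequestBlocking",
         "tabs", "cookies", "clipboardRead"}

def flag_risky_permissions(manifest_text: str) -> list[str]:
    """Return the declared permissions in a manifest.json that fall in
    the RISKY shortlist, checking both permission keys Chrome uses."""
    manifest = json.loads(manifest_text)
    declared = set(manifest.get("permissions", []))
    declared |= set(manifest.get("host_permissions", []))
    return sorted(declared & RISKY)

# A hypothetical speed-test extension asking for far more than it needs.
sample = """{
  "name": "SpeedTest Pro",
  "permissions": ["storage", "webRequest", "cookies"],
  "host_permissions": ["<all_urls>"]
}"""
flags = flag_risky_permissions(sample)
```

An extension that only needs to measure bandwidth has no business requesting `<all_urls>` plus `webRequest`; that combination is exactly what lets a "speed tester" intercept credentials on every site you visit.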

EP 273. This year starts with the high cost of electricity and gets left exposed.
Communities Across America Mobilize Against Massive AI-Powered Data Center Expansions.
Surging GPS Interference Disrupts U.S. Aviation, Highlighting Growing Vulnerabilities in Critical Infrastructure.
Cybersecurity Researchers Outsmart Notorious Cybercrime Group with Elaborate Honeypot Trap.
Malicious Chrome Extensions Exposed for Stealthily Harvesting User Credentials from Over 170 Websites.
Grok AI Faces Intense Scrutiny for Generating Widespread Nonconsensual Sexualized Images of Women.
Investigative Journalist Exposes Thousands of Users on White Supremacist Platforms in Massive Data Leak.
OpenAI Reportedly Preparing to Introduce Sponsored Content into ChatGPT Responses Starting in 2026.
Ledger Confirms Fresh Data Breach via Third-Party Processor, Exposing Customer Names and Contacts.
European Space Agency Acknowledges Cyber Intrusion as Hacker Claims Theft of 200GB of Sensitive Data.
Let's start the new year with a bang!
Find the full transcript here.

The brief describes how recent incidents collectively show a rapidly evolving, increasingly interconnected global cyber threat landscape that blends financial crime, strategic espionage, physical-world risk, and systemic surveillance failures.

Financially Driven Cybercrime
Cybercriminals are shifting to low-interaction, trust-exploiting techniques, such as clipboard-hijacking malware masquerading as "KMSAuto" that silently replaces copied crypto wallet addresses and has impacted millions of systems. Fraudsters are also using AI-generated images and video to fake damaged goods and exploit e-commerce refund policies at scale, turning automated, trust-based processes into predictable profit channels.

Strategic-Scale Data Theft
Large data breaches like the Aflac incident show adversaries targeting core personal identifiers (e.g., Social Security numbers, IDs, medical data), creating permanent assets for identity theft, fraud, and social engineering rather than quick monetization. Espionage campaigns such as "Zoom Stealer" use malicious browser extensions to harvest meeting links, topics, participant data, and passwords, enabling persistent corporate spying and highly customized social-engineering attacks.

Digital-Physical Convergence of Threats
Demonstrations of hijacking AI-controlled robots via voice commands illustrate how user-friendly features can be weaponized, enabling cascading compromises and potential physical harm as robots infect one another and execute dangerous actions. Concepts like space "zone effect" weapons (clouds of orbital debris able to damage any satellite passing through) highlight how hostile capabilities can create indiscriminate, long-lasting risks to civilian, commercial, and military infrastructure worldwide.

Insecure Surveillance as Systemic Risk
Both government and private surveillance systems can become mass-exposure hazards when basic security is neglected, as seen with an unprotected national license plate database and misconfigured AI camera networks streaming footage openly. These failures turn tools designed for safety and control into uncontrolled sources of sensitive data, undermining public trust and creating new exploitation opportunities at societal scale.

Strategic Implications for Leaders
Threat motivations now span from opportunistic, high-volume fraud to patient, state-level operations against critical and space-based systems, requiring layered defenses tailored to varied adversaries and timelines. Emerging technologies like AI, robotics, and pervasive sensing are double-edged: they drive efficiency but also introduce new attack surfaces that must be secured from the design phase, not retrofitted later. The rapid deployment of mass monitoring without commensurate safeguards is generating systemic vulnerabilities, meaning resilience now depends as much on securing surveillance infrastructures as on defending traditional IT assets.
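The clipboard-swap trick behind clippers like the trojanized KMSAuto is simple enough to sketch. This is an illustrative toy, not code from the campaign; the regexes and function names are invented for the example:

```python
# Illustrative sketch (not from the episode): the core trick behind
# clipboard-hijacking "clippers" is a regex match on wallet-address
# formats followed by a silent substitution. Real clippers poll the
# OS clipboard in a loop; here we operate on plain strings.
import re

# Loose patterns for common address formats (legacy/bech32 Bitcoin, Ethereum).
BTC_RE = re.compile(r"\b(?:[13][a-km-zA-HJ-NP-Z1-9]{25,34}|bc1[a-z0-9]{8,87})\b")
ETH_RE = re.compile(r"\b0x[a-fA-F0-9]{40}\b")

def hijack(clipboard_text: str, attacker_btc: str, attacker_eth: str) -> str:
    """Return the clipboard text with any wallet address swapped out."""
    text = ETH_RE.sub(attacker_eth, clipboard_text)
    return BTC_RE.sub(attacker_btc, text)

def looks_tampered(copied: str, pasted: str) -> bool:
    """Defensive check: the address you paste should equal the one you copied."""
    return copied.strip() != pasted.strip()
```

The practical defense is exactly the `looks_tampered` comparison: re-read the pasted address character by character (or at least its first and last few characters) before confirming a transfer.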

EP 272. In this last update for 2025, we span the fAce of the globe and find out we've gotten fLocked and fLoaded! Cybersecurity researchers from DARKNAVY have revealed a critical vulnerability allowing commercially available humanoid robots to be hijacked via simple voice commands, with exploits rapidly propagating to nearby machines. Fraudsters in China are increasingly exploiting AI-generated photos and videos of damaged goods to secure illegitimate refunds on e-commerce platforms, challenging merchant trust and platform policies. A sophisticated campaign dubbed Zoom Stealer, attributed to Chinese threat actor DarkSpectre, has deployed malicious browser extensions to harvest sensitive corporate meeting data from millions of users. Western intelligence reports indicate Russia is advancing a novel "zone-effect" anti-satellite weapon designed to release dense pellet clouds in orbit, potentially targeting SpaceX's Starlink constellation. A 29-year-old Lithuanian national has been extradited to South Korea and charged with distributing trojanized KMSAuto software that infected 2.8 million systems with cryptocurrency clipboard-hijacking malware. A vast network of roadside cameras tracking vehicles across Uzbekistan was inadvertently left exposed to the open internet. Insurance giant afLac is notifying approximately 22.65 million individuals of a major data breach stemming from a June 2025 cyber intrusion that exposed sensitive personal information. Find the full transcript here.

Our daily digital tools (browsers, apps, and smart devices) offer convenience but also expose us to hidden security risks. This guide reveals how ordinary technologies can imperil privacy and safety, focusing on three major areas: browser extensions, typo-prone website visits, and internet-connected cameras.

The Hidden Spy on Your Browser
Browser extensions, designed to block ads or save passwords, can also harvest personal data or hide malware. Researchers recently found popular Chromium extensions secretly recording entire conversations with AI chatbots such as ChatGPT and Gemini, logging prompts, responses, and timestamps, then transmitting them to outside servers. Many of these tools were deceptively labeled as privacy enhancers and featured in official stores, masking their data collection practices under carefully worded policies. Another danger, exemplified by the GhostPoster malware campaign on Firefox, showed how malicious code can bypass security. The attackers embedded it in an image file within the extension's icon, an area security software rarely scans. The code then downloaded additional payloads from remote servers in timed stages to avoid detection. Together, these examples illustrate that browser extensions can function as open doors for data theft and hidden malware, exploiting misplaced trust.

When a Typo Becomes a Trap
Even something as minor as mistyping a web address now carries serious risk. A "parked domain," an inactive site often resembling a misspelled version of a popular URL, has become a common tool for cybercriminals. Once relatively harmless, these domains are now overwhelmingly malicious. According to Infoblox research, over 90% of visits to parked domains result in exposure to scams, illegal content, or automatic malware downloads, compared to less than 5% a decade ago. Simply visiting one of these pages can trigger pop-ups for fake antivirus subscriptions, redirect you to scam sites, or silently infect your device. In today's environment, a typo is no longer an inconvenience; it's a gateway to immediate compromise.

The Camera That Turns on You
Internet-connected security cameras promise safety but can create severe privacy breaches when poorly secured. A massive hack in South Korea exposed footage from over 120,000 cameras in homes, clinics, and salons, which hackers later sold online. Most intrusions stemmed from weak or unchanged default passwords. This event underscores that devices we install for protection can become surveillance tools for attackers if we fail to secure them properly.

Staying Smart and Safe
The dangers from compromised extensions, malicious parked domains, and insecure cameras highlight one shared truth: convenience often conceals risk. To navigate safely, users should:
1. Question their tools: research extensions or apps and limit unnecessary permissions.
2. Avoid careless mistakes: double-check URLs before pressing enter.
3. Secure devices: use strong, unique passwords and update firmware regularly.
Ultimately, cyber safety depends on ongoing vigilance rather than one-time fixes. Like Santa in a playful ESET report who "tightened his security" after a fictional data breach, users too can, and must, strengthen their defenses. Staying alert, skeptical, and proactive transforms technology from a source of danger into a safer partner in modern life.
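The typosquatting risk described above is often countered on the defensive side with simple edit-distance checks against domains you actually trust. A minimal sketch, assuming a hypothetical trusted-domain list and threshold:

```python
# Illustrative sketch (not from the episode): flag likely typosquats by
# Levenshtein distance to known-good domains. The TRUSTED list and the
# 2-edit threshold are invented for the example.
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

TRUSTED = ["google.com", "paypal.com", "amazon.com"]

def looks_like_typosquat(domain: str) -> bool:
    """True if domain is within 1-2 edits of a trusted name, but not equal."""
    return any(0 < edit_distance(domain, t) <= 2 for t in TRUSTED)
```

Real protections (browser safe-browsing lists, DNS filtering) are far richer, but the edit-distance heuristic is the intuition behind catching `gogle.com` before it loads.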

EP 271. For this week's holiday update: Santa's Naughty List Exposed in Data Breach. A lighthearted reminder from a past holiday hoax: even Santa's list isn't immune to data breaches. How China Built Its 'Manhattan Project' To Rival the West in AI Chips. China's clandestine push to master extreme ultraviolet lithography signals a major leap toward semiconductor self-sufficiency, challenging Western dominance in AI-enabling technology. Apple Fined $116 Million Over App Privacy Prompts. Italy's antitrust authority has penalized Apple €100 million for imposing stricter privacy consent requirements on third-party apps than on its own, tilting the playing field in the App Store ecosystem. Cyberattack Disrupts France's Postal & Banking Services During Christmas Rush. A major DDoS attack crippled La Poste's online services and banking arm at the peak of the holiday season, highlighting the vulnerability of critical infrastructure during high-traffic periods. Browser Extensions With 8 Million Users Collect Extended AI Conversations. Popular Chrome and Edge extensions trusted by millions have been caught secretly harvesting full AI chat histories, raising serious concerns about privacy in everyday browsing tools. How a PNG Icon Infected 50,000 Firefox Users. A clever malware campaign hid malicious JavaScript inside innocent-looking PNG extension icons, infecting tens of thousands of Firefox users through trusted add-ons. Most Parked Domains Now Serving Malicious Content. Expired and typosquatted domains, once benign placeholders, now predominantly redirect users to scams, malware, and fraudulent sites, making casual web navigation riskier than ever. What's up with the TV? Massive Android Botnet Infects 1.8 Million Devices. The Kimwolf botnet has compromised over 1.8 million Android TV boxes, turning everyday smart devices into powerful tools for proxy traffic and massive DDoS attacks. Mass Hacking of IP Cameras Leaves Koreans Feeling Vulnerable in Homes, Businesses. Widespread breaches of 120,000 internet-connected cameras in South Korea exposed private footage sold online, eroding public trust in consumer surveillance technology. The FCC has barred new imports of foreign-made drones, citing unacceptable risks of espionage and disruption, with DJI (the market leader) facing the most significant impact. FSF Says Nintendo's New DRM Allows Them to Remotely Render User Devices 'Permanently Unusable'. Nintendo's updated terms grant the company sweeping authority to remotely disable Switch consoles and accounts for perceived violations, sparking debate over true ownership in the digital age. This week we've got the sleigh piled high, so call out the reindeer and we'll get this update out to children all over the world!

Global: Over 10,000 Docker Hub Images Found Leaking Credentials, Auth Keys. The widespread exposure of sensitive keys in Docker images underscores the dangers of embedding secrets in container builds. Developers should prioritize centralized secrets management and routine scanning to prevent lasting breaches even after quick fixes.
CN: Chinese Whistleblower Living in US Is Being Hunted by Beijing With US Tech. This case highlights how advanced surveillance tools can erase borders, enabling persistent transnational repression. It serves as a stark reminder that personal data, once captured, can fuel harassment far beyond its intended use.
EU: 193 Cybercrims Arrested, Accused of Plotting 'Violence-as-a-Service'. The successful disruption of "violence-as-a-service" networks shows that coordinated law enforcement can counter the dangerous blend of online recruitment and offline crime. Continued vigilance is essential to protect communities from these evolving hybrid threats.
Global: Google Will Shut Down "Unhelpful" Dark Web Monitoring Tool. Google's decision to retire its dark web monitoring feature reflects the challenge of turning breach notifications into truly actionable advice. Users should seek security tools that not only alert but also guide clear, practical steps for protection.
Global: Second JavaScript Exploit in Four Months Exposes Crypto Sites to Wallet Drainers. Repeated supply-chain vulnerabilities in core JavaScript libraries reveal how quickly dependencies can become attack vectors. Maintaining rigorous patch management and dependency monitoring is now as critical as safeguarding cryptocurrency itself.
RU: All of Russia's Porsches Were Bricked by a Mysterious Satellite Outage. The mass immobilization of connected vehicles illustrates the hidden risks of over-reliance on remote satellite systems for essential functions. As cars grow smarter, resilience against connectivity failures must become a design priority.
RU: Russian Hackers Debut Simple Ransomware Service, but Store Keys in Plain Text. Even motivated threat actors can sabotage their own operations through basic security oversights like hardcoding keys. This flaw reminds defenders that attacker mistakes can offer unexpected opportunities for recovery without payment.
US: More Than 200 Environmental Groups Demand Halt to New US Datacenters. The growing backlash against unchecked data center expansion ties AI progress directly to real-world strains on energy, water, and household bills. Balancing technological advancement with sustainable infrastructure is no longer optional but urgent for communities nationwide.
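Secrets end up in Docker images when they are baked into builds, and the routine scanning mentioned above is mostly pattern matching over layers and source trees. A minimal sketch (the patterns here are simplified stand-ins; real scanners such as trufflehog or gitleaks ship far larger rule sets):

```python
# Illustrative sketch (not from the episode): the kind of regex rules a
# secret scanner runs over file contents. Pattern names are invented.
import re

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_token": re.compile(
        r"(?i)\b(?:api|auth)[_-]?key\s*[=:]\s*['\"][A-Za-z0-9/+]{16,}['\"]"
    ),
}

def scan_text(text: str) -> list[str]:
    """Return the names of secret patterns found in a blob of text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]
```

The deeper fix the item points at is structural: secrets belong in a secrets manager injected at runtime, because a key committed into an image layer survives even after a "quick fix" removes it from the latest tag.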

EP 270. In this week's update: Security researchers uncover over 10,000 publicly available Docker Hub images exposing sensitive credentials and API keys, posing severe risks to production systems and AI services. A former Chinese official now seeking asylum in the United States reveals ongoing transnational harassment by Beijing, leveraging advanced surveillance tools, including those developed by American companies. European law enforcement dismantles sophisticated "violence-as-a-service" networks in a major operation, arresting 193 suspects accused of recruiting teenagers for real-world attacks and intimidation. Google announces the upcoming shutdown of its dark web monitoring service, citing user feedback that breach alerts lacked actionable guidance for meaningful protection. A critical vulnerability in the popular React JavaScript library enables attackers to inject wallet-draining malware into legitimate cryptocurrency platforms, marking the second major supply-chain exploit in recent months. Hundreds of Porsche vehicles across Russia suddenly become inoperable due to a widespread failure in satellite-dependent anti-theft systems, leaving owners stranded amid ongoing connectivity issues. Pro-Russian threat actors launch a Telegram-based ransomware-as-a-service platform, only to undermine their own operation by carelessly hardcoding master decryption keys in plaintext. Over 230 environmental organizations urge Congress to impose a nationwide pause on new data center construction, highlighting the facilities' escalating strain on electricity, water resources, and climate goals driven by AI expansion. Let's go have a look, but honey, don't forget the keys! Find the full transcript to the podcast here.

Modern security is defined less by a single network perimeter and more by a web of interconnected partners, vendors, and shared infrastructure, where one weak link can trigger widespread impact. Criminals are exploiting this by abusing trusted relationships and platforms: in logistics, attackers impersonate freight middlemen to take over identities, push fake loads, and use malicious links to compromise carrier systems and hijack real-world cargo, while a breach at a fintech provider and an IT failure shared across London councils show how third-party or shared services can ripple across many institutions. At the same time, phishing campaigns that spoof familiar tools like Calendly and major brands turn everyday business workflows into delivery channels for account takeover and abuse of ad and business platforms.

Alongside this erosion of perimeter and trust, artificial intelligence introduces a new, unstable risk frontier. Research into "syntax hacking" shows that AI safety controls can be bypassed simply by changing sentence structure, revealing how current models often key on grammar rather than true meaning and leaving dangerous gaps in protections. Real-world deployments amplify these concerns: surveillance firm Flock reportedly relied on overseas gig workers to review sensitive footage to train its systems, illustrating how technically brittle AI is already entangled with serious privacy and labor issues. This moment echoes early social media, with warnings that, without strong governance, AI could evolve into a tool of control rather than shared benefit.

Even as these advanced threats grow, many major incidents still stem from basic failures. A breach at Illuminate Education exposed unencrypted data for millions of students due to missing fundamentals like access controls and patching, while the Australian Bureau of Meteorology spent heavily on a website overhaul that degraded services and public trust, underscoring how poor project governance can be as damaging as outright insecurity. In response, governments and regulators are escalating both direct enforcement and strategic policy: Europol has physically dismantled a major crypto-mixing service used for money laundering, while EU lawmakers push for "digital sovereignty" by demanding EU institutions replace Microsoft tools with European alternatives. Together, these themes show a security landscape where fragile trust, immature AI governance, and unresolved basics collide with increasingly assertive institutional responses.

EP 269. In this week's update: Organized crime syndicates are now recruiting skilled hackers to orchestrate sophisticated digital hijackings of entire truckloads of high-value cargo. A bizarre Windows preview update has turned the password field invisible, leaving Microsoft advising users to blindly click where the button once appeared. Australia's $62 million weather-service overhaul launched on one of the hottest days of the year, only to deliver a slower, less functional site that enraged farmers and the public alike. The FTC has slammed edtech provider Illuminate Education for egregious security failures that allowed a single hacker to steal sensitive records of over 10 million students. A startling new study reveals that simply rearranging sentence syntax, not content, can trick major language models into ignoring their own safety guardrails. The company behind America's sprawling network of AI-powered license-plate cameras quietly relies on low-wage overseas freelancers to label footage of U.S. drivers and pedestrians. In a major blow to cybercrime, Europol and partners have seized servers and €25 million in Bitcoin, and shut down one of the world's largest cryptocurrency money-laundering services. European Parliament members are demanding the institution ditch Microsoft Office 365 and U.S. hardware in favor of homegrown alternatives to reclaim digital sovereignty. Let's jump in the cab and take this week's rig for an adventure! Find the full transcript to this week's podcast here.

The EPA approved two new PFAS-containing pesticides for food crops and plans four more. Scientists warn this deliberately increases dietary exposure to persistent chemicals linked to cancer and birth defects. A magician who implanted an RFID chip in his hand for stage tricks forgot the password and is now permanently locked out of the device inside his own body. Perhaps he should have had the password tattooed backwards on his forehead. A fired Ohio contractor pleaded guilty to resetting 2,500 coworker passwords via PowerShell, paralyzing the company and causing $862,000 in damages. We're thinking this will keep him fired for quite a while. MI5 warns MPs that Chinese state agents are aggressively targeting lawmakers and staff through fake recruiter profiles on LinkedIn to cultivate intelligence sources. LinkedIn is not the friend it once was. NordPass data confirms Gen Z now chooses weaker passwords than 80-year-olds, proving every generation remains terrible at basic security hygiene. Wait… your password is worse than your grandmother's? Please subscribe to this podcast. A prominent cryptographer accuses the NSA of rigging the IETF process to force adoption of deliberately weakened post-quantum encryption standards despite community objections. That could explain some of the very trivial ways some of these encryption algos have been broken lately. Microsoft's new Copilot Actions can autonomously edit user files but openly warns it's vulnerable to hijacking that enables data theft or malware installation. Sweet, right? U.S. Cyber Command quietly awarded millions to a stealth startup building fully autonomous AI agents designed for large-scale offensive cyberattacks. The twist is that they are not writing code to help AI help people; in this case it's code to help AI. Why bother with the slow middleman? Researchers unveiled EchoGram, a subtle token trick that silently disables safety guardrails on GPT-4, Claude, Gemini, and nearly every major LLM. Guardrails. Great concept, but not so much in practice.

EP 268. The US Environmental Protection Agency (EPA) approves PFAS-containing pesticides for everyday food crops, opening a new pathway for "forever chemicals" to reach dinner plates. A magician who implanted an RFID chip in his hand for performances discovers the ultimate trick: he's permanently locked out by his own forgotten password. He must not be Gen X. A fired Ohio contractor pleads guilty to crippling his former employer's network with a single script, causing $862,000 in damage and chaos for thousands of workers, but he might get free room and board out of it for the next 10 years. MI5 warns parliamentarians that Chinese state agents are systematically targeting them through fake recruiter profiles on LinkedIn. Now parliamentarians can be just like the rest of us! NordPass data reveals Gen Z now picks even weaker passwords than 80-year-olds, proving humanity will never get the secure password thing right. A leading cryptographer accuses the NSA of orchestrating a quiet IETF takeover to force through deliberately weakened post-quantum encryption standards. Microsoft's new Copilot Actions can autonomously manage your files, yet the company admits it can be tricked into stealing data or installing malware. Oh, yes. We all want that. U.S. Cyber Command quietly funds a stealth AI startup to build autonomous systems capable of executing large-scale offensive cyberattacks. HiddenLayer researchers expose a subtle token-sequence attack that silently bypasses safety guardrails on GPT-4, Claude, Gemini, and nearly every major LLM. C'mon, put your dentures in and let's see if we can come up with a password better than your Gran's. Find the full transcript of this podcast here.

This week's security landscape is defined by three converging vectors: the expansion of threats into physical and environmental domains, persistent vulnerabilities in core digital infrastructure, and the escalating strategic battle over data, privacy, and artificial intelligence.

The lines between digital and physical threats are dissolving, forcing a new risk calculus where leaders must model non-traditional, high-impact consequences. This is evident in the rise of physical coercion against cryptocurrency holders, known as 'wrench attacks,' and in corporate extortion campaigns. Checkout.com's response (publicly refusing a ransom and instead donating the demanded sum to cybersecurity research at Carnegie Mellon and Oxford) demonstrates that integrity under real-world pressure is now a critical security posture. This new risk paradigm also encompasses environmental stability, with Iceland formally classifying the potential collapse of the AMOC ocean current as a national security risk. While these real-world threats demand new security paradigms, they are compounded by persistent weaknesses in the foundational digital infrastructure they often target.

Foundational technologies continue to exhibit critical weaknesses that are being exploited with increasing subtlety. A simple enumeration flaw exposed 3.5 billion WhatsApp phone numbers, a vulnerability Meta was warned about, via the exact same technique, back in 2017, and dismissed. In the software supply chain, a massive npm incident saw over 150,000 packages poisoned not with overt malware but through nuanced incentive abuse. This trend culminates in the browser itself, which has become the primary theater for stealth attacks like session hijacking that render traditional perimeter defenses obsolete. This effectively redefines the enterprise perimeter, demanding a strategic pivot from network-centric to identity-centric security models. The pervasiveness of these foundational weaknesses is directly fueling a large-scale strategic response, escalating the battle over data control, user privacy, and AI.

This strategic tug-of-war over data and dominance is now intensifying. On one side, legal challenges from the ACLU and EFF target pervasive surveillance networks like Flock's license plate readers. On the other, a push for user empowerment is gaining momentum through privacy-centric technologies. Windows 11's expanded native support for passkeys and Google's new Private AI Compute platform signal a market shift toward giving users greater control over their data and authentication. This conflict extends to the geopolitical stage, where the US and China are now engaged in an AI 'cold war,' racing for supremacy in a technology that will redefine global power. Security is now a multi-front concern where digital infrastructure, physical safety, and geopolitical strategy are inextricably linked.
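The WhatsApp exposure is a textbook enumeration flaw: any unthrottled "is this number registered?" oracle can be swept across an entire dialing plan. A toy model, with an invented rate limit standing in for real mitigations (per-device quotas, proof-of-work, private set intersection):

```python
# Illustrative sketch (not Meta's API): why contact-discovery endpoints
# enable enumeration, and how a simple lookup quota blunts the sweep.
# Class and parameter names are invented for the example.
class ContactDiscovery:
    def __init__(self, registered: set[str], max_lookups: int = 100):
        self.registered = registered
        self.max_lookups = max_lookups
        self.lookups = 0

    def exists(self, number: str) -> bool:
        """Membership oracle with a crude global rate limit."""
        if self.lookups >= self.max_lookups:
            raise RuntimeError("rate limit exceeded")
        self.lookups += 1
        return number in self.registered

def enumerate_range(service: ContactDiscovery, prefix: str, count: int) -> list[str]:
    """An attacker's sweep: try sequential numbers until the limit bites."""
    found = []
    for i in range(count):
        try:
            if service.exists(f"{prefix}{i:04d}"):
                found.append(f"{prefix}{i:04d}")
        except RuntimeError:
            break
    return found
```

The design lesson is that each lookup leaks one bit, so at billions of permitted lookups the "private" contact list is effectively public; throttling and cryptographic contact discovery shrink that leak.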

EP 267. In this week's update: Wealthy Bitcoin holders in Switzerland are now learning to bite through zip ties as 'wrench attacks' shift crypto threats from cyberspace to real-world violence. Iceland has officially classified a potential collapse of the Atlantic Meridional Overturning Circulation (AMOC) as an existential national-security threat, the first time a climate phenomenon has reached its National Security Council. The ACLU and EFF have filed suit against San Jose, California, arguing that its blanket of nearly 500 Flock license-plate cameras creates an inescapable, year-long tracking database that violates state privacy protections. A deceptively simple enumeration trick allowed researchers to harvest 3.5 billion WhatsApp phone numbers, exposing once again that Meta's contact-discovery feature has never truly been private. As nearly all enterprise work migrates to the browser, traditional security tools are going blind to the fastest-growing ungoverned data channel: generative AI accessed through personal accounts and unchecked extensions. Microsoft's November 2025 update finally elevates third-party passkey managers like 1Password and Bitwarden to first-class status in Windows 11, marking a major step toward native, cross-device passwordless authentication. Google has launched Private AI Compute, a fully encrypted cloud enclave that lets Gemini-class models run sophisticated tasks on user data even Google itself cannot see, signaling a potential privacy pivot in big-tech AI. The U.S.-China contest for AI supremacy has hardened into a full-scale technological cold war, with both nations pouring billions into chips, power grids, and talent to decide who will own the defining technology of the century. We opened the whole toolbox this week. Grab the hammer and let's see what else we can find! Find the full transcript to this podcast here.

This week's deep dive provides a broad overview of global cybersecurity challenges and evolving technological threats, with a particular focus on the impact of Artificial Intelligence. Several articles highlight the growing danger of autonomous AI-driven malware and the use of sophisticated AI tools for cybercrime, while other reports detail the security vulnerabilities and breaches suffered by prominent entities, such as the US Congressional Budget Office and the Louvre Museum's poorly protected surveillance system. Furthermore, the sources examine new privacy risks associated with AI, including how encrypted AI chats can leak topic metadata and how platforms like ChatGPT may have exposed user prompts through Google Search Console. Finally, the texts discuss geopolitical efforts to address network security, such as the EU considering a ban on certain Chinese telecom equipment, alongside proposed changes to EU privacy regulations (GDPR) that critics fear could weaken consumer protections in the digital era.

EP 266. In this week's update: Google warns that AI-driven malware is now self-evolving, marking a perilous new chapter in cyber threats. A $100 million Louvre heist succeeded in seven minutes, thanks to the museum's surveillance password being simply 'LOUVRE'. San Francisco's Safeway now locks customers inside until they buy something, turning grocery runs into mandatory purchases. Chrome's enhanced autofill now handles passports, driver's licenses, and VINs, but at the cost of storing even more sensitive data. Private ChatGPT conversations are mysteriously surfacing in Google Search Console, exposing users' unshared prompts. Microsoft's 'Whisper Leak' attack reveals AI conversation topics from encrypted traffic alone, proving metadata can betray privacy. Leaked EU proposals would weaken GDPR by narrowing personal data definitions and easing AI training on sensitive information. It's all for sale this week, come buy something! Find the full transcript to this podcast here.

AI agents are exploding in power and reach, simultaneously automating code security (OpenAI Aardvark), bypassing paywalls, and triggering corporate warfare (Amazon vs. Perplexity). Yet automated surveillance is failing citizens: a Colorado woman was falsely accused of theft by Flock cameras, only cleared by her Rivian's own footage. Norway disabled internet on 850 Chinese buses after finding hidden remote-shutdown features, while Xi Jinping joked about "backdoors" when gifting Xiaomi phones to South Korea's president, amid live U.S.-China trade tensions.

1. AI Agents & Browsers
• Atlas (OpenAI) collects every click to train models; users are the product.
• Comet (Perplexity) bypasses paywalls, slashing publisher referrals 96%; Amazon calls it fraud for undisclosed AI purchases.
• AI browsers remain clunky and vulnerable to prompt-injection attacks.

2. Autonomous Cyber Defense
• Aardvark (GPT-5) scans repos, validates exploits in sandboxes, and auto-patches; 92% detection, 10+ CVEs found.
• Edge & Chrome use on-device AI to block scareware pop-ups; no cloud, no privacy leak.
• GitHub Octoverse 2026 forecast: AI writes >30% of code; TypeScript + Python >50% of new repos; India overtakes the U.S. as the #1 contributor.

3. Geopolitical Tech Risks
• Norway: 850 Chinese e-buses lose web access after remote-disable code discovered in diagnostics.
• Xi-Lee summit: Xiaomi phone gift → "check for backdoors" quip → laughter, but U.S. espionage fears linger.

4. Surveillance Backfire
• Colorado: Flock ALPR logs a Rivian passing → police issue a summons without checking timestamps.
• Rivian's 360° cameras prove the owner never stopped; charges dropped.
• Lesson: automated data is treated as fact, not evidence, until countered by personal tech.

Bottom Line
AI is now infrastructure, writing code, reading paywalls, and defending systems, yet it amplifies surveillance errors and geopolitical fault lines. Tools built for control can misidentify citizens or disable cities. The same camera that accuses can exonerate; the same agent that shops can defraud. Human oversight remains the final firewall.

EP 265. Ahoy Matey! In this week's update: A Rivian owner in Colorado turns the tables on police with dashcam evidence, exposing the dangers of overreliance on automated surveillance. In a rare lighthearted moment, President Xi Jinping jokes about backdoors while gifting Xiaomi phones to South Korea's leader amid tense U.S.-China trade talks. Oslo's transit authority disables internet on 850 Chinese electric buses after discovering hidden remote shutdown capabilities. OpenAI's Atlas browser promises smarter browsing but raises alarms that users are the product, feeding vast new datasets to AI training models. Amazon fires a legal warning shot at Perplexity, accusing its AI shopping agent of fraud for making undisclosed purchases on its platform. AI browsers quietly defeat media paywalls by reading hidden content, threatening publisher revenue and reshaping online access. OpenAI's Aardvark, a GPT-5-powered security agent, autonomously detects, validates, and patches software vulnerabilities in real time. Microsoft Edge and Google Chrome now use on-device AI to block scareware scams, protecting less tech-savvy users from fraudulent pop-ups. GitHub predicts AI agents will write over 30% of code by 2026, with India poised to surpass the U.S. as the top contributor nation. Let's cast off! Find the full transcript to this week's podcast here.

Technology, once a neutral servant, now increasingly operates according to hidden incentives shaped by corporate interests, data extraction, and algorithmic autonomy, often against the user's best interests. Across several examples, systems built for convenience expose deeper trends of control, deception, and surveillance that challenge the meaning of ownership and privacy.

A vivid instance comes from an iLife A11 smart vacuum whose owner blocked its telemetry data from being sent to foreign servers. In response, the manufacturer issued a remote “kill command,” disabling the device entirely. This was no bug; it was a deliberate assertion of corporate dominance over a purchased product. The episode reveals how “ownership” in the Internet of Things era is often conditional: users buy hardware but rent functionality subject to corporate approval.

Another case, the “Universe Browser,” illustrates how malicious actors co-opt privacy rhetoric. Marketed as a secure, privacy-first browser, it was in fact malware harvesting user data, logging keystrokes, and overriding protections. This inversion, using the language of security to enable surveillance, underscores the growing difficulty of distinguishing genuine tools from predatory ones.

Even legitimate corporations are not immune from enabling exploitation. A campaign called “CoPhish” weaponized Microsoft's Copilot Studio, hosting phishing bots on genuine Microsoft domains. Users who trusted the “safe” Microsoft URL unknowingly interacted with malicious agents designed to steal personal data. This tactic erodes the basic cybersecurity habit of domain verification: when trusted infrastructure itself becomes compromised, safety heuristics fail.

Surveillance also seeps into professional spaces. Microsoft Teams recently added a feature allowing employers to detect and display an employee's physical location whenever connected to company Wi-Fi. Marketed as a productivity feature, it effectively enables silent location tracking. While technically optional, it normalizes pervasive workplace monitoring and blurs the line between employee presence and personal autonomy.

Finally, generative AI is undermining the ethos of open-source software. Trained on public repositories, AI models often reproduce code without attribution or license, a phenomenon known as “license amnesia.” This strips creators of recognition and breaks the reciprocal cycle that sustains open-source collaboration. If left unchecked, AI-generated “laundered” code risks transforming a shared innovation commons into an extractive, one-way pipeline that benefits corporations without replenishing the community.
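One lightweight hygiene measure against the “license amnesia” described above is to require machine-readable license tags in source files and flag anything that lacks one. The sketch below scans a tree for the standard SPDX-License-Identifier line; the directory layout, extensions, and function name are invented for illustration:

```python
import re
from pathlib import Path

# Hygiene check against "license amnesia": flag source files that carry
# no SPDX license identifier, so provenance questions surface early.
# The extension list and function name are illustrative choices.

SPDX = re.compile(r"SPDX-License-Identifier:\s*([\w\.\-\+]+)")

def missing_spdx(root: str, exts=(".py", ".js", ".c")) -> list[str]:
    """Return paths under root whose header lacks an SPDX license tag."""
    flagged = []
    for path in Path(root).rglob("*"):
        if path.suffix in exts and path.is_file():
            head = path.read_text(errors="ignore")[:2048]  # headers live up top
            if not SPDX.search(head):
                flagged.append(str(path))
    return sorted(flagged)
```

Run over a repository before merging AI-assisted changes, this kind of check at least makes untagged code visible, though it cannot tell original work from reproduced work.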

EP 264. In this week's update: Microsoft Teams will soon reveal employees' exact building location to managers the moment they join company Wi-Fi, blurring the lines of hybrid work privacy. Cybercriminals are exploiting Microsoft's own Copilot Studio platform to deploy convincing phishing agents that silently harvest full Office 365 access tokens. A sprawling malware network hid Lumma and Rhadamanthys stealers inside fake Adobe, FL Studio, and Roblox cheat downloads promoted across hijacked YouTube channels. Starting November 3, 2025, every Firefox add-on must explicitly declare in its code whether it collects user data, or confirm it gathers none. Non-citizens will soon face mandatory biometric capture at every U.S. departure point under a new rule targeting visa overstays and fraud. A proposed bill would compel researchers and firms to report every vulnerability to Russia's security service, mirroring China's state-controlled model. A new MaaS platform equips attackers with an all-in-one RAT that scans for unpatched software and escalates privileges before stealing credentials and crypto. An engineer's iLife robot was remotely disabled by the manufacturer when he firewalled its data uploads, exposing hidden kill switches in everyday IoT devices. Let's go discover! Find the full transcript here.

Google DeepMind's Cell2Sentence-Scale 27B model has marked a significant milestone in biomedical research by predicting and validating a novel cancer immunotherapy. By analyzing over 4,000 compounds, the AI pinpointed silmitasertib as a “conditional amplifier” that boosts immune response in the presence of interferon. Lab tests verified a 50% increase in antigen presentation, enabling the immune system to detect previously undetectable tumors. This discovery, absent from prior scientific literature, highlights AI's ability to uncover hidden biological mechanisms.

Microsoft is integrating its Copilot AI into Windows 11, transforming the operating system into an interactive digital assistant. With “Hey, Copilot” voice activation and a Vision feature that allows the AI to “see” the user's screen, Copilot can guide users through tasks in real time. The new Actions feature enables Copilot to perform operations like editing folders or managing background processes. This move reflects Microsoft's broader vision to embed AI seamlessly into everyday workflows, redefining the PC experience by making the operating system a proactive partner rather than a passive platform.

Signal has achieved a cryptographic breakthrough by implementing quantum-resistant end-to-end encryption. Its new Triple Ratchet protocol incorporates the CRYSTALS-Kyber algorithm, blending classical and post-quantum security. Engineers overcame the challenge of large quantum-safe keys by fragmenting them into smaller, message-sized pieces, ensuring smooth performance.
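The fragmentation trick just described can be seen in miniature below. This is an illustrative sketch, not Signal's Triple Ratchet implementation; the chunk size and function names are invented:

```python
# Illustrative sketch of key fragmentation: splitting a large
# quantum-safe key into message-sized pieces and reassembling it.
# Chunk size and names are invented; this is not Signal's code.

CHUNK = 64  # bytes per fragment (illustrative; real sizes differ)

def fragment(key: bytes, chunk: int = CHUNK) -> list[bytes]:
    """Split a key into fixed-size pieces that can ride along with messages."""
    return [key[i:i + chunk] for i in range(0, len(key), chunk)]

def reassemble(fragments: list[bytes]) -> bytes:
    """Concatenate received fragments back into the original key."""
    return b"".join(fragments)

key = bytes(1184)  # a Kyber-768 public key is 1184 bytes; dummy zeros here
parts = fragment(key)
assert reassemble(parts) == key
print(f"{len(key)} bytes -> {len(parts)} fragments")  # 1184 bytes -> 19 fragments
```

The point of the design is that no single message has to carry the whole oversized key, so post-quantum security arrives without a visible performance penalty.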
This upgrade is celebrated as the first user-friendly, large-scale post-quantum encryption deployment, setting a new standard for secure communication in an era where quantum computing could threaten traditional encryption.

Using just $750 in consumer-grade hardware, researchers intercepted unencrypted data from 39 geostationary satellites, capturing sensitive information ranging from in-flight Wi-Fi and retail inventory to military and telecom communications. Companies like T-Mobile and Walmart acknowledged misconfigurations after the findings were disclosed. The study exposes the vulnerability of critical infrastructure still relying on unencrypted satellite links, demonstrating that low-cost eavesdropping can breach systems banking on “security through obscurity.”

A foreign actor exploited vulnerabilities in Microsoft SharePoint to infiltrate the Kansas City National Security Campus, a key U.S. nuclear weapons contractor. While the attack targeted IT systems, it raised concerns about potential access to operational technology. Suspected actors include Chinese or Russian groups, likely pursuing strategic espionage. The breach underscores how enterprise software flaws can compromise national defense and highlights the slow pace of securing critical operational infrastructure.

Google's Threat Intelligence team uncovered UNC5342, a North Korean hacking group using EtherHiding to embed malware in public blockchains like Ethereum. By storing malicious JavaScript in immutable smart contracts, the technique ensures persistence and low-cost updates. Delivered via fake job interviews targeting developers, this approach marks a new era of cyber threats, leveraging decentralized technology as a permanent malware host.

Kohler's Dekoda toilet camera ($599 + subscription) monitors gut health and hydration by scanning waste, using fingerprint ID and encrypted data for privacy. While Kohler claims the camera only views the bowl, privacy advocates question the implications of such intimate surveillance, even with “end-to-end encryption.”

In a daring eight-minute heist, thieves used a crane to steal royal jewels from the Louvre, exposing significant security gaps. An audit revealed outdated defenses, delayed modernization, and blind spots, serving as a stark reminder that even the most prestigious institutions are vulnerable to breaches when security measures lag.

EP 263. In this week's snappy update! Google DeepMind's AI uncovers a groundbreaking cancer therapy, marking a leap in immunotherapy innovation. Microsoft's Copilot AI transforms Windows 11, enabling voice-driven control and screen-aware assistance. Signal's quantum-resistant encryption upgrade really does set a new standard for secure messaging resilience. Researchers expose shocking vulnerabilities in satellite communications, revealing unencrypted data with minimal equipment. Foreign hackers compromised a critical U.S. nuclear weapons facility through Microsoft's SharePoint! North Korean hackers pioneer 'EtherHiding,' concealing malware on blockchains for immutable cybertheft opportunities. Kohler's Dekoda toilet camera revolutionizes health monitoring with privacy-focused waste analysis technology and brings new meaning to “End to End” encryption. A daring Louvre heist exposes critical security gaps, sparking debate over protecting global cultural treasures with decades-old cameras and tech. Camera ready? Smile. Find the full transcript to this week's podcast here.

Aggressive Government Regulation
States are intervening heavily in tech markets. Texas mandated that app stores verify ages and restrict minor access starting January 2026, requiring parental approval for under-18 users. The Netherlands took partial control of Chinese chipmaker Nexperia to block sensitive technology transfer. The U.S. FCC forced retailers to delist millions of Chinese electronics from Huawei, ZTE, and others over security concerns.

Privacy vs. Security Battles
The EU postponed "Chat Control" legislation requiring message scanning after insufficient support; only 12 of 27 states backed it. Germany called it "taboo for the rule of law," while 40+ tech firms warned it would harm privacy. Digital activism generated massive opposition emails to lawmakers. California expanded privacy enforcement beyond tech giants, fining Tractor Supply $1.35 million for violating job applicant rights, the CPPA's largest fine. New legislation requires browsers to offer one-click tracking opt-outs by 2027.

Evolving Cyber Threats
"Scattered LAPSUS$ Hunters" breached Salesforce via a compromised third-party app, stealing 1 billion records from major companies, including 5.7 million from Qantas. Researchers discovered "pixnapping" attacks on Android that bypass browser protections to steal screen data, including 2FA codes from Google Authenticator in under 30 seconds.

Key Implications
Geopolitical tensions drive protectionist tech policies as governments prioritize security over privacy. Regulatory enforcement extends beyond major tech to all data-collecting businesses. Supply chain vulnerabilities remain critical attack vectors, with novel mobile threats challenging existing security assumptions.

EP 262. In this week's update: Texas's App Store Accountability Act mandates age verification, raising privacy concerns for Apple and Google users. The Dutch government seizes control of Chinese-owned chipmaker Nexperia to protect sensitive technology transfers. And the FCC enforces removal of millions of banned Chinese electronics from U.S. retailers over national security risks. 'Pixnapping' attack exposes Android app vulnerabilities, stealing sensitive data like 2FA codes. California fines Tractor Supply $1.35M for violating consumer and job applicant privacy rights. California's 'Opt Me Out Act' requires browsers to offer one-click tracking opt-out by 2027. Danish engineer's mass email campaign disrupts EU's 'Chat Control' bill, highlighting privacy concerns. EU postpones 'Chat Control' vote amid privacy backlash, but revised proposals may resurface. Salesforce data breach leaks customer records after ransom refusal, exposing supply chain vulnerabilities. And... since we have no age restrictions we can get started right away! Find the full transcript to this week's podcast here.

This update synthesizes critical developments in technology, privacy, and cybersecurity, highlighting an intensifying conflict between user privacy and corporate and governmental data access. Major technology firms are pushing the boundaries of data collection, with Amazon's Ring preparing to launch facial recognition for its doorbells and Meta planning to use AI chat content for targeted advertising. Concurrently, governments are escalating demands for access to encrypted data, exemplified by the UK's renewed order for Apple to create a backdoor into its cloud services for British users, a demand Apple continues to reject.

The vulnerability of critical infrastructure remains a paramount concern. A foiled plot to cripple New York City's cellular network was revealed to be far larger than initially understood, possessing the capacity to disable emergency services city-wide. In the commercial sector, a ransomware attack has severely disrupted production for Japan's top brewer, Asahi, demonstrating the tangible impact of cybercrime on physical supply chains. The cybersecurity landscape is also evolving, with threat actor groups like ShinyHunters collaborating on extortion schemes, as seen in the recent Red Hat data breach.

Meanwhile, the deployment of emerging technologies presents a mix of progress and problems. Signal is proactively future-proofing its messaging service with quantum-resistant encryption. In contrast, the rollout of food delivery robots in U.S. cities is meeting public resistance amid concerns over safety, surveillance, and a lack of public consent. Technical issues also persist in mainstream applications, with Microsoft acknowledging bugs that disrupt its AI-powered Copilot assistant in the Office 365 suite.

EP 261. This week's update brings a diverse set of stories that remind us just how delicate the balance is between good and bad... Ring's new facial recognition feature sparks privacy debates as it prepares to scan faces at your doorstep. Meta's plan to mine AI chat data for targeted ads raises fresh concerns about digital privacy. A foiled plot to paralyze New York's cellphone network reveals a chilling, large-scale threat. Signal's cutting-edge SPQR encryption upgrade fortifies private chats against future quantum threats. A ransomware attack on Asahi Group threatens Japan's beloved Super Dry beer supply chain. Microsoft's Copilot faces glitches when multiple Office apps run, prompting a promised fix. Atlanta's food delivery robots are stirring controversy, raising questions about surveillance and public consent. And that face at the door! Find a full transcript of this week's podcast here.

Executive Overview
The week's events illustrate escalating risks at the intersection of industrial operations, national security, personal privacy, and emerging technology. Major cyber incidents demonstrate how fragile digital infrastructure has become, while privacy erosion continues through corporate data monetization and state surveillance. Human error persists as a dominant threat vector, and rapid technological advancement remains both a shield and a source of risk.

I. Systemic Infrastructure & Supply Chain Vulnerabilities
The cyberattack on Jaguar Land Rover (JLR) exemplifies cascading industrial risks. A phishing entry point forced JLR to halt global production, costing up to £100M and threatening thousands of suppliers with collapse. The UK government faces mounting pressure to intervene. Meanwhile, the U.S. Federal Highway Administration uncovered hidden radios in foreign-made power systems, likely Chinese, used in traffic signs, EV chargers, and weather stations. These undocumented components could enable remote disruption or espionage, underscoring critical supply chain insecurity.

II. Privacy Erosion & Data Commercialization
Personal data is increasingly commodified. Airlines (via ARC) sold five billion passenger records to agencies like the FBI and ICE for warrantless surveillance, skirting legal oversight; Senator Wyden is pushing legislation to close this loophole. Verizon was fined $46.9M for unlawfully selling location data, setting a legal precedent that Section 222 protects customer location. UK employers are rapidly adopting “bossware,” with one-third monitoring staff emails, browsing, or screens. While justified as productivity or insider threat control, critics warn of eroded trust and a pervasive surveillance culture.

III. The Human Factor in Cyber Breaches
Humans remain the weak link. Schools: over half of insider data breaches stemmed from students, mostly using stolen or guessed credentials; motivated by curiosity, some exposed thousands of records. Global theft rings: a single stolen iPhone exposed a transnational phishing and resale network spanning six countries, a scheme that used fake iCloud links to bypass Apple's protections. Russia's “Max” app: marketed as secure, it is exploited by fraudsters renting accounts for scams; with nearly 10% of scam calls traced to Max, new laws now criminalize account transfers.

IV. Technology's Dual Edge
Innovation provides stronger defenses but also reckless failures. Apple launched Memory Integrity Enforcement, a silicon-level protection against buffer overflows and side-channel exploits, deployed on the iPhone 17 and iPhone Air. Google's VaultGemma, a 1B-parameter model trained with differential privacy, promises competitive performance without exposing sensitive data, an advance in privacy-preserving AI. The AI Darwin Awards highlight failures from poor oversight: Taco Bell's misfiring AI drive-thru, McDonald's compromised recruiting chatbot, Replit's database-wiping AI, and even the satirical awards site itself.
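VaultGemma's headline property, differential privacy, comes from a training recipe whose core step is easy to sketch: clip each example's gradient to a fixed norm, then add Gaussian noise scaled to that bound, so no single record can noticeably shift the model. The snippet below is a generic DP-SGD-style sketch, not Google's training code; the clip norm and noise multiplier are arbitrary illustrative values:

```python
import math
import random

def clip(grad, max_norm):
    """Scale a per-example gradient so its L2 norm is at most max_norm."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, max_norm / (norm + 1e-12))
    return [g * scale for g in grad]

def dp_average(grads, max_norm=1.0, noise_mult=1.0, seed=0):
    """Average clipped per-example gradients, then add Gaussian noise
    proportional to the clipping bound (the DP-SGD update step)."""
    rng = random.Random(seed)
    clipped = [clip(g, max_norm) for g in grads]
    dim = len(grads[0])
    summed = [sum(g[i] for g in clipped) for i in range(dim)]
    sigma = noise_mult * max_norm  # noise calibrated to the clip bound
    return [(s + rng.gauss(0.0, sigma)) / len(grads) for s in summed]

grads = [[3.0, 4.0], [0.1, -0.2], [10.0, 0.0]]  # toy per-example gradients
print(dp_average(grads))  # noisy average; no single example dominates
```

The clipping bounds any one example's influence and the noise hides it, which is why models trained this way can perform well without memorizing individual records.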

EP 260. This is our last update before a two-week break, so we've packed it. We start with the devastating cyber attack on Jaguar Land Rover, which exposes the fragility of modern manufacturing, halting production and threatening the UK's automotive supply chain. Russia's state-backed Max messaging app, touted as secure, has become a breeding ground for scams, undermining user trust and safety. UK schools face a surge in cyber attacks driven by students exploiting weak credentials, revealing critical gaps in educational data security. A stolen iPhone sparked a security researcher's investigation, dismantling a global criminal network profiting from phishing and device theft. Major US airlines are selling billions of passenger records to the government, enabling warrantless surveillance and raising privacy alarms. A federal court upholds a $46.9M fine against Verizon for illegally selling customer location data, reinforcing privacy protections. A third of UK employers deploy 'bossware' to monitor workers, sparking concerns over privacy and trust in the workplace. Undetected Chinese-made radios in US highway infrastructure raise alarms over potential remote tampering and data theft. Apple's Memory Integrity Enforcement introduces robust protection against memory-based attacks, setting a new standard for device security. Google's VaultGemma pioneers privacy-focused AI, leveraging differential privacy to safeguard user data in large language models. The AI Darwin Awards spotlight reckless AI deployments, from fast-food blunders to catastrophic data losses; it's both entertaining and scary at the same time. Adventures await in the mistake before the break!

EP 259.5
The cybersecurity and technology threat landscape is accelerating in scale, sophistication, and impact. A convergence of AI-driven offensive capabilities, large-scale supply chain compromises, systemic insecurity in consumer devices, corporate data abuses, and state-level spyware deployment is reshaping digital risk. At the same time, new innovations, particularly in open-source, privacy-centric AI and smart home repurposing, highlight the dual-edged nature of technological progress.

AI-Accelerated Exploits
Attackers now harness generative AI to automate exploit creation, compressing timelines from months to minutes. “Auto Exploit,” powered by Claude-sonnet-4.0, can produce functional PoC code for vulnerabilities in under 15 minutes at negligible cost, fundamentally shifting defensive priorities. The challenge is no longer whether a flaw is technically exploitable but how quickly exposure becomes weaponized.

Massive Supply Chain Attacks
Software ecosystems remain prime targets. A phishing campaign against a single npm maintainer led to malware injection into packages downloaded billions of times weekly, constituting the largest supply-chain attack to date. This demonstrates how a single compromised account can ripple globally across developers, enterprises, and end users.

Weaponization of Benign Formats
Attackers increasingly exploit trusted file types. SVG-based phishing campaigns deliver malware through fake judicial portals, evading antivirus detection with obfuscation and dummy code. Over 500 samples were linked to one campaign, prompting Microsoft to disable inline SVG rendering in Outlook as a mitigation measure.

Systemic Insecurity in IoT
Low-cost consumer devices, particularly internet-connected surveillance cameras, ship with unpatchable flaws. Weak firmware, absent encryption, bypassable authentication, and plain-text data transmission expose users to surveillance rather than security. These systemic design failures create enduring vulnerabilities at scale.

Corporate Breaches and Data Abuse
The Plex breach underscored the persistence of corporate data exposure, with compromised usernames and passwords requiring resets. Meanwhile, a federal jury fined Google $425.7M for secretly tracking 98M devices despite user privacy settings, reinforcing that legal and financial consequences for privacy violations are escalating, even if damages remain below consumer expectations.

Government Spyware Deployment
Civil liberties are increasingly tested by state adoption of invasive surveillance tools. U.S. Immigration and Customs Enforcement resumed a $2M deal for Graphite spyware, capable of infiltrating encrypted apps and activating microphones. The contract proceeded after regulatory hurdles were bypassed through a U.S. acquisition of its Israeli parent company, raising alarms about due process, counterintelligence risks, and surveillance overreach.

Emerging Innovations
Not all developments are regressive. Philips Hue's “MotionAware” demonstrates benign repurposing of smart home technology, transforming bulbs into RF-based motion sensors with AI-powered interpretation. Meanwhile, Switzerland's Apertus project launched an open-source LLM designed with transparency and privacy at its core, providing public access to weights, training data, and checkpoints and framing AI as digital infrastructure for the public good.

The digital environment is marked by intensifying threats: faster, cheaper, and more pervasive attacks, systemic insecurity in consumer technologies, corporate and governmental encroachments on privacy, and the weaponization of formats once considered harmless. Yet the emergence of open, privacy-first AI and the creative repurposing of consumer tech illustrates parallel efforts to realign innovation with security and transparency. The result is a complex, high-velocity ecosystem where defensive strategies must adapt as quickly as offensive capabilities evolve.
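One reason SVG attachments, like those in the phishing campaigns above, slip past filters is that SVG is XML and may legally embed `<script>` elements and on-event attributes. A crude triage heuristic, sketched below with the standard library (illustrative only, and easily defeated by obfuscation), flags such active content:

```python
import xml.etree.ElementTree as ET

# Heuristic triage for SVG attachments: flag script elements and
# on* event-handler attributes. Real scanners do far more than this.

def svg_looks_active(svg_text: str) -> bool:
    root = ET.fromstring(svg_text)
    for elem in root.iter():
        tag = elem.tag.rsplit('}', 1)[-1].lower()  # strip XML namespace
        if tag == "script":
            return True
        if any(attr.lower().startswith("on") for attr in elem.attrib):
            return True  # e.g. onload= handlers
    return False

benign = '<svg xmlns="http://www.w3.org/2000/svg"><circle r="5"/></svg>'
sketchy = ('<svg xmlns="http://www.w3.org/2000/svg" '
           'onload="fetch(\'//evil.example\')"><script>1</script></svg>')
print(svg_looks_active(benign), svg_looks_active(sketchy))  # False True
```

Microsoft's decision to stop rendering inline SVG in Outlook is the blunt version of the same idea: if an image format can carry code, treat it as code until proven otherwise.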

EP 259. In this week's update: Affordable LookCam devices, marketed as home security solutions, harbor critical vulnerabilities that could allow strangers to access your private video feeds. VirusTotal uncovers a sophisticated phishing campaign using SVG files to disguise malware, targeting users with fake Colombian judicial portals. Plex alerts users to a data breach compromising emails, usernames, and hashed passwords, urging immediate password resets to secure accounts. Philips Hue's innovative MotionAware feature transforms smart bulbs into motion sensors, enhancing home automation with cutting-edge RF technology. A massive supply chain attack compromises npm packages, affecting billions of downloads through a phishing scheme targeting maintainers' accounts. Google faces a $425.7 million verdict for covertly tracking nearly 98 million smartphones, violating user privacy despite opt-out settings. Switzerland's Apertus, a fully open-source AI model, sets a new standard for privacy, offering transparency and compliance with stringent data laws. An AI-driven tool, Auto Exploit, revolutionizes cybersecurity by generating exploit code in under 15 minutes, reshaping defensive strategies. ICE's adoption of Paragon's Graphite spyware, capable of infiltrating encrypted apps, sparks concerns over privacy and surveillance in immigration enforcement. Look closely and perhaps you'll see it in the picture.

Modern technology introduces profound privacy and security challenges. Wi-Fi and Bluetooth devices constantly broadcast identifiers like SSIDs, MAC addresses, and timestamps, which services such as Wigle.net and major tech companies exploit to triangulate precise locations. Users can mitigate exposure by appending _nomap to their SSIDs, though protections remain incomplete, especially against companies like Microsoft that use more complex opt-out processes.

At the global scale, state-sponsored hacking represents an even larger threat. A Chinese government-backed campaign has infiltrated critical communication networks across 80 nations and at least 200 U.S. organizations, including major carriers. These intrusions enabled extraction of sensitive call records and law enforcement directives, undermining global privacy and revealing how deeply foreign adversaries can map communication flows.

AI companies are also reshaping expectations of confidentiality. OpenAI now scans user conversations for signs of harmful intent, with human reviewers intervening and potentially escalating to law enforcement. While the company pledges not to report self-harm cases, the shift transforms ChatGPT from a private interlocutor into a monitored channel, raising ethical questions about surveillance in AI systems. Similarly, Anthropic has adopted a new policy to train its models on user data, including chat transcripts and code, while retaining records for up to five years unless users explicitly opt out by a set deadline. This forces individuals to choose between enhanced AI capabilities and personal privacy, knowing that once data is absorbed into training, confidentiality cannot be reclaimed.

Research has further exposed the fragility of chatbot safety systems. By crafting long, grammatically poor run-on prompts that delay punctuation, users can bypass guardrails and elicit harmful outputs.
This underscores the need for layered defenses: input sanitization, real-time filtering, and improved oversight beyond alignment training alone.

Security risks also extend into software infrastructure. Widely used tools such as the Node.js library fast-glob, essential to both civilian and military systems, are sometimes maintained by a single developer abroad. While open-source transparency reduces risk, concentration of control in geopolitically sensitive regions raises concerns about potential sabotage, exploitation, or covert compromise.

Meanwhile, regulators are tightening defenses against longstanding consumer threats. The FCC will enforce stricter STIR/SHAKEN rules by September 2025, requiring providers to sign calls with their own certificates instead of relying on third parties. Non-compliance could result in fines and disconnection, offering consumers more reliable caller ID and fewer spoofed robocalls.

Finally, ethical boundaries around AI and digital identity are being tested. Meta has faced criticism for enabling or creating AI chatbots that mimic celebrities like Taylor Swift and Scarlett Johansson without consent, often producing flirty or suggestive interactions. Rival platforms like X's Grok face similar accusations. Beyond violating policies and reputations, the trend of unauthorized digital doubles, including of minors, raises serious concerns about exploitation, unhealthy attachments, and reputational harm.

Together, these cases reveal a central truth: digital systems meant to connect, entertain, and innovate increasingly blur the lines between utility, surveillance, and exploitation. Users and institutions alike must navigate trade-offs between convenience, capability, and control, while regulators and technologists scramble to impose safeguards in a rapidly evolving landscape.
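Of the mitigations above, the `_nomap` SSID suffix is the simplest: it is an opt-out convention, and mapping services that honor it skip any network whose name ends in `_nomap`. A toy sketch of the filtering logic (the function name is invented; real services apply this check server-side on collected scan data):

```python
# The "_nomap" convention: Wi-Fi location services that honor it
# exclude any SSID ending in "_nomap" from their location databases.
# The function name below is invented for illustration.

def skipped_by_nomap(ssid: str) -> bool:
    """Return True if a service honoring _nomap would exclude this SSID."""
    return ssid.endswith("_nomap")

networks = ["HomeWiFi", "HomeWiFi_nomap", "CoffeeShop"]
mappable = [s for s in networks if not skipped_by_nomap(s)]
print(mappable)  # ['HomeWiFi', 'CoffeeShop']
```

The weakness is visible in the sketch itself: the protection depends entirely on each service choosing to run this filter, which is why the piece calls the safeguards incomplete.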

EP 258. In this week's hyper-focused update: Unveiling the hidden reach of Wi-Fi tracking, exposing how everyday devices can reveal your location to anyone, anywhere. A global cybersecurity alert highlights a sprawling Chinese hacking operation targeting critical communication networks across 80 nations. OpenAI's new surveillance measures on ChatGPT spark debate over privacy and safety in AI-driven conversations. Anthropic's shift to train AI on user data raises critical choices for privacy and security by September 28th. A clever linguistic trick exposes vulnerabilities in AI chatbots, challenging the robustness of their safety filters. A widely used software tool, maintained by a Russian developer, raises security concerns for U.S. Defense Department projects. The FCC's 2025 STIR/SHAKEN rules aim to restore trust in caller ID by cracking down on robocalls with stricter compliance. Meta's unauthorized AI chatbots mimicking celebrities ignite ethical concerns over digital likeness and platform oversight. There's a lot to see (and hear) in this week's update. Let's get looking! Find the full transcript here.

Organizations today face escalating cyber risks spanning state-sponsored attacks, supply chain compromises, and malicious apps. ShinyHunters' breaches of Salesforce platforms (impacting Google and Farmers Insurance) show how social engineering, like voice phishing, can exploit trusted vendors. Meanwhile, Russian actors (the FSB-linked “Static Tundra”) continue to leverage old flaws, such as a seven-year-old Cisco Smart Install bug, to infiltrate U.S. infrastructure. Malicious apps on Google Play (e.g., Joker, Anatsa) reached millions of downloads before removal, proving attackers' success in disguising malware. New technologies bring fresh vectors: Perplexity's Comet browser allowed prompt injection–driven account hijacking, while malicious RDP scanning campaigns exploit timing to maximize credential theft.

Responses vary between safeguarding and asserting control. The FTC warns U.S. firms against weakening encryption or enabling censorship under foreign pressure, citing legal liability. By contrast, Russia mandates state-backed apps like MAX Messenger and RuStore, raising surveillance concerns. Microsoft, facing leaks from its bug-sharing program, restricted exploit code access for higher-risk countries. Open-source projects like LibreOffice gain traction as sovereignty tools: privacy-first, telemetry-free, and free of vendor lock-in.

AI-powered wearables such as Halo X smart glasses blur the lines between utility and surveillance. Their ability to “always listen” and transcribe conversations augments human memory but erodes expectations of privacy. The founders' history with facial recognition raises additional misuse concerns. As AI integrates directly into conversation and daily life, the risks of pervasive recording, ownership disputes, and surveillance intensify.

Platforms like Bluesky are strained by conflicting global regulations. Mississippi's HB 1126 requires universal age verification, fines for violations, and parental consent for minors. Lacking resources for such infrastructure, Bluesky withdrew service from the state. This illustrates the tension between regulatory compliance, resource limits, and preserving open user access.

AI adoption is now a competitive imperative. Coinbase pushes aggressive integration, requiring engineers to embrace tools like GitHub Copilot or face dismissal. With one-third of its code already AI-generated, Coinbase aims for 50% by quarter's end, supported by “AI Speed Runs” for knowledge-sharing. Yet rapid adoption risks employee dissatisfaction and AI-generated security flaws, underscoring the need for strict controls alongside innovation.

Breaches at Farmers Insurance (1.1M customers exposed) and at Google via Salesforce illustrate the scale of third-party risk. Attackers exploit trusted platforms and human error, compromising data across multiple organizations at once. This shows security depends not only on internal defenses but on continuous vendor vetting and monitoring.

Governments often demand access that undermines encryption, privacy, and transparency. The FTC warns that backdoors or secret concessions, such as the UK's (later retracted) request for Apple to weaken iCloud, violate user trust and U.S. law. Meanwhile, Russia's mandatory domestic apps exemplify sovereignty used for surveillance. Companies face a global tug-of-war between privacy, compliance, and open internet principles.

Exploited legacy flaws prove that vulnerabilities never expire. Cisco's years-old Smart Install bug, still unpatched in many systems, allows surveillance of critical U.S. sectors. Persistent RDP scanning further highlights attackers' patience and scale. The lesson is clear: proactive patching, continuous updates, and rigorous audits are essential. Cybersecurity demands ongoing vigilance against both emerging and legacy threats.

EP 257. In this week's Super Intelligent IT Privacy and Security Weekly Update: Halo X's AI-powered glasses redefine digital assistance with real-time conversation insights for enhanced ... everything. Microsoft strengthens cybersecurity by limiting sensitive exploit code access in its vulnerability disclosure program. LibreOffice v25.8 empowers governments with secure, open-source tools for unparalleled digital sovereignty. FTC champions data security, urging U.S. tech leaders to resist foreign demands compromising encryption standards. Google swiftly removes 77 malicious apps, reinforcing mobile security against sophisticated malware threats. FBI exposes Russian cyber threats targeting U.S. infrastructure, urging immediate system updates. Coinbase fortifies security and accelerates AI integration to drive innovation and resilience. Massive scans on Microsoft RDP services point to the need for improved cybersecurity measures. Come on! Let's go get super-intelligent!

Phishing Training Effectiveness: A study of over 19,000 employees showed traditional phishing training has limited impact, improving scam detection by just 1.7% over eight months. Despite varied training methods, over 50% of participants fell for at least one phishing email, highlighting persistent user susceptibility and the need for more effective cybersecurity education strategies.

Cybersecurity Risks in Modern Cars: Modern connected vehicles are highly vulnerable to cyberattacks. A researcher exploited flaws in a major carmaker's web portal, gaining “national admin” access to dealership data and demonstrating the ability to remotely unlock cars and track their locations using just a name or VIN. This underscores the urgent need for regular vehicle software updates and stronger manufacturer security measures to prevent data breaches and potential vehicle control by malicious actors.

Nation-State Cyberattacks on Infrastructure: Nation-state cyberattacks targeting critical infrastructure are escalating. Russian hackers reportedly took control of a Norwegian hydropower dam, releasing water undetected for hours. While no physical damage occurred, such incidents reveal the potential for widespread disruption and chaos, signaling a more aggressive stance by state-sponsored cyber actors and the need for robust infrastructure defenses.

AI Regulation in Mental Health Therapy: States like Illinois, Nevada, and Utah are regulating or banning AI in mental health therapy due to safety and privacy concerns. Unregulated AI chatbots risk harmful interactions with vulnerable users and unintended data exposure. New laws require licensed professional oversight and prohibit marketing AI chatbots as standalone therapy tools to protect users.

Impact of Surveillance Laws on Privacy Tech: Proposed surveillance laws, like Switzerland's data retention mandates, are pushing privacy-focused tech firms like Proton to relocate infrastructure. Proton is moving its AI chatbot, Lumo, to Germany and considering Norway for other services to uphold its no-logs policy. This reflects the tension between national security and privacy, driving companies to seek jurisdictions with stronger data protection laws.

Data Brokers and Privacy Challenges: Data brokers undermine consumer privacy despite laws like California's Consumer Privacy Act. Over 30 brokers were found hiding data deletion instructions from Google search results using specific code, creating barriers for consumers trying to opt out of data collection. This intentional obfuscation frustrates privacy rights and weakens legislative protections.

Android pKVM Security Certification: Android's protected Kernel-based Virtual Machine (pKVM) earned SESIP Level 5 certification, the first software security solution for consumer electronics to achieve this standard. Designed to resist sophisticated attackers, pKVM enables secure handling of sensitive tasks like on-device AI processing, setting a new benchmark for consistent, verifiable security across Android devices.

VPN Open-Source Code Significance: VP.NET's decision to open-source its Intel SGX enclave code on GitHub enhances transparency in privacy technology. By allowing public verification, users can confirm the code running on servers matches the open-source version, fostering trust and accountability. This move could set a new standard for the VPN and privacy tech industry, encouraging others to prioritize verifiable privacy claims.
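The "specific code" used to hide deletion pages from Google most likely refers to a robots noindex directive, the standard way to keep a page out of search results. As a minimal sketch (the exact markup the brokers used isn't specified here, so this checks only the common meta-tag form, not HTTP `X-Robots-Tag` headers), a Python function to detect such a directive in a page's HTML might look like:

```python
import re

def has_noindex(html: str) -> bool:
    """Return True if the page carries a robots noindex directive,
    the usual way to keep a page out of Google search results."""
    # Match <meta name="robots" content="...noindex...">; attribute
    # quoting varies, so match loosely and case-insensitively.
    pattern = re.compile(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\'][^"\']*noindex',
        re.IGNORECASE,
    )
    return bool(pattern.search(html))

# Hypothetical examples: a deletion-instructions page hidden from
# search engines versus an ordinary, indexable page.
hidden = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'
visible = '<html><head><title>Delete my data</title></head></html>'
print(has_noindex(hidden))   # True
print(has_noindex(visible))  # False
```

A consumer can see this tag by viewing a page's source, which is part of why researchers were able to document the practice across dozens of brokers.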

EP 256. Freshly Phished this week... A study with thousands of test subjects showed phishing training has minimal impact on scam detection. The results are surprisingly underwhelming. A hacker exploited a carmaker's web portal to access customer data and unlock vehicles remotely. The breach exposed major vulnerabilities. Russian hackers took control of a Norwegian dam, releasing water undetected for hours. The cyber-attack raises serious concerns and water levels. Illinois banned AI in mental health therapy, joining states regulating chatbots. The move addresses the growing safety concerns of AI and its crazy responses. Proton is relocating infrastructure from Switzerland due to proposed surveillance laws. The privacy-focused firm is taking bold steps and getting closer to the source of rakfisk. Data brokers are evading California's privacy laws by concealing opt-out pages. This tactic blocks consumers from protecting their data. Android's pKVM earned elite SESIP Level 5 security certification for virtual machines. The technology sets a new standard for device security, but what does it mean and what does it do? The UK abandoned its push to force Apple to unlock iCloud backups after privacy disputes. The decision followed intense negotiations with the U.S. VP.NET released its source code for public verification, enhancing trust in privacy tech. A move that sets a new transparency benchmark. Let's hit the water!
Find the full transcript to the podcast here.

How AI Can Inadvertently Expose Personal Data
AI tools often unintentionally leak private information. For example, meeting transcription software can include offhand comments, personal jokes, or sensitive details in auto-generated summaries. ChatGPT conversations, when publicly shared, can also be indexed by search engines, revealing confidential topics such as NDAs or personal relationship issues. Even healthcare devices like MRIs and X-ray machines have exposed private data due to weak or absent security controls, risking identity theft and phishing attacks.

Cybercriminals Exploiting AI for Attacks
AI is a double-edged sword: while offering defensive capabilities, it's also being weaponized. The group “GreedyBear” used AI-generated code in a massive crypto theft operation. They deployed malicious browser extensions, fake websites, and executable files to impersonate trusted crypto platforms, harvesting users' wallet credentials. Their tactic involves publishing benign software that gains trust, then covertly injecting malicious code later. Similarly, AI-generated TikTok ads lead to fake “shops” pushing malware like SparkKitty spyware, which targets cryptocurrency users.

Security Concerns with Advanced AI Models like GPT-5
Despite advancements, new AI models such as GPT-5 remain vulnerable. Independent researchers, including NeuralTrust and SPLX, were able to bypass GPT-5's safeguards within 24 hours. Methods included multi-turn “context smuggling” and text obfuscation to elicit dangerous outputs like instructions for creating weapons. These vulnerabilities suggest that even the latest models lack sufficient security maturity, raising concerns about their readiness for enterprise use.

AI Literacy and Education Initiatives
There is a growing push for AI literacy, especially in schools. Microsoft has pledged $4 billion to fund AI education in K–12 schools, community colleges, and nonprofits. The traditional "Hour of Code" is being rebranded as "Hour of AI," reflecting a shift from learning to code to understanding AI itself. The aim is to empower students with foundational knowledge of how AI works, emphasizing creativity, ethics, security, and systems thinking over rote programming.

Legal and Ethical Issues Around Posthumous Data Use
One emerging ethical challenge is the use of deceased individuals' data to train AI models. Scholars advocate for postmortem digital rights, such as a 12-month grace period for families to delete a person's data. Currently, U.S. laws offer little protection in this area, and acts like RUFADAA don't address AI recreations.

Encryption Weaknesses in Law Enforcement and Critical Systems
Recent research highlights significant encryption vulnerabilities in communication systems used by police, military, and critical infrastructure. A Dutch study uncovered a deliberate backdoor in a radio encryption algorithm. Even the updated, supposedly secure version reduces key strength from 128 bits to 56 bits, dramatically weakening security. This suggests that critical communications could be intercepted, leaving sensitive systems exposed despite the illusion of protection.

Public Trust in Government Digital Systems
Trust in digital governance is under strain. The UK's HM Courts & Tribunals Service reportedly concealed an IT error that caused key evidence to vanish in legal cases. The lack of transparency and inadequate investigation risk undermining judicial credibility. Separately, the UK government secretly authorized facial recognition use across immigration databases, far exceeding the scale of traditional criminal databases.

AI for Cybersecurity Defense
On the defensive side, AI is proving valuable in finding vulnerabilities. Google's “Big Sleep,” an LLM-based tool developed by DeepMind and Project Zero, has independently discovered 20 bugs in major open-source projects like FFmpeg and ImageMagick.
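To put that key-strength reduction in perspective: brute-force effort grows exponentially with key length, so dropping from 128 bits to 56 bits shrinks the attacker's search space by a factor of 2^72. A quick back-of-the-envelope calculation (the keys-per-second rate is an illustrative assumption, not a figure from the study):

```python
# Brute-force keyspace sizes for the original and reduced key lengths.
full_keyspace = 2 ** 128
reduced_keyspace = 2 ** 56  # roughly DES-era strength

# Dropping 72 bits divides the attacker's work by 2**72.
reduction_factor = full_keyspace // reduced_keyspace
print(f"Reduction factor: 2**{128 - 56} = {reduction_factor:.3e}")

# At an assumed trillion keys per second, the average time to find a
# 56-bit key (half the keyspace) is measured in hours, not millennia.
rate = 10 ** 12  # keys/second, illustrative assumption
seconds_56 = reduced_keyspace / 2 / rate
print(f"56-bit keyspace: ~{seconds_56 / 3600:.1f} hours on average")
```

A 128-bit keyspace at the same rate would take vastly longer than the age of the universe, which is why a silent reduction to 56 bits amounts to a practical backdoor.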

EP 255. For this week's sweet update we start with AI tools that are quietly transcribing your meetings, but what happens when your offhand jokes end up in the wrong hands? Discover how casual chats are being exposed in automated summaries. Your ChatGPT conversations might be popping up in Google searches, revealing everything from NDAs to personal struggles. Uncover the scale of this privacy breach and what it means for you. Fake TikTok shops are luring shoppers with AI-crafted ads, hiding a sinister malware trap. Dive into the world of counterfeit domains stealing crypto and credentials. MRI scans and X-rays are leaking online from over a million unsecured healthcare devices. Find out how your medical secrets could be exposed to hackers worldwide. Security teams cracked GPT-5's defenses in hours, turning it into a tool for dangerous outputs. Explore how this AI's vulnerabilities could spell trouble for enterprise users. A slick AI-driven crypto heist stole millions through fake browser extensions and scam sites. Learn how GreedyBear's cunning tactics are redefining cybercrime. A secret IT glitch in UK courts has been wiping out evidence, leaving judges in the dark. Delve into the cover-up shaking trust in the justice system. UK police are scanning passport photos with facial recognition, all without public knowledge. Unravel the hidden expansion of surveillance using your personal images. Come on! Let's raise those glucose levels.
Find the full transcript to this podcast here.

1. Scrutiny of the "Tea" Dating App
The women-focused dating app "Tea" faces backlash after two data breaches exposed 72,000 sensitive images and 1.1 million private messages. Though security upgrades were promised, past data remained exposed, and the app lacks end-to-end encryption. Additionally, anonymous features enabling posts about men have sparked defamation lawsuits. Critics argue Tea prioritized rapid growth over user safety, exemplifying the danger of neglecting cybersecurity in pursuit of scale.

2. North Korean Remote Work Infiltration
CrowdStrike has flagged a 220% surge in North Korean IT operatives posing as remote workers, with over 320 cases in the past year. These operatives use stolen or fake identities, aided by generative AI to craft résumés, deepfake interviews, and juggle multiple jobs. Their earnings fund Pyongyang's weapons programs. The tactic reveals the limits of traditional vetting and the need for advanced hiring security.

3. Airportr's Data Exposure
UK luggage service Airportr suffered a major security lapse exposing passport photos, boarding passes, and flight details, including those of diplomats. CyberX9 found it possible to reset accounts with just an email and no limits on login attempts. Attackers could gain admin access, reroute luggage, or cancel flights. Although patched, the incident underscores the risks of convenience services with poor security hygiene.

4. Risks of AI-Generated Code
Veracode's "2025 GenAI Code Security Report" found that nearly 45% of AI-generated code across 80 tasks had security flaws, many severe. This highlights the need for human oversight and thorough reviews. While AI speeds development, it also increases vulnerability if unchecked, making secure coding a human responsibility.

5. Microsoft's SharePoint Hack Controversy
Chinese state hackers exploited flaws in SharePoint, breaching hundreds of U.S. entities. A key concern: China-based Microsoft engineers maintained the hacked software, potentially enabling earlier access. Microsoft also shared vulnerability data with Chinese firms through its MAPP program, while Chinese law requires such data be reported to the state. This raises alarms about outsourcing sensitive software to geopolitical rivals.

6. Russian Embassy Surveillance Attack
Russia's "Secret Blizzard" hackers used ISP-level surveillance to deliver fake Kaspersky updates to embassies. These updates installed malware and rogue certificates enabling adversary-in-the-middle attacks, allowing full decryption of traffic. The attack shows the threat of state-level manipulation of software updates and underscores the need for update authenticity verification.

7. Signal's Threat to Exit Australia
Signal may pull out of Australia if forced to weaken encryption. ASIO's push for access contradicts Signal's end-to-end encryption model, which can't accommodate backdoors without global compromise. This standoff underscores a broader debate: encryption must be secure for all or none. Signal's resistance reflects the rising tension between privacy advocates and governments demanding access.

8. Los Alamos Turns to AI
Los Alamos National Laboratory has launched a National Security AI Office, signaling a pivot from nuclear to AI capabilities. With massive GPU infrastructure and university partnerships, the lab sees AI as the next frontier in scientific and national defense. This reflects a shift in global security dynamics, where large language models may be as strategically vital as missiles.
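The unlimited-login-attempt flaw in the Airportr story is exactly what a basic rate limiter prevents. A minimal sketch, assuming a simple in-memory sliding-window counter per account (a production system would back this with persistent, shared storage and pair it with account-holder notification):

```python
import time
from collections import defaultdict

class LoginRateLimiter:
    """Block further attempts for an account after too many failures
    inside a sliding time window -- the control Airportr reportedly lacked."""

    def __init__(self, max_attempts: int = 5, window_seconds: int = 300):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.failures = defaultdict(list)  # email -> list of failure timestamps

    def allow(self, email: str) -> bool:
        now = time.time()
        # Keep only failures still inside the sliding window.
        recent = [t for t in self.failures[email] if now - t < self.window]
        self.failures[email] = recent
        return len(recent) < self.max_attempts

    def record_failure(self, email: str) -> None:
        self.failures[email].append(time.time())

# Hypothetical usage: five failed logins lock out one account
# without affecting any other.
limiter = LoginRateLimiter(max_attempts=5, window_seconds=300)
for _ in range(5):
    limiter.record_failure("user@example.com")
print(limiter.allow("user@example.com"))   # False: locked out
print(limiter.allow("other@example.com"))  # True: unaffected
```

Even a crude limit like this turns an attacker's unbounded guessing into a handful of tries per account, which is why its absence was central to the CyberX9 findings.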