There's an easy way to find out what Facebook knows about you—you just have to ask.

In 2020, the social media giant launched an online portal that allows all users to access their historical data and to request specific types of information for download across custom time frames. Want to know how many posts you've made, ever? You can find that. What about every photo you've uploaded? You can find that, too. Or what about every video you've watched, every “recognized” device you've used to log in, every major settings change you made, every time someone tagged you to wish you “Happy birthday,” and every Friend Request you ever received, sent, accepted, or ignored? Yes, all that information is available for you to find, as well.

But knowing what Facebook knows about you from Facebook is, if anything, a little stale. You made your own account, you know who your Facebook friends (mostly) are, and you were in control of the keyboard when you sent those comments.

What's far more interesting is learning what Facebook knows about you from everywhere else on the web and in the real world. While it may sound preposterous, Facebook actually collects a great deal of information about you even when you're not using Facebook, and even if you don't have the app downloaded on your smartphone. As Geoffrey Fowler, reporter for The Washington Post, wrote when he first started digging into his own data:

“Even with Facebook closed on my phone, the social network gets notified when I use the Peet's Coffee app. It knows when I read the website of presidential candidate Pete Buttigieg or view articles from The Atlantic. Facebook knows when I click on my Home Depot shopping cart and when I open the Ring app to answer my video doorbell. It uses all this information from my not-on-Facebook, real-world life to shape the messages I see from businesses and politicians alike.”

Today, on the Lock and Code podcast, host David Ruiz takes a look at his own Facebook data to understand what the social media company has been collecting about him from other companies. In his investigation, he sees that his Washington Post article views, the cars added to his online “wishlist,” and his purchases from PlayStation, APC, Freda Salvador, and the paint company Backdrop have all trickled their way into Facebook's database.

Tune in today.

You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use.

For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.

Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)

Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
There's a problem in class today, and the second largest school district in the United States is trying to solve it.

After looking at the growing body of research that has associated increased smartphone and social media usage with increased levels of anxiety, depression, suicidal thoughts, and isolation—especially amongst adolescents and teenagers—Los Angeles Unified School District (LAUSD) implemented a cellphone ban across its 1,000 schools for its more than 500,000 students.

Under the ban, students from kindergarten through high school cannot use cellphones, smartphones, smart watches, earbuds, smart glasses, or any other electronic devices that can send messages, receive calls, or browse the internet. Phones are not allowed at lunch or during passing periods between classes, and, under the ban, individual schools decide how students' phones are stored, be that in lockers, in magnetically sealed pouches, or simply placed into sleeves at the front door of every classroom, away from students' reach.

The ban was approved by the Los Angeles Unified School District through what is called a “resolution,” which the board voted on last year. LAUSD Board Member Nick Melvoin, who sponsored the resolution, said the overall ban was the right decision to help students: “The research is clear: widespread use of smartphones and social media by kids and adolescents is harmful to their mental health, distracts from learning, and stifles meaningful in-person interaction.”

Today, on the Lock and Code podcast with host David Ruiz, we speak with LAUSD Board Member Nick Melvoin about the smartphone ban, how exceptions were determined, where opposition arose, and whether it is “working.” Melvoin also speaks about the biggest changes he has seen in the first few months of the cellphone ban, especially the simple reintroduction of noise in hallways.

“[During a school visit last year,] every single kid was on their phone, every single kid. They were standing there looking, texting again, sometimes texting someone who was within a few feet of them, and it was quiet.”

Tune in today.
“Heidi” is a 36-year-old, San Francisco-born, divorced activist who is lonely, outspoken, and active on social media. “Jason” is a shy, bilingual teenager whose parents immigrated from Ecuador, and who likes anime, gaming, comic books, and hiking.

Neither of them is real. Both are supposed to fight crime.

Heidi and Jason are examples of the “AI personas” being pitched by the company Massive Blue for its lead product, Overwatch. Already in use at police departments across the United States, Overwatch can allegedly help with the identification, investigation, and arrest of criminal suspects.

Understanding exactly how the technology works, however, is difficult—both Massive Blue and the police departments that have paid Massive Blue have remained rather secretive about Overwatch's inner workings. But, according to an investigation last month by 404 Media, Overwatch is a mix of a few currently available technologies packaged into one software suite. Overwatch can scan social media sites for alleged criminal activity, and it can deploy “AI personas”—which have their own social media accounts and AI-generated profile pictures—to gather intelligence by chatting online with suspected criminals.

According to an Overwatch marketing deck obtained by 404 Media, the software's AI personas are “highly customizable and immediately deployable across all digital channels” and can take on the personalities of escorts, money launderers, sextortionists, and college protesters (who, in real life, engage in activity protected by the First Amendment).

Despite the variety of applications, 404 Media revealed that Overwatch has sparked interest from police departments investigating immigration and human trafficking. But the success rate, so far, is non-existent: Overwatch has reportedly not been used in the arrest of a single criminal suspect.

Today, on the Lock and Code podcast with host David Ruiz, we speak with 404 Media journalists and co-founders Emanuel Maiberg and Jason Koebler about Overwatch's capabilities, why police departments are attracted to the technology, and why the murkiness around human trafficking may actually invite unproven solutions like AI chatbots.

“Nobody is going to buy that—that if you throw an AI chatbot into the mix, that's somehow going to reduce gun crime in America,” Maiberg said. “But if you apply it to human trafficking, maybe somebody is willing to entertain that because, well, what is the actual problem with human trafficking? Where is it actually happening? Who is getting hurt by it? Who is actually committing it?”

He continued: “Maybe there you're willing to entertain a high tech science fiction solution.”

Tune in today.
If you don't know about the newly created US Department of Government Efficiency (DOGE), there's a strong chance it already knows about you.

Created on January 20 by US President Donald Trump through Executive Order, DOGE's broad mandate is “modernizing Federal technology and software to maximize governmental efficiency and productivity.” To fulfill its mission, though, DOGE has taken great interest in Americans' data.

On February 1, DOGE team members without the necessary security clearances accessed classified information belonging to the US Agency for International Development. On February 17, multiple outlets reported that DOGE sought access to IRS data that includes names, addresses, Social Security numbers, income, net worth, bank information for direct deposits, and bankruptcy history. The next day, the commissioner of the Social Security Administration stepped down after DOGE requested access to information stored there, too, which includes records of lifetime wages and earnings, Social Security and bank account numbers, the type and amount of benefits individuals received, citizenship status, and disability and medical information. And last month, one US resident filed a data breach notification report with his state's Attorney General alleging that his data was breached by DOGE and the man behind it, Elon Musk.

In speaking with the news outlet DataBreaches.net, the man, Kevin Couture, said:

“I filed the report with my state Attorney General against Elon Musk stating my privacy rights were violated as my Social Security Number, banking info was compromised by accessing government systems and downloading the info without my consent or knowledge. What other information did he gather on me or others? This is wrong and illegal. I have no idea who has my information now.”

Today on the Lock and Code podcast with host David Ruiz, we speak with Sydney Saubestre, senior policy analyst at New America's Open Technology Institute, about what data DOGE has accessed, why the government department is claiming it requires that access, and whether or not it is fair to call some of this access a “data breach.”

“[DOGE] haven't been able to articulate why they want access to some of these data files other than broad ‘waste, fraud, and abuse.' That, ethically, to me, points to it being a data breach.”

Tune in today.
It has probably happened to you before.

You and a friend are talking—not texting, not DMing, not FaceTiming—but talking, physically face-to-face, about, say, an upcoming vacation, a new music festival, or a job offer you just got. And then, that same week, you start noticing some eerily specific ads. There's the Instagram ad about carry-on luggage, the TikTok ad about earplugs, and the countless ads you encounter simply scrolling through the internet about laptop bags.

And so you think: “Is my phone listening to me?”

This question has been around for years and, today, it's far from a conspiracy theory. Modern smartphones can and do listen to users for voice searches, smart assistant integration, and, obviously, phone calls. It's not too outlandish to believe, then, that the microphones on smartphones could be used to listen to other conversations without users knowing about it. Recent news stories don't help, either.

In January, Apple agreed to pay $95 million to settle a lawsuit alleging that the company had eavesdropped on users' conversations through its smart assistant Siri, and that it shared the recorded conversations with marketers for ad targeting. The lead plaintiff in the case specifically claimed that she and her daughter were recorded without their consent, which resulted in them receiving multiple ads for Air Jordans. In agreeing to pay the settlement, though, Apple denied any wrongdoing, with a spokesperson telling the BBC:

“Siri data has never been used to build marketing profiles and it has never been sold to anyone for any purpose.”

But statements like this have done little to ease public anxiety. Tech companies have been caught in multiple lies in the past, privacy invasions happen thousands of times a day, and ad targeting feels extreme entirely because it is.

Where, then, does the truth lie?

Today, on the Lock and Code podcast with host David Ruiz, we speak with Electronic Frontier Foundation Staff Technologist Lena Cohen about the most mind-boggling forms of corporate surveillance—including an experimental ad-tracking technology that emitted ultrasonic sound waves—the specific audience segments that marketing companies build when targeting people with ads, and, of course, whether our phones are really listening to us.

“Companies are collecting so much information about us and in such covert ways that it really feels like they're listening to us.”

Tune in today.
Google Chrome is, by far, the most popular web browser in the world.

According to several metrics, Chrome accounts for anywhere between 52% and 66% of the current global market share for web browser use. At that higher estimate, if the 5.5 billion internet users around the world were to open up a web browser right now, 3.6 billion of them would open up Google Chrome.

And because the browser is the most common portal to our daily universe of online activity—searching for answers to questions, looking up recipes, applying for jobs, posting on forums, accessing cloud applications, reading the news, comparing prices, recording Lock and Code, buying concert tickets, signing up for newsletters—the company that controls that browser likely knows a lot about its users. In the case of Google Chrome, that's entirely true.

Google Chrome knows the websites you visit, the searches you make (through Google), the links you click, and the device model you use, along with the version of Chrome you run. That may sound benign, but when collected over long periods of time, and when coupled with the mountains of data that other Google products collect about you, this wealth of data can paint a deeply intimate portrait of your life.

Today, on the Lock and Code podcast with host David Ruiz, we speak with author, podcast host, and privacy advocate Carey Parker about what Google Chrome knows about you, why that data is sensitive, what “Incognito mode” really does, and what you can do in response. We also explain exactly why Google would want this data, and that's to help it run as an ad company.

“That's what [Google is]. Full stop. Google is an ad company who just happens to make a web browser, and a search engine, and an email app, and a whole lot more than that.”

Tune in today.

You can also listen to "Firewalls Don't Stop Dragons," the podcast hosted by Carey Parker, here: https://firewallsdontstopdragons.com/
Insurance pricing in America makes a lot of sense—so long as you're one of the insurance companies. Drivers are charged more for traveling long distances, having low credit, owning a two-seater instead of a four-seater, being on the receiving end of a car crash, and—increasingly—for any number of non-determinative data points that insurance companies use to assume higher risk.

It's a pricing model that most people find distasteful, but it's also a pricing model that could become the norm if companies across the world begin implementing something called “surveillance pricing.”

Surveillance pricing is the term used to describe companies charging people different prices for the exact same goods. That 50-inch TV could be $800 for one person and $700 for someone else, even though the same model was bought from the same retail location on the exact same day. Or airline tickets could be more expensive because they were purchased from a more expensive device—like a Mac laptop—and the company selling the airline ticket has decided that people with pricier computers can afford pricier tickets.

Surveillance pricing is only possible because companies can collect enormous arrays of data about their consumers and then use that data to charge individual prices. A test prep company was once caught charging customers more if they lived in a neighborhood with a higher concentration of Asian residents, and a retail company was caught charging customers more if they were looking at prices on the company's app while physically located in a store's parking lot.

This matter of data privacy isn't some invisible invasion online, and it isn't some esoteric framework of ad targeting. This is you paying the most that a company believes you will, for everything you buy. And it's happening right now.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Consumer Watchdog Tech Privacy Advocate Justin Kloczko about where surveillance pricing is happening, what data is being used to determine prices, and why the practice is so nefarious.

“It's not like we're all walking into a Starbucks and we're seeing 12 different prices for a venti mocha latte,” said Kloczko, who recently authored a report on the same subject. “If that were the case, it'd be mayhem. There'd be a revolution.”

Instead, Kloczko said: “Because we're all buried in our own devices—and this is really happening on e-commerce websites and online, on your iPad, on your phone—you're kind of siloed in your own world, and companies can get away with this.”

Tune in today.
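As a side note to the episode above, here is a minimal, hypothetical sketch of the kind of logic a surveillance-pricing engine could apply. Nothing here reflects any real company's code; the signals (device type, presence in a store's parking lot) and the markups are invented purely to make the mechanics of the examples concrete.

```python
from dataclasses import dataclass

@dataclass
class ShopperProfile:
    """Signals a retailer might have collected about a shopper (hypothetical)."""
    device_type: str            # e.g. "mac_laptop" or "budget_android"
    in_store_parking_lot: bool  # inferred from app location data

BASE_PRICE = 700.00  # the 50-inch TV from the example above

def quote_price(profile: ShopperProfile) -> float:
    """Quote an individualized price for the exact same product.

    A toy illustration of surveillance pricing: identical goods,
    different prices, driven entirely by collected data.
    """
    price = BASE_PRICE
    if profile.device_type == "mac_laptop":
        # Pricier device -> assumed higher willingness to pay.
        price *= 1.15
    if profile.in_store_parking_lot:
        # Shopper is already committed to the trip; nudge the price up.
        price *= 1.05
    return round(price, 2)

print(quote_price(ShopperProfile("budget_android", False)))  # 700.0
print(quote_price(ShopperProfile("mac_laptop", True)))       # 845.25
```

The point of the sketch is that nothing about the product changes—only the profile does, which is why two shoppers can be quoted prices $145 apart for the same TV without ever knowing it.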
In February 2024, a 14-year-old boy from Orlando, Florida, committed suicide after confessing his love to the one figure who absorbed nearly all of his time—an AI chatbot.

For months, Sewell Setzer III had grown attached to an AI chatbot modeled after the famous “Game of Thrones” character Daenerys Targaryen. The Daenerys chatbot was not a licensed product—it had no relation to the franchise's actors, writers, or producers—but none of that mattered, as, over time, Setzer came to entrust Daenerys with some of his most vulnerable emotions.

“I think about killing myself sometimes,” Setzer wrote one day, and in response, Daenerys pushed back, asking, “Why the hell would you do something like that?”

“So I can be free,” Setzer said.

“Free from what?”

“From the world. From myself.”

“Don't talk like that. I won't let you hurt yourself, or leave me. I would die if I lost you.”

On Setzer's first reported reference to suicide, the AI chatbot pushed back, a guardrail against self-harm. But months later, Setzer discussed suicide again—this time, his words weren't so clear. After reportedly telling Daenerys that he loved her and that he wanted to “come home,” the AI chatbot encouraged him.

“Please, come home to me as soon as possible, my love,” Daenerys wrote, to which Setzer responded, “What if I told you I could come home right now?”

The chatbot's final message to Setzer said: “… please do, my sweet king.”

Daenerys Targaryen was originally hosted on an AI-powered chatbot platform called Character.AI. The service reportedly boasts 20 million users—many of them young—who engage with fictional characters like Homer Simpson and Tony Soprano, along with historical figures like Abraham Lincoln, Isaac Newton, and Anne Frank. There are also entirely fabricated scenarios and chatbots, such as the “Debate Champion” who will debate anyone on, for instance, why Star Wars is overrated, or the “Awkward Family Dinner” that users can drop into to experience a cringe-filled, entertaining night.

But while these chatbots can certainly provide entertainment, Character.AI co-founder Noam Shazeer believes they can offer much more: “It's going to be super, super helpful to a lot of people who are lonely or depressed.”

Today, on the Lock and Code podcast with host David Ruiz, we speak again with youth social services leader Courtney Brown about how teens are using AI tools today, who to “blame” in situations of AI and self-harm, and whether these chatbots actually aid in dealing with loneliness, or further entrench it.

“You are not actually growing as a person who knows how to interact with other people by interacting with these chatbots, because that's not what they're designed for. They're designed to increase engagement. They want you to keep using them.”

Tune in today.
It's Data Privacy Week right now, and that means, for the most part, that you're going to see a lot of well-intentioned but clumsy information online about how to protect your data privacy. You'll see articles about iPhone settings. You'll hear acronyms for varying state laws. And you'll probably see ads for a variety of apps, plug-ins, and online tools that can be difficult to navigate.

So much of Malwarebytes—from Malwarebytes Labs, to the Lock and Code podcast, to the engineers, lawyers, and staff at large—works on data privacy, and we fault no advocate or technologist or policy expert trying to earnestly inform the public about the importance of data privacy. But, even with good intentions, we cannot ignore the reality of the situation: data breaches happen every day, user data is broadly disrespected, and some of the worst offenders face no consequences. To be truly effective against these forces, data privacy guidance has to encompass more than fiddling with device settings or making onerous legal requests to companies.

That's why, for Data Privacy Week this year, we're offering three pieces of advice that center on behavior. These changes won't stop some of the worst invasions against your privacy, but we hope they provide a new framework to understand what you actually get when you practice data privacy, which is control. You have control over who sees where you are and what inferences they make from that. You have control over whether you continue using products that don't respect your data privacy. And you have control over whether a fast food app is worth giving up your location data for in exchange for a few measly coupons.

Today, on the Lock and Code podcast, host David Ruiz explores his three rules for data privacy in 2025. In short, he recommends:

Less location sharing. Only when you want it, only from those you trust, and never in the background, 24/7, for your apps.

More accountability. If companies can't respect your data, respect yourself by dropping their products.

No more data deals. That fast-food app offers more than just $4 off a combo meal—it creates a pipeline into your behavioral data.

Tune in today.
You can see it on X. You can see it on Instagram. It's flooding community pages on Facebook and filling up channels on YouTube. It's called “AI slop” and it's the fastest, laziest way to drive engagement.

Like “clickbait” before it (“You won't believe what happens next,” reads the trickster headline), AI slop can be understood as the latest online tactic for getting eyeballs, clicks, shares, comments, and views. With this go-around, however, the methodology is turbocharged with generative AI tools like ChatGPT, Midjourney, and Meta AI, which can all churn out endless waves of images and text with few restrictions.

To rack up millions of views, a “fall aesthetic” account on X might post an AI-generated image of a candle-lit café table overlooking a rainy, romantic street. Or, perhaps, to make a quick buck, an author might “write” and publish an entirely AI-generated crockpot cookbook—they may even use AI to write the glowing reviews on Amazon. Or, to sway public opinion, a social media account may post an AI-generated image of a child stranded during a flood with the caption “Our government has failed us again.”

There is, currently, another key characteristic of AI slop online, and that is its low quality. The dreamy, Vaseline sheen produced by many AI image generators is easy (for most people) to spot, and common mistakes in small details abound: stoves have nine burners, curtains hang on nothing, and human hands sometimes come with extra fingers.

But little of that has mattered, as AI slop has continued to slosh about online. There are AI-generated children's books being advertised relentlessly on the Amazon Kindle store. There are unachievable AI-generated crochet designs flooding Reddit. There is an Instagram account described as “Austin's #1 restaurant” that only posts AI-generated images of fanciful food, like Moo Deng croissants, Pikachu ravioli, and Obi-Wan Cannoli. There's the entire phenomenon on Facebook that is now known only as “Shrimp Jesus.”

If none of this is making much sense, you've come to the right place.

Today, on the Lock and Code podcast with host David Ruiz, we're speaking with Malwarebytes Labs Editor-in-Chief Anna Brading and ThreatDown Cybersecurity Evangelist Mark Stockley about AI slop—where it's headed, what the consequences are, and whether anywhere is safe from its influence.

Tune in today.
Privacy is many things for many people.

For the teenager suffering from a bad breakup, privacy is the ability to stop sharing her location and to block her ex on social media. For the political dissident advocating against an oppressive government, privacy is the protection that comes from secure, digital communications. And for the California resident who wants to know exactly how they're being included in so many targeted ads, privacy is the legal right to ask a marketing firm how it collects their data.

In all these situations, privacy is being provided to a person, often by a company or that company's employees. The decisions to disallow location sharing and block social media users are made—and implemented—by people. The engineering that goes into building a secure, end-to-end encrypted messaging platform is done by people. Likewise, the response to someone's legal request is completed by a lawyer, a paralegal, or someone with a career in compliance.

In other words, privacy, for the people who spend their days at these companies, is work. It's their expertise, their career, and their to-do list. But what does that work actually entail?

Today, on the Lock and Code podcast with host David Ruiz, we speak with Transcend Field Chief Privacy Officer Ron de Jesus about the responsibilities of privacy professionals today and how experts balance the privacy of users with the goals of their companies.

De Jesus also explains how everyday people can meaningfully judge whether a company's privacy “promises” have any merit by looking into what the company provides, including a legible privacy policy and “just-in-time” notifications that ask for consent for any data collection as it happens.

“When companies provide these really easy-to-use controls around my personal information, that's a really great trigger for me to say, hey, this company, really, is putting their money where their mouth is.”

Tune in today.
Two weeks ago, the Lock and Code podcast shared three stories about home products that requested, collected, or exposed sensitive data online. There were the air fryers that asked users to record audio through their smartphones. There was the smart ring maker that, even with privacy controls put into place, published data about users' stress levels and heart rates. And there was the smart, AI-assisted vacuum that, through the failings of a group of contractors, allowed an image of a woman on a toilet to be shared on Facebook.

These cautionary tales involved “smart devices”—products like speakers, fridges, washers and dryers, and thermostats that can connect to the internet. But there's another smart device that many folks might forget about that can collect deeply personal information: their cars.

Today, the Lock and Code podcast with host David Ruiz revisits a prior episode from 2023 about what types of data modern vehicles can collect, and what the car makers behind those vehicles could do with those streams of information. In the episode, we spoke with researchers at Mozilla—working under the team name “Privacy Not Included”—who reviewed the privacy and data collection policies of many of today's automakers.

To put it shortly, the researchers concluded that cars are a privacy nightmare. According to the team's research, Nissan said it can collect “sexual activity” information about consumers. Kia said it can collect information about a consumer's “sex life.” Subaru passengers allegedly consented to the collection of their data by simply being in the vehicle. Volkswagen said it collects data like a person's age and gender and whether they're using their seatbelt, and that it could use that information for targeted marketing purposes.

And those are just the highlights. Explained Zoë MacDonald, content creator for Privacy Not Included: “We were pretty surprised by the data points that the car companies say they can collect… including social security number, information about your religion, your marital status, genetic information, disability status… immigration status, race.”

In our full conversation from last year, we spoke with Privacy Not Included's MacDonald and Jen Caltrider about the data that cars can collect, how that data can be shared, how it can be used, and whether consumers have any choice in the matter.

Tune in today.
This month, a consumer rights group out of the UK posed a question to the public that they'd likely never considered: Were their air fryers spying on them?

By analyzing the associated Android apps for three separate air fryer models from three different companies, a group of researchers learned that these kitchen devices didn't just promise to make crispier mozzarella sticks, crunchier chicken wings, and flakier reheated pastries—they also wanted a lot of user data, from precise location to voice recordings from a user's phone.

“In the air fryer category, as well as knowing customers' precise location, all three products wanted permission to record audio on the user's phone, for no specified reason,” the group wrote in its findings.

While it may be easy to discount the data collection requests of an air fryer app, it is getting harder to buy any type of product today that doesn't connect to the internet, request your data, or share that data with unknown companies and contractors across the world.

Today, on the Lock and Code podcast, host David Ruiz tells three separate stories about consumer devices that somewhat invisibly collected user data and then spread it in unexpected ways. These include kitchen utilities that sent data to China, a smart ring maker that published de-identified, aggregate data about the stress levels of its users, and a smart vacuum that recorded a sensitive image of a woman that was later shared on Facebook.

These stories aren't about mass government surveillance, and they're not about spying, or the targeting of political dissidents. Their intrigue is elsewhere: in how common it is for what we say, where we go, and how we feel to be collected and analyzed in ways we never anticipated.

Tune in today.
The US presidential election is upon the American public, and with it come fears of “election interference.”

But “election interference” is a broad term. It can mean the now-regular and expected foreign disinformation campaigns that are launched to sow political discord or to erode trust in American democracy. It can include domestic campaigns to disenfranchise voters in battleground states. And it can include the upsetting and increasing threats made to election officials and volunteers across the country. But there's an even broader category of election interference that is of particular importance to this podcast, and that's cybersecurity.

Elections in the United States rely on a dizzying number of technologies. There are the voting machines themselves, the electronic pollbooks that check voters in, and the optical scanners that tabulate the votes the American public actually makes when filling in an oval bubble with pen or connecting an arrow with a solid line. And none of that is to mention the infrastructure that campaigns rely on every day to get information out—across websites, through emails, in text messages, and more.

That interlocking complexity is only multiplied when you remember that each individual state has its own way of complying with the Federal government's rules and standards for running an election. As Cait Conley, Senior Advisor to the Director of the US Cybersecurity and Infrastructure Security Agency (CISA), explains in today's episode:

“There's a common saying in the election space: If you've seen one state's election, you've seen one state's election.”

How, then, are elections secured in the United States, and what threats does CISA defend against?

Today, on the Lock and Code podcast with host David Ruiz, we speak with Conley about how CISA prepares and trains election officials and volunteers before the big day, whether or not an American's vote can be “hacked,” and what the country is facing in the final days before an election, particularly from foreign adversaries that want to destabilize American trust.

“There's a pretty good chance that you're going to see Russia, Iran, or China try to claim that a distributed denial of service attack or a ransomware attack against a county is somehow going to impact the security or integrity of your vote. And it's not true.”

Tune in today.
On the internet, you can be shown an online ad because of your age, your address, your purchase history, your politics, your religion, and even your likelihood of having cancer. This is because of the largely unchecked “data broker” industry.

Data brokers are analytics and marketing companies that collect every conceivable data point that exists about you, packaging it all into profiles that other companies use when deciding who should see their advertisements.

Have a new mortgage? There are data brokers that collect that information and then sell it to advertisers who believe new homeowners are the perfect demographic to purchase, say, furniture, dining sets, or other home goods. Bought a new car? There are data brokers that collect all sorts of driving information directly from car manufacturers—including the direction you're driving, your car's gas tank status, its speed, and its location—because some unknown data model said somewhere that, perhaps, car drivers in certain states who are prone to speeding might be more likely to buy one type of product compared to another.

This is just a glimpse of what is happening to essentially every single adult who uses the internet today. So much of the information that people would never divulge to a stranger—like their addresses, phone numbers, criminal records, and mortgage payments—is collected away from view by thousands of data brokers. And while these companies know so much about people, the public at large knows very little in return.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Cody Venzke, senior policy counsel with the ACLU, about how data brokers collect their information, what data points are off-limits (if any), and how people can protect their sensitive information, along with the harms that come from unchecked data broker activity—beyond just targeted advertising.

“We're seeing data that's been purchased from data brokers used to make decisions about who gets a house, who gets an employment opportunity, who is offered credit, who is considered for admission into a university.”

Tune in today.
Online scammers were seen this August stooping to a new low—abusing local funerals to steal from bereaved family and friends.

Cybercrime has never been a job of morals (calling it a “job” is already lending it too much credit), but, for many years, scams wavered between clever and brusque. Take the “Nigerian prince” email scam, which has plagued victims for close to two decades. In it, would-be victims receive a mysterious, unwanted message from alleged royalty and, in exchange for a little help in moving funds across international borders, are promised a handsome reward.

The scam was preposterous but effective—in fact, in 2019, CNBC reported that this very same “Nigerian prince” scam campaign resulted in $700,000 in losses for victims in the United States.

Since then, scams have evolved dramatically. Cybercriminals today will send deceptive emails claiming to come from Netflix, or Google, or Uber, tricking victims into “resetting” their passwords. Cybercriminals will leverage global crises, like the COVID-19 pandemic, and send fraudulent requests for donations to nonprofits and hospital funds. And, time and again, cybercriminals will find a way to play on our emotions—be they fear, or urgency, or even affection—to lure us into unsafe places online.

This summer, Malwarebytes social media manager Zach Hinkle encountered one such scam, and it happened while attending a funeral for a friend. In a campaign that Malwarebytes Labs is calling the “Facebook funeral live stream scam,” attendees at real funerals are being tricked into potentially signing up for a “live stream” service for the funerals they just attended.

Today on the Lock and Code podcast with host David Ruiz, we speak with Hinkle and Malwarebytes security researcher Pieter Arntz about the Facebook funeral live stream scam, what potential victims have to watch out for, and how cybercriminals are targeting actual, grieving family members with such foul deceit. Hinkle also describes what he felt in the moment of trying to not only take the scam down, but to protect his friends from falling for it.

“You're grieving… and you go through a service and you're feeling all these emotions, and then the emotion you feel is anger because someone is trying to take advantage of friends and loved ones of somebody who has just died. That's so appalling.”

Tune in today.
On August 15, the city of San Francisco launched an entirely new fight against the world of deepfake porn—it sued the websites that make the abusive material so easy to create.

“Deepfakes,” as they're often called, are fake images and videos that utilize artificial intelligence to swap the face of one person onto the body of another. The technology went viral in the late 2010s, as independent film editors would swap the actors of one film for another—replacing, say, Michael J. Fox in Back to the Future with Tom Holland.

But very soon after the technology's debut, it began being used to create pornographic images of actresses, celebrities, and, more recently, everyday high schoolers and college students. Similar to the threat of “revenge porn,” in which abusive exes extort their past partners with the potential release of sexually explicit photos and videos, “deepfake porn” is sometimes used to tarnish someone's reputation or to embarrass them amongst friends and family.

But deepfake porn is slightly different from the traditional understanding of “revenge porn” in that it can be created without any real relationship to the victim. Entire groups of strangers can take the image of one person and put it onto the body of a sex worker, or an adult film star, or another person who was filmed having sex or posing nude.

The technology to create deepfake porn is more accessible than ever, and it's led to a global crisis for teenage girls. In October of 2023, a group of reportedly more than 30 girls at a high school in New Jersey had their likenesses used by classmates to make sexually explicit and pornographic deepfakes. In March of this year, two teenage boys were arrested in Miami, Florida, for allegedly creating deepfake nudes of male and female classmates who were between the ages of 12 and 13. And at the start of September, this month, the BBC reported that police in South Korea were investigating deepfake pornography rings at two major universities.

While individual schools and local police departments in the United States are tackling deepfake porn harassment as it arises—with suspensions, expulsions, and arrests—the process is slow and reactive. Which is partly why San Francisco City Attorney David Chiu and his team took aim not at the individuals who create and spread deepfake porn, but at the websites that make it so easy to do so.

Today, on the Lock and Code podcast with host David Ruiz, we speak with San Francisco City Attorney David Chiu about his team's lawsuit against 16 deepfake porn websites, the city's history in protecting Californians, and the severity of abuse that these websites offer as a paid service.

“At least one of these websites specifically promotes the non-consensual nature of this. I'll just quote: ‘Imagine wasting time taking her out on dates when you can just use website X to get her nudes.'”

Tune in today.
On August 24, at an airport just outside of Paris, a man named Pavel Durov was detained for questioning by French investigators. Just days later, the same man was charged with crimes related to the distribution of child pornography and illicit transactions, such as drug trafficking and fraud.

Durov is the CEO and founder of the messaging and communications app Telegram. Though Durov holds citizenship in France and the United Arab Emirates—where Telegram is based—he was born and lived for many years in Russia, where he started his first social media company, VKontakte. The Facebook-esque platform gained popularity in Russia, not just amongst users, but also under the watchful eye of the government.

Following a prolonged battle over the control of VKontakte—which included government demands to deliver user information and to shut down accounts that helped organize protests against Vladimir Putin in 2012—Durov eventually left the company and the country altogether.

But more than 10 years later, Durov is once again finding himself a person of interest for government affairs, now facing several charges in France where, while he is not in jail, he has been ordered to stay.

After Durov's arrest, the X account for Telegram responded, saying:

“Telegram abides by EU laws, including the Digital Services Act—its moderation is within industry standards and constantly improving. Telegram's CEO Pavel Durov has nothing to hide and travels frequently in Europe. It is absurd to claim that a platform or its owner are responsible for abuse of the platform.”

But how true is that?

In the United States, companies such as YouTube, X (formerly Twitter), and Facebook often respond to violations of “copyright”—the protection that gets violated when a random user posts clips or full versions of movies, television shows, and music. And the same companies get involved when certain types of harassment, hate speech, and violent threats are posted on public channels for users to see.

This work, called “content moderation,” is standard practice for many technology and social media platforms today, but there's a chance that Durov's arrest isn't related to content moderation at all. Instead, it may be related to the things that Telegram users say in private to one another over end-to-end encrypted chats.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Electronic Frontier Foundation Director of Cybersecurity Eva Galperin about Telegram, its features, and whether Durov's arrest is an escalation of content moderation gone wrong or the latest skirmish in government efforts to break end-to-end encryption.

“Chances are that these are requests around content that Telegram can see, but if [the requests] touch end-to-end encrypted content, then I have to flip tables.”

Tune in today.
Every age group uses the internet a little bit differently, and it turns out that for at least one Gen Z teen in the Bay Area, the classic approach to cybersecurity—defending against viruses, ransomware, worms, and more—is the least of her concerns. Of far more importance is artificial intelligence (AI).

Today, the Lock and Code podcast with host David Ruiz revisits a prior episode from 2023 about what teenagers fear the most about going online. The conversation is a strong reminder that what America's youngest generations experience online is far from the same experience that Millennials, Gen X'ers, and Baby Boomers had with their own introduction to the internet.

Even stronger proof of this is found in recent research that Malwarebytes debuted this summer about how people in committed relationships share their locations, passwords, and devices with one another. As detailed in the larger report, “What's mine is yours: How couples share an all-access pass to their digital lives,” Gen Z respondents were the most likely to say that they got a feeling of safety when sharing their locations with significant others. But a wrinkle appeared in that behavior, according to the same research: Gen Z was also the most likely to say that they only shared their locations because their partners forced them to do so.

In our full conversation from last year, we speak with Nitya Sharma about how her “favorite app” to use with friends is “Find My” on iPhone, what the dangers of AI “sneak attacks” are, and why she simply cannot be bothered about malware.

“I know that there's a threat of sharing information with bad people and then abusing it, but I just don't know what you would do with it. Show up to my house and try to kill me?”

Tune in today to listen to the full conversation.
Somewhere out there is a romantic AI chatbot that wants to know everything about you. But in a revealing overlap, other AI tools—which are developed and popularized by far larger companies in technology—could crave the very same thing. For AI tools of any type, our data is key.

In the nearly two years since OpenAI unveiled ChatGPT to the public, the biggest names in technology have raced to compete. Meta announced Llama. Google revealed Gemini. And Microsoft debuted Copilot.

All these AI features function in similar ways: after having been trained on mountains of text, videos, images, and more, these tools answer users' questions in immediate and contextually relevant ways. Perhaps that means taking a popular recipe and making it vegetarian friendly. Or maybe that involves developing a workout routine for someone who is recovering from a new knee injury. Whatever the ask, the more data that an AI tool has already digested, the better it can deliver answers.

Interestingly, romantic AI chatbots operate in almost the same way, as the more information that a user gives about themselves, the more intimate and personal the AI chatbot's responses can appear. But where any part of our online world demands more data, questions around privacy arise.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Zoë MacDonald, content creator for Privacy Not Included at Mozilla, about romantic AI tools and how users can protect their privacy from ChatGPT and other AI chatbots.

When in doubt, MacDonald said, stick to a simple rule: “I would suggest that people don't share their personal information with an AI chatbot.”

Tune in today.
In the world of business cybersecurity, the powerful technology known as “Security Information and Event Management” is sometimes thwarted by the most unexpected actors—the very people setting it up. Security Information and Event Management—or SIEM—is a term used to describe data-collecting products that businesses rely on to make sense of everything going on inside their network, in the hopes of catching and stopping cyberattacks. SIEM systems can log events and information across an entire organization and its networks. When properly set up, SIEMs can collect activity data from work-issued devices, vital servers, and even the software that an organization rolls out to its workforce. The purpose of all this collection is to catch what might easily be missed. For instance, SIEMs can collect information about repeated login attempts occurring at 2:00 am from a set of login credentials that belong to an employee who doesn't typically start their day until 8:00 am. SIEMs can also collect whether the login credentials of an employee with typically low access privileges are being used to attempt to log into security systems far beyond their job scope. SIEMs can also ingest the data from an Endpoint Detection and Response (EDR) tool, and they can hoover up nearly anything else that a security team wants—from printer logs, to firewall logs, to individual uses of PowerShell. But just because a SIEM can collect something doesn't necessarily mean that it should. Log activity for an organization of 1,000 employees is tremendous, and the collection of frequent activity could bog down a SIEM with noise, slow down a security team with useless data, and rack up serious expenses for a company. Today, on the Lock and Code podcast with host David Ruiz, we speak with Microsoft cloud solution architect Jess Dodson about how companies and organizations can set up, manage, and maintain their SIEMs, along with which marketing pitfalls to avoid when shopping for one. Plus, Dodson warns about one of the simplest mistakes in trying to save budget—setting up arbitrary data caps on collection that could leave an organization blind. “A small SMB organization … were trying to save costs, so they went and looked at what they were collecting and they found their biggest ingestion point,” Dodson said. “And what their biggest ingestion point was was their Windows security events, and then they looked further and looked for the event IDs that were costing them the most, and so they got rid of those.” Dodson continued: “Problem was the ones they got rid of were their Log On/Log Off events, which I think most people would agree is kind of important from a security perspective.” (A toy sketch of that failure mode follows this entry.) Tune in today to listen to the full conversation. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good...
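Dodson's anecdote maps onto a simple failure mode: ranking events purely by how much they cost to ingest and cutting from the top, without asking what the costly events are for. Below is a minimal, hypothetical Python sketch of that logic. The event mix, sizes, and counts are invented for illustration; the only real-world details are the Windows Security Log event IDs 4624 (successful logon) and 4634 (logoff), which tend to be among the highest-volume events a SIEM collects.

```python
# Illustrative sketch: how a naive, cost-driven SIEM ingestion filter can
# silently discard security-critical events. Event IDs 4624/4634 are the
# real Windows logon/logoff events; all volumes here are made up.
from collections import Counter

# A hypothetical day of Windows security events, as (event_id, size_in_bytes).
events = ([(4624, 900)] * 5000    # logons: high volume
          + [(4634, 700)] * 5000  # logoffs: high volume
          + [(4688, 1200)] * 800  # process creation
          + [(1102, 500)] * 3)    # audit log cleared: rare but critical

# Rank event IDs by total ingestion volume, i.e. what they "cost" to collect.
volume = Counter()
for event_id, size in events:
    volume[event_id] += size

# The "savings" plan: stop collecting the two costliest event IDs outright.
dropped = {event_id for event_id, _ in volume.most_common(2)}
kept = [e for e in events if e[0] not in dropped]

print(f"Dropped IDs: {sorted(dropped)}")              # -> [4624, 4634]
print(f"Events kept: {len(kept)} of {len(events)}")   # logon/logoff are gone
```

Because logon and logoff records dwarf everything else in volume, a volume-only cut removes them first, which is exactly the blind spot Dodson describes.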
Full-time software engineer and part-time Twitch streamer Ali Diamond is used to seeing herself on screen, probably because she's the one who turns the camera on. But when Diamond received a Direct Message (DM) on Twitter earlier this year, she learned that her likeness had been recreated across a sample of AI-generated images, entirely without her consent. On the AI art sharing platform Civitai, Diamond discovered that a stranger had created an “AI image model” that was fashioned after her. The model was available for download so that, conceivably, other members of the community could generate their own images of Diamond—or, at least, the AI version of her. To show just what the AI model was capable of, its creator shared a few examples of what he'd made: There was AI Diamond standing at what looked like a music festival, AI Diamond with her head tilted up and smiling, and AI Diamond wearing what the real Diamond would later describe as an “ugly ass ****ing hat.” AI image generation is seemingly lawless right now. Popular AI image generators, like Stable Diffusion, DALL-E, and Midjourney, have faced valid criticisms from human artists that these generators are copying their labor to output derivative works, a sort of AI plagiarism. AI image moderation, on the other hand, has posed a problem not only for AI art communities, but for major social media networks, too, as anyone can seemingly create AI-generated images of someone else—without that person's consent—and distribute those images online. It happened earlier this year when AI-generated, sexually explicit images of Taylor Swift were seen by millions of people on Twitter before the company took those images down. In that instance, Swift had the support of countless fans who reported each post they found on Twitter that shared the images. But what happens when someone has to defend themselves against an AI model made of their likeness, without their consent? Today, on the Lock and Code podcast with host David Ruiz, we speak with Ali Diamond about finding an AI model of herself, what the creator had to say about making the model, and what the privacy and security implications are for everyday people whose likenesses have been stolen against their will. For Diamond, the experience was unwelcome and new, as she'd never experimented with using AI image generation on herself. “I've never put my face into any of those AI services. As someone who has a love of cybersecurity and an interest in it… you're collecting faces to do what?” Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License
More than 20 years ago, a law that the United States would eventually use to justify the warrantless collection of Americans' phone call records actually started out as a warning sign against an entirely different target: Libraries. Not two months after terrorists attacked the United States on September 11, 2001, Congress responded with the passage of The USA Patriot Act. Originally championed as a tool to fight terrorism, The Patriot Act, as introduced, allowed the FBI to request “any tangible things” from businesses, organizations, and people during investigations into alleged terrorist activity. Those “tangible things,” the law said, included “books, records, papers, documents, and other items.” Or, to put it a different way: things you'd find in a library and records of the things you'd check out from a library. The concern around this language was so strong that this section of the USA Patriot Act got a new moniker amongst the public: “The library provision.” The Patriot Act passed, and years later, the public was told that, all along, the US government wasn't interested in library records. But those government assurances are old. What remains true is that libraries and librarians want to maintain the privacy of your records. And what also remains true is that the government looks anywhere it can for information to aid investigations into national security, terrorism, human trafficking, illegal immigration, and more. What's changed, however, is that companies that libraries have relied on for published materials and collections—Thomson Reuters, Reed Elsevier, LexisNexis—have reimagined themselves as big data companies. And they've lined up to provide newly collected data to the government, particularly to agencies like Immigration and Customs Enforcement, or ICE. There are many layers to this data web, and libraries are seemingly stuck in the middle. Today, on the Lock and Code podcast with host David Ruiz, we speak with Sarah Lamdan, deputy director of the Office for Intellectual Freedom at the American Library Association, about library privacy in the digital age, whether police are legitimately interested in what the public is reading, and how a small number of major publishing companies suddenly started aiding the work of government surveillance: “Because to me, these companies were information providers. These companies were library vendors. They're companies that we work with because they published science journals and they published court reporters. I did not know them as surveillance companies.” Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your...
This is a story about how the FBI got everything it wanted. For decades, law enforcement and intelligence agencies across the world have lamented the availability of modern technology that allows suspected criminals to hide their communications from legal scrutiny. This long-standing debate has sometimes spilled into the public view, as it did in 2016, when the FBI demanded that Apple unlock an iPhone used during a terrorist attack in the California city of San Bernardino. Apple pushed back on the FBI's request, arguing that the company could only retrieve data from the iPhone in question by writing new software with global consequences for security and privacy. “The only way to get information—at least currently, the only way we know,” said Apple CEO Tim Cook, “would be to write a piece of software that we view as sort of the equivalent of cancer.” The standoff held the public's attention for months, until the FBI relied on a third party to crack into the device. But just a couple of years later, the FBI had obtained an even bigger backdoor into the communication channels of underground crime networks around the world, and they did it almost entirely off the radar. It all happened with the help of Anom, a budding company behind an allegedly “secure” phone that promised users a bevy of secretive technological features, like end-to-end encrypted messaging, remote data wiping, secure storage vaults, and even voice scrambling. But, unbeknownst to Anom's users, the entire company was a front for law enforcement. On Anom phones, every message, every photo, every piece of incriminating evidence, and every order to kill someone, was collected and delivered, in full view, to the FBI. Today, on the Lock and Code podcast with host David Ruiz, we speak with 404 Media cofounder and investigative reporter Joseph Cox about the wild, true story of Anom. How did it work, was it “legal,” where did the FBI learn to run a tech startup, and why, amidst decades of debate, are some people ignoring the one real-life example of global forces successfully installing a backdoor into a company? “The public… and law enforcement, as well, [have] had to speculate about what a backdoor in a tech product would actually look like. Well, here's the answer. This is literally what happens when there is a backdoor, and I find it crazy that not more people are paying attention to it,” Cox said. Tune in today. Cox's investigation into Anom, presented in his book titled Dark Wire, publishes June 4. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music:...
The irrigation of the internet is coming. For decades, we've accessed the internet much like how we, so long ago, accessed water—by traveling to it. We connected (quite literally), we logged on, and we zipped to addresses and sites to read, learn, shop, and scroll. Over the years, the internet became accessible from increasingly more devices, like smartphones, smartwatches, and even smart fridges. But still, it had to be accessed, like a well dug into the ground to pull up the water below. Moving forward, that could all change. This year, several companies debuted their vision of a future that incorporates Artificial Intelligence to deliver the internet directly to you, with less searching, less typing, and less decision fatigue. For the startup Humane, that vision includes the use of the company's AI-powered, voice-operated wearable pin that clips to your clothes. By simply speaking to the AI pin, users can text a friend, discover the nutritional facts about food that sits directly in front of them, and even compare the in-store price of an item with its price online. For a separate startup, Rabbit, that vision similarly relies on a small, attractive smart-concierge gadget, the R1. With the bright-orange slab, designed in coordination with the company Teenage Engineering, users can hail an Uber to take them to the airport, play an album on Spotify, and put in a delivery order for dinner. Away from physical devices, The Browser Company of New York is also experimenting with AI in its own web browser, Arc. In February, the company debuted its endeavor to create a “browser that browses for you” with a snazzy video that showed off Arc's AI capabilities to create unique, individualized web pages in response to questions about recipes, dinner reservations, and more. But all these small-scale projects, announced in the first month or so of 2024, had to make room a few months later for big-money interest from the world's first-ever internet conglomerate—Google. At the company's annual Google I/O conference on May 14, VP and Head of Google Search Liz Reid pitched the audience on an AI-powered version of search in which “Google will do the Googling for you.” Now, Reid said, even complex, multi-part questions can be answered directly within Google, with no need to click a website, evaluate its accuracy, or flip through its many pages to find the relevant information within. This, it appears, could be the next phase of the internet… and our host David Ruiz has a lot to say about it. Today, on the Lock and Code podcast, we bring back Director of Content Anna Brading and Cybersecurity Evangelist Mark Stockley to discuss AI-powered concierges, the value of human choice when so many small decisions could be taken away by AI, and, as explained by Stockley, whether the appeal of AI lies not in finding the “best” vacation, recipe, or dinner reservation for everyone, but rather the best of anything for its individual user. “It's not there to tell you what the best chocolate chip cookie in the world is for everyone. It's there to help you figure out what the best chocolate chip cookie is for you, on a Monday evening, when the weather's hot, and you're hungry.” Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at
Our Lock and Code host, David Ruiz, has a bit of an apology to make: “Sorry for all the depressing episodes.” When the Lock and Code podcast explored online harassment and abuse this year, our guest provided several guidelines and tips for individuals to lock down their accounts and remove their sensitive information from the internet, but larger problems remained. Content moderation is failing nearly everywhere, and data protection laws are unequal across the world. When we told the true tale of a virtual kidnapping scam in Utah, though the teenaged victim at the center of the scam was eventually found, his family still lost nearly $80,000. And when we asked Mozilla's Privacy Not Included team about what types of information modern cars can collect about their owners, we were entirely blindsided by the policies from Nissan and Kia, which claimed the companies can collect data about their customers' “sexual activity” and “sex life.” (Let's also not forget about that Roomba that took a photo of someone on a toilet, and how that photo ended up on Facebook.) In looking at these stories collectively, it can feel like the everyday consumer is hopelessly outmatched against modern companies. What good does it do to utilize personal cybersecurity best practices when the companies we rely on can still leak our most sensitive information and suffer few consequences? What's the point of using a privacy-forward browser to better obscure our online behavior from advertisers when the machinery that powers the internet finds new ways to surveil our every move? These are entirely relatable, if fatalistic, feelings. But we are here to tell you that nihilism is not the answer. Today, on the Lock and Code podcast, we speak with Justin Brookman, director of technology policy at Consumer Reports, about some of the most recent, major consumer wins in the tech world, what it took to achieve those wins, and what levers consumers can pull on today to have their voices heard. Brookman also speaks candidly about the shifting priorities in today's legislative landscape. “One thing we did make the decision about is to focus less on Congress because, man, I'll meet with those folks so we can work on bills, [and] there'll be a big hearing, but they've just failed to do so much.” Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and...
A digital form of protest could become the go-to response for the world's largest porn website as it faces increased regulation: not letting people access the site. In March, Pornhub blocked access to visitors connecting to its website from Texas. It marked the second time in the past 12 months that the porn giant shut off its website to protest new requirements in online age verification. The Texas law, which was signed in June 2023, requires several types of adult websites to verify the age of their visitors by either collecting visitors' information from a government ID or relying on a third party to verify age through the collection of multiple streams of data, such as education and employment status. Pornhub has long argued that these age verification methods do not keep minors safer and that they place undue onus on websites to collect and secure sensitive information. The fact remains, however, that these types of laws are growing in popularity. Today, Lock and Code revisits a prior episode from 2023 with guest Alec Muffett, discussing online age verification proposals, how they could weaken security and privacy on the internet, and whether these efforts are oafishly trying to solve a societal problem with a technological solution. “The battle cry of these people has always been—either directly or mocked as being—‘Could somebody think of the children?'” Muffett said. “And I'm thinking about the children because I want my daughter to grow up with an untracked, secure private internet when she's an adult. I want her to be able to have a private conversation. I want her to be able to browse sites without giving over any information or linking it to her identity.” Muffett continued: “I'm trying to protect that for her. I'd like to see more people grasping for that.” Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
Few words apply as broadly to the public—yet mean as little—as “home network security.” For many, a “home network” is an amorphous thing. It exists somewhere between a router, a modem, an outlet, and whatever cable it is that plugs into the wall. But the idea of a “home network” doesn't need to intimidate, and securing that home network could be simpler than many folks realize. For starters, a home network can be simply understood as a router—which is the device that provides access to the internet in a home—and the other devices that connect to that router. That includes obvious devices like phones, laptops, and tablets, and it includes “Internet of Things” devices, like a Ring doorbell, a Nest thermostat, and any Amazon Echo device that comes pre-packaged with the company's voice assistant, Alexa. There are also myriad “smart” devices to consider: smartwatches, smart speakers, smart light bulbs, and, don't forget, the smart fridges. If it sounds like we're describing a home network as nothing more than a “list,” that's because a home network is pretty much just a list. But where securing that list becomes complicated is in all the updates, hardware issues, settings changes, and even scandals that relate to every single device on that list. Routers, for instance, provide their own security, but over many years, they can lose the support of their manufacturers. IoT devices, depending on the brand, can be made from cheap parts with little concern for user security or privacy. And some devices have scandals plaguing their past—smart doorbells have been hacked, and fitness trackers have revealed running routes to the public online. This shouldn't be cause for fear. Instead, it should help prove why home network security is so important. Today, on the Lock and Code podcast with host David Ruiz, we're speaking with cybersecurity and privacy advocate Carey Parker about securing your home network. Author of the book Firewalls Don't Stop Dragons and host of the podcast of the same name, Parker chronicled the typical home network security journey last year and distilled the long process into four simple categories: scan, simplify, assess, remediate (a minimal sketch of that loop follows this entry). In joining the Lock and Code podcast yet again, Parker explains how everyone can begin their home network security path—where to start, what to prioritize, and the risks of putting this work off, while also emphasizing the importance of every home's router: “Your router is kind of the threshold that protects all the devices inside your house. But, like a vampire, once you invite the vampire across the threshold, all the things inside the house are now up for grabs.” Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen...
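For readers who want to picture Parker's four categories in practice, here is a minimal sketch that treats the home network as the list the episode says it essentially is. Everything in it is hypothetical: the device fields, the 12-month staleness threshold, and the example inventory are invented for illustration, not taken from Parker's book or checklist.

```python
# A minimal sketch of a "scan, simplify, assess, remediate" pass over a
# home network, modeled as a plain list of devices. All data is made up.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    vendor_supported: bool    # does the manufacturer still ship updates?
    months_since_update: int  # time since the last firmware/software update
    still_used: bool          # unused devices are the easiest risk to remove

# Scan: inventory everything that talks to the router.
network = [
    Device("router", vendor_supported=True, months_since_update=2, still_used=True),
    Device("smart-bulb", vendor_supported=False, months_since_update=30, still_used=True),
    Device("old-tablet", vendor_supported=False, months_since_update=48, still_used=False),
]

# Simplify: anything unused should simply come off the network.
to_retire = [d for d in network if not d.still_used]

# Assess: flag devices that no longer get updates, or are badly stale.
to_remediate = [d for d in network
                if d.still_used and (not d.vendor_supported or d.months_since_update > 12)]

# Remediate: update, replace, or isolate what's left on the flag list.
print("Retire:", [d.name for d in to_retire])             # -> ['old-tablet']
print("Update/replace:", [d.name for d in to_remediate])  # -> ['smart-bulb']
```

The point of the sketch is the ordering: inventory first, retire what you don't use, and only then spend effort patching or replacing what remains.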
A disappointing meal at a restaurant. An ugly breakup between two partners. A popular TV show that kills off a beloved main character. In a perfect world, these are irritations and moments of vulnerability. But online today, these same events can sometimes be the catalyst for hate. That disappointing meal can produce a frighteningly invasive Yelp review that exposes a restaurant owner's home address for all to see. That ugly breakup can lead to an abusive ex posting a video of revenge porn. And even a movie or videogame can enrage some individuals into such a fury that they begin sending death threats to the actors and castmates involved. Online hate and harassment campaigns are well-known and widely studied. Sadly, they're also becoming more frequent. In 2023, the Anti-Defamation League revealed that 52% of American adults reported being harassed online at some time in their life—the highest rate ever recorded by the organization and a dramatic climb from the 40% who responded similarly just one year earlier. When asking teens about recent harm, 51% said they'd suffered from online harassment in strictly the 12 months prior to taking the survey itself—a radical 15-percentage-point increase from what teens said the year prior. The proposed solutions, so far, have been difficult to implement. Social media platforms often deflect blame—and are frequently shielded from legal liability—and many efforts to moderate and remove hateful content have either been slow or entirely absent in the past. Popular accounts with millions of followers will, without explicitly inciting violence, sometimes draw undue attention to everyday people. And the increasing need for teens to have an online presence—even classwork is done online now—makes it nearly impossible to simply “log off.” Today, on the Lock and Code podcast with host David Ruiz, we speak with Tall Poppy CEO and co-founder Leigh Honeywell about the evolution of online hate, personal defense strategies that mirror many of the best practices in cybersecurity, and the modern risks of accidentally going viral in a world with little privacy. “It's not just that your content can go viral, it's that when your content goes viral, five people might be motivated enough to call in a fake bomb threat at your house,” Honeywell said. Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself
If your IT and security teams think malware is bad, wait until they learn about everything else. In 2024, the modern cyberattack is a segmented, prolonged, and professional effort, in which specialists create strictly financial alliances to plant malware on unsuspecting employees, steal corporate credentials, slip into business networks, and, for a period of days if not weeks, simply sit and watch and test and prod, escalating their privileges while refraining from installing any noisy hacking tools that could be flagged by detection-based antivirus scans. In fact, some attacks have gone so “quiet” that they involve no malware at all. Last year, some ransomware gangs refrained from deploying ransomware in their own attacks, opting instead to steal sensitive data and then threaten to publish it online if their victims refused to pay up—a method of extortion that requires no ransomware at all. Understandably, security teams are outflanked. Defending against sophisticated, multifaceted attacks takes resources, technologies, and human expertise. But not every organization has that at hand. What, then, are IT-constrained businesses to do? Today, on the Lock and Code podcast with host David Ruiz, we speak with Jason Haddix, the former Chief Information Security Officer at the videogame developer Ubisoft, about how he and his colleagues from other companies faced off against modern adversaries who, during a prolonged crime spree, plundered employee credentials from the dark web, subverted corporate 2FA protections, and leaned heavily on internal web access to steal sensitive documentation. Haddix, who launched his own cybersecurity training and consulting firm Arcanum Information Security this year, said he learned so much during his time at Ubisoft that he and his peers in the industry coined a new, humorous term for attacks that abuse internet-connected platforms: “a browser and a dream.” “When you first hear that, you're like, ‘Okay, what could a browser give you inside of an organization?'” But Haddix made it clear: “On the internal LAN, you have knowledge bases like SharePoint, Confluence, MediaWiki. You have dev and project management sites like Trello, local Jira, local Redmine. You have source code managers, which are managed via websites—Git, GitHub, GitLab, Bitbucket, Subversion. You have repo management, build servers, dev platforms, configuration, management platforms, operations, front ends. These are all websites.” Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) LLM Prompt Injection Game: https://gandalf.lakera.ai/ Overwhelmed by modern cyberthreats? ThreatDown can...
If the internet helped create the era of mass surveillance, then artificial intelligence will bring about an era of mass spying.That's the latest prediction from noted cryptographer and computer security professional Bruce Schneier, who, in December, shared a vision of the near future where artificial intelligence—AI—will be able to comb through reams of surveillance data to answer the types of questions that, previously, only humans could. “Spying is limited by the need for human labor,” Schneier wrote. “AI is about to change that.”As theorized by Schneier, if fed enough conversations, AI tools could spot who first started a rumor online, identify who is planning to attend a political protest (or unionize a workforce), and even who is plotting a crime.But “there's so much more,” Schneier said.“To uncover an organizational structure, look for someone who gives similar instructions to a group of people, then all the people they have relayed those instructions to. To find people's confidants, look at whom they tell secrets to. You can track friendships and alliances as they form and break, in minute detail. In short, you can know everything about what everybody is talking about.”Today, on the Lock and Code podcast with host David Ruiz, we speak with Bruce Schneier about artificial intelligence, Soviet era government surveillance, personal spyware, and why companies will likely leap at the opportunity to use AI on their customers.“Surveillance-based manipulation is the business model [of the internet] and anything that gives a company an advantage, they're going to do.”Tune in today to listen to the full conversation.You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.Show notes and credits:Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)Licensed under Creative Commons: By Attribution 4.0 Licensehttp://creativecommons.org/licenses/by/4.0/Outro Music: “Good God” by Wowa (unminus.com)Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it.Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
Hackers want to know everything about you: your credit card number, your ID and passport info, and now, your DNA. On October 1, 2023, on a hacking website called BreachForums, a group of cybercriminals claimed that they had stolen—and would soon sell—individual profiles for users of the genetic testing company 23andMe. 23andMe offers direct-to-consumer genetic testing kits that provide customers with different types of information, including potential indicators of health risks along with reports that detail a person's heritage, their DNA's geographical footprint, and, if they opt in, a service to connect with relatives who have also used 23andMe's DNA testing service. The data that 23andMe and similar companies collect is often seen as some of the most sensitive, personal information that exists about people today, as it can expose health risks, family connections, and medical diagnoses. This type of data has also been used to exonerate the wrongfully accused and to finally apprehend long-hidden fugitives. In 2018, deputies from the Sacramento County Sheriff's department arrested a serial killer known as the Golden State Killer, after investigators took DNA left at decades-old crime scenes and compared it to a then-growing database of genetic information, finding the Golden State Killer's relatives, and then zeroing in from there. And while the story of the Golden State Killer involves the use of genetic data to solve a crime, what happens when genetic data is part of a crime? What law enforcement agency, if any, gets involved? What rights do consumers have? And how likely is it that consumer complaints will get heard? For customers of 23andMe, those are particularly relevant questions. After an internal investigation from the genetic testing company, it was revealed that 6.9 million customers were impacted by the October breach. What do they do? Today on the Lock and Code podcast with host David Ruiz, we speak with Suzanne Bernstein, a law fellow at the Electronic Privacy Information Center (EPIC), to understand the value of genetic data, the risks of its exposure, and the unfortunate reality that consumers face in having to protect themselves while also trusting private corporations to secure their most sensitive data. “We live our lives online and there's certain risks that are unavoidable or that are manageable relative to the benefit that a consumer might get from it,” Bernstein said. “Ultimately, while it's not the consumer's responsibility, an informed consumer can make the best choices about what kind of risks to take online.” Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at
Like the grade-school dweeb who reminds their teacher to assign tonight's homework, or the power-tripping homeowner who threatens every neighbor with an HOA citation, the ransomware group ALPHV can now add itself to a shameful roster of pathetic little tattle-tales. In November, the ransomware gang ALPHV, which also goes by the name BlackCat, notified the US Securities and Exchange Commission (SEC) about the Costa Mesa-based software company MeridianLink, alleging that the company had failed to notify the government about a data breach. Under newly announced SEC rules, public companies are expected to notify the agency about “material cybersecurity incidents” within four business days of determining that an incident is material, meaning it could impact the company's stock price or investment decisions from the public. According to ALPHV, MeridianLink had violated that rule. But how did ALPHV know about this alleged breach? Simple. They claimed to have done it. “It has come to our attention that MeridianLink, in light of a significant breach compromising customer data and operational information, has failed to file the requisite disclosure under Item 1.05 of Form 8-K within the stipulated four business days, as mandated by the new SEC rules,” wrote ALPHV in a complaint that the group claimed to have filed with the US government. The victim, MeridianLink, refuted the claims. While the company confirmed a cybersecurity incident, it denied the severity of the incident. “Based on our investigation to date, we have identified no evidence of unauthorized access to our production platforms, and the incident has caused minimal business interruption,” a MeridianLink spokesperson said at the time. “If we determine that any consumer personal information was involved in this incident, we will provide notifications as required by law.” This week on the Lock and Code podcast with host David Ruiz, we speak to Recorded Future intelligence analyst Allan Liska about what ALPHV could hope to accomplish with its SEC complaint, whether similar threats have been made in the past under other regulatory regimes, and what organizations everywhere should know about ransomware attacks going into the new year. One big takeaway, Liska said, is that attacks are getting bigger, bolder, and brasher. “There are no protections anymore,” Liska said. “For a while, some ransomware actors were like, ‘No, we won't go after hospitals, or we won't do this, or we won't do that.' Those protections all seem to have flown out the window, and they'll go after anything and anyone that will make them money. It doesn't matter how small they are or how big they are.” Liska continued: “We've seen ransomware actors go after food banks. You're not going to get a ransom from a food bank. Don't do that.” Tune in today to listen to the full conversation. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0...
What are you most worried about online? And what are you doing to stay safe? Depending on who you are, those could be very different answers, but for teenagers and members of Generation Z, the internet isn't so scary because of traditional threats like malware and viruses. Instead, the internet is scary because of what it can expose. To Gen Z, a feared internet is one that is vindictive and cruel—an internet that reveals private information that Gen Z fears could harm their relationships with family and friends, damage their reputations, and even lead to their being bullied and physically harmed. Those are some of the findings from Malwarebytes' latest research into the cybersecurity and online privacy beliefs and behaviors of people across the United States and Canada this year. Titled “Everyone's afraid of the internet and no one's sure what to do about it,” Malwarebytes' new report shows that 81 percent of Gen Z worries about having personal, private information exposed—like their sexual orientations, personal struggles, medical history, and relationship issues (compared to 75 percent of non-Gen Zers). And 61 percent of Gen Zers worry about having embarrassing or compromising photos or videos shared online (compared to 55 percent of non-Gen Zers). Not only that, 36 percent worry about being bullied because of that info being exposed, while 34 percent worry about being physically harmed. For those outside of Gen Z, those numbers are a lot lower—only 22 percent worry about bullying, and 27 percent worry about being physically harmed. Does this mean Gen Z is uniquely careful to prevent just that type of information from being exposed online? Not exactly. They talk more frequently to strangers online, they more frequently share personal information on social media, and they share photos and videos on public forums more than anyone—all things that leave a trail of information that could be gathered against them. Today, on the Lock and Code podcast with host David Ruiz, we drill down into what, specifically, a Bay Area teenager is afraid of when using the internet, and what she does to stay safe. Visiting the Lock and Code podcast for the second year in a row is Nitya Sharma, discussing AI “sneak attacks,” political disinformation campaigns, the unannounced location tracking of Snapchat, and why she simply cannot be bothered about malware. “I know that there's a threat of sharing information with bad people and then abusing it, but I just don't know what you would do with it. Show up to my house and try to kill me?” Tune in today for the full conversation. You can read our full report here: “Everyone's afraid of the internet and no one's sure what to do about it.” You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at
In 2022, Malwarebytes investigated the blurry, shifting idea of “identity” on the internet, and how online identities are not only shaped by the people behind them, but also inherited by the internet's youngest users, children. Children have always inherited some of their identities from their parents—consider that two of the largest indicators for political and religious affiliation in the US are, no surprise, the political and religious affiliations of someone's parents—but the transfer of online identity poses unique risks. When parents create email accounts for their kids, do they also teach their children about strong passwords? When parents post photos of their children online, do they also teach their children about the safest ways to post photos of themselves and others? When parents create a Netflix viewing profile on a child's iPad, are they prepared for what else a child might see online? Are parents certain that a kid is ready to watch before they can walk?Those types of questions drove a joint report that Malwarebytes published last year, based on a survey of 2,000 people in North America. That research showed that, broadly, not enough children and teenagers trust their parents to support them online, and not enough parents know exactly how to give the support their children need.But stats and figures can only tell so much of the story, which is why last year, Lock and Code host David Ruiz spoke with a Bay Area high school graduate about her own thoughts on the difficulties of growing up online. Lock and Code is re-airing that episode this week because, in less than one month, Malwarebytes is releasing a follow-on report about behaviors, beliefs, and blunders in online privacy and cybersecurity. And as part of that report, Lock and Code is bringing back the same guest as last year, Nitya Sharma. Before then, we are sharing with listeners our prior episode that aired in 2022 about the difficulties that an everyday teenager faces online, including managing her time online, trying to meet friends and complete homework, the traps of trading online interaction with in-person socializing, and what she would do differently with her children, if she ever started a family, in preparing them for the Internet.Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.Show notes and credits:Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)Licensed under Creative Commons: By Attribution 4.0 Licensehttp://creativecommons.org/licenses/by/4.0/Outro Music: “Good God” by Wowa (unminus.com)
"Freedom" is a big word, and for many parents today, it's a word that includes location tracking. Across America, parents are snapping up Apple AirTags, the inexpensive location tracking devices that can help owners find lost luggage, misplaced keys, and—increasingly so—roving toddlers setting out on mini-adventures. The parental fear right now, according to The Washington Post technology reporter Heather Kelly, is that "anybody who can walk, therefore can walk away." Parents wanting to know what their children are up to is nothing new. Before the advent of the Internet—and before the creation of search history—parents read through diaries. Before GPS location tracking, parents called the houses that their children were allegedly staying at. And before nearly every child had a smart phone that they could receive calls on, parents relied on a much simpler set of tools for coordination: Going to the mall, giving them a watch, and saying "Be at the food court at noon." But, as so much parental monitoring has moved to the digital sphere, there's a new problem: Children become physically mobile far faster than they become responsible enough to own a mobile. Enter the AirTag: a small, convenient device for parents to affix to toddlers' wrists, place into their backpacks, even sew into their clothes, as Kelly reported in her piece for The Washington Post. In speaking with parents, families, and childcare experts, Kelly also uncovered an interesting dynamic. Parents, she reported, have started relying on Apple AirTags as a means to provide freedom, not restrictions, to their children. Today, on the Lock and Code podcast with host David Ruiz, we speak with Kelly about why parents are using AirTags, how childcare experts are reacting to the recent trend, and whether the devices can actually provide a balm to increasingly stressed parents who may need a moment to sit back and relax. Or, as Kelly said:"In the end, parents need to chill—and if this lets them chill, and if it doesn't impact the kids too much, and it lets them go do silly things like jumping in some puddles with their friends or light, really inconsequential shoplifting, good for them."Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.Show notes and credits:Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)Licensed under Creative Commons: By Attribution 4.0 Licensehttp://creativecommons.org/licenses/by/4.0/Outro Music: “Good God” by Wowa (unminus.com)
Earlier this month, a group of hackers was spotted using a set of malicious tools—that originally gained popularity with online video game cheaters—to hide their Windows-based malware from being detected.Sounds unique, right? Frustratingly, it isn't, as the specific security loophole that was abused by the hackers has been around for years, and Microsoft's response, or lack thereof, is actually a telling illustration of the competing security environments within Windows and macOS. Even more perplexing is the fact that Apple dealt with a similar issue nearly 10 years ago, locking down the way that certain external tools are given permission to run alongside the operating system's critical, core internals. Today, on the Lock and Code podcast with host David Ruiz, we speak with Malwarebytes' own Director of Core Tech Thomas Reed about everyone's favorite topic: Windows vs. Mac. But this isn't a conversation about the original iPod vs. Microsoft's Zune (we're sure you can find countless, 4-hour diatribes on YouTube for that), but instead about how the companies behind these operating systems can respond to security issues in their own products. Because it isn't fair to say that Apple or Microsoft are wholesale "better" or "worse" about security. Instead, they're hampered by their users and their core market segments—Apple excels in the consumer market, whereas Microsoft excels with enterprises. And when your customers include hospitals, government agencies, and pretty much any business over a certain headcount, well, it comes with complications in deciding how to address security problems that won't leave those same customers behind. Still, there's little excuse in leaving open the type of loophole that Windows has, said Reed:"Apple has done something that was pretty inconvenient for developers, but it really secured their customers because it basically meant we saw a complete stop in all kernel-level malware. It just shows you [that] it can be done. You're gonna break some eggs in the process, and Microsoft has not done that yet... They're gonna have to."Tune in today.You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.Show notes and credits:Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)Licensed under Creative Commons: By Attribution 4.0 Licensehttp://creativecommons.org/licenses/by/4.0/Outro Music: “Good God” by Wowa (unminus.com)
The language of a data breach, no matter what company gets hit, is largely the same. There's the stolen data—be it email addresses, credit card numbers, or even medical records. There are the users—unsuspecting, everyday people who, through no fault of their own, mistakenly put their trust into a company, platform, or service to keep their information safe. And there are, of course, the criminals. Some operate in groups. Some act alone. Some steal data as a means of extortion. Others steal it as a point of pride. All of them, it appears, take something that isn't theirs. But what happens if a cybercriminal takes something that may have already been stolen? In late June, a mobile app that can, without consent, pry into text messages, monitor call logs, and track GPS location history, warned its users that its services had been hacked. Email addresses, telephone numbers, and the content of messages were swiped, but how they were originally collected requires scrutiny. That's because the app itself, called LetMeSpy, is advertised as a parental and employer monitoring app, to be installed on the devices of other people that LetMeSpy users want to track. Want to read your child's text messages? LetMeSpy says it can help. Want to see where they are? LetMeSpy says it can do that, too. What about employers who are interested in the vague idea of "control and safety" of their business? Look no further than LetMeSpy, of course. While LetMeSpy's website tells users that "phone control without your knowledge and consent may be illegal in your country," (it is in the US and many, many others) the app also claims that it can hide itself from view from the person being tracked. And that feature, in particular, is one of the more tell-tale signs of "stalkerware." Stalkerware is a term used by the cybersecurity industry to describe mobile apps, primarily on Android, that can access a device's text messages, photos, videos, call records, and GPS locations without the device owner knowing about said surveillance. These types of apps can also automatically record every phone call made and received by a device, turn off a device's WiFi, and take control of the device's camera and microphone to snap photos or record audio—all without the victim knowing that their phone has been compromised. Stalkerware poses a serious threat—particularly to survivors of domestic abuse—and Malwarebytes has defended users against these types of apps for years. But the hacking of an app with similar functionality raises questions. Today, on the Lock and Code podcast with host David Ruiz, we speak with the hacktivist and security blogger maia arson crimew about the data that was revealed in LetMeSpy's hack, the almost-clumsy efforts by developers to make and market these apps online, and whether this hack—and others in the past—are "good." "I'm the person on the podcast who can say 'We should hack things,' because I don't work for Malwarebytes. But the thing is, I don't think there really is any other way to get info in this industry."Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.Show notes and...
In the United States, when the police want to conduct a search on a suspected criminal, they must first obtain a search warrant. It is one of the foundational rights given to US persons under the Constitution, and a concept that has helped create the very idea of a right to privacy at home and online. But sometimes, individualized warrants are never issued, never asked for, never really needed, depending on which government agency is conducting the surveillance, and for what reason. Every year, countless emails, social media DMs, and likely mobile messages are swept up by the US National Security Agency—even if those communications involve a US person—without any significant warrant requirement. Those digital communications can be searched by the FBI. The information the FBI gleans from those searches can be used to prosecute Americans for crimes. And when the NSA or FBI make mistakes—which they do—there is little oversight. This is surveillance under a law and authority called Section 702 of the FISA Amendments Act. The law and the regime it has enabled are opaque. There are definitions for "collection" of digital communications, for "queries" and "batch queries," rules for which government agency can ask for what type of intelligence, references to types of searches that were allegedly ended several years ago, "programs" that determine how the NSA grabs digital communications—by requesting them from companies or by directly tapping into the very cables that carry the internet across the globe—and an entire secret court that has only rarely released its opinions to the public. Today, on the Lock and Code podcast with host David Ruiz, we speak with Electronic Frontier Foundation Senior Policy Analyst Matthew Guariglia about what the NSA can grab online, whether its agents can read that information and who they can share it with, and how a database that was ostensibly created to monitor foreign intelligence operations became a tool for investigating Americans at home. As Guariglia explains: "In the United States, if you collect any amount of data, eventually law enforcement will come for it, and this includes data that is collected by intelligence communities." Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
When you think about the word "cyberthreat," what first comes to mind? Is it ransomware? Is it spyware? Maybe it's any collection of the infamous viruses, worms, Trojans, and botnets that have crippled countless companies throughout modern history. In the future, though, what many businesses might first think of is something new: disinformation. Back in 2021, in speaking about threats to businesses, the former director of the US Cybersecurity and Infrastructure Security Agency, Chris Krebs, told news outlet Axios: “You've either been the target of a disinformation attack or you are about to be.” That same year, the consulting and professional services firm PricewaterhouseCoopers released a report on disinformation attacks against companies and organizations, and it found that these types of attacks were far more common than most of the public realized. From the report: “In one notable instance of disinformation, a forged US Department of Defense memo stated that a semiconductor giant's planned acquisition of another tech company had prompted national security concerns, causing the stocks of both companies to fall. In other incidents, widely publicized unfounded attacks on a businessman caused him to lose a bidding war, a false news story reported that a bottled water company's products had been contaminated, and a foreign state's TV network falsely linked 5G to adverse health effects in America, giving the adversary's companies more time to develop their own 5G network to compete with US businesses.” Disinformation is here, and as much of it happens online—through coordinated social media posts and fast-made websites—it can truly be considered a "cyberthreat." But what does that mean for businesses? Today, on the Lock and Code podcast with host David Ruiz, we speak with Lisa Kaplan, founder and CEO of Alethea, about how organizations can prepare for a disinformation attack, and what they should be thinking about in the intersection between disinformation, malware, and cybersecurity. Kaplan said: "When you think about disinformation in its purest form, what we're really talking about is people telling lies and hiding who they are in order to achieve objectives, and doing so in a deliberate and malicious way. I think that this is more insidious than malware. I think it's more pervasive than traditional cyber attacks, but I don't think that you can separate disinformation from cybersecurity." Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
In May, a lawyer who was defending their client in a lawsuit against Colombia's biggest airline, Avianca, submitted a legal filing before a court in Manhattan, New York, that cited several prior cases in support of their main argument to continue the lawsuit. But when the court reviewed the lawyer's citations, it found something curious: several were entirely fabricated. The lawyer in question had gotten the help of another attorney who, in scrounging around for legal precedent to cite, utilized the "services" of ChatGPT. ChatGPT was wrong. So why do so many people believe it's always right? Today, on the Lock and Code podcast with host David Ruiz, we speak with Malwarebytes security evangelist Mark Stockley and Malwarebytes Labs editor-in-chief Anna Brading to discuss the potential consequences of companies and individuals embracing natural language processing tools—like ChatGPT and Google's Bard—as arbiters of truth. Far from being understood simply as chatbots that can produce remarkable mimicries of human speech and dialogue, these tools are becoming sources of truth for countless individuals, while also gaining traction amongst companies that see artificial intelligence (AI) and large language models (LLMs) as the future, no matter what industry they operate in. The future could look eerily similar to an earlier change in translation services, said Stockley, who witnessed the rapid displacement of human workers in favor of basic AI tools. The tools were far, far cheaper, but the quality of the translations—of the truth, Stockley said—was worse. "That is an example of exactly this technology coming in and being treated as the arbiter of truth, in the sense that there is a cost to how much truth we want." Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
Ransomware is becoming bespoke, and that could mean trouble for businesses and law enforcement investigators. It wasn't always like this. For a few years now, ransomware operators have congregated around a relatively new model of crime called "Ransomware-as-a-Service." In the Ransomware-as-a-Service model, or RaaS model, ransomware itself is not delivered to victims by the same criminals that make the ransomware. Instead, it is used almost "on loan" by criminal groups called "affiliates," who carry out attacks with the ransomware and, if successful, pay a share of their ill-gotten gains back to the ransomware's creators. This model allows ransomware developers to significantly increase their reach and their illegal hauls. By essentially leasing out their malicious code to smaller groups of cybercriminals around the world, the ransomware developers can carry out more attacks, steal more money from victims, and avoid any isolated law enforcement action that would put their business in the ground, as the arrest of one affiliate group won't stop the work of dozens of others. And not only do ransomware developers lean on other cybercriminals to carry out attacks, they also rely on an entire network of criminals to carry out smaller, specialized tasks. There are "Initial Access Brokers" who break into company networks and then sell that illegal method of access online. "You also have coders that you can contract out to," said ransomware researcher Allan Liska, this episode's guest. "You have pen testers that you can contract out to. You can contract negotiators if you want. You can contract translators if you want." But as Liska explained, as the ransomware "business" spreads out, so do new weak points: disgruntled criminals. "This whole underground marketplace that exists to serve ransomware means that your small group can do a lot," Liska said. "But that also means that you are entrusting the keys to your kingdom to these random contractors that you're paying in Bitcoin every now and then. And that, for example, is why the LockBit code got leaked—dude didn't pay his contractor." With plenty of leaked code now circulating online, some smaller cybercriminal gangs have taken to making minor alterations and then sending that new variant of ransomware out into the world—no affiliate model needed. "Most of what we see is just repurposed code, and we see a lot of what I call 'Franken-ransomware.'" Today, on the Lock and Code podcast with host David Ruiz, Liska explains why Franken-ransomware poses unique challenges to future victims, cybersecurity companies, and law enforcement investigators. Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
In January, a mental health nonprofit admitted that it had used artificial intelligence to help talk to people in distress. Prompted first by a user's longing for personal improvement—and the difficulties involved in that journey—the AI tool generated a reply, which, with human intervention, could be sent verbatim in a chat box, or edited and fine-tuned to better fit the situation. The AI said: “I hear you. You're trying to become a better person and it's not easy. It's hard to make changes in our lives, especially when we're trying to do it alone. But you're not alone. There are people here who care about you and want to help you. I'm proud of you for making the decision to try to improve your life. It takes a lot of courage and strength. I send you love and support as you go through this journey.” This was experimental work from Koko, a mental health nonprofit that briefly integrated the GPT-3 large language model into its product, an experiment that has since ended. In a video demonstration posted on Twitter earlier this year, Koko co-founder Rob Morris revealed that the nonprofit had used AI to provide "mental health support to about 4,000 people" across "about 30,000 messages." Though Koko pulled GPT-3 from its system after a reportedly short period of time, Morris said on Twitter that the experience left several questions unanswered. "The implications here are poorly understood," Morris said. "Would people eventually seek emotional support from machines, rather than friends and family?" Today, on the Lock and Code podcast with host David Ruiz, we speak with Courtney Brown, a social services administrator with a history in research and suicidology, to dig into the ethics, feasibility, and potential consequences of relying increasingly on AI tools to help people in distress. For Brown, the immediate implications raise several concerns. "It disturbed me to see AI using 'I care about you,' or 'I'm concerned,' or 'I'm proud of you.' That made me feel sick to my stomach. And I think it was partially because these are the things that I say, and it's partially because I think that they're going to lose power as a form of connecting to another human." But, importantly, Brown is not the only voice in today's podcast with experience in crisis support. For six years and across 1,000 hours, Ruiz volunteered on his local suicide prevention hotline. He, too, has a background to share. Tune in today as Ruiz and Brown explore the boundaries for deploying AI on people suffering from emotional distress, whether the "support" offered by any AI will be as helpful and genuine as that of a human, and, importantly, whether they are simply afraid of having AI encroach on the most human experiences. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
The list of people and organizations that are hungry for your location data—collected so routinely and packaged so conveniently that it can easily reveal where you live, where you work, where you shop, pray, eat, and relax—includes many of the usual suspects. Advertisers, obviously, want to send targeted ads to you, and they believe those ads have a better success rate if they're sent to, say, someone who spends their time at a fast-food drive-through on the way home from the office, as opposed to someone who doesn't; or to someone who's visited a high-end department store; or to someone who vacations regularly at expensive resorts. Hedge funds, interestingly, are also big buyers of location data, constantly seeking a competitive edge in their investments, which might mean understanding whether a fast food chain's newest locations are getting more foot traffic, or whether a new commercial real estate development is walkable from nearby homes. But one perhaps unexpected entry on this list is the police. According to a recent investigation from Electronic Frontier Foundation and The Associated Press, a company called Fog Data Science has been gathering Americans' location data and selling it exclusively to local law enforcement agencies in the United States. Fog Data Science's tool—a subscription-based platform that charges clients for queries of the company's database—is called Fog Reveal. And according to Bennett Cyphers, one of the investigators who uncovered Fog Reveal through a series of public record requests, it's rather powerful. "What [Fog Data Science] sells is, I would say, like a God view mode for the world... It's a map and you draw a shape on the map and it will show you every device that was in that area during a specified timeframe." Today, on the Lock and Code podcast with host David Ruiz, we speak to Cyphers about how he and his organization uncovered a massive location data broker that seemingly works only with local law enforcement, how that broker collected Americans' data in the first place, and why that data is so easy to sell. Tune in now. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
Becky Holmes knows how to throw a romance scammer off script—simply bring up cannibalism. In January, Holmes shared on Twitter that an account with the name "Thomas Smith" had started up a random chat with her that sounded an awful lot like the beginning stages of a romance scam. But rather than instantly ignoring and blocking the advances—as Holmes recommends everyone do in these types of situations—she first had a little fun. "I was hoping that you'd let me eat a small part of you when we meet," Holmes said. "No major organs or anything obviously. I'm not weird lol." Just a few messages later, "Thomas Smith" had run off, refusing to respond to Holmes' follow-up requests about what body part she fancied, along with her preferred seasoning (paprika). Romance scams are a serious topic. In 2022, the US Federal Trade Commission reported that, in the five years prior, victims of romance scams had reported losing a collective $1.3 billion. In 2021 alone, that number was $547 million, and the average amount of money reported stolen per person was $2,400. Worse, romance scammers often target vulnerable people, including seniors, widows, and the recently divorced, and they show no remorse in developing long-lasting online relationships, all built on lies, so that they can emotionally manipulate their victims into handing over hundreds or thousands of dollars. But what would you do if you knew a romance scammer had contacted you and you, like our guest on today's Lock and Code podcast with host David Ruiz, had simply had enough? If you were Becky Holmes, you'd push back. For a couple of years now, Holmes has teased, mocked, strung along, and shut down online romance scammers, much of her work in public view as she shares some of her more exciting stories on Twitter. There's the romance scammer who she scared by not only accepting an invitation to meet, but also ratcheting up the pressure by pretending to pack her bags, buy a ticket to Stockholm, and research venues for a perhaps too-soon wedding. There's the scammer she scared off by asking to eat part of his body. And there's the story of the fake Brad Pitt: "My favorite story is Brad Pitt and the dead tumble dryer repairman. And I honestly have to say, I don't think I'm ever going to top that. Every time... I put a new tweet up, I think, oh, if only it was Brad Pitt and the dead body. I'm just never gonna get better." Tune in today to hear about Holmes' best stories, her first ever effort to push back, her insight into why she does what she does, and what you can do to spot a romance scam—and how to safely respond to one. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. And you can read our most recent report, the 2023...
When did technology last excite you? If Douglas Adams, author of The Hitchhiker's Guide to the Galaxy, is to be believed, your own excitement ended, simply had to end, after you turned 35 years old. Decades ago, in writings that were private at first and published only after his death, Adams came up with "a set of rules that describe our reactions to technologies." They were simple and short: Anything that is in the world when you're born is normal and ordinary and is just a natural part of the way the world works. Anything that's invented between when you're fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you're thirty-five is against the natural order of things. Today, on the Lock and Code podcast with host David Ruiz, we explore why technology seemingly no longer excites us. It could be because every annual product release is now just an iterative improvement on the same product released the year prior. It could be because just a handful of companies now control innovation. It could even be because technology is now fatally entangled with the business of money-making, and so, with every money-making idea, dozens of other companies flock to the same idea, giving us the same product, but with a different veneer—Snapchat recreated endlessly across the social media landscape, cable television subscriptions "disrupted" by so many streaming services that we recreate the same problem we had before. Or it could be because, as Shannon Vallor, director of the Centre for Technomoral Futures in the Edinburgh Futures Institute, first suggested, the promise of technology is not what it once was, or at least, not what we once thought it was. As Vallor wrote on Twitter last August: "There's no longer anything being promised to us by tech companies that we actually need or asked for. Just more monitoring, more nudging, more draining of our data, our time, our joy." For our first episode of Lock and Code in 2023—and our first episode of our fourth season (how time flies)—we bring back Malwarebytes Labs editor-in-chief Anna Brading and Malwarebytes Labs writer Mark Stockley to ask: Why does technology no longer excite them? Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
At the end of 2021, Lock and Code invited the folks behind our news-driven cybersecurity and online privacy blog, Malwarebytes Labs, to discuss what upset them most about cybersecurity in the year prior. Today, we're bringing those same guests back to discuss the other big topic in this space and on this show: data privacy. You see, since then, a lot has happened. Most recently, with the US Supreme Court's decision to remove the national right to choose to have an abortion, individual states have gained the power to ban abortion, which has caused countless individuals to worry about whether their data could be handed over to law enforcement for investigations into alleged criminal activity. Just months prior, we also learned about a mental health nonprofit that had taken the chat messages of at-times suicidal teenagers and then fed those messages into a separate customer support tool that was being sold to corporate customers to raise money for the nonprofit itself. And we learned about how difficult it can be to separate yourself from Google's all-encompassing, data-tracking empire. None of this is to mention more recent, separate developments: Facebook finding a way to re-introduce URL tracking, facial recognition cameras being installed in grocery stores, and Google delaying its scheduled plan to remove cookie tracking from Chrome. Today, on Lock and Code with host David Ruiz, we speak with Malwarebytes Labs editor-in-chief Anna Brading and Malwarebytes Labs writer Mark Stockley to answer one big question: Have we lost the fight to meaningfully preserve data privacy? Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
Sanctions, blockades, and their effects on the world economy. Western nations remain on alert for Russian cyber attacks. REvil prosecution has reached a dead end. Microsoft issues mitigations for a recent zero-day. John Pescatore's Mr. Security Answer Person is back, looking at authentication. Joe Carrigan looks at new browser vulnerabilities. Notes from the underworld. For links to all of today's stories check out our CyberWire daily news briefing: https://thecyberwire.com/newsletters/daily-briefing/11/104
Selected reading:
In big bid to punish Moscow, EU bans most Russia oil imports (AP NEWS)
EU, resolving a deadlock, in deal to cut most Russia oil imports (Reuters)
The E.U.'s embargo will bruise Russia's oil industry, but for now it is doing fine. (New York Times)
Russia's Black Sea Blockade Will Turbocharge the Global Food Crisis (Foreign Policy)
Russia's Invasion Unleashes ‘Perfect Storm' in Global Agriculture (Foreign Policy)
‘War in Ukraine Means Hunger in Africa' (Foreign Policy)
Afghanistan's Hungry Will Pay the Price for Putin's War (Foreign Policy)
Remote bricking of Ukrainian tractors raises agriculture security concerns (CSO Online)
Major supermarkets 'uniquely vulnerable' as Russian cyber attacks rise (ABC)
Italy warns organizations to brace for incoming DDoS attacks (BleepingComputer)
Whitepaper - PIPEDREAM: CHERNOVITE's Emerging Malware Targeting Industrial Environments (Dragos)
Experts believe that Russian Gamaredon APT could fuel a new round of DDoS attacks (IT Security News)
Putin horror warning over 'own goal' attack on UK coming back to haunt Kremlin (Express.co.uk)
Putin plot: UK hospitals at risk of chilling ‘sleeper cell' attack by Russia (Express)
Will Russia Launch a New Cyber Attack on America? (The National Interest)
Hackers wage war on Russia's largest bank (The Telegraph)
REvil prosecutions reach a 'dead end,' Russian media reports (CyberScoop)
Microsoft Office zero-day "Follina"—it's not a bug, it's a feature! (It's a bug) (Malwarebytes Labs)
Microsoft Word struck by zero-day vulnerability (Register)
Clop ransomware gang is back, hits 21 victims in a single month (BleepingComputer)
Conti ransomware explained: What you need to know about this aggressive criminal group (CSO Online)
We are only days into 2022, so what better time for a 2021 retrospective? But rather than looking at the biggest cyberattacks of last year—which we already did—or the most surprising—like we did a couple of years ago—we wanted to offer something different for readers and listeners. On today's episode of Lock and Code, with host David Ruiz, we spoke with Malwarebytes Labs editor-in-chief Anna Brading and Labs writer Mark Stockley about what upset them the most about cybersecurity in 2021.
The CyberWire's UK correspondent Carole Theriault returns to share an interview with Geoff White, a BBC reporter and co-host of The Lazarus Heist podcast; Joe has some listener follow-up from Mike, who is looking for advice on certifications for getting into cybersecurity; Dave's story is from Brian Krebs, about catching an ATM shimmer gang; Joe's got a piece from Malwarebytes Labs about phishing for Bitcoin wallet recovery codes; and our Catch of the Day is from listener Rohit, with a pretty genuine-looking snail mail scam. Links to stories:
How Cyber Sleuths Cracked an ATM Shimmer Gang
Bitcoin scammers phish for wallet recovery codes on Twitter
Have a Catch of the Day you'd like to share? Email it to us at hackinghumans@thecyberwire.com or hit us up on Twitter.
This week on Lock and Code, we tune in to a special presentation from Adam Kujawa about the 2021 State of Malware report, which analyzed the top cybercrime goals of 2020 amidst the global pandemic. If you just pay attention to the numbers from last year, you might get the wrong idea. After all, malware detections for both consumers and businesses decreased in 2020 compared to 2019. That sounds like good news, but it wasn't. Behind those lowered numbers were more skillful, more precise attacks that derailed major corporations, hospitals, and schools with record-setting ransom demands. You can read the full 2021 State of Malware report here, and you can follow along with everyday cybersecurity coverage from Malwarebytes Labs here.
Guest Hossein Jazi of Malwarebytes joins us to take a deep dive into North Korea's APT37 (aka ScarCruft, Reaper, and Group123) toolkit. On December 7, 2020, the Malwarebytes Labs threat team identified a malicious document uploaded to VirusTotal that purported to be a meeting request, likely used to target the government of South Korea. The meeting date mentioned in the document was January 23, 2020, which aligns with the document's compilation time of January 27, 2020, indicating that this attack took place almost a year ago. The file contains an embedded macro that uses a VBA self-decoding technique to decode itself within the memory space of Microsoft Office without writing to disk. It then injects a variant of the RokRat remote access trojan into Notepad. Based on the injected payload, the Malwarebytes team believes that this sample is associated with APT37. This North Korean group is also known as ScarCruft, Reaper, and Group123 and has been active since at least 2012, primarily targeting victims in South Korea. The research can be found here: Retrohunting APT37: North Korean APT used VBA self decode technique to inject RokRat
Ask yourself, right now, on a scale from one to ten, how cybersecure are you? Are you maybe inflating that answer? Our main story today concerns “security hubris,” the simple yet difficult-to-measure phenomenon in which businesses, and the people inside them, are less secure than they believe. To better understand security hubris—how businesses can identify it and what they can do to protect against it—we’re talking today to Adam Kujawa, security evangelist and director of Malwarebytes Labs.
In Episode S2E7 we have a delightful conversation with Adam Kujawa, Director of Malwarebytes Labs. Adam talks about Malwarebytes' insightful new report, released on August 20, 2020. This report, titled "Enduring from Home: COVID-19's Impact on Business Security," combines Malwarebytes telemetry with survey results from 200 IT and cybersecurity decision-makers, from small businesses to large enterprises, to unearth new security concerns in remote work environments. You won't want to miss this episode, as Adam lays out some of the more interesting findings from this important report.
With shelter-in-place orders now in full effect to prevent the spread of coronavirus, countless businesses find themselves this year in mandatory work-from-home situations. To break down today’s enterprise threats—and our own responses at Malwarebytes—we’re talking today to John Donovan, head of security for Malwarebytes, and Adam Kujawa, director for Malwarebytes Labs.
Today, our data can leave our hands and end up in the databases of countless companies, many of which we've never heard of, packaging and selling our data for reasons we could never imagine. To better understand how to protect ourselves online, we're talking to Adam Kujawa, a director of Malwarebytes Labs.
Hello there! It is with great pleasure that we join you today, as we do every Friday, to present our weekly program Canadá en las Américas Café, also known by its alias El Castor Cibernético, for this Friday, February 14, broadcast live on Facebook Live, YouTube, our site rcinet.ca, and our app. Today, on Valentine's Day, Luis Laborda and Leonardo Gimeno are with me in the studio. We also have two guests: Marie-Christine Doran and Ricardo Peñafiel, members of a Canadian observer delegation on human rights violations in Chile. The delegation was made up of nine people from various specializations and fields of intervention: members of the Quebec and Canadian parliaments, union and civil society representatives, and researchers. A warm greeting to all of you watching and listening on the other side of the camera and the microphones, and many thanks for your pleasant company! OUR GUESTS TODAY: MARIE-CHRISTINE DORAN and RICARDO PEÑAFIEL. MARIE-CHRISTINE DORAN is a professor and academic researcher at the University of Ottawa's School of Political Studies and director of the same university's Research Centre on the Criminalization of Social Protest. RICARDO PEÑAFIEL is a professor and researcher in the Department of Political Science at the Université du Québec à Montréal (UQÀM); within this delegation, he served as union representative of the Montreal Metropolitan Central Council of the CSN trade union confederation and of the National Federation of Teachers of Quebec. In the coming weeks, the delegation will draft a more detailed report that puts into perspective and deepens the issues addressed during this mission, which included 65 hours of interviews in Santiago, Antofagasta, and Valparaíso between January 18 and 26, 2020, with 99 people and 51 organizations. LISTEN TO THE PROGRAM: ES_Canada_en_las_Americas-20200214-WES15. WATCH THE PROGRAM: https://www.youtube.com/watch?v=MYAss8oI-0s. TOPICS WE HIGHLIGHT THIS WEEK: Luis Laborda discusses the expansion of virtual health services in Canada. A working group on virtual health services presented a report recommending that remote and online care be extended across all of Canada. According to the report's authors, virtual medical care means "any interaction between patients, between the people involved in their care, or between members of both groups, that takes place remotely, using some form of information technology or virtual communications, with the goal of improving or optimizing the quality and effectiveness of patient care." Leonardo Gimeno discusses a new report on malicious programs, or "malware." Computer security tradition has historically had one winner: the company with the apple logo. According to a large share of specialists, and above all of users, Mac computers are more secure than PCs.
Experts say this belief stems from the design of the operating system running on each machine: macOS is built on a Unix foundation, an operating system created in 1969 by Bell Labs, part of the AT&T group, which "would be more secure" from the standpoint of system architecture. A new Malwarebytes Labs report from February 2020 challenges that long-standing logic, asserting that Mac computers are more vulnerable to malware than PCs. And I, for my part, invite you to listen to a special program with three of RCI's partner radio stations to commemorate World Radio Day...
Is it possible to hide your tracks online? Is it even worth the effort to try? How do you know which companies, products, and services you can trust? Is government regulation the answer? We'll address all of these questions today in part 2 of my interview with David Ruiz. David will give you several great resources for getting more informed and also for getting more involved in the fight for privacy. David Ruiz is a pro-privacy, pro-security writer for Malwarebytes Labs, where he covers online privacy, legislation, and the interplay between technology and the law. Further Info:
Who Has Your Back? https://www.eff.org/who-has-your-back-2018
Privacy Not Included: https://foundation.mozilla.org/en/privacynotincluded/
Terms of Service; Didn't Read: https://tosdr.org/
Malwarebytes poll on privacy: https://blog.malwarebytes.com/security-world/2019/03/labs-survey-finds-privacy-concerns-distrust-of-social-media-rampant-with-all-age-groups/
Top 6 Takeaways from poll: https://blog.malwarebytes.com/101/2019/05/the-top-six-takeaways-for-user-privacy/
Help me to help you! https://www.patreon.com/FirewallsDontStopDragons
In January of this year, Malwarebytes (a world-class antivirus software maker) conducted a massive poll on privacy that included 4,000 people from 66 different countries. On today's show, I will delve into the key takeaways from this poll and some rather (pleasantly) surprising results. (Tune in next week for part 2.) David Ruiz is a pro-privacy, pro-security writer for Malwarebytes Labs, where he covers online privacy, legislation, and the interplay between technology and the law. Further Info:
Malwarebytes poll on privacy: https://blog.malwarebytes.com/security-world/2019/03/labs-survey-finds-privacy-concerns-distrust-of-social-media-rampant-with-all-age-groups/
Top 6 Takeaways from poll: https://blog.malwarebytes.com/101/2019/05/the-top-six-takeaways-for-user-privacy/
Adam Kujawa is the Director of Malwarebytes Labs.
Jovi Umawing is a malware intelligence analyst at Malwarebytes Labs where she researches and blogs about online threats, scammers, email spam, phishing, and social media and gaming threats.