There's a problem in class today, and the second largest school district in the United States is trying to solve it. After looking at the growing body of research that has associated increased smartphone and social media usage with increased levels of anxiety, depression, suicidal thoughts, and isolation—especially amongst adolescents and teenagers—Los Angeles Unified School District (LAUSD) implemented a cellphone ban across its 1,000 schools for its more than 500,000 students. Under the ban, students from kindergarten through high school cannot use cellphones, smartphones, smart watches, earbuds, smart glasses, or any other electronic device that can send messages, receive calls, or browse the internet. Phones are not allowed at lunch or during passing periods between classes, and individual schools decide how students' phones are stored, be that in lockers, in magnetically sealed pouches, or simply placed into sleeves at the front door of every classroom, away from students' reach. The ban was approved through what is called a “resolution,” which the LAUSD board voted on last year. LAUSD Board Member Nick Melvoin, who sponsored the resolution, said the overall ban was the right decision to help students: “The research is clear: widespread use of smartphones and social media by kids and adolescents is harmful to their mental health, distracts from learning, and stifles meaningful in-person interaction.” Today, on the Lock and Code podcast with host David Ruiz, we speak with LAUSD Board Member Nick Melvoin about the smartphone ban, how exceptions were determined, where opposition arose, and whether it is “working.” Melvoin also speaks about the biggest changes he has seen in the first few months of the cellphone ban, especially the simple reintroduction of noise in hallways. “[During a school visit last year,] every single kid was on their phone, every single kid. They were standing there looking, texting again, sometimes texting someone who was within a few feet of them, and it was quiet.” Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/). Outro Music: “Good God” by Wowa (unminus.com). Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
“Heidi” is a 36-year-old, San Francisco-born, divorced activist who is lonely, outspoken, and active on social media. “Jason” is a shy, bilingual teenager who likes anime, gaming, comic books, and hiking, and whose parents immigrated from Ecuador. Neither of them is real. Both are supposed to fight crime. Heidi and Jason are examples of “AI personas” that are being pitched by the company Massive Blue for its lead product, Overwatch. Already in use at police departments across the United States, Overwatch can allegedly help with the identification, investigation, and arrest of criminal suspects. Understanding exactly how the technology works, however, is difficult—both Massive Blue and the police departments that have paid Massive Blue have remained rather secretive about Overwatch's inner workings. But, according to an investigation last month by 404 Media, Overwatch is a mix of a few currently available technologies packaged into one software suite. Overwatch can scan social media sites for alleged criminal activity, and it can deploy “AI personas”—which have their own social media accounts and AI-generated profile pictures—to gather intelligence by chatting online with suspected criminals. According to an Overwatch marketing deck obtained by 404 Media, the software's AI personas are “highly customizable and immediately deployable across all digital channels” and can take on the personalities of escorts, money launderers, sextortionists, and college protesters (who, in real life, engage in activity protected by the First Amendment). Despite the variety of applications, 404 Media revealed that Overwatch has sparked interest from police departments investigating immigration and human trafficking. But the success rate, so far, is non-existent: Overwatch has reportedly not been used in the arrest of a single criminal suspect. Today, on the Lock and Code podcast with host David Ruiz, we speak with 404 Media journalists and co-founders Emanuel Maiberg and Jason Koebler about Overwatch's capabilities, why police departments are attracted to the technology, and why the murkiness around human trafficking may actually invite unproven solutions like AI chatbots. “Nobody is going to buy that—that if you throw an AI chatbot into the mix, that's somehow going to reduce gun crime in America,” Maiberg said. “But if you apply it to human trafficking, maybe somebody is willing to entertain that because, well, what is the actual problem with human trafficking? Where is it actually happening? Who is getting hurt by it? Who is actually committing it?” He continued: “Maybe there you're willing to entertain a high-tech science fiction solution.” Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/). Outro Music: “Good God” by Wowa (unminus.com). Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
If you don't know about the newly created US Department of Government Efficiency (DOGE), there's a strong chance it already knows about you. Created on January 20 by US President Donald Trump through executive order, DOGE's broad mandate is “modernizing Federal technology and software to maximize governmental efficiency and productivity.” To fulfill its mission, though, DOGE has taken great interest in Americans' data. On February 1, DOGE team members without the necessary security clearances accessed classified information belonging to the US Agency for International Development. On February 17, multiple outlets reported that DOGE sought access to IRS data that includes names, addresses, Social Security numbers, income, net worth, bank information for direct deposits, and bankruptcy history. The next day, the commissioner of the Social Security Administration stepped down after DOGE requested access to information stored there, too, which includes records of lifetime wages and earnings, Social Security and bank account numbers, the type and amount of benefits individuals received, citizenship status, and disability and medical information. And last month, one US resident filed a data breach notification report with his state's Attorney General alleging that his data was breached by DOGE and the man behind it, Elon Musk. In speaking with the news outlet DataBreaches.net, the man, Kevin Couture, said: “I filed the report with my state Attorney General against Elon Musk stating my privacy rights were violated as my Social Security Number, banking info was compromised by accessing government systems and downloading the info without my consent or knowledge. What other information did he gather on me or others? This is wrong and illegal. I have no idea who has my information now.” Today, on the Lock and Code podcast with host David Ruiz, we speak with Sydney Saubestre, senior policy analyst at New America's Open Technology Institute, about what data DOGE has accessed, why the government department is claiming it requires that access, and whether or not it is fair to call some of this access a “data breach.” “[DOGE] haven't been able to articulate why they want access to some of these data files other than broad ‘waste, fraud, and abuse.' That, ethically, to me, points to it being a data breach.” Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/). Outro Music: “Good God” by Wowa (unminus.com). Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
It has probably happened to you before. You and a friend are talking—not texting, not DMing, not FaceTiming—but talking, physically face-to-face, about, say, an upcoming vacation, a new music festival, or a job offer you just got. And then, that same week, you start noticing some eerily specific ads. There's the Instagram ad about carry-on luggage, the TikTok ad about earplugs, and the countless ads you encounter simply scrolling through the internet about laptop bags. And so you think, “Is my phone listening to me?” This question has been around for years and, today, it's far from a conspiracy theory. Modern smartphones can and do listen to users for voice searches, smart assistant integration, and, obviously, phone calls. It's not too outlandish to believe, then, that the microphones on smartphones could be used to listen to other conversations without users knowing about it. Recent news stories don't help, either. In January, Apple agreed to pay $95 million to settle a lawsuit alleging that the company had eavesdropped on users' conversations through its smart assistant Siri, and that it shared the recorded conversations with marketers for ad targeting. The lead plaintiff in the case specifically claimed that she and her daughter were recorded without their consent, which resulted in them receiving multiple ads for Air Jordans. In agreeing to pay the settlement, though, Apple denied any wrongdoing, with a spokesperson telling the BBC: “Siri data has never been used to build marketing profiles and it has never been sold to anyone for any purpose.” But statements like this have done little to ease public anxiety. Tech companies have been caught in multiple lies in the past, privacy invasions happen thousands of times a day, and ad targeting feels extreme entirely because it is. Where, then, does the truth lie? Today, on the Lock and Code podcast with host David Ruiz, we speak with Electronic Frontier Foundation Staff Technologist Lena Cohen about the most mind-boggling forms of corporate surveillance—including an experimental ad-tracking technology that emitted ultrasonic sound waves—specific audience segments that marketing companies make when targeting people with ads, and, of course, whether our phones are really listening to us. “Companies are collecting so much information about us and in such covert ways that it really feels like they're listening to us.” Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/). Outro Music: “Good God” by Wowa (unminus.com). Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
Google Chrome is, by far, the most popular web browser in the world. According to several metrics, Chrome accounts for anywhere between 52% and 66% of the current global market share for web browser use. At that higher estimate, if the 5.5 billion internet users around the world were to open up a web browser right now, 3.6 billion of them would open up Google Chrome. And because the browser is the most common portal to our daily universe of online activity—searching for answers to questions, looking up recipes, applying for jobs, posting on forums, accessing cloud applications, reading the news, comparing prices, recording Lock and Code, buying concert tickets, signing up for newsletters—the company that controls that browser likely knows a lot about its users. In the case of Google Chrome, that's entirely true. Google Chrome knows the websites you visit, the searches you make (through Google), the links you click, and the device model you use, along with the version of Chrome you run. That may sound benign, but when collected over long periods of time, and when coupled with the mountains of data that other Google products collect about you, this wealth of data can paint a deeply intimate portrait of your life. Today, on the Lock and Code podcast with host David Ruiz, we speak with author, podcast host, and privacy advocate Carey Parker about what Google Chrome knows about you, why that data is sensitive, what “Incognito mode” really does, and what you can do in response. We also explain exactly why Google would want this data, and that's to help it run as an ad company. “That's what [Google is]. Full stop. Google is an ad company who just happens to make a web browser, and a search engine, and an email app, and a whole lot more than that.” Tune in today. You can also listen to "Firewalls Don't Stop Dragons," the podcast hosted by Carey Parker, here: https://firewallsdontstopdragons.com/ You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/). Outro Music: “Good God” by Wowa (unminus.com). Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
Insurance pricing in America makes a lot of sense, so long as you're one of the insurance companies. Drivers are charged more for traveling long distances, having low credit, owning a two-seater instead of a four-seater, being on the receiving end of a car crash, and—increasingly—for any number of non-determinative data points that insurance companies use to assume higher risk. It's a pricing model that most people find distasteful, but it's also a pricing model that could become the norm if companies across the world begin implementing something called “surveillance pricing.” Surveillance pricing is the term used to describe companies charging people different prices for the exact same goods. That 50-inch TV could be $800 for one person and $700 for someone else, even though the same model was bought from the same retail location on the exact same day. Or, airline tickets could be more expensive because they were purchased from a more expensive device—like a Mac laptop—and the company selling the airline ticket has decided that people with pricier computers can afford pricier tickets. Surveillance pricing is only possible because companies can collect enormous arrays of data about their consumers and then use that data to charge individual prices. A test prep company was once caught charging customers more if they lived in a neighborhood with a higher concentration of Asian residents, and a retail company was caught charging customers more if they were looking at prices on the company's app while physically located in a store's parking lot. This matter of data privacy isn't some invisible invasion online, and it isn't some esoteric framework of ad targeting; this is you paying the most that a company believes you will, for everything you buy. And it's happening right now. Today, on the Lock and Code podcast with host David Ruiz, we speak with Consumer Watchdog Tech Privacy Advocate Justin Kloczko about where surveillance pricing is happening, what data is being used to determine prices, and why the practice is so nefarious. “It's not like we're all walking into a Starbucks and we're seeing 12 different prices for a venti mocha latte,” said Kloczko, who recently authored a report on the same subject. “If that were the case, it'd be mayhem. There'd be a revolution.” Instead, Kloczko said: “Because we're all buried in our own devices—and this is really happening on e-commerce websites and online, on your iPad, on your phone—you're kind of siloed in your own world, and companies can get away with this.” Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/). Outro Music: “Good God” by Wowa (unminus.com). Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
✨In this episode✨ I try to name, for every letter of the alphabet, a second form that exists in WoWa. If you know any more, please write in! - ✨Info✨ Exactly 155 episodes ago, I already did this once with characters
It's Data Privacy Week right now, and that means, for the most part, that you're going to see a lot of well-intentioned but clumsy information online about how to protect your data privacy. You'll see articles about iPhone settings. You'll hear acronyms for varying state laws. And you'll probably see ads for a variety of apps, plug-ins, and online tools that can be difficult to navigate. So much of Malwarebytes—from Malwarebytes Labs, to the Lock and Code podcast, to the engineers, lawyers, and staff at large—works on data privacy, and we fault no advocate or technologist or policy expert trying to earnestly inform the public about the importance of data privacy. But, even with good intentions, we cannot ignore the reality of the situation: data breaches happen every day, user data is broadly disrespected, and some of the worst offenders face no consequences. To be truly effective against these forces, data privacy guidance has to encompass more than fiddling with device settings or making onerous legal requests to companies. That's why, for Data Privacy Week this year, we're offering three pieces of advice that center on behavior. These changes won't stop some of the worst invasions against your privacy, but we hope they provide a new framework to understand what you actually get when you practice data privacy, which is control. You have control over who sees where you are and what inferences they make from that. You have control over whether you continue using products that don't respect your data privacy. And you have control over whether a fast food app is worth giving up your location data to in exchange for a few measly coupons. Today, on the Lock and Code podcast, host David Ruiz explores his three rules for data privacy in 2025. In short, he recommends: Less location sharing: only when you want it, only from those you trust, and never in the background, 24/7, for your apps. More accountability: if companies can't respect your data, respect yourself by dropping their products. No more data deals: that fast-food app offers more than just $4 off a combo meal; it creates a pipeline into your behavioral data. Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/). Outro Music: “Good God” by Wowa (unminus.com). Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
Privacy is many things for many people. For the teenager suffering from a bad breakup, privacy is the ability to stop sharing her location and to block her ex on social media. For the political dissident advocating against an oppressive government, privacy is the protection that comes from secure, digital communications. And for the California resident who wants to know exactly how they're being included in so many targeted ads, privacy is the legal right to ask a marketing firm how it collects their data. In all these situations, privacy is being provided to a person, often by a company or that company's employees. The decisions to disallow location sharing and block social media users are made—and implemented—by people. The engineering that goes into building a secure, end-to-end encrypted messaging platform is done by people. Likewise, the response to someone's legal request is completed by either a lawyer, a paralegal, or someone with a career in compliance. In other words, privacy, for the people who spend their days at these companies, is work. It's their expertise, their career, and their to-do list. But what does that work actually entail? Today, on the Lock and Code podcast with host David Ruiz, we speak with Transcend Field Chief Privacy Officer Ron de Jesus about the responsibilities of privacy professionals today and how experts balance the privacy of users with the goals of their companies. De Jesus also explains how everyday people can meaningfully judge whether a company's privacy “promises” have any merit by looking into what the companies provide, including a legible privacy policy and “just-in-time” notifications that ask for consent for any data collection as it happens. “When companies provide these really easy-to-use controls around my personal information, that's a really great trigger for me to say, hey, this company, really, is putting their money where their mouth is.” Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/). Outro Music: “Good God” by Wowa (unminus.com). Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
A new report from Wowa.ca ranked Vancouver as the MOST UNAFFORDABLE housing market in North America. The report is based on median household income compared to average home prices. We are no strangers to the issues that plague Vancouver, and in this episode, Cameron breaks down the issues with income, housing policy, and the immigration surge that have led to Vancouver continuing to top the charts (and not in a good way). Subscribe to the podcast on YouTube: youtube.com/@manninguponrealestate?sub_confirmation=1 Watch this episode via YouTube: https://youtu.be/_tz3FkzcR_E Connect with Cameron through his socials: https://hoo.be/cameronemanning Cameron Manning is a TOP 1% agent with the @kellyfryteam in the Greater Vancouver market with a focus on real estate investors! --- Support this podcast: https://podcasters.spotify.com/pod/show/cameronmanning/support
It's December and time for our last monthly Toronto and Vancouver real estate roundtable of the calendar year with my co-host John Pasalis in Toronto and Steve Saretsky in Vancouver. This week, John and Steve explain why a jump in YOY sales numbers obscures the reality of a sluggish market with some segments performing better than others, and how Vancouver and Toronto have come to top the list of the 25 most expensive cities in the US and Canada, ahead of LA, San Francisco, and New York. We ended with a lively debate about whether large corporations should be allowed to buy homes to rent out, as Canada's Finance Minister Chrystia Freeland calls for public submissions on this issue. Today's Links: Link to Wowa list of housing expenses in the largest 25 cities in the US & Canada: https://x.com/SteveSaretsky/status/1862941518144983313 Link to John and Steve's Twitter exchange re: large corporations buying homes: https://x.com/JohnPasalis/status/1859682242366406986 Contact Us: Follow Steve on X-Twitter @SteveSaretsky; Email at: steve@stevesaretsky.com Follow John on X-Twitter @JohnPasalis; Email at: askjohn@movesmartly.com Follow Urmi on X-Twitter @MoveSmartly; Email at: editor@movesmartly.com About This Roundtable Series: Each month, MoveSmartly.com editor Urmi Desai talks to John Pasalis, Housing Analyst, Broker and President of Realosophy Realty in Toronto, and Steve Saretsky, Housing Analyst and Realtor at the Oakwin Realty Group in Vancouver, about the latest data and on-the-ground insights in Canada's biggest residential RE markets. (Thanks to Jesse Bains, now at LinkedIn News, for kicking off this series at Yahoo Finance originally!) About This Show: The Move Smartly show is co-hosted by Urmi Desai, Editor of Move Smartly, and John Pasalis, President and Broker of Realosophy Realty. MoveSmartly.com and its media channels on YouTube and various podcast platforms are powered by Realosophy Realty in Toronto, Canada. You can also watch this episode on our MoveSmartly YouTube channel here: https://www.youtube.com/movesmartly If you enjoy our show and find it useful, please like, subscribe, share, review and comment on whatever platform you are watching or listening to us from - we appreciate your support!
This month, a consumer rights group out of the UK posed a question to the public that they'd likely never considered: Were their air fryers spying on them? By analyzing the associated Android apps for three separate air fryer models from three different companies, a group of researchers learned that these kitchen devices didn't just promise to make crispier mozzarella sticks, crunchier chicken wings, and flakier reheated pastries—they also wanted a lot of user data, from precise location to voice recordings from a user's phone. “In the air fryer category, as well as knowing customers' precise location, all three products wanted permission to record audio on the user's phone, for no specified reason,” the group wrote in its findings. While it may be easy to discount the data collection requests of an air fryer app, it is getting harder to buy any type of product today that doesn't connect to the internet, request your data, or share that data with unknown companies and contractors across the world. Today, on the Lock and Code podcast, host David Ruiz tells three separate stories about consumer devices that somewhat invisibly collected user data and then spread it in unexpected ways. These include kitchen appliances that sent data to China, a smart ring maker that published de-identified, aggregate data about the stress levels of its users, and a smart vacuum that recorded a sensitive image of a woman that was later shared on Facebook. These stories aren't about mass government surveillance, and they're not about spying, or the targeting of political dissidents. Their intrigue is elsewhere, in how common it is for what we say, where we go, and how we feel, to be collected and analyzed in ways we never anticipated. Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/). Outro Music: “Good God” by Wowa (unminus.com). Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
✨In this episode✨ Guess who went and watched the movie✨ - Me
✨In this episode✨ Lou and I talk about the movie and compare our opinions, since she only knows the movie and I also know the books. - ✨Info✨ Alternative episode titles:
✨In this episode✨ Finally, the rest of the parody
The US presidential election is upon the American public, and with it come fears of “election interference.” But “election interference” is a broad term. It can mean the now-regular and expected foreign disinformation campaigns that are launched to sow political discord or to erode trust in American democracy. It can include domestic campaigns to disenfranchise voters in battleground states. And it can include the upsetting and increasing threats made to election officials and volunteers across the country. But there's an even broader category of election interference that is of particular importance to this podcast, and that's cybersecurity. Elections in the United States rely on a dizzying number of technologies. There are the voting machines themselves, there are electronic pollbooks that check voters in, and there are optical scanners that tabulate the votes that the American public actually makes when filling in an oval bubble with pen, or connecting an arrow with a solid line. And none of that is to mention the infrastructure that campaigns rely on every day to get information out—across websites, through emails, in text messages, and more. That interlocking complexity is only multiplied when you remember that each individual state has its own way of complying with the Federal government's rules and standards for running an election. As Cait Conley, Senior Advisor to the Director of the US Cybersecurity and Infrastructure Security Agency (CISA), explains in today's episode: “There's a common saying in the election space: If you've seen one state's election, you've seen one state's election.” How, then, are elections secured in the United States, and what threats does CISA defend against? Today, on the Lock and Code podcast with host David Ruiz, we speak with Conley about how CISA prepares and trains election officials and volunteers before the big day, whether or not an American's vote can be “hacked,” and what the country is facing in the final days before an election, particularly from foreign adversaries that want to destabilize American trust. “There's a pretty good chance that you're going to see Russia, Iran, or China try to claim that a distributed denial of service attack or a ransomware attack against a county is somehow going to impact the security or integrity of your vote. And it's not true.” Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/). Outro Music: “Good God” by Wowa (unminus.com). Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
✨In this episode✨ I draw WoWa characters and say whether I would get along with them, whether we would be enemies, or even best friends
✨In this episode✨ Today I read aloud a parody from Wattpad - feel free to write whether you still want to hear the last 5 chapters
On the internet, you can be shown an online ad because of your age, your address, your purchase history, your politics, your religion, and even your likelihood of having cancer. This is because of the largely unchecked “data broker” industry. Data brokers are analytics and marketing companies that collect every conceivable data point that exists about you, packaging it all into profiles that other companies use when deciding who should see their advertisements. Have a new mortgage? There are data brokers that collect that information and then sell it to advertisers who believe new homeowners are the perfect demographic to purchase, say, furniture, dining sets, or other home goods. Bought a new car? There are data brokers that collect all sorts of driving information directly from car manufacturers—including the direction you're driving, your car's gas tank status, its speed, and its location—because some unknown data model said somewhere that, perhaps, car drivers in certain states who are prone to speeding might be more likely to buy one type of product compared to another. This is just a glimpse of what is happening to essentially every single adult who uses the internet today. So much of the information that people would never divulge to a stranger—like their addresses, phone numbers, criminal records, and mortgage payments—is collected away from view by thousands of data brokers. And while these companies know so much about people, the public at large likely knows very little in return. Today, on the Lock and Code podcast with host David Ruiz, we speak with Cody Venzke, senior policy counsel with the ACLU, about how data brokers collect their information, what data points are off-limits (if any), and how people can protect their sensitive information, along with the harms that come from unchecked data broker activity—beyond just targeted advertising. “We're seeing data that's been purchased from data brokers used to make decisions about who gets a house, who gets an employment opportunity, who is offered credit, who is considered for admission into a university.” Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/). Outro Music: “Good God” by Wowa (unminus.com). Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
Online scammers were seen this August stooping to a new low—abusing local funerals to steal from bereaved family and friends. Cybercrime has never been a job of morals (calling it a “job” is already lending it too much credit), but, for many years, scams wavered between clever and brusque. Take the “Nigerian prince” email scam, which has plagued victims for close to two decades. In it, would-be victims would receive a mysterious, unwanted message from alleged royalty, and, in exchange for a little help in moving funds across international borders, would be handsomely rewarded. The scam was preposterous but effective—in fact, in 2019, CNBC reported that this very same “Nigerian prince” scam campaign resulted in $700,000 in losses for victims in the United States. Since then, scams have evolved dramatically. Cybercriminals today will send deceptive emails claiming to come from Netflix, or Google, or Uber, tricking victims into “resetting” their passwords. Cybercriminals will leverage global crises, like the COVID-19 pandemic, and send fraudulent requests for donations to nonprofits and hospital funds. And, time and again, cybercriminals will find a way to play on our emotions—be they fear, or urgency, or even affection—to lure us into unsafe places online. This summer, Malwarebytes social media manager Zach Hinkle encountered one such scam, and it happened while attending a funeral for a friend. In a campaign that Malwarebytes Labs is calling the “Facebook funeral live stream scam,” attendees at real funerals are being tricked into potentially signing up for a “live stream” service of the funerals they just attended. Today on the Lock and Code podcast with host David Ruiz, we speak with Hinkle and Malwarebytes security researcher Pieter Arntz about the Facebook funeral live stream scam, what potential victims have to watch out for, and how cybercriminals are targeting actual, grieving family members with such foul deceit. Hinkle also describes what he felt in the moment of trying to not only take the scam down, but to protect his friends from falling for it. “You're grieving… and you go through a service and you're feeling all these emotions, and then the emotion you feel is anger because someone is trying to take advantage of friends and loved ones of somebody who has just died. That's so appalling.” Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/). Outro Music: “Good God” by Wowa (unminus.com). Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
✨In this episode✨ Today I tell you how I got into Woodwalkers - with one difficulty
Every age group uses the internet a little bit differently, and it turns out that for at least one Gen Z teen in the Bay Area, the classic approach to cybersecurity—defending against viruses, ransomware, worms, and more—is the least of her concerns. Of far more importance is Artificial Intelligence (AI). Today, the Lock and Code podcast with host David Ruiz revisits a prior episode from 2023 about what teenagers fear the most about going online. The conversation is a strong reminder that what America's youngest generations experience online is far from the same experience that Millennials, Gen X'ers, and Baby Boomers had with their own introduction to the internet. Even stronger proof of this is found in recent research that Malwarebytes debuted this summer about how people in committed relationships share their locations, passwords, and devices with one another. As detailed in the larger report, “What's mine is yours: How couples share an all-access pass to their digital lives,” Gen Z respondents were the most likely to say that they got a feeling of safety when sharing their locations with significant others. But a wrinkle appeared in that behavior, according to the same research: Gen Z was also the most likely to say that they only shared their locations because their partners forced them to do so. In our full conversation from last year, we speak with Nitya Sharma about how her “favorite app” to use with friends is “Find My” on iPhone, what the dangers are of AI “sneak attacks,” and why she simply cannot be bothered about malware. “I know that there's a threat of sharing information with bad people and then abusing it, but I just don't know what you would do with it. Show up to my house and try to kill me?” Tune in today to listen to the full conversation. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/). Outro Music: “Good God” by Wowa (unminus.com). Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
Somewhere out there is a romantic AI chatbot that wants to know everything about you. But in a revealing overlap, other AI tools—which are developed and popularized by far larger companies in technology—could crave the very same thing. For AI tools of any type, our data is key. In the nearly two years since OpenAI unveiled ChatGPT to the public, the biggest names in technology have raced to compete. Meta announced Llama. Google revealed Gemini. And Microsoft debuted Copilot. All these AI features function in similar ways: After having been trained on mountains of text, videos, images, and more, these tools answer users' questions in immediate and contextually relevant ways. Perhaps that means taking a popular recipe and making it vegetarian friendly. Or maybe that involves developing a workout routine for someone who is recovering from a new knee injury. Whatever the ask, the more data that an AI tool has already digested, the better it can deliver answers. Interestingly, romantic AI chatbots operate in almost the same way, as the more information that a user gives about themselves, the more intimate and personal the AI chatbot's responses can appear. But where any part of our online world demands more data, questions around privacy arise. Today, on the Lock and Code podcast with host David Ruiz, we speak with Zoë MacDonald, content creator for Privacy Not Included at Mozilla, about romantic AI tools and how users can protect their privacy from ChatGPT and other AI chatbots. When in doubt, MacDonald said, stick to a simple rule: “I would suggest that people don't share their personal information with an AI chatbot.” Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/). Outro Music: “Good God” by Wowa (unminus.com). Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
Wasim Jarrah joins us to take a look back at Tim Hudak's accomplishments at OREA. The industry is a better place because of the efforts of Mr Hudak. We are joined by WOWA to discuss their new platform. Tune in.
Since summer is often a chance to catch up on overdue listening and devour long programs, Matilde Meslin has selected ten podcast series to binge, alone or with family. Testimonies, investigations, readings, fictions: there is something for every taste, provided you have a little time on your hands. Matilde Meslin presents her recommendations of podcasts to discover on vacation in this episode, which closes the third (and final) season of Sans Algo. The recommended podcasts are: Destin: Julia Sedefdjian, by Anaïs Koopman for NRJ; Détective 80, by Jehanne Cretin-Maitenaz and Anna Buy for Arte Radio; Marseille is so fucking cool, by Amélie Perrot and Clément Aadli; Gouinistan & co: Le royaume des exs, by Christine Gonzalez and Aurélie Cuttat for RTS; À nos amis, à nos amants, produced by LACME Production; Code source: 80 ans de la libération de Paris, by Raphaël Pueyo for Le Parisien; Musicopolis: Napoléon Bonaparte, la bande son d'une vie, by Anne-Charlotte Rémond for France Musique; La gloire de mon père, on France Culture; Quand est-ce qu'on arrive?, by Radio Vinci Autoroutes; Cold Cases: le combat des familles, by the police-justice desk of Franceinfo. Going forward, you can find Matilde Meslin's listening recommendations in written form in her newsletter “Sans Algo,” available for free. It is sent out on Fridays, every other week, like this podcast. Sans Algo is a podcast by Matilde Meslin produced by Slate Podcasts. Editorial direction: Christophe Carron. Editorial production: Nina Pareja. Editing and production: Aurélie Rodrigues. Music: “Hangtime,” Unminus & Wowa
More than 20 years ago, a law that the United States would eventually use to justify the warrantless collection of Americans' phone call records actually started out as a warning sign against an entirely different target: libraries. Not two months after terrorists attacked the United States on September 11, 2001, Congress responded with the passage of the USA Patriot Act. Originally championed as a tool to fight terrorism, the Patriot Act, as introduced, allowed the FBI to request “any tangible things” from businesses, organizations, and people during investigations into alleged terrorist activity. Those “tangible things,” the law said, included “books, records, papers, documents, and other items.” Or, to put it a different way: things you'd find in a library and records of the things you'd check out from a library. The concern around this language was so strong that this section of the USA Patriot Act got a new moniker amongst the public: “the library provision.” The Patriot Act passed, and years later, the public was told that, all along, the US government wasn't interested in library records. But those government assurances are old. What remains true is that libraries and librarians want to maintain the privacy of your records. And what also remains true is that the government looks anywhere it can for information to aid investigations into national security, terrorism, human trafficking, illegal immigration, and more. What's changed, however, is that companies that libraries have relied on for published materials and collections—Thomson Reuters, Reed Elsevier, LexisNexis—have reimagined themselves as big data companies. And they've lined up to provide newly collected data to the government, particularly to agencies like Immigration and Customs Enforcement, or ICE. There are many layers to this data web, and libraries are seemingly stuck in the middle. Today, on the Lock and Code podcast with host David Ruiz, we speak with Sarah Lamdan, deputy director of the Office for Intellectual Freedom at the American Library Association, about library privacy in the digital age, whether police are legitimately interested in what the public is reading, and how a small number of major publishing companies suddenly started aiding the work of government surveillance: “Because to me, these companies were information providers. These companies were library vendors. They're companies that we work with because they published science journals and they published court reporters. I did not know them as surveillance companies.” Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/). Outro Music: “Good God” by Wowa (unminus.com). Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
A digital form of protest could become the go-to response for the world's largest porn website as it faces increased regulation: not letting people access the site. In March, PornHub blocked access to visitors connecting to its website from Texas. It marked the second time in the past 12 months that the porn giant shut off its website to protest new requirements in online age verification. The Texas law, which was signed in June 2023, requires several types of adult websites to verify the age of their visitors by either collecting visitors' information from a government ID or relying on a third party to verify age through the collection of multiple streams of data, such as education and employment status. PornHub has long argued that these age verification methods do not keep minors safer and that they place undue onus on websites to collect and secure sensitive information. The fact remains, however, that these types of laws are growing in popularity. Today, Lock and Code revisits a prior episode from 2023 with guest Alec Muffett, discussing online age verification proposals, how they could weaken security and privacy on the internet, and whether these efforts are oafishly trying to solve a societal problem with a technological solution. “The battle cry of these people has always been—either directly or mocked as being—‘Could somebody think of the children?'” Muffett said. “And I'm thinking about the children because I want my daughter to grow up with an untracked, secure, private internet when she's an adult. I want her to be able to have a private conversation. I want her to be able to browse sites without giving over any information or linking it to her identity.” Muffett continued: “I'm trying to protect that for her. I'd like to see more people grasping for that.” Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/). Outro Music: “Good God” by Wowa (unminus.com). Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
Few words apply as broadly to the public—yet mean as little—as “home network security.” For many, a “home network” is an amorphous thing. It exists somewhere between a router, a modem, an outlet, and whatever cable it is that plugs into the wall. But the idea of a “home network” doesn't need to intimidate, and securing that home network could be simpler than many folks realize. For starters, a home network can be simply understood as a router—which is the device that provides access to the internet in a home—and the other devices that connect to that router. That includes obvious devices like phones, laptops, and tablets, and it includes “Internet of Things” devices, like a Ring doorbell, a Nest thermostat, and any Amazon Echo device that comes pre-packaged with the company's voice assistant, Alexa. There are also myriad “smart” devices to consider: smartwatches, smart speakers, smart light bulbs, and don't forget the smart fridges. If it sounds like we're describing a home network as nothing more than a “list,” that's because a home network is pretty much just a list. But where securing that list becomes complicated is in all the updates, hardware issues, settings changes, and even scandals that relate to every single device on that list. Routers, for instance, provide their own security, but over many years, they can lose the support of their manufacturers. IoT devices, depending on the brand, can be made from cheap parts with little concern for user security or privacy. And some devices have scandals plaguing their past—smart doorbells have been hacked and fitness trackers have revealed running routes to the public online. This shouldn't be cause for fear. Instead, it should help prove why home network security is so important. Today, on the Lock and Code podcast with host David Ruiz, we're speaking with cybersecurity and privacy advocate Carey Parker about securing your home network. Author of the book Firewalls Don't Stop Dragons and host of the podcast of the same name, Parker chronicled the typical home network security journey last year and distilled the long process into four simple categories: scan, simplify, assess, remediate. In joining the Lock and Code podcast yet again, Parker explains how everyone can begin their home network security path—where to start, what to prioritize, and the risks of putting this work off—while also emphasizing the importance of every home's router: “Your router is kind of the threshold that protects all the devices inside your house. But, like a vampire, once you invite the vampire across the threshold, all the things inside the house are now up for grabs.” Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/). Outro Music: “Good God” by Wowa (unminus.com). Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
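The "scan" category above boils down to taking inventory of everything actually connected to your router. As a rough, hedged illustration of that idea (not Parker's own tooling, and not a substitute for a purpose-built network scanner), the short Python sketch below pings every address in an assumed home subnet and prints the devices that answer; the 192.168.1.0/24 range and the ping flags are assumptions you would adjust for your own network and operating system.

```python
# Minimal "scan" sketch: list devices that currently answer on the home LAN.
# Assumes a Linux/macOS machine; the subnet below is an assumption -- check
# your router's settings and change NETWORK to match.
import ipaddress
import subprocess
from concurrent.futures import ThreadPoolExecutor

NETWORK = "192.168.1.0/24"  # assumed typical home subnet


def is_up(ip: str) -> bool:
    """Send a single ping and report whether the host answered."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", ip],  # -W timeout units vary by platform
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


def scan(network: str) -> list[str]:
    """Return the addresses of hosts that responded to a ping."""
    hosts = [str(ip) for ip in ipaddress.ip_network(network).hosts()]
    with ThreadPoolExecutor(max_workers=64) as pool:
        results = pool.map(is_up, hosts)
    return [ip for ip, up in zip(hosts, results) if up]


if __name__ == "__main__":
    for ip in scan(NETWORK):
        print("device found:", ip)
```

A thread pool keeps the sweep quick; a fuller inventory tool would also look up each device's MAC address and manufacturer so the resulting list is readable enough to act on during the later simplify, assess, and remediate steps.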
A disappointing meal at a restaurant. An ugly breakup between two partners. A popular TV show that kills off a beloved main character. In a perfect world, these are irritations and moments of vulnerability. But online today, these same events can sometimes be the catalyst for hate. That disappointing meal can produce a frighteningly invasive Yelp review that exposes a restaurant owner's home address for all to see. That ugly breakup can lead to an abusive ex posting a video of revenge porn. And even a movie or videogame can enrage some individuals into such a fury that they begin sending death threats to the actors and cast mates involved. Online hate and harassment campaigns are well-known and widely studied. Sadly, they're also becoming more frequent. In 2023, the Anti-Defamation League revealed that 52% of American adults reported being harassed online at least some time in their life—the highest rate ever recorded by the organization and a dramatic climb from the 40% who responded similarly just one year earlier. When teens were asked about recent harm, 51% said they'd suffered from online harassment in strictly the 12 months prior to taking the survey itself—a radical 15% increase from what teens said the year prior. The proposed solutions, so far, have been difficult to implement. Social media platforms often deflect blame—and are frequently shielded from legal liability—and many efforts to moderate and remove hateful content have either been slow or entirely absent in the past. Popular accounts with millions of followers will, without explicitly inciting violence, sometimes draw undue attention to everyday people. And the increasing need for teens to have an online presence—even classwork is done online now—makes it nearly impossible to simply “log off.” Today, on the Lock and Code podcast with host David Ruiz, we speak with Tall Poppy CEO and co-founder Leigh Honeywell about the evolution of online hate, personal defense strategies that mirror many of the best practices in cybersecurity, and the modern risks of accidentally going viral in a world with little privacy. “It's not just that your content can go viral, it's that when your content goes viral, five people might be motivated enough to call in a fake bomb threat at your house.” Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/). Outro Music: “Good God” by Wowa (unminus.com). Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
If your IT and security teams think malware is bad, wait until they learn about everything else. In 2024, the modern cyberattack is a segmented, prolonged, and professional effort, in which specialists create strictly financial alliances to plant malware on unsuspecting employees, steal corporate credentials, slip into business networks, and, for a period of days if not weeks, simply sit and watch and test and prod, escalating their privileges while refraining from installing any noisy hacking tools that could be flagged by detection-based antivirus scans. In fact, some attacks have gone so “quiet” that they involve no malware at all. Last year, some ransomware gangs refrained from deploying ransomware in their own attacks, opting to steal sensitive data and then threaten to publish it online if their victims refused to pay up—a method of extracting a ransom that is entirely without ransomware. Understandably, security teams are outflanked. Defending against sophisticated, multifaceted attacks takes resources, technologies, and human expertise. But not every organization has that at hand. What, then, are IT-constrained businesses to do? Today, on the Lock and Code podcast with host David Ruiz, we speak with Jason Haddix, the former Chief Information Security Officer at the videogame developer Ubisoft, about how he and his colleagues from other companies faced off against modern adversaries who, during a prolonged crime spree, plundered employee credentials from the dark web, subverted corporate 2FA protections, and leaned heavily on internal web access to steal sensitive documentation. Haddix, who launched his own cybersecurity training and consulting firm Arcanum Information Security this year, said he learned so much during his time at Ubisoft that he and his peers in the industry coined a new, humorous term for attacks that abuse internet-connected platforms: “a browser and a dream.” “When you first hear that, you're like, ‘Okay, what could a browser give you inside of an organization?'” But Haddix made it clear: “On the internal LAN, you have knowledge bases like SharePoint, Confluence, MediaWiki. You have dev and project management sites like Trello, local Jira, local Redmine. You have source code managers, which are managed via websites—Git, GitHub, GitLab, Bitbucket, Subversion. You have repo management, build servers, dev platforms, configuration management platforms, operations front ends. These are all websites.” Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/). Outro Music: “Good God” by Wowa (unminus.com). LLM Prompt Injection Game: https://gandalf.lakera.ai/ Overwhelmed by modern cyberthreats? ThreatDown can...
In this episode we discuss the press, periodicals and magazines in architectural history from the eighteenth and nineteenth centuries. Our contributors are: Dr Anne Hultzsch, an architectural historian who leads the ERC-funded group ‘Women Writing Architecture 1700-1900' (WoWA) at ETH Zurich. With a PhD from the Bartlett, University College London, and a postdoc at AHO Oslo, she works on intersectionality in architectural history between ca. 1650 and 1930, exploring the histories of gender, print, perception, and travel. She is the author of Architecture, Travellers and Writers: Constructing Histories of Perception 1640-1950 (2014) and has edited The Printed and the Built: Architecture, Print Culture, and Public Debate in the Nineteenth Century (with Mari Hvattum, 2018) and The Origins of the Architectural Magazine in Nineteenth-Century Europe (The Journal of Architecture, 2020). And Dr Lieske Huits, a decorative arts historian and university lecturer at the University of Leiden. Lieske's PhD, titled A New Visual Narrative of Nineteenth-Century Historicism, explored historicism and revival styles in the decorative arts and architecture of the nineteenth century, and the display of historicist objects in international expositions and museums of decorative arts. For more information about the SAHGB, their programme of events, publications and grants and to join the society, see their website at https://www.sahgb.org.uk/
If the internet helped create the era of mass surveillance, then artificial intelligence will bring about an era of mass spying. That's the latest prediction from noted cryptographer and computer security professional Bruce Schneier, who, in December, shared a vision of the near future where artificial intelligence—AI—will be able to comb through reams of surveillance data to answer the types of questions that, previously, only humans could. “Spying is limited by the need for human labor,” Schneier wrote. “AI is about to change that.” As theorized by Schneier, if fed enough conversations, AI tools could spot who first started a rumor online, identify who is planning to attend a political protest (or unionize a workforce), and even who is plotting a crime. But “there's so much more,” Schneier said. “To uncover an organizational structure, look for someone who gives similar instructions to a group of people, then all the people they have relayed those instructions to. To find people's confidants, look at whom they tell secrets to. You can track friendships and alliances as they form and break, in minute detail. In short, you can know everything about what everybody is talking about.” Today, on the Lock and Code podcast with host David Ruiz, we speak with Bruce Schneier about artificial intelligence, Soviet-era government surveillance, personal spyware, and why companies will likely leap at the opportunity to use AI on their customers. “Surveillance-based manipulation is the business model [of the internet] and anything that gives a company an advantage, they're going to do.” Tune in today to listen to the full conversation. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
In 2022, Malwarebytes investigated the blurry, shifting idea of “identity” on the internet, and how online identities are not only shaped by the people behind them, but also inherited by the internet's youngest users, children. Children have always inherited some of their identities from their parents—consider that two of the largest indicators for political and religious affiliation in the US are, no surprise, the political and religious affiliations of someone's parents—but the transfer of online identity poses unique risks. When parents create email accounts for their kids, do they also teach their children about strong passwords? When parents post photos of their children online, do they also teach their children about the safest ways to post photos of themselves and others? When parents create a Netflix viewing profile on a child's iPad, are they prepared for what else a child might see online? Are parents certain that a kid is ready to watch before they can walk? Those types of questions drove a joint report that Malwarebytes published last year, based on a survey of 2,000 people in North America. That research showed that, broadly, not enough children and teenagers trust their parents to support them online, and not enough parents know exactly how to give the support their children need. But stats and figures can only tell so much of the story, which is why last year, Lock and Code host David Ruiz spoke with a Bay Area high school graduate about her own thoughts on the difficulties of growing up online. Lock and Code is re-airing that episode this week because, in less than one month, Malwarebytes is releasing a follow-on report about behaviors, beliefs, and blunders in online privacy and cybersecurity. And as part of that report, Lock and Code is bringing back the same guest as last year, Nitya Sharma. Before then, we are sharing with listeners our prior episode that aired in 2022 about the difficulties that an everyday teenager faces online, including managing her time online, trying to meet friends and complete homework, the traps of trading online interaction with in-person socializing, and what she would do differently with her children, if she ever started a family, in preparing them for the Internet. Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
"Freedom" is a big word, and for many parents today, it's a word that includes location tracking. Across America, parents are snapping up Apple AirTags, the inexpensive location tracking devices that can help owners find lost luggage, misplaced keys, and—increasingly so—roving toddlers setting out on mini-adventures. The parental fear right now, according to The Washington Post technology reporter Heather Kelly, is that "anybody who can walk, therefore can walk away." Parents wanting to know what their children are up to is nothing new. Before the advent of the Internet—and before the creation of search history—parents read through diaries. Before GPS location tracking, parents called the houses that their children were allegedly staying at. And before nearly every child had a smart phone that they could receive calls on, parents relied on a much simpler set of tools for coordination: Going to the mall, giving them a watch, and saying "Be at the food court at noon." But, as so much parental monitoring has moved to the digital sphere, there's a new problem: Children become physically mobile far faster than they become responsible enough to own a mobile. Enter the AirTag: a small, convenient device for parents to affix to toddlers' wrists, place into their backpacks, even sew into their clothes, as Kelly reported in her piece for The Washington Post. In speaking with parents, families, and childcare experts, Kelly also uncovered an interesting dynamic. Parents, she reported, have started relying on Apple AirTags as a means to provide freedom, not restrictions, to their children. Today, on the Lock and Code podcast with host David Ruiz, we speak with Kelly about why parents are using AirTags, how childcare experts are reacting to the recent trend, and whether the devices can actually provide a balm to increasingly stressed parents who may need a moment to sit back and relax. Or, as Kelly said:"In the end, parents need to chill—and if this lets them chill, and if it doesn't impact the kids too much, and it lets them go do silly things like jumping in some puddles with their friends or light, really inconsequential shoplifting, good for them."Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.Show notes and credits:Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)Licensed under Creative Commons: By Attribution 4.0 Licensehttp://creativecommons.org/licenses/by/4.0/Outro Music: “Good God” by Wowa (unminus.com)
Earlier this month, a group of hackers was spotted using a set of malicious tools—which originally gained popularity with online video game cheaters—to hide their Windows-based malware from being detected. Sounds unique, right? Frustratingly, it isn't, as the specific security loophole that was abused by the hackers has been around for years, and Microsoft's response, or lack thereof, is actually a telling illustration of the competing security environments within Windows and macOS. Even more perplexing is the fact that Apple dealt with a similar issue nearly 10 years ago, locking down the way that certain external tools are given permission to run alongside the operating system's critical, core internals. Today, on the Lock and Code podcast with host David Ruiz, we speak with Malwarebytes' own Director of Core Tech Thomas Reed about everyone's favorite topic: Windows vs. Mac. But this isn't a conversation about the original iPod vs. Microsoft's Zune (we're sure you can find countless 4-hour diatribes on YouTube for that), but instead about how the companies behind these operating systems can respond to security issues in their own products. Because it isn't fair to say that Apple or Microsoft are wholesale "better" or "worse" about security. Instead, they're hampered by their users and their core market segments—Apple excels in the consumer market, whereas Microsoft excels with enterprises. And when your customers include hospitals, government agencies, and pretty much any business over a certain headcount, well, it comes with complications in deciding how to address security problems that won't leave those same customers behind. Still, there's little excuse for leaving open the type of loophole that Windows has, said Reed: "Apple has done something that was pretty inconvenient for developers, but it really secured their customers because it basically meant we saw a complete stop in all kernel-level malware. It just shows you [that] it can be done. You're gonna break some eggs in the process, and Microsoft has not done that yet... They're gonna have to." Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
In the United States, when the police want to conduct a search on a suspected criminal, they must first obtain a search warrant. It is one of the foundational rights given to US persons under the Constitution, and a concept that has helped create the very idea of a right to privacy at home and online. But sometimes, individualized warrants are never issued, never asked for, never really needed, depending on which government agency is conducting the surveillance, and for what reason. Every year, countless emails, social media DMs, and likely mobile messages are swept up by the US National Security Agency—even if those communications involve a US person—without any significant warrant requirement. Those digital communications can be searched by the FBI. The information the FBI gleans from those searches can be used to prosecute Americans for crimes. And when the NSA or FBI make mistakes—which they do—there is little oversight. This is surveillance under a law and authority called Section 702 of the FISA Amendments Act. The law and the regime it has enabled are opaque. There are definitions for "collection" of digital communications, for "queries" and "batch queries," rules for which government agency can ask for what type of intelligence, references to types of searches that were allegedly ended several years ago, "programs" that determine how the NSA grabs digital communications—by requesting them from companies or by directly tapping into the very cables that carry the Internet across the globe—and an entire secret court that has only rarely released its opinions to the public. Today, on the Lock and Code podcast with host David Ruiz, we speak with Electronic Frontier Foundation Senior Policy Analyst Matthew Guariglia about what the NSA can grab online, whether its agents can read that information and who they can share it with, and how a database that was ostensibly created to monitor foreign intelligence operations became a tool for investigating Americans at home. As Guariglia explains: "In the United States, if you collect any amount of data, eventually law enforcement will come for it, and this includes data that is collected by intelligence communities." Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
When you think about the word "cyberthreat," what first comes to mind? Is it ransomware? Is it spyware? Maybe it's any collection of the infamous viruses, worms, Trojans, and botnets that have crippled countless companies throughout modern history. In the future, though, what many businesses might first think of is something new: Disinformation. Back in 2021, in speaking about threats to businesses, the former director of the US Cybersecurity and Infrastructure Security Agency, Chris Krebs, told news outlet Axios: “You've either been the target of a disinformation attack or you are about to be.” That same year, the consulting and professional services firm PricewaterhouseCoopers released a report on disinformation attacks against companies and organizations, and it found that these types of attacks were far more common than most of the public realized. From the report: “In one notable instance of disinformation, a forged US Department of Defense memo stated that a semiconductor giant's planned acquisition of another tech company had prompted national security concerns, causing the stocks of both companies to fall. In other incidents, widely publicized unfounded attacks on a businessman caused him to lose a bidding war, a false news story reported that a bottled water company's products had been contaminated, and a foreign state's TV network falsely linked 5G to adverse health effects in America, giving the adversary's companies more time to develop their own 5G network to compete with US businesses.” Disinformation is here, and as much of it happens online—through coordinated social media posts and fast-made websites—it can truly be considered a "cyberthreat." But what does that mean for businesses? Today, on the Lock and Code podcast with host David Ruiz, we speak with Lisa Kaplan, founder and CEO of Alethea, about how organizations can prepare for a disinformation attack, and what they should be thinking about in the intersection between disinformation, malware, and cybersecurity. Kaplan said: "When you think about disinformation in its purest form, what we're really talking about is people telling lies and hiding who they are in order to achieve objectives and doing so in a deliberate and malicious way. I think that this is more insidious than malware. I think it's more pervasive than traditional cyber attacks, but I don't think that you can separate disinformation from cybersecurity." Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
In May, a lawyer who was defending their client in a lawsuit against Colombia's biggest airline, Avianca, submitted a legal filing before a court in Manhattan, New York, that listed several previous cases as support for their main argument to continue the lawsuit. But when the court reviewed the lawyer's citations, it found something curious: Several were entirely fabricated. The lawyer in question had gotten the help of another attorney who, in scrounging around for legal precedent to cite, utilized the "services" of ChatGPT. ChatGPT was wrong. So why do so many people believe it's always right? Today, on the Lock and Code podcast with host David Ruiz, we speak with Malwarebytes security evangelist Mark Stockley and Malwarebytes Labs editor-in-chief Anna Brading to discuss the potential consequences of companies and individuals embracing natural language processing tools—like ChatGPT and Google's Bard—as arbiters of truth. Far from being understood simply as chatbots that can produce remarkable mimicries of human speech and dialogue, these tools are becoming sources of truth for countless individuals, while also gaining traction amongst companies that see artificial intelligence (AI) and large language models (LLMs) as the future, no matter what industry they operate in. The future could look eerily similar to an earlier change in translation services, said Stockley, who witnessed the rapid displacement of human workers in favor of basic AI tools. The tools were far, far cheaper, but the quality of the translations—of the truth, Stockley said—was worse. "That is an example of exactly this technology coming in and being treated as the arbiter of truth in the sense that there is a cost to how much truth we want." Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
Krystyna Kurczab-Redlich, a Polish journalist and reportage writer, longtime correspondent in Russia, and author of the book "Wowa, Wołodia, Władimir. Tajemnice Rosji Putina," spoke about Polish politics, which is increasingly turning toward the East.
On January 1, 2023, the Internet in Louisiana looked a little different than the Internet in Texas, Mississippi, and Arkansas—its next-door state neighbors. And on May 1, the Internet in Utah looked quite different, depending on where you looked, than the Internet in Arizona, or Idaho, or Nevada, or California or Oregon or Washington or, really, much of the rest of the United States. The changes are, ostensibly, over pornography. In Louisiana, today, visitors to the online porn site PornHub are asked to verify their age before they can access the site, and that age verification process hinges on a state-approved digital ID app called LA Wallet. In the United Kingdom, sweeping changes to the Internet are being proposed that would similarly require porn sites to verify the ages of their users to keep kids from seeing sexually explicit material. And in Australia, similar efforts to require age verification for adult websites might come hand-in-hand with the deployment of a government-issued digital ID. But the larger problem with all these proposals is that they would not just make a new Internet for children; they would make a new Internet for everyone. Look no further than Utah. On May 1, after new rules came into effect to make porn sites verify the ages of their users, the site PornHub decided to refuse to comply with the law and, instead, to block access to the site for anyone visiting from an IP address based in Utah. If you're in Utah, right now, and connecting to the Internet with an IP address located in Utah, you cannot access PornHub. Instead, you're presented with a message from adult film star Cheri Deville who explains: “As you may know, your elected officials have required us to verify your age before granting you access to our website. While safety and compliance are at the forefront of our mission, giving your ID card every time you want to visit an adult platform is not the most effective solution for protecting our users, and in fact, will put children and your privacy at risk.” Today, on the Lock and Code podcast with host David Ruiz, we speak with longtime security researcher Alec Muffett (who has joined us before to talk about Tor) to understand what is behind these requests to change the Internet, what flaws he's seen in studying past age verification proposals, and whether many members of the public are worrying about the wrong thing in trying to solve a social issue with technology. "The battle cry of these people has always been—either directly or mocked as being—'Could somebody think of the children?' And I'm thinking about the children because I want my daughter to grow up with an untracked, secure private internet when she's an adult. I want her to be able to have a private conversation. I want her to be able to browse sites without giving over any information or linking it to her identity." Muffett continued: "I'm trying to protect that for her. I'd like to see more people grasping for that." Tune in today. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Additional Resources...
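For readers curious what the Utah block described above looks like mechanically, here is a minimal sketch of IP-based geo-blocking in Python. The CIDR ranges below are documentation ranges used purely for illustration, not real Utah allocations, and nothing here reflects PornHub's actual implementation; real sites typically consult a commercial IP-geolocation database rather than a hard-coded list.

# Minimal sketch of IP-based geo-blocking (hypothetical ranges, for illustration only).
from ipaddress import ip_address, ip_network

BLOCKED_RANGES = [ip_network("203.0.113.0/24"), ip_network("198.51.100.0/24")]  # placeholder CIDRs

def is_blocked(client_ip: str) -> bool:
    """Return True if the visitor's IP falls inside any range the site has chosen to block."""
    addr = ip_address(client_ip)
    return any(addr in net for net in BLOCKED_RANGES)

print(is_blocked("203.0.113.42"))  # True: show the explanatory notice instead of the site
print(is_blocked("192.0.2.7"))     # False: serve the site normally

The sketch also shows why this approach is blunt: it keys entirely off the visitor's IP address, so anyone using a VPN or connecting from outside the listed ranges is unaffected.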
Ransomware is becoming bespoke, and that could mean trouble for businesses and law enforcement investigators. It wasn't always like this. For a few years now, ransomware operators have congregated around a relatively new model of crime called "Ransomware-as-a-Service." In the Ransomware-as-a-Service model, or RaaS model, ransomware itself is not delivered to victims by the same criminals that make the ransomware. Instead, it is used almost "on loan" by criminal groups called "affiliates" who carry out attacks with the ransomware and, if successful, pay a share of their ill-gotten gains back to the ransomware's creators. This model allows ransomware developers to significantly increase their reach and their illegal hauls. By essentially leasing out their malicious code to smaller groups of cybercriminals around the world, the ransomware developers can carry out more attacks, steal more money from victims, and avoid any isolated law enforcement action that would put their business in the ground, as the arrest of one affiliate group won't stop the work of dozens of others. And not only do ransomware developers lean on other cybercriminals to carry out attacks, they also rely on an entire network of criminals to carry out smaller, specialized tasks. There are "Initial Access Brokers" who break into company networks and then sell that illegal method of access online. "You also have coders that you can contract out to," said ransomware expert Allan Liska, this episode's guest. "You have pen testers that you can contract out to. You can contract negotiators if you want. You can contract translators if you want." But as Liska explained, as the ransomware "business" spreads out, so do new weak points: disgruntled criminals. "This whole underground marketplace that exists to serve ransomware means that your small group can do a lot," Liska said. "But that also means that you are entrusting the keys to your kingdom to these random contractors that you're paying in Bitcoin every now and then. And that, for example, is why the LockBit code got leaked—dude didn't pay his contractor." With plenty of leaked code now circulating online, some smaller cybercriminal gangs have taken to making minor alterations and then sending that new variant of ransomware out into the world—no affiliate model needed. "Most of what we see is just repurposed code and we see a lot of what I call 'Franken-ransomware.'" Today, on the Lock and Code podcast with host David Ruiz, Liska explains why Franken-ransomware poses unique challenges to future victims, cybersecurity companies, and law enforcement investigators. Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
The list of people and organizations that are hungry for your location data—collected so routinely and packaged so conveniently that it can easily reveal where you live, where you work, where you shop, pray, eat, and relax—includes many of the usual suspects. Advertisers, obviously, want to send targeted ads to you, and they believe those ads have a better success rate if they're sent to, say, someone who spends their time at a fast-food drive-through on the way home from the office, as opposed to someone who doesn't, or someone who's visited a high-end department store, or someone who, say, vacations regularly at expensive resorts. Hedge funds, interestingly, are also big buyers of location data, constantly seeking a competitive edge in their investments, which might mean understanding whether a fast food chain's newest locations are getting more foot traffic, or whether a new commercial real estate development is walkable from nearby homes. But perhaps unexpected on this list are the police. According to a recent investigation from Electronic Frontier Foundation and The Associated Press, a company called Fog Data Science has been gathering Americans' location data and selling it exclusively to local law enforcement agencies in the United States. Fog Data Science's tool—a subscription-based platform that charges clients for queries of the company's database—is called Fog Reveal. And according to Bennett Cyphers, one of the investigators who uncovered Fog Reveal through a series of public record requests, it's rather powerful. "What [Fog Data Science] sells is, I would say, like a God view mode for the world... It's a map and you draw a shape on the map and it will show you every device that was in that area during a specified timeframe." Today, on the Lock and Code podcast with host David Ruiz, we speak to Cyphers about how he and his organization uncovered a massive location data broker that seemingly works only with local law enforcement, how that data broker collected Americans' data in the first place, where this data comes from, and why it is so easy to sell. Tune in now. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
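To make the "draw a shape on the map" query Cyphers describes more concrete, here is a minimal, hypothetical sketch in Python. The data format, field names, device identifiers, and coordinates are all invented for illustration; they are not Fog Reveal's actual schema, API, or data, only the general shape of a geofenced, time-windowed lookup over location pings.

# Minimal sketch of a geofence-plus-time-window query over location pings (illustrative only).
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Ping:
    device_id: str      # a pseudonymous identifier such as a mobile advertising ID
    lat: float
    lon: float
    seen_at: datetime

def devices_in_area(pings, south, north, west, east, start, end):
    """Return the device IDs observed inside a bounding box during a time window."""
    return {
        p.device_id
        for p in pings
        if south <= p.lat <= north
        and west <= p.lon <= east
        and start <= p.seen_at <= end
    }

pings = [
    Ping("device-a", 38.6270, -90.1994, datetime(2022, 6, 1, 14, 5)),
    Ping("device-b", 38.9072, -77.0369, datetime(2022, 6, 1, 14, 10)),
]

# Every device seen inside an invented downtown bounding box during a one-hour window.
print(devices_in_area(pings, 38.60, 38.65, -90.25, -90.15,
                      datetime(2022, 6, 1, 14, 0), datetime(2022, 6, 1, 15, 0)))

Even this toy version makes the privacy concern plain: the query does not start from a suspect, it starts from a place and a time, and returns everyone who happened to be there.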
How many passwords do you have? If you're at all like our Lock and Code host David Ruiz, that number hovers around 200. But the important follow-up question is: How many of those passwords can you actually remember on your own? Prior studies suggest a number that sounds nearly embarrassing—probably around six. After decades of mandatory use, it turns out that the password has problems, the biggest of which is that when users are forced to create a password for every online account, they resort to creating easy-to-remember passwords that are built around their pets' names, their addresses, even the word "password." Those same users then re-use those weak passwords across multiple accounts, opening them up to easy online attacks that rely on entering the compromised credentials from one online account to crack into an entirely separate online account. As if that weren't dangerous enough, passwords themselves are vulnerable to phishing attacks, where hackers can fraudulently pose as businesses that ask users to enter their login information on a website that looks legitimate, but isn't. Thankfully, the cybersecurity industry has built a few safeguards around password use, such as multifactor authentication, which requires a second form of approval from a user beyond just entering their username and password. But, according to 1Password Head of Passwordless Anna Pobletts, many attempts at improving and replacing passwords have put extra work into the hands of users themselves: "There's been so many different attempts in the last 10, 20 years to replace passwords or improve passwords and the security around. But all of these attempts have been at the expense of the user." For Pobletts, who is our latest guest on the Lock and Code podcast, there is a better option now available that does not trade security for ease-of-use. Instead, it ensures that the secure option for users is also the easy option. That latest option is the use of "passkeys." Resistant to phishing attacks, secured behind biometrics, and free from any requirement by users to create new ones on their own, passkeys could dramatically change our security for the better. Today, we speak with Pobletts about whether we'll ever truly live in a passwordless future, along with what passkeys are, how they work, and which industry could see the biggest benefit from implementing them. Tune in now. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
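For a rough sense of why passkeys resist phishing, here is a minimal sketch of the challenge-and-signature idea behind them, using an Ed25519 key pair from the third-party Python "cryptography" package as a stand-in. This is not the WebAuthn protocol itself; real passkeys also bind each assertion to the site's origin and a registered credential ID, and the private key lives inside a platform authenticator rather than a script.

# Minimal sketch of the public-key challenge/response idea behind passkeys (illustrative only).
# Requires the third-party "cryptography" package.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Registration: the device creates a key pair and gives the site only the public key.
device_private_key = Ed25519PrivateKey.generate()
site_stored_public_key = device_private_key.public_key()

# Login: the site sends a fresh random challenge; the device signs it after a local biometric check.
challenge = os.urandom(32)
assertion = device_private_key.sign(challenge)

# The site verifies the signature against the stored public key. No shared secret ever leaves
# the device, so there is nothing for a look-alike phishing page to capture and replay.
try:
    site_stored_public_key.verify(assertion, challenge)
    print("login accepted")
except InvalidSignature:
    print("login rejected")

The design choice worth noticing is that the server stores nothing worth stealing: a breached database of public keys cannot be replayed the way a breached database of passwords can.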
Government threats to end-to-end encryption—the technology that secures your messages and shared photos and videos—have been around for decades, but the most recent threats to this technology are unique in how they intersect with a broader, sometimes-global effort to control information on the Internet. Take two efforts in the European Union and the United Kingdom. New proposals there would require companies to scan any content that their users share with one another for Child Sexual Abuse Material, or CSAM. If a company offers end-to-end encryption to its users, effectively locking the company itself out of being able to access the content that its users share, then it's tough luck for those companies. They will still be required to find a way to essentially do the impossible—build a system that keeps everyone else out, while letting themselves and the government in. While these government proposals may sound similar to previous global efforts to weaken end-to-end encryption in the past, like the United States' prolonged attempt to tarnish end-to-end encryption by linking it to terrorist plots, they differ because of how easily they could become tools for censorship. Today, on the Lock and Code podcast with host David Ruiz, we speak with Mallory Knodel, chief technology officer for Center for Democracy and Technology, about new threats to encryption, old and bad repeated proposals, who encryption benefits (everyone), and how building a tool to detect one legitimate harm could, in turn, create a tool to detect all sorts of legal content that other governments simply do not like. "In many places of the world where there's not such a strong feeling about individual and personal privacy, sometimes that is replaced by an inability to access mainstream media, news, accurate information, and so on, because there's a heavy censorship regime in place," Knodel said. "And I think that drawing that line between 'You're going to censor child sexual abuse material, which is illegal and disgusting and we want it to go away,' but it's so very easy to slide that knob over into 'Now you're also gonna block disinformation,' and you might at some point, take it a step further and block other kinds of content, too, and you just continue down that path." Knodel continued: "Then you do have a pretty easy way of mass-censoring certain kinds of content from the Internet that probably shouldn't be censored." Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
In November of last year, the AI research and development lab OpenAI revealed its latest, most advanced language project: A tool called ChatGPT. ChatGPT is so much more than "just" a chatbot. As users have shown with repeated testing and prodding, ChatGPT seems to "understand" things. It can give you recipes that account for whatever dietary restrictions you have. It can deliver basic essays about moments in history. It can be—and has been—used by university students to cheat, giving a new meaning to plagiarism by passing off work that is not theirs. It can write song lyrics about X topic as though composed by Y artist. It can even have fun with language. For example, when ChatGPT was asked to “Write a Biblical verse in the style of the King James Bible explaining how to remove a peanut butter sandwich from a VCR,” ChatGPT responded in part: “And it came to pass that a man was troubled by a peanut butter sandwich, for it had been placed within his VCR, and he knew not how to remove it. And he cried out to the Lord, saying ‘Oh Lord, how can I remove this sandwich from my VCR, for it is stuck fast and will not budge.'” Is this fun? Yes. Is it interesting? Absolutely. But what we're primarily interested in on today's episode of Lock and Code, with host David Ruiz, is where artificial intelligence and machine learning—ChatGPT included—can be applied to cybersecurity, because as some users have already discovered, ChatGPT can be used with some success to analyze lines of code for flaws. It is a capability that has likely further energized the multibillion-dollar endeavor to apply AI to cybersecurity. Today, on Lock and Code, we speak to Joshua Saxe about what machine learning is "good" at, what problems it can make worse, whether we have defenses to those problems, and what place machine learning and artificial intelligence have in the future of cybersecurity. According to Saxe, there are some areas where, under certain conditions, machine learning will never be able to compete. "If you're, say, gonna deploy a set of security products on a new computer network that's never used your security products before, and you want to detect, for example, insider threats—like insiders moving files around in ways that look suspicious—if you don't have any known examples of people at the company doing that, and also examples of people not doing that, and if you don't have thousands of known examples of people at the company doing that, that are current and likely to reoccur in the future, machine learning is just never going to compete with just manually writing down some heuristics around what we think bad looks like." Saxe continued: "Because basically in this case, the machine learning is competing with the common sense model of the world and expert knowledge of a security analyst, and there's no way machine learning is gonna compete with the human brain in this context." Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
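As a concrete illustration of the hand-written heuristics Saxe describes, the short Python sketch below flags "suspicious" file movement with two simple, analyst-chosen rules. The event format, field names, and thresholds are invented for illustration; the point is only that common-sense rules like these need no labeled training data at all.

# Minimal sketch of heuristic (rule-based) detection of unusual file movement (illustrative only).
from collections import Counter

def suspicious_users(file_events, max_files_per_hour=200, off_hours=range(0, 6)):
    """Flag users who move an unusual volume of files, or who move files in the middle of the night."""
    flagged = set()
    hourly = Counter()
    for event in file_events:              # each event: {"user": ..., "hour": 0-23}
        hourly[(event["user"], event["hour"])] += 1
        if event["hour"] in off_hours:
            flagged.add(event["user"])
    for (user, _hour), count in hourly.items():
        if count > max_files_per_hour:
            flagged.add(user)
    return flagged

events = [{"user": "alice", "hour": 3}] + [{"user": "bob", "hour": 14}] * 250
print(suspicious_users(events))            # flags both alice (off-hours) and bob (volume)

A machine learning model could, in principle, learn subtler patterns, but as Saxe notes, without thousands of current, labeled examples it has nothing to learn from, while these two rules work on day one.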
In 2020, a photo of a woman sitting on a toilet—her shorts pulled half-way down her thighs—was shared on Facebook, and it was shared by someone whose job it was to look at that photo and, by labeling the objects in it, help train an artificial intelligence system for a vacuum. Bizarre? Yes. Unique? No. In December, MIT Technology Review investigated the data collection and sharing practices of the company iRobot, the developer of the popular self-automated Roomba vacuums. In their reporting, MIT Technology Review discovered a series of 15 images that were all captured by development versions of Roomba vacuums. Those images were eventually shared with third-party contractors in Venezuela who were tasked with the responsibility of "annotation"—the act of labeling photos with identifying information. This work of, say, tagging a cabinet as a cabinet, or a TV as a TV, or a shelf as a shelf, would help the robot vacuums "learn" about their surroundings when inside people's homes. In response to MIT Technology Review's reporting, iRobot stressed that none of the images found by the outlet came from customers. Instead, the images were "from iRobot development robots used by paid data collectors and employees in 2020." That meant that the images were from people who agreed to be part of a testing or "beta" program for non-public versions of the Roomba vacuums, and that everyone who participated had signed an agreement as to how iRobot would use their data. According to the company's CEO in a post on LinkedIn: "Participants are informed and acknowledge how the data will be collected." But after MIT Technology Review published its investigation, people who'd previously participated in iRobot's testing environments reached out. According to several of them, they felt misled. Today, on the Lock and Code podcast with host David Ruiz, we speak with the investigative reporter of the piece, Eileen Guo, about how all of this happened, and about how, she said, this story illuminates a broader problem in data privacy today. "What this story is ultimately about is that conversations about privacy, protection, and what that actually means, are so lopsided because we just don't know what it is that we're consenting to." Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
Last month, the TikTok user TracketPacer posted a video online called “Network Engineering Facts to Impress No One at Zero Parties.” TracketPacer regularly posts fun, educational content about how the Internet operates. The account is run by a network engineer named Lexie Cooper, who has worked in a network operations center, or NOC, and who's earned her Cisco Certified Network Associate certificate, or CCNA. In the video, Cooper told listeners about the first spam email being sent over Arpanet, about how an IP address doesn't reveal that much about you, and about how Ethernet isn't really a cable—it's a protocol. But amidst Cooper's bite-sized factoids, a pair of comments she made about something else—the gender gap in the technology industry—set off a torrent of anger. As Cooper said in her video: “There are very few women in tech because there's a pervasive cultural idea that men are more logical than women and therefore better at technical, 'computery' things.” This, the Internet decided, would not stand. The IT industry is “not dominated by men, well actually, the women it self just few of them WANT to be engineer. So it's not man fault,” said one commenter. “No one thinks it's because women can't be logical. They're finally figuring out those liberal arts degrees are worthless,” said another. “The women not in computers fact is BS cuz the field was considered nerdy and uncool until shows like Big Bang Theory made it cool!” said yet another. The unfortunate reality facing many women in tech today is that, when they publicly address the gender gap in their field, they receive dozens of comments online that not only deny the reasons for the gender gap, but also, together, likely contribute to the gender gap. Nobody wants to work in a field where they aren't taken seriously, but that's what is happening. Today, on the Lock and Code podcast with host David Ruiz, we speak with Cooper about the gender gap in technology, what she did with the negative comments she received, and what, if anything, could help make technology a more welcoming space for women. One easy lesson, she said: "Guys... just don't hit on people at work. Just don't." Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
When did technology last excite you? If Douglas Adams, author of The Hitchhiker's Guide to the Galaxy, is to be believed, your own excitement ended, simply had to end, after turning 35 years old. Decades ago, writing privately at first (the writings were published only after his death), Adams came up with "a set of rules that describe our reactions to technologies." They were simple and short: Anything that is in the world when you're born is normal and ordinary and is just a natural part of the way the world works. Anything that's invented between when you're fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you're thirty-five is against the natural order of things. Today, on the Lock and Code podcast with host David Ruiz, we explore why technology seemingly no longer excites us. It could be because every annual product release is now just an iterative improvement from the exact same product release the year prior. It could be because just a handful of companies now control innovation. It could even be because technology is now fatally entangled with the business of money-making, and so, for every one money-making idea, dozens of other companies flock to the same idea, giving us the same product but with a different veneer—Snapchat recreated endlessly across the social media landscape, cable television subscriptions "disrupted" by so many streaming services that we recreate the same problem we had before. Or, it could be because, as was first brought up by Shannon Vallor, director of the Centre for Technomoral Futures in the Edinburgh Futures Institute, the promise of technology is not what it once was, or at least, not what we once thought it was. As Vallor wrote on Twitter in August of this year: "There's no longer anything being promised to us by tech companies that we actually need or asked for. Just more monitoring, more nudging, more draining of our data, our time, our joy." For our first episode of Lock and Code in 2023—and our first episode of our fourth season (how time flies)—we bring back Malwarebytes Labs editor-in-chief Anna Brading and Malwarebytes Labs writer Mark Stockley to ask: Why does technology no longer excite them? Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
On June 7, 2021, the US Department of Justice announced a breakthrough: Less than one month after the oil and gas pipeline company Colonial Pipeline had paid its ransomware attackers roughly $4.4 million in bitcoin in exchange for a decryption key that would help the company get its systems back up and running, the government had in turn found where many of those bitcoins had gone, clawing back a remarkable $2.3 million from the cybercriminals. In cybercrime, this isn't supposed to happen—or at least it wasn't, until recently. Cryptocurrency is vital to modern cybercrime. Every recent story you hear about a major ransomware attack involves the implicit demand from attackers to their victims for a payment made in cryptocurrency—and, almost always, the preferred cryptocurrency is bitcoin. In 2019, the ransomware negotiation and recovery company Coveware revealed that a full 98 percent of ransomware payments were made using bitcoin. Why is that? Well, partly because, for years, bitcoin received an inflated reputation for being truly "anonymous," as payments to specific "bitcoin addresses" could not, seemingly, be attached to specific persons behind those addresses. But cryptocurrency has matured. Major cryptocurrency exchanges do not want their platforms to be used to exchange stolen funds into local currencies for criminals, so they, in turn, work with law enforcement agencies that have, independently, gained a great deal of experience in understanding cybercrime. Also improving the rate and quality of investigations is the advancement of technology that actually tracks cryptocurrency payments online. All of these developments don't necessarily mean that cybercriminals' identities can be easily revealed. But as Brian Carter, senior cybercrimes specialist for Chainalysis, explains on today's episode, it has become easier for investigators to know who is receiving payments, where they're moving it to, and even how their criminal organizations are set up. "We will plot a graph, like a link graph, that shows [a victim's] payment to the address provided by ransomware criminals, and then that payment will split among the members of the crew, and then those payments will end up going eventually to a place where it'll be cashed out for something that they can use on their local economy." Tune in to today's Lock and Code podcast, with host David Ruiz, to learn about the world of cryptocurrency forensics, what investigators are looking for in reams of data, how they find it, and why it's so hard. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
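To picture the "link graph" Carter describes, here is a minimal Python sketch that follows a payment from a victim's wallet through a split among crew members to a cash-out point. The addresses, amounts, percentages, and flat dictionary format are invented for illustration; real blockchain analysis works over full transaction data and uses clustering heuristics that are not shown here.

# Minimal sketch of tracing payments through a graph of wallet-to-wallet transfers (illustrative only).
payments = {
    "victim_wallet": [("ransom_addr", 75.0)],
    "ransom_addr":   [("crew_member_1", 45.0), ("crew_member_2", 22.5), ("developer_cut", 7.5)],
    "crew_member_1": [("exchange_deposit", 45.0)],
}

def trace(start, graph):
    """Breadth-first walk of outgoing payments, printing each hop from the starting address."""
    queue, seen = [start], {start}
    while queue:
        sender = queue.pop(0)
        for receiver, amount in graph.get(sender, []):
            print(f"{sender} -> {receiver}: {amount} BTC")
            if receiver not in seen:
                seen.add(receiver)
                queue.append(receiver)

trace("victim_wallet", payments)

In this toy version, the walk ends at "exchange_deposit," which mirrors the investigative reality Carter points to: the moment funds reach a regulated exchange to be cashed out is often where identities can finally be attached to addresses.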