Unraveling the technology that affects us all but that few of us understand, in a format that gives you a basic understanding in the time it takes to drive to and from the grocery store.

The journalism industry is in trouble and has been for most of the 21st century. But the advent of AI-generated content has made professional journalists absolutely crucial, not just to democracies but to business success.

One of the most prolific and successful technology journalists is Bolaji Ojo. He has headed editorial efforts for the EETimes, AspenCore Media, the recently closed Ojo-Yoshida Report and the now-defunct EBN. Some of those titles may be foreign to people in the cybersecurity world, but not to executives in the electronics world that cybersecurity rests upon.

This brief conversation is packed with information that most people don't think about, including how technology's impact on every industry is expanding the scope of coverage. Traditional ad revenue has shifted to platforms like Google and Meta, allowing companies to reach customers directly, but that shift creates credibility challenges for those companies. AI-generated content is causing markets to distrust marketing messages more than ever before, which has established the need for experienced journalists to provide context, analysis, and trusted perspectives. Ojo describes how his new ventures are getting financial support from companies like Microchip, NXP, Infineon, Siemens and STMicroelectronics, not to drive sales but to establish credibility. The challenge is justifying that sponsorship to CFOs and boards versus spending on SEO. The value proposition of tech journalism, he said, is providing context, explaining "what it means," and offering trusted, independent analysis and future insights.

This episode may be the most important we've had this year. It provides a roadmap to effective marketing for cybersecurity companies in the near future.

Our expanded coverage of the viability of the AI industry, and how it could affect the cybersecurity industry, continues with this episode. We've blown past our 30-minute time limit because we are talking with three entrepreneurs who are less dependent on AI as a product feature. We talk with Tony Garcia, CISO of Infineo; Luigi Caramico, CEO of the innovative encryption company DataKrypto; and Chris Schueler, CEO of Cyderes, an automated MSSP. TL;DL: they are all sanguine about the success or failure of AI. The full story can be found at CPM.

We've been having a lively debate at Cyber Protection Magazine about the potential of AI, its dangers, and the chances it will survive in its current form. Co-editor Patrick Boch likes to say I'm something of a Luddite about it, and it's true (the Luddites were not against technology, but were adamant about protecting the workers using it). I like to say that Patrick is overly optimistic. But, then, he's a lot younger than me, so optimism comes more easily to him. Plus, he's not living in the dystopian hell-hole that the US has become. Lucky dog. That being said, this is the first of several discussions Patrick and I will be having on this subject, along with several other interviews and articles to come.

I was attending the AI Infra Summit recently and was handed a book with an intriguing title: AI Made Easy for Parents. It is an easy read, but ultimately disappointing, at least from an educator's view.

More often than not, when I'm interviewing a corporate leader about the news they are presenting to me, I find a bit of news in their own content that they didn't see. That was the case when I interviewed Mike Wiacek, founder and CTO of Stairwell. The company is in a very competitive market, with almost 250 companies dedicated to identifying malware before it can mess up your system. The report was about the rise of malware variants in the world, but their own data showed that, at least this year, the technology niche they are in is actually knocking that number down. He was surprised, but it made for a good discussion.

When it comes to the implementation of AI in a corporation, the question is not if or when. It's more like, “How much of a disaster are we willing to accept?” A whole new industry niche is arising to help companies determine just how mediocre and unsafe they want to be. Tumeryk is one of those companies helping provide that insight.

The first of September began with a bang. I've got a lot to write and talk about, but barely had time to do this much. There is an AI infrastructure conference coming next week, along with a special issue on AI economics. But companies really need to start learning how to tell a story all over again. Generative AI and marketese are killing a lot of really good technology. Listen in and find out how to fix that.

A few weeks ago I talked with Paul Valente, CEO of VISO TRUST. In the excitement of agentic AI adoption, a massive security hole has opened, and Valente's goal is plugging that hole. Our conversation adds a needed reality check to the AI euphoria.

I got a pitch from Reality Defender (deepfake video detection) about a partnership with ValidSoft (deepfake voice detection) last week. We don't generally cover partnership agreements because, well, we get a handful every week and they just aren't news. But the pitch threw out a few statistics that seemed a bit off. After some research, I found out how off they were.

See, fraud can be divided into two types: criminal fraud, which companies like these are dedicated to stopping, and legally protected fraud like advertising and political speech (First Amendment and all that). As far as impacts go, the latter is much more dangerous and prevalent, but security companies can't really do anything about it. And that is what I discussed with Reality Defender CEO Ben Colman.

Key takeaways and links:
- Deepfake fraud attempts are low in percentage but high in potential impact, especially for high-value clients in regulated industries.
- There's a critical need for national regulation to address AI-generated content on consumer platforms, as current measures are insufficient.
- Reality Defender and ValidSoft claim to lead in deepfake detection, focusing on inference-based and provenance-based approaches respectively.
- The "David Act" (Deepfake Audio Video Image Detection Act) has been proposed to require platforms to flag AI-generated content.

We are starting out the 11th season of Crucial Tech with a bang. I am completing an article on a significant security hole in AI agents that shows how the tech industry makes security an afterthought every, damn, time. One of the companies pitching a solution is Teleport, which manages identity access, and I had a friendly but contentious conversation about it with their CEO, Ev Kontsevoy, who insisted that identity is NOT a security issue. OK, then.

Today ends the 10th season of Crucial Tech: 250+ episodes over six years and not a single repeat subject. Today we look at an aspect of cyber insurance not yet discussed, as far as we can find: Why do so few cybersecurity companies carry cyber insurance? We bring in our friend and benefactor, Spencer Timmel from Safety National Insurance, to get that answer. We are taking a few weeks off before launching into season 11. Send any ideas for new episodes to Cyber Protection Magazine.

If you are one of the smart people who have a subscription to Cyber Protection Magazine, you will soon receive our next special issue focused on the rise of non-human identities (NHI) and their impact on society. If not, you get just this podcast with a hint of what is in the issue. We talk with Mike Towers, Chief Security & Trust Officer at Veza, about the meteoric increase in NHIs. As a bonus, we also look into the theft of $90 million in cryptocurrency by the Israeli hacktivist group Predatory Sparrow. This represents a new area of asymmetric warfare.

This episode of Crucial Tech is a bit different. It's about technology public relations, rather than specifically about a product or service.

Tech PR has a problem. Search Engine Optimization (SEO) has already damaged practitioners' ability to connect with members of the press, and large language models (LLMs) are destroying their ability to tell a compelling story to the press and customers.

We sat down with one of the last great practitioners of tech PR, Beth Trier, to talk about how she is dealing with the degradation of the industry. Our 30-minute discussion was illuminating. She agreed that SEO and LLMs have contributed significantly to the decline, and explained how she and her team are making every effort to maintain professional and effective practices. She also points out that the fragmentation of the press adds significant complexity to their work. We also discuss the nature of “earned media” and how few people really understand what that means. We wrap up with ways the press and public relations can work better together to do the crucial job of providing ethical and independent coverage of what is happening in the industries they support.

Make sure you take the poll on Spotify.

I haven't been shy about rejecting the hype behind the coming of Q-day -- the day that a quantum computer exists that can break modern encryption. But I've always felt that the most powerful encryption available could somehow be bypassed. Talking with Crick Waters, CEO of Patero, my fears were realized. And yet, I am encouraged. We also talk again with Spencer Timmel, head of cybersecurity insurance at Safety National, on the effect of mergers and acquisitions on security.

This week is a short one and a two-fer. ABC fired a long-time reporter for expressing an opinion on social media. One might be tempted to call it censorship and bowing to our weak and failing leader. But I understand the reason and took some time to explain it. On a more positive note, I talked with Spencer Timmel of Safety National Insurance about the current retreat of the US government from securing the internet. He provides a refreshing idea that it might not be so bad.

Yes, AI is a problem in the hands of bad actors, especially when they use bots to automate brute-force attacks on identity. There are also a ton of companies dedicated to protecting your identity and keeping the bad guys from impersonating you and those you care about. One of those companies is Ping (no, not the guys that make the golf clubs). In a continuation of our series on bots, we talk with Peter Barker, chief product officer for Ping, about what they are doing about AI-based attacks.

A few weeks ago I posted what was supposed to be an interview with Dale Hoak, CISO for RegScale, on understanding Zero Trust. Unfortunately, the audio was from yet another interview, on a different subject, that I now have to repost. That's what comes from having to wrangle 50 hours of recordings from the RSAC Conference along with follow-ups. So, I promise, this is the right one.

During the @RSAC Conference in April I met with Matthew Gracey-McMinn, VP of Threat Services for Netacea, and we talked about the damage malicious bots can do. His company is one of a handful dedicated to protecting users, in particular media companies, against that threat. It was a short conversation and I decided it was worth going into a bit more depth.

Last week, Dr. Zero Trust, AKA Dr. Chase Cunningham, posted on LinkedIn that he was fed up with people who say they don't understand Zero Trust. To a certain extent, I feel his frustration. Journalists understand the concept. We have a decades-old saying, “If your mother says she loves you, check it out.” It doesn't get more zero trust than that. The problem is that while it's easy to understand as a concept, it isn't easy to build a zero trust infrastructure, especially with the misleading gobbledygook most cybersecurity companies put out. Cunningham says there are hundreds of books and articles on the subject. He's right, of course. The question is, which one do you choose?

At the RSAC Conference, I sat down and briefly talked with Dale Hoak, CISO for RegScale, about how easy it is to understand Zero Trust but how complex it can be to pull it off. RegScale does governance, risk and compliance (GRC) and has only been around since 2021, but I found several competitors who promote themselves by saying “when you're tired of RegScale, come see us.” I find that a ringing endorsement of the company. So I called Dale up and said I wanted a longer talk about the issue of Zero Trust and where GRC fits in. We also spent some time talking about how the US federal government seems to be stepping away from cybersecurity regulations. I'll be doing a larger story about that later, but this conversation is a good start.

Physical authentication keys are a common trope in movies, TV and spy thrillers and they have been around for almost 20 years. But they are still hard to find in real life. We talked with Alex Summerer, head of authentication for Swissbit, which is a relatively new player in the field, headquartered in ...of course, Switzerland. Frankly, after talking with him I'm wondering why I haven't bought one of these things.

Still digging through dozens of hours of recordings and pages of notes from the #RSAC_Conference last week. But while looking into the issue of bots, both good and bad, I discovered a fairly recent story about how scammers use bots to steal financial aid. And as I always say, if I don't know about something, I know someone who does. So I called up an old friend, Craig Mosher, who teaches history and political science, to talk about what he has experienced with fake students and how to deal with them.

This was another exhausting #RSAC in San Francisco, but I think I'm finally getting a handle on it. There will be more to come, but Bruce Schneier gave a keynote on Tuesday that I think bodes well for journalism. We also had a visit with our friend at Safety National Insurance, Spencer Timmel, about just how far insurance can go to cover cybersecurity weaknesses.

This is a short episode previewing what I'll be doing at RSAC 2025 next week, kudos to the California Franchise Tax Board, and a how-to on working with the press.

There comes a moment in many abusive relationships when observant friends encourage the abused party to leave the abuser.

I consider myself a friend of the cybersecurity industry. Aside from its bad marketing practices, I see it as important to the well-being of society worldwide. And that's why I say now: it's time to leave the federal government, at least for the next two years. The actions persecuting Chris Krebs and SentinelOne, merely for doing their jobs without political bias, demonstrate that no amount of money is worth working with the Trump administration.

I spent much of the past week unsuccessfully trying to get members of the US cyber industry to comment publicly on this issue. I was able to get public comment from a few outside the country. Some of that can be found in my piece this week on Cyber Protection Magazine. This podcast is with one of the commenters, James Bore, a British cybersecurity consultant and speaker. He says what everyone is thinking: it's time to divorce the orange git.

When it comes to polite discussion, there are two things you should never bring up: politics and religion. At the same time, most people would also rather not talk about insurance or data encryption. Well, I can't say I'm all that polite, because that is exactly what this episode is about. The need for encryption on our data has never been more important, but most of us don't know what is or isn't encrypted, and that knowledge has a direct bearing on how much cybersecurity insurance might cost. So we sat down with Spencer Timmel, head of cybersecurity and technology insurance for Safety National, the primary sponsor for this podcast, and discussed the unmentionable topics.

Microsoft has cancelled plans for a massive build-out of AI data centers, and China is shutting down gigawatts of AI processing due to lack of demand. It seems the AI boom is on the verge of a bust as big as the dot-com collapse. And since cybersecurity companies seem dependent on AI buzzwords to sell their services, that is going to mean change for the industry. We chatted with ThreatLocker CEO Danny Jenkins and Reality Defender CEO Ben Colman about what is and isn't real regarding the concept of AI threats.

The "Signalgate" scandal has raised the issue of encryption to a broader audience in the past week. On the plus side, many sources say that 95% of digital traffic is encrypted now, compared to 43% in 2014, but most people have no idea that their personal data is being encrypted. It's one of those invisible technologies that touch many people.But there is a basic fact, that a lot of stuff that should be secured, isn't because users don't know they have to turn it on For example, WhatsApp, the messenger platform from Meta, advertises that they have end to end encryption, but they don't tell you that you have to turn it on to get that benefit. So that brings us to today. What is encryption? Why do we need it and where does it come into play. We talk with Luigi Caramico, CTO and founder of DataKrypto, a company dedicated to encryption. And not just encryption but fully homomorphic encryption, an important step forward in protecting our data

As I've said before, I get a lot of "studies" and "surveys" from cybersecurity firms with breathless and urgent warnings about a coming cyber-pocalypse of one sort or another. Funny thing, it's always about something that they supposedly defend against. As I started writing this note, I got another one.

I did one podcast about a survey from Huntress about phishing in February, which was actually pretty good. Then I did one a couple of weeks ago about a less-than-good survey from iProov. Well, my partner in Germany, Patrick Boch, wanted to get in on the fun, and we decided to talk about two more of these, also less than good, from HiddenLayer and Ontinue. No, we didn't interview representatives from either company on this one. We were just having some fun at, unfortunately, their expense.

Here are some of the highlights of our discussion:
- Many cybersecurity surveys lack scientific rigor, often using small, potentially biased samples (e.g., 250 IT decision-makers).
- Reports frequently make vague assumptions or present data in ways that may exaggerate threats or market demand.
- Deepfake attacks, while concerning, are currently not as prevalent or successful in cybercrime as often portrayed.
- The Verizon Data Breach Investigations Report (DBIR) is considered a gold standard for its concrete terms and unbiased approach.

The DDoS attack on X.com this week provided a certain amount of schadenfreude for people less than enamored of Elon Musk. It also rang alarm bells in the cybersecurity community, as that style of attack seems to be making a comeback, and not for financial gain. All indications are that corporations and, in particular, government institutions are not ready to repel attacks motivated by political revenge. We talked with Inversion6 CISO Ian Thornton-Trump about how the attack was allowed to happen and what it may mean for the very near future.

I get a lot of "studies" about the state of cybersecurity, and most of them are poorly done. In Episode 10.8 I talked about one I like, from Huntress, and the week it came out I got pitched another report, from iProov, that was, well, less than well done. The more I tried to help them focus on reality, the more they pushed back. Again, this is not a knock on what the company does, which is to ensure the veracity of biometric identity, but it is a good example of how cybersecurity companies spend too little, and on the wrong efforts, to get their story out.

Artificial Intelligence is all the rage right now, with broad claims about how it is going to change the world as we know it. I have my doubts about the hype, and so does Bob Ackerman, the granddaddy of cybersecurity venture capital, founder and managing director of AllegisCyber Capital (for the past 29 years) and cofounder of the cyber incubator DataTribe in Maryland. I always enjoy chatting with Bob because he sees the nuts and bolts of tech advancements and isn't the kind of investor to get swayed by the glitz of questionable marketing.

In this session, we discussed how AI is starting to displace high-paying jobs like computer coding and legal work, raising concerns about who will be left to buy the AI subscriptions and services. While there will be short-term disruption, he thinks AI will ultimately enable new industries and use cases that create new jobs and economic opportunities. Surprisingly enough, he believes the transition may require policies like universal basic income to support displaced workers.

We also discussed the demographic challenges facing countries like the US and Europe, with aging populations and declining birth rates straining social welfare systems. AI and automation may help address labor shortages, but they also raise questions about how to fund programs like Social Security and Medicare long-term. More importantly, Bob thinks that the people who invested in AI early are going to lose their shirts. Check it out.

Today we are talking about insurance and government regulation... No! Wait! It's good stuff, so bear with us. As the US administration seems intent on dismantling government protections in cybersecurity, we will all rely heavily on foreign governments and private industries, like insurance, to keep us safe from cybercrime. The Digital Operational Resilience Act, which the EU put into force in January, is a good example of the former, and the insurance industry is a good example of the latter. We talk to Spencer Timmel, head of cybersecurity and technology for Safety National Insurance (our sponsor), and Arnaud Treps from Odaseva about how insurance and cybersecurity tech companies are working hand in glove to fill the gaps being left by the Musk/Trump administration.

Phishing attacks are on the rise again with the help of sophisticated generative-AI tools. But new defenses and increased wariness among potential victims are blunting the potential for widespread harm. We talked with Greg Linares, Principal Threat Intelligence Analyst for Huntress, about their annual threat intelligence report. It sounds grim, but in a new article on Cyber Protection Magazine, we also report on how defensive technology from companies like DeepTempo, along with personal awareness, can blunt the attacks.

I am very far behind in writing stories and making podcasts. The events since January 20 have made it difficult to keep up. But today, while walking downtown, I came across a brand-new independent bookstore that had a copy of a book dedicated to Martin Luther King Jr.'s "Dream" speech. I attended that event, with my mother, when I was 11. It was a foundational moment for me. It is when I became "woke." When I saw that book, I knew I had to buy it for my grandchildren, because being two generations separated from that moment is too far. I needed to bring it forward for them, so I bought the book and intend to read it to them and help them understand how important the dream is for them as well, especially today. This isn't a political issue for me. It is how I want to model my life. It does affect my politics, but it also affects my view of family, friends, neighbors, theology, and the world. If you choose to listen to me read this speech, I thank you for taking the time. My thanks extends even to those who are offended by it, as long as they listen to it. It is important to hear, even 60+ years after the fact. Only by repeating it can we learn from it.

This week, we are talking to a lawyer. Maryam Meseha is a founding partner of Pierson Ferdinand LLC, a relatively new and large firm dedicated to digital security. In the first few weeks of the year, the new US administration has castrated the governmental infrastructure meant to make sure corporations keep customer data safe, especially in the area of retail fraud. It's law firms like Pierson Ferdinand and insurance companies like our sponsor Safety National that are stepping up to remind companies that maintaining a strong security posture is a good idea. We appreciate that sentiment. Hope it works.

I've had several discussions about the nature of censorship, freedom of speech and moderation, and I came to the realization that most people have no idea what social media moderation is. So I did a bit of a rant. We are also bringing back the top threat reports from Fletch for a bit of lightheartedness before the rant.

James Bore is a cybersecurity consultant, speaker and publisher based in the UK. He has a refined sense of cynicism that clicks with my own, so we've been chatting back and forth for several months on various subjects and decided it's probably time to record some of our interactions. Today, we are looking at the industry's preferred marketing practice (shiny objects): sowing fear, uncertainty and doubt to get people to buy products. It drives us both nuts. The issue is not limited to cybersecurity, but it is prevalent in the industry. I'm guessing this conversation will resonate with many of you. Our hope is that our marketing listeners will rethink some strategies.

For about two years, the team at Cyber Protection Magazine has debated whether Meta platforms (Facebook, Instagram and *shudder* WhatsApp) were valuable or even necessary for the reach of our magazine. For two years, I've been outvoted every time. Instead, I unilaterally decided to divorce from the platforms. Providentially, Mark Zuckerberg made two announcements in as many weeks that made the decision unanimous. We are leaving Meta behind for good. Instead, we will remain on LinkedIn and join Mastodon and Bluesky this year. This podcast is the recording of the conversation my co-founder, Patrick Boch, and I had on the "momentous" decision, which also drifted into the issue of what constitutes valid information. Check it out.

We open a new year and a new season with our friend Ian Thornton-Trump, chief information security officer at the MSSP Inversion6, and in 30 minutes we take on some pretty meaty subjects. First, we discuss how China strategically infiltrated technology systems in the US and other countries as a geopolitical message rather than as outright attacks. He discusses the challenges of securing complex, interconnected systems and the need for proactive defense. Next, we review the rise of corporate power and influence, and how the increasing wealth and influence of individuals like Elon Musk is disrupting the traditional balance of power in democracies. The ethical concerns around wealthy individuals wielding disproportionate political influence could result in something the oligarchs are not expecting. Finally, we review potential trade wars and the possibility of Canada and Mexico joining the BRICS alliance. 2025 is going to be bumpy but very interesting.

The available guests for this last podcast of the year dried up pretty quickly so I thought I would give some closing thoughts on a big issue facing the world: Trust and the lack of it. Also, our last threat reports courtesy of Fletch.ai

Our friends at Fletch provide a grand slam of threats for Thanksgiving week, covering Apple, Android, AWS and Microsoft vulnerabilities. No regular podcast this week, but we will be back next week with a possible new way to abuse AI.

This is part two of our mashup of recent surveys. This time we talk with Tom Tovar, CEO of Appdome about their comprehensive annual survey of consumer attitudes regarding security in digital technology. The good news is there is a groundswell of security "consciousness" regarding the subject. The bad news is the consumers are not confident that corporations even care.

This week, a two-parter. I'm still trying to make sense of all the surveys and studies sent to me. Figuring out whether they are plagiarized, use inadequate samples, are a lame attempt at self-promotion or are actually good data is almost a full-time job. Luckily I got a couple of good ones this month and am doing another mashup. Today's interview is with Frank Teruel, CFO of Arkose Labs. We are talking about a finding in their latest survey showing that managers and developers of apps are dealing with no small amount of stress over how to deal with adversarial AI. Later this week, I hope to post a second interview about where consumers are in this mess. Then I'll wrap it up next week with an article that looks into the potential of actually controlling the damage caused by AI. Also, an abbreviated threat report from the folks at Fletch.ai.

This episode includes our weekly top cyber threats, with help from Fletch and, this week, Cyjax, and a short interview with cybersecurity contrarian James Bore, a consultant in the UK and a kindred spirit. The interview introduces the theme for Cyber Protection Magazine next year: put up or shut up. The past decade has been filled with optimism in the tech sector about what they thought they could accomplish. Social media companies thought they could democratize the internet and provide a public square for free speech. Hardware companies thought they could make computers so fast they could replace the human brain. AI companies thought they could make a computer program smarter than humans. And cybersecurity companies were positive that if every company would use their products they could stop cybercrime. None of that is close to being true. In some cases it has proven to be absolutely false. So we are going to spend a lot of time debunking assumptions and looking at what needs doing.

I received more than a dozen studies and reports on the "state of cybersecurity," all with different foci depending on the company pushing the document. It seems like they are replacing press releases as a primary marketing tool. But there was one thing that jumped out at me: almost every one of them had a throwaway line that customers had #zerotrust in the efficacy of the tools and services they bought to keep them secure. Of course, that's what I went after. We talked to executives from Keepit, Cogility, and Protegrity.

Here are our top three threat reports for the week. Hackers are targeting gambling apps on mobile devices and obsolete Microsoft products. Thanks to the folks at Fletch for the info.

Our friends at Fletch.AI dropped a bunch of threat reports this week; here's what we see as the top three.

I bet you never heard of FHE. Me neither. Then I got a pitch about it. I tried to ignore it because I had never heard of it, but they were insistent. It turns out to be interesting. Fully homomorphic encryption, or FHE, has been talked about for about five years, but now it has its very own industry association and NIST is starting to take it very seriously. It doesn't eliminate quantum encryption standards, but it might be a better defense against nation-state attempts to break the strongest modern encryption, although I still think that's more a fever dream than a potential reality. One of the members of the new association (which has the unfortunate name FHETCH), Niobium, put me in front of its chief product officer, Jorge Myszne, to give me the lowdown on this tech.
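For anyone who wants a feel for what "computing on encrypted data" actually means, here is a minimal, deliberately insecure sketch. It uses textbook RSA, which is only partially homomorphic (multiplication only) and uses toy key sizes; real FHE schemes go much further, supporting both addition and multiplication on ciphertexts. The code and numbers are my own illustrative assumptions, not anything DataKrypto, Niobium or FHETCH actually ships.

```python
# Toy illustration of the idea behind homomorphic encryption:
# computing on ciphertexts without decrypting them first.
# Textbook (unpadded) RSA happens to be multiplicatively homomorphic:
#     Enc(a) * Enc(b) mod n  ==  Enc(a * b)
# Real FHE schemes (BFV, CKKS, TFHE, ...) support addition AND
# multiplication; this sketch is for intuition only and is NOT secure.

# Tiny, insecure demo parameters (real keys are 2048+ bits).
p, q = 61, 53
n = p * q                    # public modulus (3233)
e = 17                       # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)          # private exponent (modular inverse, Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
c_a, c_b = encrypt(a), encrypt(b)

# Multiply the two ciphertexts without ever seeing the plaintexts.
c_product = (c_a * c_b) % n

assert decrypt(c_product) == a * b
print("Decrypting Enc(7) * Enc(6) gives:", decrypt(c_product))  # prints 42
```

The point of the demo is that whoever holds only the ciphertexts can still do useful work on them; only the key holder ever sees the result in the clear, which is the property FHE extends to arbitrary computation.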

After getting knocked for a loop with a dose of Covid I'm slowly crawling back to the desk and providing some timely advice regarding current and predicted threat reports from our friends at Fletch.

Quick, what is the biggest single category of cybercrime today? If you said pig butchering, you get a gold star. (If you said ransomware, you need to stop believing press releases.) It's big: $75 billion in stolen funds, mostly cryptocurrency, last year alone. And it wasn't from lonely elderly people. We talked with Arkose Labs CEO Kevin Gosschalk about the growing phenomenon and how you can defend yourself. (Hint: don't be naive.)

First, apologies for the sound quality. I tried out a new microphone and I definitely do not like it; I'm going back to the tried and true. But it stands as an example of what we are talking about today: when people from one discipline start moving into another discipline where they lack expertise, things go haywire. Such is the case with the digital world and energy production. The big news this week is Microsoft's plan to reopen the Three Mile Island nuclear power plant to power its planned AI datacenter. Joe Basques and I have a frank discussion about how the AI/social media/internet industry just lacks the knowledge of how to do this right, and on the current path, chaos is bound to reign.