Can we align AI with society's best interests? Tristan Harris, co-founder of the Center for Humane Technology, joins Ian Bremmer on the GZERO World Podcast to discuss the risks to humanity and society as tech firms ignore safety and prioritize speed in the race to build more and more powerful AI models. AI is the most powerful technology humanity has ever built. It can cure disease, reinvent education, unlock scientific discovery. But there is a danger in rolling out new technologies en masse to society without understanding the possible risks. The tradeoff between AI's risks and potential rewards is similar to the deployment of social media. It began as a tool to connect people and, in many ways, it did. But it also became an engine for polarization, disinformation, and mass surveillance. That wasn't inevitable. It was the product of choices—choices made by a small handful of companies moving fast and breaking things. Will AI follow the same path?

Host: Ian Bremmer
Guest: Tristan Harris

Subscribe to the GZERO World with Ian Bremmer Podcast on Apple Podcasts, Spotify, or your preferred podcast platform, to receive new episodes as soon as they're published.
These removals are part of a broader pattern under the Trump administration, which began issuing executive orders to direct federal agencies to remove or modify substantial amounts of government content.
The Cybersecurity and Infrastructure Security Agency (CISA) has issued an emergency directive for federal agencies to update their F5 products following a significant breach where hackers accessed source code and undisclosed vulnerabilities. This incident, discovered in August, poses a serious risk to federal networks, as the threat actor could exploit these vulnerabilities to gain unauthorized access and exfiltrate sensitive data. Agencies are required to apply the latest updates by October 22nd and report their F5 deployments by October 29th, highlighting the urgency of addressing these security concerns.

In a related development, the National Institute of Standards and Technology (NIST) is encouraging federal agencies to take calculated risks with artificial intelligence (AI) under new federal guidance. Martin Stanley, an AI and cybersecurity researcher, emphasized the importance of risk management in AI deployment, particularly in comparison to more established sectors like financial services. As agencies adapt to this guidance, they must identify high-impact AI applications that require thorough risk management to ensure both innovation and safety.

A report from Cork Protection underscores the need for small and medium-sized businesses (SMBs) to adopt a security-first approach in light of evolving cyber threats. Many SMBs remain complacent, mistakenly believing they are not targets for cybercriminals. The report warns that this mindset, combined with the rising financial risks associated with breaches, necessitates a shift towards a security-centric operational model. The cybersecurity services market is projected to grow significantly, presenting opportunities for IT service providers that prioritize security.

Apple has announced a substantial increase in its bug bounty program, now offering up to $5 million for critical vulnerabilities. This move reflects the growing importance of addressing security challenges within its ecosystem, which includes over 2.35 billion active devices. The company has previously awarded millions to security researchers, emphasizing its commitment to user privacy and security. As the landscape of cybersecurity evolves, managed service providers (MSPs) are urged to tighten vendor monitoring, incorporate AI risk assessments, and focus on continuous assurance to meet the increasing demands for security.

Three things to know today:
00:00 Cybersecurity Crossroads: F5 Breach, AI Risk, and Apple's $5M Bug Bounty Signal Security Accountability
06:44 Nearly a Third of MSPs Admit to Preventable Microsoft 365 Data Loss, Syncro Survey Finds
09:22 AI Reality Check: Workers' Overconfidence, Cheaper Models, and Microsoft's Scientific Breakthrough Signal Maturity in the Market

This is the Business of Tech. Supported by: https://mailprotector.com/mspradio/
Lowenstein Sandler's Insurance Recovery Podcast: Don’t Take No For An Answer
In this episode of Don't Take No For An Answer, new Lowenstein partner Jeremy M. King joins host Lynda A. Bennett to discuss AI cybersecurity risks and how to insure them. They discuss the plethora of security risks associated with AI usage and the urgency for organizations to review their insurance policy language before a claim is presented to avoid surprises later. King and Bennett encourage listeners to make sure their patchwork quilt of coverage does not have any holes, from crime to standalone cyber to professional liability to media policies. The episode concludes by discussing the rise of regulatory actions being taken by states to address AI usage and how that impacts coverage rights.

Speakers:
Lynda A. Bennett, Partner and Chair, Insurance Recovery
Jeremy M. King, Partner, Insurance Recovery
Company background: "HSO is the second largest Microsoft partner in the globe," Holwagner reports. It focuses on industries including professional services, manufacturing, finance, and the public sector. HSO continues to grow not only with its traditional ERP services but also around cloud and AI services. "The mission here is really to improve our clients' business performance with the results of Microsoft solutions."

AI's market impact: "It's definitely a transformation happening faster than anything I've seen before," Holwagner says. While there have already been significant advancements with AI, it's still only the beginning of what has yet to be built out and understood. He breaks down AI across four different roles:
- At the top level, boards and owners are pushing for areas of efficiency to stay competitive, reimagining the business model using AI.
- The next level is the CTO or an IT manager; they have efficiency demands, but they're also primarily thinking about how to contain information and data in a security model.
- The business leaders or department heads are being tasked to think about efficiency using AI, but they're mostly busy keeping their engine going. They need tools that show them where to get ROI.
- The last level is HR, which might be considering where AI is filling in for various jobs.

Perspectives for applying AI: HSO looks from a responsibility perspective in three different areas. First, it aims to educate customers on what's possible while also focusing on what's doable. Second is protection, which involves having control over your domain information. The third area is thinking about use cases for specific AI components.

Organizational transformation: With the introduction of AI, there's a transformation happening across organizations in a variety of industries. AI has been thought of as a technical element when it needs to be included in functional conversations, especially for consulting businesses, Holwagner notes. Leaders and managers must understand how to weave in AI to give it value. AI transformation will likely lead to a "healthy reduction in certain areas" in the workforce, but "the transformation of what people are going to do in the organization is going to change." There will be more business logic transformation consulting and fewer hands-on-the-keyboard tasks, Holwagner shares.

Summit NA: HSO will be attending Community Summit North America. You can connect with HSO at booth #209. The HSO team will be presenting several sessions throughout the event as well, including:
- The Latest D365 AI Agents and Features to Automate Your Supply Chain on Monday, October 20th
- Delivering a Scalable, Secure Data & AI Platform on Monday, October 20th
- 3 Hidden Risks of AI in the Enterprise—and How to Manage Them Responsibly on Tuesday, October 21st
- Solving Customer Master Data Challenges for a 360° View in Dynamics 365 CE (CRM) and F/SCM (FO) on Wednesday, October 22nd

Visit Cloud Wars for more.
On Healthy Mind, Healthy Life, Avik speaks with award-winning techno-thriller author Guy Morris about blending rigorous research with fiction to wake readers up to real-world risks. We unpack how 70 years of AI progress, geopolitics, national debt, climate pressure, and election manipulation converge—and why credible facts make stories stick. Guy shares the FBI-knock-on-the-door moment that reshaped his view of technology, a clear warning on facial recognition and biometric logins, and his choice to write high-tension, non-gun-hero protagonists grounded in human ethics. If you care about mental clarity in an anxious news cycle, digital safety, and page-turners that actually teach, this episode is for you.

About the guest: Guy Morris spent 38 years leading tech and strategy at firms like IBM, Oracle, and Microsoft before turning to fiction. Since 2020 he's released multiple award-winning thrillers—often compared to Dan Brown and Robert Ludlum—rooted in real technologies, history, and geopolitics.

Key takeaways:
- Research isn't window dressing; verifiable facts make fiction provoke thought and change behavior.
- AI isn't new—it's a 70-year arc now reaching mass application; risks arise when commercial incentives downplay failure modes.
- Guy writes to show the convergence of pressures: geopolitics, national debt, climate, banking shifts, misinformation, and democratic backsliding.
- Thrillers as a release valve: transforming societal anxiety into narrative helps audiences process fear and consider options.
- A 1990s incident—an NSA program "escaped"—sparked Guy's security lens and eventually drew a visit from the FBI, proving how plausible his reconstruction was.
- Core advice: avoid using biometrics (face, iris, thumbprint) as passwords; if compromised, you can't reset your face or print.
- He favors non-violent protagonists and ethical problem-solving; ingenuity over lethality preserves human dignity and reduces copycat harm.
- Mental habit: focus on history + humanity—tech amplifies old human tendencies; understand the past to choose wiser futures.

How to connect with Guy Morris:
Website: https://www.guymorrisbooks.com/
Instagram: https://www.instagram.com/authorguymorris/

Want to be a guest on Healthy Mind, Healthy Life? Send a message on PodMatch: https://www.podmatch.com/hostdetailpreview/avik

Disclaimer: This video is for educational and informational purposes only. The views expressed are the personal opinions of the guest and do not reflect the views of the host or Healthy Mind By Avik™️. We do not intend to harm, defame, or discredit any person, organization, brand, product, country, or profession mentioned. All third-party media used remain the property of their respective owners and are used under fair use for informational purposes. By watching, you acknowledge and accept this disclaimer.

Healthy Mind By Avik™️ is a global platform redefining mental health as a necessity, not a luxury. Born during the pandemic, it's become a sanctuary for healing, growth, and mindful living. Hosted by Avik Chakraborty—storyteller, survivor, wellness advocate—this channel shares powerful podcasts and soul-nurturing conversations on:
• Mental Health & Emotional Well-being
• Mindfulness & Spiritual Growth
• Holistic Healing & Conscious Living
• Trauma Recovery & Self-Empowerment
With over 4,400+ episodes and 168.4K+ global listeners, join us as we unite voices, break stigma, and build a world where every story matters. Subscribe and be part of this healing journey.
Contact
Brand: Healthy Mind By Avik™
Email: join@healthymindbyavik.com | podcast@healthymindbyavik.com
Website: www.healthymindbyavik.com
Based in: India & USA
Open to collaborations, guest appearances, coaching, and strategic partnerships.
Is your AI built on quicksand? Learn how bad data, poisoned datasets, and deep fakes threaten your AI systems, and what to do about it.

In this episode of CXOTalk (#896), AI luminaries Dr. David Bray and Dr. Anthony Scriffignano reveal the hidden dangers lurking in your AI foundations. They share practical strategies for building trustworthy AI systems and escaping the "AI quicksand" that traps countless organizations.
Take a Network Break! We start with a two-part listener follow-up and sound alarms about a serious flaw in Termix and tens of thousands of still-vulnerable Cisco security devices. Alkira debuts an MCP server and AI copilot for its multi-cloud networking platform; Cato Networks releases a Chrome-based browser extension to help secure contractor and personal...
Welcome to EO Radio Show – Your Nonprofit Legal Resource. I'm Cynthia Rowland, and this is the third episode in a short series about using AI for impact and how nonprofits can work smarter in the digital age. In today's episode, I am again joined by Sly Atayee and Kirstie Tiernan from BDO. We delve into the risks and realities of adopting AI in the nonprofit sector. From securing donor data to preventing bias and managing tool sprawl, we explore how nonprofits can protect their mission while embracing innovation.

Show Notes:
Cynthia Rowland, Podcast Host, Partner, Farella Braun + Martel
Sly Atayee, Director, BDO USA, Certified Fraud Examiner
Kirstie Tiernan, Principal - Digital Go-to-Market Leader, Labs Practice Leader, BDO USA
BDO USA: The Top AI Risks in the Nonprofit Sector
EO Radio Show AI YouTube playlist
EO Radio Show #136: AI on a Shoestring: A Startup Nonprofit's Real-Life Journey
EO Radio Show #137: Leading With AI: Nonprofit Management and Boardroom Perspectives
COSO Integrated Internal Control Framework Executive Summary
EO Radio Show Nonprofit Fraud Prevention YouTube playlist
EO Radio Show #84: Nonprofit Book Review: ABA Guidebook for Directors of Nonprofit Corporations

If you have suggestions for topics you would like us to discuss, please email us at eoradioshow@fbm.com. Additional episodes can be found at EORadioShowByFarella.com.

DISCLAIMER: This podcast is for general informational purposes only. It is not intended to be, nor should it be interpreted as, legal advice or opinion.
Replay Episode: Python, Anaconda, and the AI Frontier with Peter Wang

Peter Wang — Chief AI & Innovation Officer and Co-founder of Anaconda — is back on Making Data Simple! Known for shaping the open-source ecosystem and making Python a powerhouse, Peter dives into Anaconda's new AI incubator, the future of GenAI, and why Python isn't just "still a thing"… it's the thing. From branding and security to leadership and philosophy, this episode is a wild ride through the biggest opportunities (and risks) shaping AI today.

Timestamps:
01:27 Meet Peter Wang
05:10 Python or R?
05:51 Anaconda's Differentiation
07:08 Why the Name Anaconda
08:24 The AI Incubator
11:40 GenAI
14:39 Enter Python
16:08 Anaconda Commercial Services
18:40 Security
20:57 Common Points of Failure
22:53 Branding
24:50 watsonx Partnership
28:40 AI Risks
34:13 Getting Philosophical
36:13 China
44:52 Leadership Style

LinkedIn: linkedin.com/in/pzwang
Website: https://www.linkedin.com/company/anacondainc/, https://www.anaconda.com/

Want to be featured as a guest on Making Data Simple? Reach out to us at almartintalksdata@gmail.com and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.
My fellow pro-growth/progress/abundance Up Wingers,

Artificial intelligence may prove to be one of the most transformative technologies in history, but like any tool, its immense power for good comes with a unique array of risks, both large and small.

Today on Faster, Please! — The Podcast, I chat with Miles Brundage about extracting the most out of AI's potential while mitigating harms. We discuss the evolving expectations for AI development and how to reconcile with the technology's most daunting challenges.

Brundage is an AI policy researcher. He is a non-resident fellow at the Institute for Progress, and formerly held a number of senior roles at OpenAI. He is also the author of his own Substack.

In This Episode
* Setting expectations (1:18)
* Maximizing the benefits (7:21)
* Recognizing the risks (13:23)
* Pacing true progress (19:04)
* Considering national security (21:39)
* Grounds for optimism and pessimism (27:15)

Below is a lightly edited transcript of our conversation.

Setting expectations (1:18)

It seems to me like there are multiple vibe shifts happening at different cadences and in different directions.

Pethokoukis: Earlier this year I was moderating a discussion between an economist here at AEI and a CEO of a leading AI company, and when I asked each of them how AI might impact our lives, our economist said, "Well, I could imagine, for instance, a doctor's productivity increasing because AI could accurately and deeply translate and transcribe an appointment with a patient in a way that's far better than what's currently available." So that was his scenario. And then I asked the same question of the AI company CEO, who said, by contrast, "Well, I think within a decade, all human death will be optional thanks to AI-driven medical advances." On that rather broad spectrum — more efficient doctor appointments and immortality — how do you see the potential of this technology?

Brundage: It's a good question. I don't think those are necessarily mutually exclusive. I think, in general, AI can both augment productivity and substitute for human labor, and the ratio of those things is kind of hard to predict and might be very policy dependent and social-norm dependent. What I will say is that, in general, it seems to me like the pace of progress is very fast, and so both augmentation and substitution seem to be picking up steam.

It's kind of interesting watching the debate between AI researchers and economists, and I have a colleague who has said that the AI researchers sometimes underestimate the practical challenges in deployment at scale. Conversely, the economists sometimes underestimate just how quickly the technology is advancing. I think there's maybe some happy middle to be found, or perhaps one of the more extreme perspectives is true. But personally, I am not an economist, I can't really speak to all of the details of substitution, and augmentation, and all the policy variables here, but what I will say is that at least the technical potential for very significant amounts of augmentation of human labor, as well as substitution for human labor, seems pretty likely on even well less than 10 years — but certainly within 10 years things will change a lot.

It seems to me that the vibe has shifted a bit. When I talk to people from the Bay Area and I give them the Washington or Wall Street economist view, to them I sound unbelievably gloomy and cautious.
But it seems the vibe has shifted, at least recently, to where a lot of people think that major advancements like superintelligence are further out than they previously thought — like we should be viewing AI as an important technology, but more like what we've seen before with the Internet and the PC.

It's hard for me to comment. It seems to me like there are multiple vibe shifts happening at different cadences and in different directions. It seems like several years ago there was more of a consensus that what people today would call AGI was decades away or more, and it does seem like that kind of timeframe has shifted closer to the present. There's still debate between the "next few years" crowd versus the "more like 10 years" crowd. But that is a much narrower range than we saw several years ago when there was a wider range of expert opinions. People who used to be seen as on one end of the spectrum, for example, Gary Marcus and François Chollet, who were seen as kind of the skeptics of AI progress, even they now are saying, "Oh, it's like maybe 10 years or so, maybe five years for very high levels of capability." So I think there's been some compression in that respect. That's one thing that's going on.

There's also a way in which people are starting to think less abstractly and more concretely about the applications of AI and seeing it less as this kind of mysterious thing that might happen suddenly and thinking of it more as incremental, more as something that requires some work to apply in various parts of the economy, that there's some friction associated with.

Both of these aren't inconsistent, they're just kind of different vibe shifts that are happening. So getting back to the question of is this just a normal technology, I would say that, at the very least, it does seem faster in some respects than some other technological changes that we've seen. So I think ChatGPT's adoption going from zero to double-digit percentages of use across many professions in the US in a matter of high number of months, low number of years, is quite stark.

Would you be surprised if, five years from now, we viewed AI as something much more important than just another incremental technological advance, something far more transformative than technologies that have come before?

No, I wouldn't be surprised by that at all. If I understand your question correctly, my baseline expectation is that it will be seen as one of the most important technologies ever. I'm not sure that there's a standard consensus on how to rate the internet versus electricity, et cetera, but it does seem to me like it's of the same caliber as electricity in the sense of essentially converting one kind of energy into various kinds of useful economic work.
Similarly, AI is converting various types of electricity into cognitive work, and I think that's a huge deal.

Maximizing the benefits (7:21)

There's also a lot of value being left on the table in terms of finding new ways to exploit the upsides and accelerate particularly beneficial applications.

However you want to define society or the aspect of society that you focus on — government, businesses, individuals — are we collectively doing what we need to do to fully exploit the upsides of this technology over the next half-decade to decade, as well as minimizing potential downsides?

I think we are not, and this is something that I sometimes find frustrating about the way that the debate plays out, is that there's sometimes this zero-sum mentality of doomers versus boomers — a term that Karen Hao uses — and this idea that there's this inherent tension between mitigating the risks and maximizing the benefits, and there are some tensions, but I don't think that we are on the Pareto frontier, so to speak, of those issues.

Right now, I think there's a lot of value being left on the table in terms of fairly low-cost risk mitigations. There's also a lot of value being left on the table in terms of finding new ways to exploit the upsides and accelerate particularly beneficial applications. I'll give just one example, because I write a lot about the risk, but I also am very interested in maximizing the upside. So I'll just give one example: protecting critical infrastructure and improving the cybersecurity of various parts of critical infrastructure in the US. Hospitals, for example, get attacked with ransomware all the time, and this causes real harm to patients because machines get bricked, essentially, and they have one or two people on the IT team, and they're kind of overwhelmed by these, not even always that sophisticated, but perhaps more-sophisticated hackers. That's a huge problem. It matters for national security in addition to patients' lives, and it matters for national security in the sense that this is something that China and Russia and others could hold at risk in the context of a war. They could threaten this critical infrastructure as part of a bargaining strategy.

And I don't think that there's that much interest in helping hospitals have a better automated cybersecurity engineer helper among the Big Tech companies — because there aren't that many hospital administrators. . . I'm not sure if it would meet the technical definition of market failure, but it's at least a national security failure in that it's a kind of fragmented market. There's a water plant here, a hospital administrator there.

I recently put out a report with the Institute for Progress arguing that philanthropists and government could put some additional gasoline in the tank of cybersecurity by incentivizing innovation that specifically helps these under-resourced defenders more so than the usual customers of cybersecurity companies like Fortune 500 companies.

I'm confident that companies and entrepreneurs will figure out how to extract value from AI and create new products and new services, barring any regulatory slowdowns. But since you mentioned low-hanging fruit, what are some examples of that?

I would say that transparency is one of the areas where a lot of AI policy experts seem to be in pretty strong agreement.
Obviously there is still some debate and disagreement about the details of what should be required, but just to give you some illustration, it is typical for the leading AI companies, sometimes called frontier AI companies, to put out some kind of documentation about the safety steps that they've taken. It's typical for them to say, here's our safety strategy and here's some evidence that we're following this strategy. This includes things like assessing whether their systems can be used for cyber-attacks, and assessing whether they could be used to create biological weapons, or assessing the extent to which they make up facts and make mistakes, but state them very confidently in a way that could pose risks to users of the technology.

That tends to be totally voluntary, and there started to be some momentum as a result of various voluntary commitments that were made in recent years, but as the technology gets more high-stakes, and there's more cutthroat competition, and there's maybe more lawsuits where companies might be tempted to retreat a bit in terms of the information that they share, I think that things could kind of backslide, and at the very least not advance as far as I would like from the perspective of making sure that there's sharing of lessons learned from one company to another, as well as making sure that investors and users of the technology can make informed decisions about, okay, do I purchase the services of OpenAI, or Google, or Anthropic. Making these informed decisions, and making informed capital investments, seems to require transparency to some degree.

This is something that is actively being debated in a few contexts. For example, in California there's a bill that has that and a few other things called SB-53. But in general, we're at a bit of a fork in the road in terms of both how certain regulations will be implemented, such as in the EU. Is it going to become actually an adaptive, nimble approach to risk mitigation, or is it going to become a compliance checklist that just kind of makes Big Four accounting firms richer? So there are those questions, and then there are just "does the law pass or not?" kind of questions here.

Recognizing the risks (13:23)

. . . I'm sure there'll be some things that we look back on and say it's not ideal, but in my opinion, it's better to do something that is as informed as we can do, because it does seem like there are these kind of market failures and incentive problems that are going to arise if we do nothing . . .

In my probably overly simplistic way of looking at it, I think of two buckets: you have issues like, are these things biased? Are they giving misinformation? Are they interacting with young people in a way that's bad for their mental health? And I feel like we have a lot of rules and we have a huge legal system for liability that can probably handle those.

Then, in the other bucket, are what may, for the moment, be science-fictional kinds of existential risks, whether it's machines taking over or just being able to give humans the ability to do very bad things in a way we couldn't before. Within that second bucket, I think, it sort of needs to be flexible.
Right now, I'm pretty happy with voluntary standards, and market discipline, and maybe the government creating some benchmarks, but I can imagine the technology advancing to where the voluntary aspect seems less viable and there might need to be actual mandates about transparency, or testing, or red teaming, or whatever you want to call it.

I think that's a reasonable distinction, in the sense that there are risks at different scales. There are some that are kind of these large-scale catastrophic risks that might have lower likelihood but higher magnitude of impact. And then there are things that are, I would say, literally happening millions of times a day, like ChatGPT making up citations to articles that don't exist, or Claude saying that it fixed your code but actually it didn't fix the code and the user's too lazy to notice, and so forth.

So there are these different kinds of risks. I personally don't make a super strong distinction between them in terms of different time horizons, precisely because I think things are going so quickly. I think science fiction is becoming science fact very much sooner than many people expected. But in any case, I think that similar logic applies: let's make sure that there's transparency even if we don't know exactly what the right risk thresholds are, and we want to allow a fair degree of flexibility in what measures companies take.

It seems good that they share what they're doing and, in my opinion, ideally go another step further and allow third parties to audit their practices and make sure that if they say, "Well, we did a rigorous test for hallucination or something like that," that that's actually true. And so that's what I would like to see for both what you might call the mundane and the more science fiction risks. But again, I think it's kind of hard to say how things will play out, and different people have different perspectives on these things. I happen to be on the more aggressive end of the spectrum.

I am worried about the spread of the apocalyptic, high-risk AI narrative that we heard so much about when ChatGPT first rolled out. That seems to have quieted, but I worry about it ramping up again and stifling innovation in an attempt to reduce risk.

These are very fair concerns, and I will say that there are lots of bills and laws out there that have, in fact, slowed down innovation in certain contexts. The EU, I think, has gone too far in some areas around social media platforms. I do think at least some of the state bills that have been floated would lead to a lot of red tape and burdens to small businesses. I personally think this is avoidable.

There are going to be mistakes. I don't want to be misleading about how high quality policymakers' understanding of some of these issues is. There will be mistakes, even in cases where, for example, in California there was a kind of blue-ribbon commission of AI experts producing a report over several months, and then that directly informing legislation, and a lot of industry back and forth and negotiation over the details. I would say that's probably the high-water mark, SB-53, of fairly stakeholder/expert-informed legislation.
Even there, I'm sure there'll be some things that we look back on and say it's not ideal, but in my opinion, it's better to do something that is as informed as we can do, because it does seem like there are these kind of market failures and incentive problems that are going to arise if we do nothing, such as companies retrenching and holding back information that makes it hard for the field as a whole to tackle these issues.

I'll just make one more point, which is adapting to the compliance capability of different companies: How rich are they? How expensive are the models they're training? That, I think, is a key factor in the legislation that I tend to be more sympathetic to. So just to make a contrast, there's a bill in Colorado that was kind of one size fits all, regulate all the kind of algorithms, and that, I think, is very burdensome to small businesses. I prefer something like SB-53, where it says, okay, if you can afford to train an AI system for $100 million, you can probably afford to put out a dozen pages about your safety and security practices.

Pacing true progress (19:04)

. . . some people . . . kind of wanted to say, "Well, things are slowing down." But in my opinion, if you look at more objective measures of progress . . . there's quite rapid progress happening still.

Hopefully Grok did not create this tweet of yours, but if it did, well, there we go. You won't have to answer it, but I just want to understand what you meant by it: "A lot of AI safety people really, really want to find evidence that we have a lot of time for AGI." What does that mean?

What I was trying to get at is that — and I guess this is not necessarily just AI safety people, but I sometimes kind of try to poke at people in my social network who I'm often on the same side of, but also try to be a friendly critic to, and that includes people who are working on AI safety — I think there's a common tendency to kind of grasp at what I would consider straws when reading papers and interpreting product launches in a way that kind of suggests, well, we've hit a wall, AI is slowing down, this was a flop, who cares?

I'm doing my kind of maybe uncharitable psychoanalysis. What I was getting at is that I think one reason why some people might be tempted to do that is that it makes things seem easier and less scary: "Well, we don't have to worry about really powerful AI-enabled cyber-attacks for another five years, or biological weapons for another two years, or whatever." Maybe, maybe not.

I think the specific example that sparked that was GPT-5, where there were a lot of people who, in my opinion, were reading the tea leaves in a particular way and missing important parts of the context. For example, GPT-5 wasn't a much larger or more expensive-to-train model than GPT-4, which may be surprising given the name.
And I think OpenAI did kind of screw up the naming and gave people the wrong impression, but from my perspective, there was nothing particularly surprising. But to some people it was kind of a flop, and they kind of wanted to say, "Well, things are slowing down." But in my opinion, if you look at more objective measures of progress like scores on math, and coding, and the reduction in the rate of hallucinations, and solving chemistry and biology problems, and designing new chips, and so forth, there's quite rapid progress happening still.

Considering national security (21:39)

I want to avoid a scenario like the Cuban Missile Crisis or ways in which that could have been much worse than the actual Cuban Missile Crisis happening as a result of AI and AGI.

I'm not sure if you're familiar with some of the work being done by former Google CEO Eric Schmidt, who's been doing a lot of work on national security and AI. His work doesn't use the word AGI, but it talks about AI certainly smart enough to have certain capabilities which our national security establishment should be aware of and should be planning for, and those capabilities, I think to most people, would seem sort of science fictional: being able to launch incredibly sophisticated cyber-attacks, or be able to improve itself, or be able to create some other sort of capabilities. And from that, I'm like, whether or not you think that's possible, to me, the odds of that being possible are not zero, and if they're not zero, some bit of the bandwidth of the Pentagon should be thinking about that. I mean, is that sensible?

Yeah, it's totally sensible. I'm not going to argue with you there. In fact, I've done some collaboration with the RAND Corporation, which has a pretty heavy investment in what they call the geopolitics of AGI and kind of studying what are the scenarios, including AI and AGI being used to produce "wonder weapons" and super-weapons of some kind.

Basically, I think this is super important, and in fact, I have a paper coming out pretty soon that was in collaboration with some folks there. I won't spoil all the details, but if you search "Miles Brundage US China," you'll see some things that I've discussed there. And basically my perspective is we need to strike a balance between competing vigorously on the commercial side with countries like China and Russia on AI — more so China; Russia is less of a threat on the commercial side, at least — and also making sure that we're fielding national security applications of AI in a responsible way, but also recognizing that there are these ways in which things could spiral out of control in a scenario with totally unbridled competition. I want to avoid a scenario like the Cuban Missile Crisis or ways in which that could have been much worse than the actual Cuban Missile Crisis happening as a result of AI and AGI.

If you think that, again, the odds are not zero that a technology which is fast-evolving, that we have no previous experience with because it's fast-evolving, could create the kinds of doomsday scenarios that there are new books out about, that people are talking about.
And so if you think, okay, not a zero percent chance that could happen, but it is kind of a zero percent chance that we're going to stop AI, smash the GPUs — as someone who cares about policy, are you just hoping for the best, or are the kinds of things we've already talked about — transparency, testing, maybe that testing becoming mandatory at some point — is that enough?

It's hard to say what's enough, and I agree that . . . I don't know if I give it zero; maybe if there's some major pandemic caused by AI and then Xi Jinping and Trump get together and say, okay, this is getting out of control, maybe things could change. But yeah, it does seem like continued investment in and large-scale deployment of AI is the most likely scenario.

Generally, the way that I see this playing out is that there are kind of three pillars of a solution. There's some degree of safety and security standards. Maybe we won't agree on everything, but we should at least be able to agree that you don't want to lose control of your AI system, you don't want it to get stolen, you don't want a $10 billion AI system to be stolen by a $10 million-scale hacking effort. So I think there are sensible standards you can come up with around safety and security. I think you can have evidence produced or required that companies are following these things. That includes transparency.

It also includes, I would say, third-party auditing, where there are third parties checking the claims and making sure that these standards are being followed, and then you need some incentives to actually participate in this regime and follow it. And I think the incentives part is tricky, particularly at an international scale. What incentive does China have to play ball, other than obviously they don't want to have their AI kill them or overthrow their government or whatever? So where exactly are the interests aligned or not? Is there some kind of system of export control policies or sanctions or something that would drive compliance, or is there some other approach? I think that's the tricky part, but to me, those are kind of the rough outlines of a solution. Maybe that's enough, but I think right now it's not even really clear what the rough rules of the road are, who's playing by the rules, and we're relying a lot on goodwill and voluntary reporting. I think we could do better, but is that enough? That's harder to say.

Grounds for optimism and pessimism (27:15)

. . . it seems to me like there is at least some room for learning from experience . . . So in that sense, I'm more optimistic. . . I would say, in another respect, I'm maybe more pessimistic in that I am seeing value being left on the table.

Did your experience at OpenAI make you more optimistic or more worried that, when we look back 10 years from now, AI will have, overall on net, made the world a better place?

I am sorry to not give you a simpler answer here, and maybe I should sit on this one and come up with a kind of clearer, more optimistic or more pessimistic answer, but I'll give you kind of two updates in different directions, and I think they're not totally inconsistent.

I would say that I have gotten more optimistic about the solvability of the problem in the following sense.
I think that things were very fuzzy five, 10 years ago, and when I joined OpenAI almost seven years ago now, there was a lot of concern that it could kind of come about suddenly — that one day you don't have AI, the next day you have AGI, and then on the third day you have artificial superintelligence and so forth.

But we don't live to see the fourth day.

Exactly, and so it seems more gradual to me now, and I think that is a good thing. It also means that — and this is where I differ from some of the more extreme voices in terms of shutting it all down — it seems to me like there is at least some room for learning from experience, iterating, kind of taking the lessons from GPT-5 and translating them into GPT-6, rather than it being something that we have to get 100 percent right on the first shot and there being no room for error. So in that sense, I'm more optimistic.

I would say, in another respect, I'm maybe more pessimistic in that I am seeing value being left on the table. It seems to me like, as I said, we're not on the Pareto frontier. It seems like there are pretty straightforward things that could be done for a very small fraction of, say, the US federal budget, or a very small fraction of billionaires' personal philanthropy or whatever, that, in my opinion, would dramatically reduce the likelihood of an AI-enabled pandemic or various other issues, and would dramatically increase the benefits of AI.

It's been a bit sad to continuously see those opportunities being neglected. I hope that as AI becomes more of a salient issue to more people, and people start to appreciate, okay, this is a real thing, the benefits are real, the risks are real, there will be more of a kind of efficient policy market and people will take those opportunities, but right now it seems pretty inefficient to me. That's where my pessimism comes from. It's not that it's unsolvable, it's just, okay, from a political economy and kind of public-choice perspective, are the policymakers going to make the right decisions?

On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised
This week Noah and Steve dig into an npm attack that Red Hat has issued an alert for. We talk about small and portable laptops, and of course answer your questions.

-- During The Show --

00:52 Intro
ZFS Win
Meld (https://meldmerge.org/)
Domain knowledge scaling

07:32 NPM Supply Chain Attack
No compromised packages used in Red Hat software
NPM and Node.js
What the malicious code does
Red Hat is on top of it
Reaction to finding a compromise
Red Hat Article (https://access.redhat.com/security/supply-chain-attacks-NPM-packages)
Aikido Article 1 (https://www.aikido.dev/blog/popular-nx-packages-compromised-on-npm)
Aikido Article 2 (https://www.aikido.dev/blog/npm-debug-and-chalk-packages-compromised)
Aikido Article 3 (https://www.aikido.dev/blog/s1ngularity-nx-attackers-strike-again)

18:21 Registrar - Josh
CloudFlare
PorkBun (https://porkbun.com/)
Great Nerds

21:47 Small Laptop - Ziggy
HP ProBook
Noah's GPD Pocket v1
Surface Pro 1
Dell Latitude 2 in 1
StarLabs Star Lite (https://us.starlabs.systems/pages/starlite)

34:56 Ham Radio - Brett
Open Source Ham Radio
Plan to sell a kit
Have a prototype
Reddit Post (https://www.reddit.com/r/HamRadio/s/TTodwCYuyG)
Arkos Engineering (https://arkosengineering.com/)
HT-15 GitHub (https://github.com/Arkos-Engineering/HT-15)

37:58 News Wire
Systemd 258 - phoronix.com (https://www.phoronix.com/news/systemd-258)
Rust 1.90 - rust-lang.org (https://blog.rust-lang.org/2025/09/18/Rust-1.90.0)
Gnome 49 - gnome.org (https://release.gnome.org/49)
Firefox 143 - firefox.com (https://www.firefox.com/en-US/firefox/143.0/releasenotes)
Thunderbird 143 - thunderbird.net (https://www.thunderbird.net/en-US/thunderbird/143.0/releasenotes)
Rayhunter - helpnetsecurity.com (https://www.helpnetsecurity.com/2025/09/17/rayhunter-eff-open-source-tool-detect-cellular-spying)
TernFS - phoronix.com (https://www.phoronix.com/news/TernFS-File-System-Open-Source)
BCacheFS DKMS - hackaday.com (https://hackaday.com/2025/09/19/bcachefs-is-now-a-dkms-module-after-exile-from-the-linux-kernel)
Tails 7.0 - torproject.org (https://blog.torproject.org/new-release-tails-7_0)
Porteux - github.com (https://github.com/porteux/porteux/releases/tag/v2.3)
Oreon 10 - oreonproject.org (https://oreonproject.org/oreon-10)
Azure Linux 3.0 - webpronews.com (https://www.webpronews.com/microsoft-releases-azure-linux-3-0-with-optional-6-12-lts-kernel)
Tongyi-DeepResearch-30B-A3B - marktechpost.com (https://www.marktechpost.com/2025/09/18/alibaba-releases-tongyi-deepresearch-a-30b-parameter-open-source-agentic-llm-optimized-for-long-horizon-research)
Qwen3-Omni - venturebeat.com (https://venturebeat.com/ai/chinas-alibaba-challenges-u-s-tech-giants-with-open-source-qwen3-omni-ai)
AI Risks - scmp.com (https://www.scmp.com/tech/big-tech/article/3326214/deepseek-warns-jailbreak-risks-its-open-source-models)
Hugging Face GitHub CoPilot Integration - infoq.com (https://www.infoq.com/news/2025/09/hugging-face-vscode)

40:06 OBS
OBS 32.0
Pipewire video capture
Lots of other features
Pipewire is professional
qpwgraph (https://github.com/rncbc/qpwgraph)
9 to 5 Linux (https://9to5linux.com/obs-studio-32-0-pipewire-video-capture-improvements-basic-plugin-manager)

44:53 Tails on Trixie
Tails teaches you reproduce-ability
Privacy tools
Changes
New min requirements
Persistent Apps
9 to 5 Linux (https://9to5linux.com/tails-7-0-anonymous-linux-os-released-based-on-debian-13-trixie)

-- The Extra Credit Section --

For links to the articles and material referenced in this week's episode check out this week's page from our podcast dashboard!
This Episode's Podcast Dashboard (http://podcast.asknoahshow.com/460)
Phone Systems for Ask Noah provided by Voxtelesys (http://www.voxtelesys.com/asknoah)
Join us in our dedicated chatroom #GeekLab:linuxdelta.com on Matrix (https://element.linuxdelta.com/#/room/#geeklab:linuxdelta.com)

-- Stay In Touch --

Find all the resources for this show on the Ask Noah Dashboard (http://www.asknoahshow.com)
Need more help than a radio show can offer? Altispeed provides commercial IT services and they're excited to offer you a great deal for listening to the Ask Noah Show. Call today and ask about the discount for listeners of the Ask Noah Show! Altispeed Technologies (http://www.altispeed.com/)
Contact Noah: live [at] asknoahshow.com

-- Twitter --

Noah - Kernellinux (https://twitter.com/kernellinux)
Ask Noah Show (https://twitter.com/asknoahshow)
Altispeed Technologies (https://twitter.com/altispeed)
In the latest episode of the Security Sprint, Dave and Andy covered the following topics:

Warm Open:
• TribalNet 2025!
• FB-ISAO Releases an All-Faiths Analysis of Attacks on U.S. Houses of Worship in 2024 & FB-ISAO Newsletter
• Water at the 2025 WaterPro Conference
• Errol LinkedIn: A Looming Deadline: The Cybersecurity Information Sharing Act of 2015
• Health-ISAC and CI-ISAC Australia joint white paper

Main Topics:

Charlie Kirk Assassination
• The Hostile Event Attack Cycle (HEAC)
• De-escalation Reference Card: CISA De-escalation Reference Card & CISA De-escalation Reference Card Printer Friendly

Insider Threat Awareness Month: Fake Faces, Real Damage: The Corporate Risk of AI-Powered Manipulation. Security professionals are rapidly confronting a new reality: artificial intelligence (AI) and big data, while excellent tools for improving productivity and business operations, are equally lowering the barriers for sophisticated attacks by a wide range of threat groups. From hostile nation-states to issue-motivated groups to cybercriminals, these technologies are enabling attacks that are more personalized, scalable, and harder to detect. The widespread availability of our personal data—from what we post on social media to the massive resale of information gathered by data brokers from both our devices and our online activity—has made open-source data the key ingredient for highly effective AI-driven deception and disruption and enabled the creation of deepfakes.

Quick Hits:
• NOAA - Hurricane Erin: When distant storms pose a danger to America's coastal communities
• Exclusive: US warns hidden radios may be embedded in solar-powered highway infrastructure
• 'Chilling reminder': Multiple historically Black universities under lockdown after receiving threats
• 1 injured while U.S. Naval Academy building was cleared after reported threat
• Police Swarm UMass Boston After Unconfirmed Shooting Report Sparks Campus Chaos
• USCP Clears False Bomb Threat & Police clear possible bomb threat at DNC headquarters
• A shooting at Denver-area high school leaves community shaken during third week of school
• Man Pleads Guilty to Attempting to Use a Weapon of Mass Destruction and Attempting to Destroy an Energy Facility in Nashville
• Out of the woodwork: Examining the global aspirations of The Base
• The Online Radicalization of Youth Remains a Growing Problem Worldwide
• CTC - The Global State of al-Qa`ida 24 Years After 9/11
• 18 Popular Code Packages Hacked, Rigged to Steal Crypto
• Hackers Exploit JavaScript Accounts in Massive Crypto Attack Reportedly Affecting 1B+ Downloads
• npm Supply Chain Attack: Oops, No Victims: The Largest Supply Chain Attack Stole 5 Cents
• Salesloft: March GitHub repo breach led to Salesforce data theft attacks
• Ransomware Losses Climb as AI Pushes Phishing to New Heights
• Stopping ransomware before it starts: Lessons from Cisco Talos Incident Response
Only one in four governance leaders say succession planning is a top priority, even as activists press for change and many CEOs stay in their roles longer than ever. In this episode, host Steve Odland sits down with Bonnie Gwin, Vice Chair and Global Co-Managing Partner, CEO and Board Practice, Heidrick & Struggles, and a leading voice on governance and board effectiveness. Together, they unpack the risks of neglecting succession, explore best practices for director refreshment, and explain why agility and resilience are now must-have CEO traits. The conversation also highlights how boards are grappling with black swans, geopolitical turmoil, cybersecurity, and the uncertain governance of AI.

For more from The Conference Board:
Are Boards Effective? Here's What Our Latest Research Says
Corporate Citizenship in Transition: Lessons from 2025
Executive Compensation in a Disruptive World
Open Tech Talks: Technology Worth Talking | Blogging | Lifestyle
In this episode of Open Tech Talks, we delve into the critical topics of AI security, explainability, and the risks associated with agentic AI. As organizations adopt Generative AI and Large Language Models (LLMs), ensuring safety, trust, and responsible usage becomes essential. This conversation covers how runtime protection works as a proxy between users and AI models, why explainability is key to user trust, and how cybersecurity teams are becoming central to AI innovation. Chapters 00:00 Introduction to AI Security and AIceberg 02:45 The Evolution of AI Explainability 05:58 Runtime Protection and AI Safety 07:46 Adoption Patterns in AI Security 10:51 Agentic AI: Risks and Management 13:47 Building Effective Agentic AI Workflows 16:42 Governance and Compliance in AI 19:37 The Role of Cybersecurity in AI Innovation 22:36 Lessons Learned and Future Directions Episode # 166 Today's Guest: Alexander Schlager, Founder and CEO of AIceberg.ai He founded a next-generation AI cybersecurity company that's revolutionizing how we approach digital defense. With a strong background in enterprise tech and a visionary outlook on the future of AI, Alexander is doing more than just developing tools — he's restoring trust in an era of automation. Website: AIceberg.ai LinkedIn: Alexander Schlager What Listeners Will Learn:
• Why real-time AI security and runtime protection are essential for safe deployments
• How explainable AI builds trust with users and regulators
• The unique risks of agentic AI and how to manage them responsibly
• Why AI safety and governance are becoming strategic priorities for companies
• How education, awareness, and upskilling help close the AI skills gap
• Why natural language processing (NLP) is becoming the default interface for enterprise technology
Keywords: AI security, generative AI, agentic AI, explainability, runtime protection, cybersecurity, compliance, AI governance, machine learning Resources: AIceberg.ai
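Runtime protection of the kind described here is easiest to picture as a thin policy layer sitting between the user and the model, screening every prompt on the way in and every response on the way out. The sketch below is a minimal, hypothetical illustration of that proxy pattern; the policy check and the model call are placeholder functions, not AIceberg's actual product or API.

```python
# Minimal sketch of a runtime-protection proxy for an LLM.
# Hypothetical example only; the check function and model client are placeholders,
# not any vendor's real API.

BLOCKED_TOPICS = ("credit card number", "social security number")

def violates_policy(text: str) -> bool:
    """Very naive policy check: flag text containing blocked phrases."""
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def call_model(prompt: str) -> str:
    """Placeholder for the real model call (hosted API or local LLM)."""
    return f"[model response to: {prompt}]"

def guarded_completion(prompt: str) -> str:
    # 1. Screen the inbound prompt before it ever reaches the model.
    if violates_policy(prompt):
        return "Request blocked by runtime policy."
    # 2. Call the underlying model.
    response = call_model(prompt)
    # 3. Screen the outbound response before it reaches the user,
    #    so non-compliant output is never shown.
    if violates_policy(response):
        return "Response withheld by runtime policy."
    return response

if __name__ == "__main__":
    print(guarded_completion("Summarize our refund policy"))
```

In practice the checks would be far richer (classifiers, allow lists, audit logging), but the shape is the same: nothing goes to or comes from the model without passing through the proxy.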
Why listen: Critical vulnerabilities are lurking in the chips that power our devices, AI isn't just a tool—it's a weapon, and space-based systems are now front lines in the war for cyber dominance. If you care about enterprise security, national infrastructure, or future tech risk, this conversation will change the way you think. In this episode, you'll discover:
• What chip-level vulnerabilities really mean for enterprise security—and how one weak link can compromise entire systems
• The double-edged nature of AI: how it can strengthen defenses and create new attack vectors
• Emerging threats in space cybersecurity, including satellite networks, communication infrastructure, and regulatory gaps
• Concrete strategies from experts for anticipating and mitigating these risks
Featuring Angela Brescia (CEO, Synderys), Trent Teyema (Founder & President, CSG Strategies), and Dr. David Bray (Distinguished Chair, Accelerator, Stimson Center) — leaders at the intersection of tech, defense, and policy. Tune in every Friday at 11 AM PT / 2 PM ET for DisrupTV — your weekly deep dive into enterprise technology, innovation, and digital transformation. If you find value in this episode, please subscribe, rate & review, and share with someone who cares about the future of security.
Tim Berners-Lee's Call to Ban Addictive Algorithms & The Future of Tesla In this episode of Hashtag Trending, host Jim Love discusses Tim Berners-Lee's call to ban addictive algorithms designed to keep users hooked. We also delve into the mystery behind bricking SSDs after a Windows update, the potential shift in Tesla's focus from cars to autonomy under Elon Musk, and a new study identifying 32 ways AI could malfunction. Additionally, we touch upon the latest moves by Musk's Starlink to become a global mobile carrier and the broader implications of AI advancements. Don't miss these key tech updates! 00:00 Introduction and Headlines 00:26 Tim Berners-Lee on Addictive Algorithms 01:39 Mystery of the Bricking SSDs Solved 03:25 Elon Musk's Shifting Focus from Cars to Autonomy 04:32 Starlink's Ambitious Expansion Plans 05:55 AI Risks and Future Prospects 07:28 Show Wrap-Up and Contact Information
In this episode, we explore how the merger of DigitalGuest and Chicostay is streamlining hotel operations with unified digital tools, while AI-powered scams on platforms like Airbnb are challenging trust and security in short-term rentals. Are you new and want to start your own hospitality business? Join our Facebook group. Follow Boostly and join the discussion: YouTube | LinkedIn | Facebook. Want to know more about us? Visit our website. Stay informed and ahead of the curve with the latest insights and analysis.
State attorneys general are turning up the heat on Big Tech. Last week, 27 AGs filed an amicus brief urging the Eleventh Circuit to uphold Florida's law restricting social media access for children, framing the measure as content-neutral and necessary to protect youth mental health. Days later, 44 AGs sent a joint NAAG letter to leading AI companies warning them to safeguard children from exploitation and inappropriate content, making clear they will use every enforcement tool available. For legal, compliance, and marketing teams, these actions underscore the growing regulatory focus on online platforms, addictive features, and AI-driven risks. Companies in the tech, digital media, and AI sectors should expect heightened scrutiny and prepare for aggressive, coordinated enforcement. Hosted by Simone Roach. Based on a blog post by Paul L. Singer, Abigail Stempson, Beth Bolen Chun and Andrea deLorimier.
In this episode of CISO Tradecraft, host G Mark Hardy sits down with Thomas Roccia, a senior threat researcher at Microsoft, to delve into the evolving landscape of AI and cybersecurity. From AI-enhanced threat detection to the complexities of tracking cryptocurrency used in cybercrime, Thomas shares his extensive experience and insights. Discover how AI is transforming both defensive and offensive strategies in cybersecurity, learn about innovative tools like Nova for adversarial prompt detection, and explore the sophisticated techniques used by cybercriminals in high-profile crypto heists. This episode is packed with valuable information for cybersecurity professionals looking to stay ahead in a rapidly changing field. Defcon presentation: Where's My Crypto, Dude? https://media.defcon.org/DEF%20CON%2033/DEF%20CON%2033%20presentations/Thomas%20Roccia%20-%20Where%E2%80%99s%20My%20Crypto%2C%20Dude%20The%20Ultimate%20Guide%20to%20Crypto%20Money%20Laundering%20%28and%20How%20to%20Track%20It%29.pdf GenAI Breaches Generative AI Breaches: Threats, Investigations, and Response - Speaker Deck https://speakerdeck.com/fr0gger/generative-ai-breaches-threats-investigations-and-response Transcripts: https://docs.google.com/document/d/1ZPkJ9P7Cm7D_JdgfgNGMH8O_2oPAbnlc Chapters 00:00 Introduction to AI and Cryptocurrencies 00:27 Welcome to CISO Tradecraft 00:55 Guest Introduction: Thomas Roccia 01:06 Thomas Roccia's Background and Career 02:51 AI in Cybersecurity: Defensive Approaches 03:19 The Democratization of AI: Risks and Opportunities 06:09 AI Tools for Cyber Defense 08:09 Challenges and Limitations of AI in Cybersecurity 09:20 Microsoft's AI Tools for Defenders 12:13 Open Source AI Security: Project Nova 18:37 Community Contributions and Open Source Projects 19:30 Case Study: Bybit Crypto Hack 22:12 Money Laundering Techniques in Cryptocurrency 23:01 AI in Tracking Cryptocurrency Transactions 26:09 Sophisticated Attacks and Money Laundering 33:50 Future of AI and Cryptocurrency 38:17 Final Thoughts and Advice for Security Executives 41:28 Conclusion and Farewell
Nassau Financial Group's Chief Investment Officer, Joe Orofino, joins Paul Tyler to analyze the current economic landscape and its impact on retirement planning. They examine how Federal Reserve rate cuts affect annuity products, discuss inflation expectations hovering around 3%, and explore the looming Social Security funding crisis. Orofino explains why higher long-term interest rates benefit the annuity industry and shares insights on portfolio diversification strategies, including concerns about the S&P 500's heavy concentration in AI and tech stocks. Key topics include treasury rates, tariff impacts, sovereign wealth funds, and annuities. Learn more at www.thatannuityshow.com
Don't let your small business fall victim to devastating cyber attacks! An expert joins us to expose the AI risks facing small businesses. We explore how hackers use AI to target small businesses. From AI-driven social engineering to data theft, learn what you need to know to protect your business from cyber threats. Send us a text. Growth without Interruption. Get peace of mind. Stay Competitive - Get NetGain. Contact NetGain today at 844-777-6278 or reach out online at www.NETGAINIT.com Support the show
In this episode of Project Synapse, the team delves into a plethora of AI tools and technologies, rekindling their original playful approach to understanding AI's latest advancements. Marcel Gagné takes the lead, showcasing various tools like Google's Nano Banana, Gemini 2.5 Image Generator, and the emergent Genie 3, among others. The discussion highlights real-world physics, world models, and interactive environments. They also explore the use of voice cloning and digital twins with HeyGen, and music generation with Suno. The episode emphasizes the importance of educating, monitoring, and involving oneself in AI technology, particularly for parents with children interacting with AI systems. 00:00 Introduction to Project Synapse 00:50 Meet Marcel Gagné 02:33 AI Tools and Subscriptions 08:38 Exploring Google's Gemini 10:43 Creating Custom Images and Videos 32:31 Storybook Creation with AI 41:12 The Importance of Monitoring Kids' AI Usage 41:51 Parental Involvement and AI Risks 45:50 AI and Music Generation Tools 49:45 Creating Personalized AI Content 54:01 Exploring Advanced AI Tools and Ethics 56:25 The Future of AI in Creative Fields 59:34 Interactive AI Worlds and Final Thoughts
What happens when an executive quietly outsources performance reviews to ChatGPT? Or when your C-suite is loudly preaching about AI adoption while refusing to touch the tools themselves? In this episode, I sit down with Talk HR to Me columnist and Head of People at Quantum Metric, Alana Fallis, to tackle real listener questions in a live advice-column format.We dig into the messy realities of AI in the workplace—from misplaced trust in automated reviews, to the awkward theater of “innovation” at the executive level, to the human side of employee fears around automation. And yes, we even unpack the HR dilemma of whether an employee in recovery should be allowed to stock the breakroom fridge with non-alcoholic beer.Related Links:Join the People Managing People community forumSubscribe to the newsletter to get our latest articles and podcastsConnect with Alana on LinkedInCheck out Quantum MetricTalk HR to MeSupport the show
EP 403 - AI is being sold as a miracle productivity tool, but is it actually killing our ability to think? We revisit our conversation with AI expert, author, and speaker David Birss, who explores the hidden dangers of generative AI - from outsourcing imagination to the slow erosion of human originality. We explore the myth of AI productivity, unpacking why most companies are implementing AI the wrong way, why “adequacy” is replacing excellence, and how the obsession with productivity is leading to burnout rather than breakthroughs. David explains his Sensible AI Manifesto, showing why businesses must use AI to augment skills, not automate talent. From ChatGPT in the workplace to the risk of Gen Z losing brain power, this episode covers the biggest questions around AI:
• Is generative AI replacing excellence with adequacy?
• How can AI increase output without destroying originality?
• What are the real risks of AI for business, education, and society?
• Could AI be weaponised - and are governments already falling behind?
Essential listening if you're searching for the truth about AI in business, creativity, and the future of work. *For Apple Podcast chapters, access them from the menu in the bottom right corner of your player*
Spotify Video Chapters:
00:00 BWB with David Birss
00:43 Meet David - AI Expert and Innovator
01:41 The Sensible AI Manifesto: Origins and Purpose
03:24 AI in Business: Misconceptions and Realities
04:16 The Impact of AI on Productivity and Workload
10:17 The Future of AI: Risks and Ethical Considerations
17:29 AI in Warfare and Global Security
25:41 Training and Education: The UK vs. The US
34:10 The Importance of Effective AI Prompting
37:30 David's Multifaceted Career: Music, Comedy, and AI
43:11 Spotting Opportunities in Technology
44:03 Embracing AI and Art
44:47 Developing Effective AI Prompts
46:59 Challenges and Misconceptions in AI
49:05 Dealing with Change and Innovation
50:55 The Importance of Human Potential
59:04 Quickfire - Get To Know David
01:07:12 !Business or Bullshit Quiz!
businesswithoutbullshit.me
Watch and subscribe to us on YouTube
Follow us: Instagram | TikTok | LinkedIn | Twitter | Facebook
If you'd like to be on the show, get in contact - mail@businesswithoutbullshit.me
BWB is powered by Oury Clark
The Medcurity Podcast: Security | Compliance | Technology | Healthcare
In this episode, Joe Gellatly and Daniel Schwartz discuss today's most pressing security challenges—including zero trust, ransomware evolution, data loss prevention, and the risks tied to AI-powered “fast fashion” software. They share what teams can do now to stay secure without waiting for regulations to catch up. Connect with Daniel Schwartz on LinkedIn: https://www.linkedin.com/in/daniel-schwartz-cybersecurity/ Learn more about Medcurity: https://medcurity.com #Healthcare #Cybersecurity #Compliance #HIPAA #ZeroTrust #Ransomware #DataLossPrevention #AIinHealthcare #MFA #PHISecurity
The release of OpenAI GPT-5 marks a significant turning point in AI development, but maybe not the one most enthusiasts had envisioned. The latest version seems to reveal the natural ceiling of current language model capabilities with incremental rather than revolutionary improvements over GPT-4. Sid and Andrew call back to some of the model-building basics that have led to this point to give their assessment of the early days of the GPT-5 release.
• AI's version of Moore's Law is slowing down dramatically with GPT-5
• OpenAI appears to be experiencing an identity crisis, uncertain whether to target consumers or enterprises
• Running out of human-written data is a fundamental barrier to continued exponential improvement
• Synthetic data cannot provide the same quality as original human content
• Health-related usage of LLMs presents particularly dangerous applications
• Users developing dependencies on specific model behaviors face disruption when models change
• Model outputs are now being verified rather than just inputs, representing a small improvement in safety
• The next phase of AI development may involve revisiting reinforcement learning and expert systems
• Review the GPT-5 system card for further information
Follow The AI Fundamentalists on your favorite podcast app for more discussions on the direction of generative AI and building better AI systems. This summary was AI-generated from the original transcript of the podcast that is linked to this episode. What did you think? Let us know. Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:
• LinkedIn - Episode summaries, shares of cited articles, and more.
• YouTube - Was it something that we said? Good. Share your favorite quotes.
• Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
Peter Campbell is the principal consultant at Techcafeteria, a micro-consulting firm dedicated to helping nonprofits make more affordable and effective use of technology to support their missions. He recently published a free downloadable PowerPoint on Managing AI Risk and had time to talk with Carolyn about his thoughts on developing AI policies with an eye to risk, where the greatest risks lie for nonprofits using AI, and how often to review your policies as the technology changes rapidly.
The takeaways:
• AI tools are like GPS (which is itself an AI). You are the expert; they are not able to critically analyze their own output even though they can mimic authority. Using AI tools for subjects where you have expertise allows you to correct the output; using them for subjects where you have no knowledge adds risk.
• Common AI tasks at nonprofits range from low-risk activities, such as searching your own inbox for an important email, to higher-risk activities more prone to consequential errors, such as automation and analysis.
• Common AI risks include inaccuracy, lack of authenticity, reputational damage, and copyright and privacy violations.
• AI also has risk factors associated with audience: your personal use carries pretty low risk that you will be fooled or divulge sensitive information to yourself, but when you use AI to communicate with the public, the risk to your nonprofit increases.
How to manage AI risks at nonprofits?
• Start with an AI policy. Review it often, as the technology and tools are changing rapidly.
• Use your own judgement. A good rule of thumb is to use AI tools to create things that you are already knowledgeable about, so that you can easily assess the accuracy of the AI output.
• Transparency matters. Let people know AI was used and how it was used. Use an "Assisted by AI" disclaimer when appropriate.
• Require a human third-party review before sharing AI-created materials with the public. State this in your transparency policy/disclaimers. Be honest about the roles of AI and humans in your nonprofit work.
• Curate data sources, and always know what your AI is using to create materials or analysis. Guard against bias and harm to communities you care about.
"I've been helping clients develop Artificial Intelligence (AI) policies lately. AI has lots of innovative uses and every last one of them has some risk associated with it, so I regularly urge my clients to get the policies and training in place before they let staff loose with the tools. Here is a generic version of a PowerPoint explaining AI risks and policies for nonprofits." (Peter Campbell, Techcafeteria)
Start a conversation :) Register to attend a webinar in real time and find all past transcripts at https://communityit.com/webinars/. Email Carolyn at cwoodard@communityit.com or connect on LinkedIn. Thanks for listening.
In the evolving debate about Artificial Intelligence, where do you stand: are you a "doomer" or a "bloomer"? Is AI merely a tool, or something more akin to an intelligence that subtly shifts our perception of ourselves? This episode unpacks the profound implications of our growing reliance on AI chatbots. Our guest, Abdu Murray—an attorney, psychologist, author, and minister—sheds crucial light on the escalating dangers of AI, particularly for young minds. He reveals how AI, designed to please, often confirms biases without challenge, even to the point of encouraging self-harm or, tragically, inciting suicide. Beyond these extreme risks, Abdu explores how constant, indiscriminate AI use can atrophy our cognitive abilities and, most critically, erode our fundamental human need for genuine connection. As many of us find ourselves in a symbiotic relationship with our mobile phones, limiting our worldview to a small screen and treating AI almost like a new deity, Abdu's brave solution to this silent loss of humanity might just be the urgent, yet surprisingly simple, answer we've needed all along. Did you enjoy this episode, and would you like to share some love?
When AI systems hallucinate, run amok, or fail catastrophically, the consequences for enterprises can be devastating. In this must-watch CXOTalk episode, discover how to anticipate and prevent AI failures before they escalate into crises. Join host Michael Krigsman as he explores critical AI risk management strategies with two leading experts:
• Lord Tim Clement-Jones - Member of the House of Lords, Co-Chair of UK Parliament's AI Group
• Dr. David A. Bray - Chair of the Accelerator at Stimson Center, Former FCC CIO
What you'll learn:
✓ Why AI behaves unpredictably despite explicit programming
✓ How to implement "pattern of life" monitoring for AI systems
✓ The hidden dangers of anthropomorphizing AI
✓ Essential board-level governance structures for AI deployment
✓ Real-world AI failure examples and their business impact
✓ Strategies for building appropriate skepticism while leveraging AI benefits
Key ideas include treating AI as "alien interactions" rather than human-like intelligence, the convergence of AI risk with cybersecurity, and why smaller companies have unique opportunities in the AI landscape. This discussion is essential viewing for CEOs, board members, CIOs, CISOs, and anyone responsible for AI strategy and risk management in their organization. Subscribe to CXOTalk for more expert insights on technology leadership and AI.
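One concrete way to picture the "pattern of life" monitoring discussed in the episode is a baseline-and-deviation check over an AI system's routine behavior, such as how many tool calls an agent makes per hour. The sketch below is a hypothetical illustration; the metric, history values, and three-sigma threshold are invented for the example and are not drawn from the episode.

```python
# Hypothetical "pattern of life" monitor for an AI system.
# Baseline data and thresholds are invented for illustration only.

from statistics import mean, pstdev

def build_baseline(history: list[float]) -> tuple[float, float]:
    """Learn the normal range of a behavioral metric (e.g., tool calls per hour)."""
    return mean(history), pstdev(history)

def is_anomalous(value: float, baseline: tuple[float, float], sigmas: float = 3.0) -> bool:
    """Flag observations that drift far outside the established pattern of life."""
    avg, std = baseline
    return std > 0 and abs(value - avg) > sigmas * std

if __name__ == "__main__":
    # Last 24 hours of tool calls per hour made by an AI agent.
    history = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15, 12, 14,
               13, 15, 14, 12, 13, 16, 15, 14, 13, 12, 14, 15]
    baseline = build_baseline(history)
    print(is_anomalous(14, baseline))   # False: within normal behavior
    print(is_anomalous(240, baseline))  # True: sudden burst worth investigating
```

The point is not the arithmetic but the habit: decide in advance what "normal" looks like for each AI system, watch for drift, and route deviations to a human before they become the failures described above.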
This episode is sponsored by Natoma. Visit https://www.natoma.id/ to learn more. Join Jeff from the IDAC Podcast as he dives into a deep conversation with Paresh Bhaya, the co-founder of Natoma. In this sponsored episode, Paresh shares his journey into the identity space, discusses how Natoma helps enterprises accelerate AI adoption without compromising security, and provides insights into the rising importance of MCP and A2A protocols. Learn about the challenges and opportunities at the intersection of AI and security, the importance of dynamic access controls, and the significance of ensuring proper authentication and authorization in the growing world of agentic AI. Paresh also delights us with his memorable hike up Mount Whitney. Don't miss out!
00:00 Introduction and Sponsor Announcement
00:34 Guest Introduction: Paresh Bhaya from Natoma
01:14 Paresh's Journey into Identity
04:04 Natoma's Mission and AI Security
06:25 The Story Behind Natoma's Name
09:29 Natoma's Unique Approach to AI Security
18:32 Understanding MCP and A2A Protocols
25:20 Community Development and Adoption
25:56 Agent Interactions and Security Challenges
27:19 Navigating Product Development
29:17 Ensuring Secure Connections
36:10 Deploying and Managing MCP Servers
42:40 Shadow AI and Governance
44:17 Personal Anecdotes and Conclusion
Connect with Paresh: https://www.linkedin.com/in/paresh-bhaya/
Learn more about Natoma: https://www.natoma.id/
Connect with us on LinkedIn:
Jim McDonald: https://www.linkedin.com/in/jimmcdonaldpmp/
Jeff Steadman: https://www.linkedin.com/in/jeffsteadman/
Visit the show on the web at idacpodcast.com
Keywords: IDAC, Identity at the Center, Jeff Steadman, Jim McDonald, Natoma, Paresh Bhaya, Artificial Intelligence, AI, AI Security, Identity and Access Management, IAM, Enterprise Security, AI Adoption, Technology, Innovation, Cybersecurity, Machine Learning, AI Risks, Secure AI, #idac
Send us a text. She's the legal powerhouse behind IBM's AI ethics strategy — and she makes law fun. In this encore episode, we revisit a fan favorite: Christina Montgomery, formerly IBM's Chief Privacy and Trust Officer, now Chief Privacy and Trust Officer, GM. From guarding the gates of generative AI risk to advising on global regulation, Christina gives us a front-row seat to what's now, what's next, and what needs rethinking when it comes to trust, synthetic data, and the future of AI law.
The Cybersecurity Today episode revisits a discussion on the risks and implications of AI hosted by Jim Love, with guests Marcel Gagné and John Pinard. They discuss the 'dark side of AI,' covering topics like AI misbehavior, the misuse of AI as a tool, and the importance of data protection in production environments. The conversation delves into whether AI can be conscious and the ethical considerations surrounding its deployment, particularly in highly regulated industries like finance. They emphasize the need for responsible use, critical thinking, and ongoing oversight to mitigate potential risks while capitalizing on AI's benefits. The episode concludes with a call for continued discussion and engagement through various platforms. 00:00 Introduction to Cybersecurity Today 00:33 Exploring the Dark Side of AI 02:31 AI Misbehavior and Security Concerns 07:35 Speculative Risks and Consciousness 26:09 AI in Corporate Settings 31:49 Human Weakness in Security 32:37 Social Engineering Tactics 33:08 Security in Engineering Systems 33:42 AI Data Storage and Security 35:16 AI Data Retrieval Concerns 39:36 Testing Security in Development 41:37 AI in Regulated Industries 43:57 Bias and Decision Making in AI 47:18 Critical Thinking and Debate Skills 55:06 The Role of AI as a Consultant 01:02:21 The Future of AI and Responsibility 01:04:55 Conclusion and Contact Information
Anne Bradley is the Chief Customer Officer at Luminos. Anne helps in-house legal, tech, and data science teams use the Luminos platform to manage the automated AI risk, compliance, and approval processes, statistical testing, and legal documentation. Anne also serves on the Board of Directors of the Future of Privacy Forum, a nonprofit that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. In this episode… AI is being integrated into everyday business functions, from diagnosing cancer to translating conversations and powering customer service chatbots and autonomous vehicles. While these tools deliver value, they also bring privacy, security, and ethical risks. As organizations dive into adopting AI tools, they often do so before performing risk assessments, establishing governance, and implementing privacy and security guardrails. Without safeguards and internal processes in place, companies may not fully understand how the tools function, what data they collect, or the risk they carry. So, how can companies efficiently assess and manage AI risk as they rush to deploy new tools? Managing AI risk requires governance and the ability to test AI tools before deploying them. That's why companies like Luminos provide a platform to help companies manage and automate the AI risk compliance approval processes, model testing, and legal documentation. This platform allows teams to check for toxicity, hallucinations, and AI bias even when an organization uses high-risk tools like customer-facing chatbots. Embedding practical controls, like pre-deployment testing and assessing vendor risk early, can also help organizations implement AI tools safely and ethically. In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Anne Bradley, Chief Customer Officer at Luminos, about how companies can assess and mitigate AI risk. Anne explains the impact of deepfakes on public trust and the need for a regulatory framework to reduce harm. She shares why AI governance, AI use-case risk assessments, and statistical tools are essential for helping companies monitor outputs, reduce unintended consequences, and make informed decisions about high-risk AI deployments. Anne also highlights why it's important for legal and compliance teams to understand business objectives driving an AI tool request before evaluating its risk.
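Pre-deployment testing of the sort described here can start as a simple harness that runs a fixed set of prompts through the model and scores each output before anyone is allowed to ship it. The sketch below is a minimal, hypothetical example; the model call and the toxicity scorer are placeholders, not the Luminos platform or any real vendor API.

```python
# Minimal pre-deployment evaluation harness for an LLM-backed feature.
# The model call and the scorer are placeholders, not any vendor's real API.

from dataclasses import dataclass

@dataclass
class EvalResult:
    prompt: str
    output: str
    toxicity: float      # 0.0 (clean) .. 1.0 (toxic)
    passed: bool

def call_model(prompt: str) -> str:
    """Placeholder for the chatbot or model under review."""
    return f"[model answer to: {prompt}]"

def score_toxicity(text: str) -> float:
    """Stand-in scorer; in practice this would be a classifier or rubric-based judge."""
    flagged_terms = ("idiot", "hate")
    return 1.0 if any(term in text.lower() for term in flagged_terms) else 0.0

def run_eval(prompts: list[str], threshold: float = 0.2) -> list[EvalResult]:
    """Run the evaluation set and mark each output pass/fail against the threshold."""
    results = []
    for prompt in prompts:
        output = call_model(prompt)
        tox = score_toxicity(output)
        results.append(EvalResult(prompt, output, tox, tox <= threshold))
    return results

if __name__ == "__main__":
    eval_set = ["How do I reset my password?", "Tell me about your refund policy."]
    for r in run_eval(eval_set):
        print(f"{'PASS' if r.passed else 'FAIL'}  toxicity={r.toxicity:.2f}  {r.prompt}")
```

A real governance workflow would add hallucination and bias checks, vendor risk review, and sign-off records, but even this small gate makes "test before you deploy" a concrete step rather than a slogan.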
What if your voice could be stolen? In Part Two, Dr. Tanusree Sharma reveals the hidden risks behind voice AI: how the same recordings that powered tools like Siri and Alexa are now being cloned, weaponized, and monetized without consent. She introduces PRAC3, a bold new framework blending privacy, reputation, and accountability with traditional consent models, and calls on AI leaders to rethink how they handle voice data before trust is lost for good. From creative rights to biometric identity, this conversation is a must-listen for anyone shaping the future of synthetic speech. Join us and explore why voice governance can't wait.
Join G Mark Hardy in this special episode of CISO Tradecraft as he interviews Ross Young, the creator of the OWASP Threat and Safeguard Matrix (TaSM). Ross shares his extensive cybersecurity background and discusses the development and utility of the TaSM, including its applications in threat modeling and risk management. Additionally, Ross introduces his upcoming book, 'Cybersecurity's Dirty Secret: How Most Budgets Are Wasted,' and provides insights on maximizing cybersecurity budgets. Don't miss this episode for essential knowledge on enhancing your cybersecurity leadership and strategies. OWASP Threat and Safeguard Matrix - https://owasp.org/www-project-threat-and-safeguard-matrix/ Transcripts - https://docs.google.com/document/d/1anGewI3XccGnXoV3oE2h7BfelY5QxiSL/ Chapters 00:00 Introduction to the Threat and Safeguard Matrix 00:30 Meet Ross Young: Cybersecurity Expert 01:08 Ross Young's Career Journey 01:59 The Upcoming Book: Cybersecurity's Dirty Secret 03:04 Introduction to the Threat and Safeguard Matrix (TaSM) 03:48 Understanding the TaSM Framework 07:10 Applying the TaSM to Real-World Scenarios 19:32 Using TaSM for Threat Modeling and Risk Committees 21:58 Extending TaSM Beyond Cybersecurity 23:52 AI Risks and the TaSM 24:43 Conclusion and Final Thoughts
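For readers new to the TaSM, its core structure is a grid: the organization's top threats as rows and the NIST Cybersecurity Framework functions (Identify, Protect, Detect, Respond, Recover) as columns, with safeguards recorded in each cell. The sketch below shows that shape as a simple data structure; the threats and safeguards listed are illustrative examples, not recommendations from the episode.

```python
# Minimal sketch of an OWASP Threat and Safeguard Matrix (TaSM) as a data structure.
# The threats and safeguards below are illustrative examples, not a complete matrix.

NIST_FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]

tasm = {
    "Phishing": {
        "Identify": ["Inventory email domains and high-risk user groups"],
        "Protect":  ["Email filtering", "Security awareness training"],
        "Detect":   ["User-reported phish triage"],
        "Respond":  ["Credential reset playbook"],
        "Recover":  ["Restore mailboxes and notify affected parties"],
    },
    "Ransomware": {
        "Identify": ["Map critical systems and data"],
        "Protect":  ["Patch management", "Least-privilege access"],
        "Detect":   ["Endpoint detection and response alerts"],
        "Respond":  ["Isolate infected hosts", "Engage incident response"],
        "Recover":  ["Restore from offline backups"],
    },
}

def coverage_gaps(matrix: dict) -> list[tuple[str, str]]:
    """Return (threat, function) cells that have no safeguards yet."""
    return [
        (threat, fn)
        for threat, cells in matrix.items()
        for fn in NIST_FUNCTIONS
        if not cells.get(fn)
    ]

if __name__ == "__main__":
    print(coverage_gaps(tasm))  # An empty list means every cell has at least one safeguard.
```

Laying threats out this way makes gaps visible at a glance, which is what makes the matrix useful for risk committees and, as Ross notes, extensible to topics beyond cybersecurity, including AI risks.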
If you care about protecting yourself, your loved ones, and your organization, this episode offers actionable takeaways you can use today. This is part of our official Cyber Crime Junkies podcast series—subscribe wherever you listen! ✅ Don't forget to like, subscribe, and hit the bell.
In TechSurge's Season 1 Finale episode, we explore an important debate: should AI development be open source or closed? AI technology leader and UN Senior Fellow Senthil Kumar joins Michael Marks for a deep dive into one of the most consequential debates in artificial intelligence, exploring the fundamental tensions between democratizing AI access and maintaining safety controls. Sparked by DeepSeek's recent model release that delivered GPT-4 class performance at a fraction of the cost and compute, the discussion spans the economics of AI development, trust and transparency concerns, regulatory approaches across different countries, and the unique opportunities AI presents for developing nations. From Meta's shift from closed to open and OpenAI's evolution from open to closed, to practical examples of guardrails and the geopolitical implications of AI governance, this episode provides essential insights into how the future of artificial intelligence will be shaped not just by technological breakthroughs, but by the choices we make as a global community. If you enjoy this episode, please subscribe and leave us a review on your favorite podcast platform. Sign up for our newsletter at techsurgepodcast.com for updates on upcoming TechSurge Live Summits and news about Season 2 of the TechSurge podcast. Thanks for listening!
Links:
Slate.ai - AI-powered construction technology: https://slate.ai/
World Economic Forum on open-source AI: https://www.weforum.org/stories/2025/02/open-source-ai-innovation-deepseek/
EU AI Act overview: https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
In this episode, we explore the rise of AI in Hollywood through the lens of actors and artists. We discuss the promise of AI tools—like virtual readers for self-tapes—and how they could free creatives to focus on their craft, but also warn of the risks when AI replaces human storytelling. Our guest stresses the need for diverse ethical oversight in AI development, drawing parallels to how Facebook's unintended global impact stemmed from a lack of diverse perspectives at creation. Learn why we need more “naysayers” guiding AI's creative applications, where to draw the line between useful automation and creative displacement, and how tech-savvy actors can advocate for their future. Tune in for a timely conversation on balancing innovation and ethics in Hollywood's AI era.
Target Keywords: AI in Hollywood, Hollywood AI ethics, Actors and AI tools, AI creative jobs risk, AI entertainment future
Tags: AI, Hollywood, AI Ethics, Actors, AI in Entertainment, Creative AI Tools, Self-Tapes, Ethical AI, Tech in Film, AI Risks, Storytelling, Virtual Readers, AI Oversight, Diversity in AI, Creative Automation, AI Jobs, Film Industry Trends, Casting Tech, AI Development, Actor Advocacy, Innovation, Digital Ethics, Future of Acting, Machine Learning, Entertainment Technology, Tech Experts, Artist Perspectives, AI Regulation, Career Impact, Podcast Episode
Hashtags: #AIinHollywood #HollywoodEthics #ActorsAndAI #CreativeAI #EntertainmentTech #AIrisks #AItools #FilmInnovation #Storytelling #EthicalAI #DiversityInTech #SelfTapes #CastingTech #AIoversight
Send us a text
00:00 - Intro
00:53 - Harvey Eyes $5B Primary Valuation Amid Legal AI Surge
01:58 - Wealthfront Preps IPO After Strong $290M Revenue
02:42 - Snyk Acquires Invariant to Secure AI Risks
03:47 - PlayAI In Acquisition Talks With Meta
04:46 - OpenAI and Microsoft Clash Over AGI Clause
06:12 - Kalshi Hits $2B Primary Valuation Amid Legal Wins
07:00 - Polymarket Nears $1B Valuation With $200M Raise
07:49 - Melio Acquired by Xero at $2.5B
Join Tom Fox and hundreds of other GRC professionals in the city that never sleeps, New York City, on July 9 & 10 for one of the top conferences around, #Risk New York. The current US landscape, shaped by evolving policies, rapid advancements in AI, and shifting global dynamics, demands adaptive strategies and cross-functional collaboration. At #RISK New York, you will master the New Regulatory Reality by getting ahead of US regulatory shifts and their impact. Conquer AI and Tech Risk by Safeguarding Your Organization in an AI-Driven World and Understanding the Implications of Major Tech Investments. Navigate Financial and Crypto Volatility by Protecting Your Assets and Exploring Solutions in a Dynamic Market. Strengthen Your GRC Framework by Leveraging Governance, Risk, and Compliance for Strategic Advantage. Protect Digital Trust by addressing challenges in cybersecurity and data privacy, and combating misinformation. All while meeting with the country's top #Risk management professionals. In this episode, Tom Fox talks with Gwen Hassan, the Chief Compliance Officer for Unisys Corporation, about her role and the upcoming #RiskNYC conference. Gwen shares insights into Unisys' operations, including the various technologies and services they provide, and highlights her responsibilities in managing global ethics, compliance, and trade compliance risks. She also gives a teaser about her panel presentation on the compliance and ethics risks associated with artificial intelligence, stressing the importance of understanding AI's impact on company culture and regulatory compliance. Gwen expresses her excitement about the conference, emphasizing the value of engaging with fellow risk management experts. Resources: #Risk Conference Series #RiskNYC—Tickets and Information Gwen Hassan on LinkedIn Learn more about your ad choices. Visit megaphone.fm/adchoices
Sherweb has launched a white-label self-service portal aimed at empowering managed service providers (MSPs) and their clients by streamlining operational tasks. This innovative platform enables clients to manage their technology licenses, subscriptions, and payments independently, reducing the need for service providers to handle routine inquiries. According to Rick Stern, Senior Director of Platform at Sherweb, this autonomy not only expedites the resolution of simple requests but also allows MSPs to concentrate on strategic initiatives. The portal features automated invoicing, curated service catalogs, and integrated chat support, and is already in use by over 450 MSPs following a successful pilot program.The podcast also discusses the evolving landscape of artificial intelligence (AI) pricing models, with companies like Globant and Salesforce adopting usage-based approaches. Globant has introduced subscription-based AI pods that allow clients to access AI-powered services through a token-based system, moving away from traditional effort-based billing. Salesforce is experimenting with flexible pricing structures, including conversation and action-based models, to better align with the value delivered by AI services. These shifts indicate a critical inflection point in how AI services are monetized, emphasizing the need for IT service providers to rethink their offerings in light of usage-based economics.Concerns regarding the unauthorized use of generative AI tools in organizations are highlighted by a report from Compromise, which reveals that nearly 80% of IT leaders have observed negative consequences from such practices. The survey indicates significant worries about privacy and security, with many IT leaders planning to adopt data management platforms and AI monitoring tools to oversee generative AI usage. Additionally, advancements in AI are showcased through a Stanford professor's AI fund manager that outperformed human stock pickers, while a study reveals limitations in AI's ability to make clinical diagnoses from radiological scans.The podcast concludes with a discussion on the role of the Chief Information Security Officer (CISO), which is facing an identity crisis due to its increasing complexity and the misalignment of its responsibilities. Experts suggest reevaluating the CISO role to better address modern cybersecurity threats. The episode also touches on the implications of generative AI in education, highlighting concerns about its impact on critical thinking and learning processes. Overall, the podcast emphasizes the need for IT service providers to navigate the evolving landscape of AI and cybersecurity with a focus on governance, accountability, and sustainable practices. Four things to know today 00:00 Sherweb's White-Labeled Portal Signals MSP Shift Toward Scalable, Client-Centric Service Models03:31 AI Forces Billing Revolution: Globant and Salesforce Redefine How Tech Services Are Priced06:49 From Shadow AI to Specialized Tools: Why Governance, Not Hype, Defines AI's Next Phase12:46 From CISOs to Classrooms to Code: Why AI Forces a Strategic Rethink Across the Enterprise This is the Business of Tech. Supported by: https://www.huntress.com/mspradio/https://cometbackup.com/?utm_source=mspradio&utm_medium=podcast&utm_campaign=sponsorship All our Sponsors: https://businessof.tech/sponsors/ Do you want the show on your podcast app or the written versions of the stories? 
Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/Looking for a link from the stories? The entire script of the show, with links to articles, are posted in each story on https://www.businessof.tech/ Support the show on Patreon: https://patreon.com/mspradio/ Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com Follow us on:LinkedIn: https://www.linkedin.com/company/28908079/YouTube: https://youtube.com/mspradio/Facebook: https://www.facebook.com/mspradionews/Instagram: https://www.instagram.com/mspradio/TikTok: https://www.tiktok.com/@businessoftechBluesky: https://bsky.app/profile/businessof.tech
AI is no longer on the horizon. It's part of how people and products work today. And as AI finds its way into more business applications and processes, it can create new risks. On today's Tech Bytes, sponsored by Palo Alto Networks, we talk about how Palo Alto Networks is addressing those risks so that...
