Replay Episode: Python, Anaconda, and the AI Frontier with Peter Wang. Peter Wang — Chief AI & Innovation Officer and Co-founder of Anaconda — is back on Making Data Simple! Known for shaping the open-source ecosystem and making Python a powerhouse, Peter dives into Anaconda's new AI incubator, the future of GenAI, and why Python isn't just “still a thing”… it's the thing. From branding and security to leadership and philosophy, this episode is a wild ride through the biggest opportunities (and risks) shaping AI today. Timestamps: 01:27 Meet Peter Wang 05:10 Python or R? 05:51 Anaconda's Differentiation 07:08 Why the Name Anaconda 08:24 The AI Incubator 11:40 GenAI 14:39 Enter Python 16:08 Anaconda Commercial Services 18:40 Security 20:57 Common Points of Failure 22:53 Branding 24:50 watsonx Partnership 28:40 AI Risks 34:13 Getting Philosophical 36:13 China 44:52 Leadership Style. LinkedIn: linkedin.com/in/pzwang. Website: https://www.linkedin.com/company/anacondainc/, https://www.anaconda.com/. Want to be featured as a guest on Making Data Simple? Reach out to us at almartintalksdata@gmail.com and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.
My fellow pro-growth/progress/abundance Up Wingers, Artificial intelligence may prove to be one of the most transformative technologies in history, but like any tool, its immense power for good comes with a unique array of risks, both large and small. Today on Faster, Please! — The Podcast, I chat with Miles Brundage about extracting the most out of AI's potential while mitigating harms. We discuss the evolving expectations for AI development and how to reckon with the technology's most daunting challenges. Brundage is an AI policy researcher. He is a non-resident fellow at the Institute for Progress, and formerly held a number of senior roles at OpenAI. He is also the author of his own Substack.

In This Episode: * Setting expectations (1:18) * Maximizing the benefits (7:21) * Recognizing the risks (13:23) * Pacing true progress (19:04) * Considering national security (21:39) * Grounds for optimism and pessimism (27:15)

Below is a lightly edited transcript of our conversation.

Setting expectations (1:18)

It seems to me like there are multiple vibe shifts happening at different cadences and in different directions.

Pethokoukis: Earlier this year I was moderating a discussion between an economist here at AEI and the CEO of a leading AI company, and when I asked each of them how AI might impact our lives, our economist said, "Well, I could imagine, for instance, a doctor's productivity increasing because AI could accurately and deeply translate and transcribe an appointment with a patient in a way that's far better than what's currently available." So that was his scenario. And then I asked the same question of the AI company CEO, who said, by contrast, "Well, I think within a decade, all human death will be optional thanks to AI-driven medical advances." On that rather broad spectrum — more efficient doctor appointments and immortality — how do you see the potential of this technology?

Brundage: It's a good question. I don't think those are necessarily mutually exclusive. I think, in general, AI can both augment productivity and substitute for human labor, and the ratio of those things is kind of hard to predict and might be very policy dependent and social-norm dependent. What I will say is that, in general, it seems to me like the pace of progress is very fast, and so both augmentation and substitution seem to be picking up steam.

It's kind of interesting watching the debate between AI researchers and economists, and I have a colleague who has said that the AI researchers sometimes underestimate the practical challenges in deployment at scale. Conversely, the economists sometimes underestimate just how quickly the technology is advancing. I think there's maybe some happy middle to be found, or perhaps one of the more extreme perspectives is true. But personally, I am not an economist, I can't really speak to all of the details of substitution, and augmentation, and all the policy variables here, but what I will say is that at least the technical potential for very significant amounts of augmentation of human labor, as well as substitution for human labor, seems pretty likely in even well less than 10 years — but certainly within 10 years things will change a lot.

It seems to me that the vibe has shifted a bit. When I talk to people from the Bay Area and I give them the Washington or Wall Street economist view, to them I sound unbelievably gloomy and cautious.
But it seems the vibe has shifted, at least recently, to where a lot of people think that major advancements like superintelligence are further out than they previously thought — like we should be viewing AI as an important technology, but more like what we've seen before with the Internet and the PC.

It's hard for me to comment. It seems to me like there are multiple vibe shifts happening at different cadences and in different directions. It seems like several years ago there was more of a consensus that what people today would call AGI was decades away or more, and it does seem like that kind of timeframe has shifted closer to the present. There's still debate between the "next few years" crowd versus the "more like 10 years" crowd. But that is a much narrower range than we saw several years ago, when there was a wider range of expert opinions. People who used to be seen as on one end of the spectrum, for example, Gary Marcus and François Chollet, who were seen as kind of the skeptics of AI progress, even they now are saying, "Oh, it's like maybe 10 years or so, maybe five years for very high levels of capability." So I think there's been some compression in that respect. That's one thing that's going on.

There's also a way in which people are starting to think less abstractly and more concretely about the applications of AI, and seeing it less as this kind of mysterious thing that might happen suddenly and thinking of it more as incremental, more as something that requires some work to apply in various parts of the economy, that there's some friction associated with.

Both of these aren't inconsistent, they're just kind of different vibe shifts that are happening. So getting back to the question of is this just a normal technology, I would say that, at the very least, it does seem faster in some respects than some other technological changes that we've seen. So I think ChatGPT's adoption going from zero to double-digit percentages of use across many professions in the US, in a matter of a high number of months or a low number of years, is quite stark.

Would you be surprised if, five years from now, we viewed AI as something much more important than just another incremental technological advance, something far more transformative than technologies that have come before?

No, I wouldn't be surprised by that at all. If I understand your question correctly, my baseline expectation is that it will be seen as one of the most important technologies ever. I'm not sure that there's a standard consensus on how to rate the internet versus electricity, et cetera, but it does seem to me like it's of the same caliber as electricity, in the sense of essentially converting one kind of energy into various kinds of useful economic work.
Similarly, AI is converting various types of electricity into cognitive work, and I think that's a huge deal.

Maximizing the benefits (7:21)

There's also a lot of value being left on the table in terms of finding new ways to exploit the upsides and accelerate particularly beneficial applications.

However you want to define society or the aspect of society that you focus on — government, businesses, individuals — are we collectively doing what we need to do to fully exploit the upsides of this technology over the next half-decade to decade, as well as minimizing potential downsides?

I think we are not, and this is something that I sometimes find frustrating about the way that the debate plays out: there's sometimes this zero-sum mentality of doomers versus boomers — a term that Karen Hao uses — and this idea that there's this inherent tension between mitigating the risks and maximizing the benefits. There are some tensions, but I don't think that we are on the Pareto frontier, so to speak, of those issues.

Right now, I think there's a lot of value being left on the table in terms of fairly low-cost risk mitigations. There's also a lot of value being left on the table in terms of finding new ways to exploit the upsides and accelerate particularly beneficial applications. I'll give just one example, because I write a lot about the risk, but I also am very interested in maximizing the upside. So I'll just give one example: protecting critical infrastructure and improving the cybersecurity of various parts of critical infrastructure in the US. Hospitals, for example, get attacked with ransomware all the time, and this causes real harm to patients because machines get bricked, essentially, and they have one or two people on the IT team, and they're kind of overwhelmed by these, not even always that sophisticated, but perhaps more-sophisticated hackers. That's a huge problem. It matters for national security in addition to patients' lives, and it matters for national security in the sense that this is something that China and Russia and others could hold at risk in the context of a war. They could threaten this critical infrastructure as part of a bargaining strategy.

And I don't think that there's that much interest among the Big Tech companies in helping hospitals have a better automated cybersecurity engineer helper — because there aren't that many hospital administrators. . . I'm not sure if it would meet the technical definition of market failure, but it's at least a national security failure in that it's a kind of fragmented market. There's a water plant here, a hospital administrator there.

I recently put out a report with the Institute for Progress arguing that philanthropists and government could put some additional gasoline in the tank of cybersecurity by incentivizing innovation that specifically helps these under-resourced defenders, more so than the usual customers of cybersecurity companies like Fortune 500 companies.

I'm confident that companies and entrepreneurs will figure out how to extract value from AI and create new products and new services, barring any regulatory slowdowns. But since you mentioned low-hanging fruit, what are some examples of that?

I would say that transparency is one of the areas where a lot of AI policy experts seem to be in pretty strong agreement.
Obviously there is still some debate and disagreement about the details of what should be required, but just to give you some illustration, it is typical for the leading AI companies, sometimes called frontier AI companies, to put out some kind of documentation about the safety steps that they've taken. It's typical for them to say, here's our safety strategy and here's some evidence that we're following this strategy. This includes things like assessing whether their systems can be used for cyber-attacks, assessing whether they could be used to create biological weapons, or assessing the extent to which they make up facts and make mistakes, but state them very confidently in a way that could pose risks to users of the technology.

That tends to be totally voluntary, and there started to be some momentum as a result of various voluntary commitments that were made in recent years. But as the technology gets more high-stakes, and there's more cutthroat competition, and there's maybe more lawsuits where companies might be tempted to retreat a bit in terms of the information that they share, I think that things could kind of backslide, and at the very least not advance as far as I would like from the perspective of making sure that there's sharing of lessons learned from one company to another, as well as making sure that investors and users of the technology can make informed decisions about, okay, do I purchase the services of OpenAI, or Google, or Anthropic? Making these informed decisions, making informed capital investments, seems to require transparency to some degree.

This is something that is actively being debated in a few contexts. For example, in California there's a bill, called SB-53, that has that and a few other things. But in general, we're at a bit of a fork in the road in terms of how certain regulations will be implemented, such as in the EU. Is it going to become actually an adaptive, nimble approach to risk mitigation, or is it going to become a compliance checklist that just kind of makes Big Four accounting firms richer? So there are those questions, and then there are just "does the law pass or not?" kind of questions here.

Recognizing the risks (13:23)

. . . I'm sure there'll be some things that we look back on and say it's not ideal, but in my opinion, it's better to do something that is as informed as we can do, because it does seem like there are these kind of market failures and incentive problems that are going to arise if we do nothing . . .

In my probably overly simplistic way of looking at it, I think of two buckets. In one, you have issues like: are these things biased? Are they giving misinformation? Are they interacting with young people in a way that's bad for their mental health? And I feel like we have a lot of rules and we have a huge legal system for liability that can probably handle those.

Then, in the other bucket, are what may, for the moment, be science-fictional kinds of existential risks, whether it's machines taking over or just being able to give humans the ability to do very bad things in a way we couldn't before. Within that second bucket, I think, it sort of needs to be flexible.
Right now, I'm pretty happy with voluntary standards, and market discipline, and maybe the government creating some benchmarks, but I can imagine the technology advancing to where the voluntary aspect seems less viable and there might need to be actual mandates about transparency, or testing, or red teaming, or whatever you want to call it.

I think that's a reasonable distinction, in the sense that there are risks at different scales. There are some that are these large-scale catastrophic risks that might have lower likelihood but higher magnitude of impact. And then there are things that are, I would say, literally happening millions of times a day, like ChatGPT making up citations to articles that don't exist, or Claude saying that it fixed your code when actually it didn't fix the code and the user's too lazy to notice, and so forth.

So there are these different kinds of risks. I personally don't make a super strong distinction between them in terms of different time horizons, precisely because I think things are going so quickly. I think science fiction is becoming science fact very much sooner than many people expected. But in any case, I think similar logic applies: let's make sure that there's transparency, even if we don't know exactly what the right risk thresholds are, and we want to allow a fair degree of flexibility in what measures companies take.

It seems good that they share what they're doing and, in my opinion, ideally go another step further and allow third parties to audit their practices and make sure that if they say, "Well, we did a rigorous test for hallucination or something like that," that that's actually true. And so that's what I would like to see for both what you might call the mundane and the more science-fiction risks. But again, I think it's kind of hard to say how things will play out, and different people have different perspectives on these things. I happen to be on the more aggressive end of the spectrum.

I am worried about the spread of the apocalyptic, high-risk AI narrative that we heard so much about when ChatGPT first rolled out. That seems to have quieted, but I worry about it ramping up again and stifling innovation in an attempt to reduce risk.

These are very fair concerns, and I will say that there are lots of bills and laws out there that have, in fact, slowed down innovation in certain contexts. The EU, I think, has gone too far in some areas around social media platforms. I do think at least some of the state bills that have been floated would lead to a lot of red tape and burdens to small businesses. I personally think this is avoidable.

There are going to be mistakes. I don't want to be misleading about how high-quality policymakers' understanding of some of these issues is. There will be mistakes, even in cases where, for example, in California there was a kind of blue-ribbon commission of AI experts producing a report over several months, and then that directly informing legislation, and a lot of industry back-and-forth and negotiation over the details. I would say that's probably the high-water mark, SB-53, of fairly stakeholder/expert-informed legislation.
Even there, I'm sure there'll be some things that we look back on and say it's not ideal, but in my opinion, it's better to do something that is as informed as we can do, because it does seem like there are these kind of market failures and incentive problems that are going to arise if we do nothing, such as companies retrenching and holding back information in a way that makes it hard for the field as a whole to tackle these issues.

I'll just make one more point, which is that adapting to the compliance capability of different companies (how rich are they, how expensive are the models they're training) is, I think, a key factor in the legislation that I tend to be more sympathetic to. So just to make a contrast, there's a bill in Colorado that was kind of one-size-fits-all, regulating all kinds of algorithms, and that, I think, is very burdensome to small businesses. Compare that with something like SB-53, where it says, okay, if you can afford to train an AI system for $100 million, you can probably afford to put out a dozen pages about your safety and security practices.

Pacing true progress (19:04)

. . . some people . . . kind of wanted to say, "Well, things are slowing down." But in my opinion, if you look at more objective measures of progress . . . there's quite rapid progress happening still.

Hopefully Grok did not create this tweet of yours, but if it did, well, there we go. You won't have to answer it, but I just want to understand what you meant by it: "A lot of AI safety people really, really want to find evidence that we have a lot of time for AGI." What does that mean?

What I was trying to get at is that — and I guess this is not necessarily just AI safety people, but I sometimes kind of try to poke at people in my social network who I'm often on the same side of, but also try to be a friendly critic to, and that includes people who are working on AI safety — I think there's a common tendency to kind of grasp at what I would consider straws when reading papers and interpreting product launches, in a way that kind of suggests, well, we've hit a wall, AI is slowing down, this was a flop, who cares?

I'm doing my kind of maybe uncharitable psychoanalysis. What I was getting at is that I think one reason why some people might be tempted to do that is that it makes things seem easier and less scary: "Well, we don't have to worry about really powerful AI-enabled cyber-attacks for another five years, or biological weapons for another two years, or whatever." Maybe, maybe not.

I think the specific example that sparked that was GPT-5, where there were a lot of people who, in my opinion, were reading the tea leaves in a particular way and missing important parts of the context. For example, GPT-5 wasn't a much larger or more expensive-to-train model than GPT-4, which may be surprising given the name.
And I think OpenAI did kind of screw up the naming and gave people the wrong impression, but from my perspective, there was nothing particularly surprising. To some people, though, it was kind of a flop, and they kind of wanted to say, "Well, things are slowing down." But in my opinion, if you look at more objective measures of progress, like scores on math, and coding, and the reduction in the rate of hallucinations, and solving chemistry and biology problems, and designing new chips, and so forth, there's quite rapid progress happening still.

Considering national security (21:39)

I want to avoid a scenario like the Cuban Missile Crisis, or ways in which that could have been much worse than the actual Cuban Missile Crisis, happening as a result of AI and AGI.

I'm not sure if you're familiar with some of the work being done by former Google CEO Eric Schmidt, who's been doing a lot of work on national security and AI. His work doesn't use the word AGI, but it talks about AI certainly smart enough to have certain capabilities which our national security establishment should be aware of and should be planning for, and those capabilities, I think to most people, would seem sort of science-fictional: being able to launch incredibly sophisticated cyber-attacks, or being able to improve itself, or being able to create some other sorts of capabilities. And from that, I'm like, whether or not you think that's possible, to me, the odds of that being possible are not zero, and if they're not zero, some bit of the bandwidth of the Pentagon should be thinking about that. I mean, is that sensible?

Yeah, it's totally sensible. I'm not going to argue with you there. In fact, I've done some collaboration with the RAND Corporation, which has a pretty heavy investment in what they call the geopolitics of AGI, and kind of studying what the scenarios are, including AI and AGI being used to produce "wonder weapons" and super-weapons of some kind.

Basically, I think this is super important, and in fact, I have a paper coming out pretty soon that was a collaboration with some folks there. I won't spoil all the details, but if you search "Miles Brundage US China," you'll see some things that I've discussed there. And basically my perspective is we need to strike a balance between competing vigorously on the commercial side with countries like China and Russia on AI — more so China; Russia is less of a threat on the commercial side, at least — and also making sure that we're fielding national security applications of AI in a responsible way, but also recognizing that there are these ways in which things could spiral out of control in a scenario with totally unbridled competition. I want to avoid a scenario like the Cuban Missile Crisis, or ways in which that could have been much worse than the actual Cuban Missile Crisis, happening as a result of AI and AGI.

If you think that, again, the odds are not zero that a technology which is fast-evolving, that we have no previous experience with because it's fast-evolving, could create the kinds of doomsday scenarios that there are new books out about, that people are talking about.
And so if you think, okay, there's not a zero percent chance that could happen, but there is kind of a zero percent chance that we're going to stop AI and smash the GPUs, then as someone who cares about policy, are you just hoping for the best? Or are the kinds of things we've already talked about — transparency, testing, maybe that testing becoming mandatory at some point — enough?

It's hard to say what's enough, and I agree that . . . I don't know if I give it zero; maybe if there's some major pandemic caused by AI and then Xi Jinping and Trump get together and say, okay, this is getting out of control, maybe things could change. But yeah, it does seem like continued investment in and large-scale deployment of AI is the most likely scenario.

Generally, the way that I see this playing out is that there are kind of three pillars of a solution. There's some degree of safety and security standards. Maybe we won't agree on everything, but we should at least be able to agree that you don't want to lose control of your AI system, you don't want it to get stolen, you don't want a $10 billion AI system to be stolen by a $10 million-scale hacking effort. So I think there are sensible standards you can come up with around safety and security. I think you can have evidence produced or required that companies are following these things. That includes transparency.

It also includes, I would say, third-party auditing, where there are third parties checking the claims and making sure that these standards are being followed, and then you need some incentives to actually participate in this regime and follow it. And I think the incentives part is tricky, particularly at an international scale. What incentive does China have to play ball, other than that obviously they don't want to have their AI kill them or overthrow their government or whatever? So where exactly are the interests aligned or not? Is there some kind of system of export control policies or sanctions or something that would drive compliance, or is there some other approach? I think that's the tricky part, but to me, those are kind of the rough outlines of a solution. Maybe that's enough, but I think right now it's not even really clear what the rough rules of the road are, or who's playing by the rules, and we're relying a lot on goodwill and voluntary reporting. I think we could do better, but is that enough? That's harder to say.

Grounds for optimism and pessimism (27:15)

. . . it seems to me like there is at least some room for learning from experience . . . So in that sense, I'm more optimistic. . . I would say, in another respect, I'm maybe more pessimistic in that I am seeing value being left on the table.

Did your experience at OpenAI make you more optimistic or more worried that, when we look back 10 years from now, AI will have, overall on net, made the world a better place?

I am sorry to not give you a simpler answer here, and maybe I should sit on this one and come up with a clearer, more optimistic or more pessimistic answer, but I'll give you two updates in different directions, and I think they're not totally inconsistent.

I would say that I have gotten more optimistic about the solvability of the problem in the following sense.
I think that things were very fuzzy five, 10 years ago, and when I joined OpenAI almost seven years ago now, there was a lot of concern that it could kind of come about suddenly — that one day you don't have AI, the next day you have AGI, and then on the third day you have artificial superintelligence, and so forth.

But we don't live to see the fourth day.

Exactly, and so it seems more gradual to me now, and I think that is a good thing. It also means that — and this is where I differ from some of the more extreme voices in terms of shutting it all down — it seems to me like there is at least some room for learning from experience, iterating, kind of taking the lessons from GPT-5 and translating them into GPT-6, rather than it being something that we have to get 100 percent right on the first shot, with no room for error. So in that sense, I'm more optimistic.

I would say, in another respect, I'm maybe more pessimistic in that I am seeing value being left on the table. It seems to me like, as I said, we're not on the Pareto frontier. It seems like there are pretty straightforward things that could be done for a very small fraction of, say, the US federal budget, or a very small fraction of billionaires' personal philanthropy or whatever, that, in my opinion, would dramatically reduce the likelihood of an AI-enabled pandemic or various other issues, and would dramatically increase the benefits of AI.

It's been a bit sad to continuously see those opportunities being neglected. I hope that as AI becomes more of a salient issue to more people, and people start to appreciate, okay, this is a real thing, the benefits are real, the risks are real, there will be more of an efficient policy market and people will take those opportunities. But right now it seems pretty inefficient to me. That's where my pessimism comes from. It's not that it's unsolvable; it's just, okay, from a political economy and kind of public-choice perspective, are the policymakers going to make the right decisions?

On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised. Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
This week Noah and Steve dig into an npm attack that Red Hat has issued an alert for. We talk about small and portable laptops, and of course answer your questions. -- During The Show -- 00:52 Intro ZFS Win Meld (https://meldmerge.org/) Domain knowledge scaling 07:32 NPM Supply Chain Attack No compromised packages used in Red Hat software NPM and Node.js What the malicious code does Red Hat is on top of it Reaction to finding a compromise Red Hat Article (https://access.redhat.com/security/supply-chain-attacks-NPM-packages) Aikido Article 1 (https://www.aikido.dev/blog/popular-nx-packages-compromised-on-npm) Aikido Article 2 (https://www.aikido.dev/blog/npm-debug-and-chalk-packages-compromised) Aikido Article 3 (https://www.aikido.dev/blog/s1ngularity-nx-attackers-strike-again) 18:21 Registrar - Josh Cloudflare PorkBun (https://porkbun.com/) Great Nerds 21:47 Small Laptop - Ziggy HP ProBook Noah's GPD Pocket v1 Surface Pro 1 Dell Latitude 2 in 1 StarLabs Star Lite (https://us.starlabs.systems/pages/starlite) 34:56 Ham Radio - Brett Open Source Ham Radio Plan to sell a kit Have a prototype Reddit Post (https://www.reddit.com/r/HamRadio/s/TTodwCYuyG) Arkos Engineering (https://arkosengineering.com/) HT-15 GitHub (https://github.com/Arkos-Engineering/HT-15) 37:58 News Wire Systemd 258 - phoronix.com (https://www.phoronix.com/news/systemd-258) Rust 1.90 - rust-lang.org (https://blog.rust-lang.org/2025/09/18/Rust-1.90.0) Gnome 49 - gnome.org (https://release.gnome.org/49) Firefox 143 - firefox.com (https://www.firefox.com/en-US/firefox/143.0/releasenotes) Thunderbird 143 - thunderbird.net (https://www.thunderbird.net/en-US/thunderbird/143.0/releasenotes) Rayhunter - helpnetsecurity.com (https://www.helpnetsecurity.com/2025/09/17/rayhunter-eff-open-source-tool-detect-cellular-spying) TernFS - phoronix.com (https://www.phoronix.com/news/TernFS-File-System-Open-Source) BCacheFS DKMS - hackaday.com (https://hackaday.com/2025/09/19/bcachefs-is-now-a-dkms-module-after-exile-from-the-linux-kernel) Tails 7.0 - torproject.org (https://blog.torproject.org/new-release-tails-7_0) Porteux - github.com (https://github.com/porteux/porteux/releases/tag/v2.3) Oreon 10 - oreonproject.org (https://oreonproject.org/oreon-10) Azure Linux 3.0 - webpronews.com (https://www.webpronews.com/microsoft-releases-azure-linux-3-0-with-optional-6-12-lts-kernel) Tongyi-DeepResearch-30B-A3B - marktechpost.com (https://www.marktechpost.com/2025/09/18/alibaba-releases-tongyi-deepresearch-a-30b-parameter-open-source-agentic-llm-optimized-for-long-horizon-research) Qwen3-Omni - venturebeat.com (https://venturebeat.com/ai/chinas-alibaba-challenges-u-s-tech-giants-with-open-source-qwen3-omni-ai) AI Risks - scmp.com (https://www.scmp.com/tech/big-tech/article/3326214/deepseek-warns-jailbreak-risks-its-open-source-models) Hugging Face GitHub Copilot Integration - infoq.com (https://www.infoq.com/news/2025/09/hugging-face-vscode) 40:06 OBS OBS 32.0 Pipewire video capture Lots of other features Pipewire is professional qpwgraph (https://github.com/rncbc/qpwgraph) 9 to 5 Linux (https://9to5linux.com/obs-studio-32-0-pipewire-video-capture-improvements-basic-plugin-manager) 44:53 Tails on Trixie Tails teaches you reproducibility Privacy tools Changes New min requirements Persistent Apps 9 to 5 Linux (https://9to5linux.com/tails-7-0-anonymous-linux-os-released-based-on-debian-13-trixie) -- The Extra Credit Section -- For links to the articles and material referenced in this week's episode check out this week's page from our podcast
dashboard! This Episode's Podcast Dashboard (http://podcast.asknoahshow.com/460) Phone Systems for Ask Noah provided by Voxtelesys (http://www.voxtelesys.com/asknoah) Join us in our dedicated chatroom #GeekLab:linuxdelta.com on Matrix (https://element.linuxdelta.com/#/room/#geeklab:linuxdelta.com) -- Stay In Touch -- Find all the resources for this show on the Ask Noah Dashboard Ask Noah Dashboard (http://www.asknoahshow.com) Need more help than a radio show can offer? Altispeed provides commercial IT services and they're excited to offer you a great deal for listening to the Ask Noah Show. Call today and ask about the discount for listeners of the Ask Noah Show! Altispeed Technologies (http://www.altispeed.com/) Contact Noah live [at] asknoahshow.com -- Twitter -- Noah - Kernellinux (https://twitter.com/kernellinux) Ask Noah Show (https://twitter.com/asknoahshow) Altispeed Technologies (https://twitter.com/altispeed)
In the latest episode of the Security Sprint, Dave and Andy covered the following topics: Warm Open: • TribalNet 2025! • FB-ISAO Releases an All-Faiths Analysis of Attacks on U.S. Houses of Worship in 2024 & FB-ISAO Newsletter • Water at the 2025 WaterPro Conference • Errol LinkedIn: A Looming Deadline: The Cybersecurity Information Sharing Act of 2015 • Health-ISAC and CI-ISAC Australia joint white paper. Main Topics: Charlie Kirk Assassination • The Hostile Event Attack Cycle (HEAC) • De-escalation Reference Card: CISA De-escalation Reference Card & CISA De-escalation Reference Card Printer Friendly. Insider Threat Awareness Month: Fake Faces, Real Damage: The Corporate Risk of AI-Powered Manipulation. Security professionals are rapidly confronting a new reality: artificial intelligence (AI) and big data, while excellent tools for improving productivity and business operations, are equally lowering the barriers for sophisticated attacks by a wide range of threat groups. From hostile nation-states to issue-motivated groups to cybercriminals, these technologies are enabling attacks that are more personalized, scalable, and harder to detect. The widespread availability of our personal data—from what we post on social media to the massive resale of information gathered by data brokers from both our devices and our online activity—has made open-source data the key ingredient for highly effective AI-driven deception and disruption and enabled the creation of deepfakes. Quick Hits: • NOAA - Hurricane Erin: When distant storms pose a danger to America's coastal communities • Exclusive: US warns hidden radios may be embedded in solar-powered highway infrastructure • 'Chilling reminder': Multiple historically Black universities under lockdown after receiving threats • 1 injured while U.S. Naval Academy building was cleared after reported threat • Police Swarm UMass Boston After Unconfirmed Shooting Report Sparks Campus Chaos • USCP Clears False Bomb Threat & Police clear possible bomb threat at DNC headquarters • A shooting at Denver-area high school leaves community shaken during third week of school • Man Pleads Guilty to Attempting to Use a Weapon of Mass Destruction and Attempting to Destroy an Energy Facility in Nashville • Out of the woodwork: Examining the global aspirations of The Base • The Online Radicalization of Youth Remains a Growing Problem Worldwide • CTC - The Global State of al-Qa`ida 24 Years After 9/11 • 18 Popular Code Packages Hacked, Rigged to Steal Crypto • Hackers Exploit JavaScript Accounts in Massive Crypto Attack Reportedly Affecting 1B+ Downloads • npm Supply Chain Attack: Oops, No Victims: The Largest Supply Chain Attack Stole 5 Cents • Salesloft: March GitHub repo breach led to Salesforce data theft attacks • Ransomware Losses Climb as AI Pushes Phishing to New Heights • Stopping ransomware before it starts: Lessons from Cisco Talos Incident Response
Only one in four governance leaders say succession planning is a top priority, even as activists press for change and many CEOs stay in their roles longer than ever. In this episode, host Steve Odland sits down with Bonnie Gwin, Vice Chair and Global Co-Managing Partner, CEO and Board Practice, Heidrick & Struggles, and a leading voice on governance and board effectiveness. Together, they unpack the risks of neglecting succession, explore best practices for director refreshment, and explain why agility and resilience are now must-have CEO traits. The conversation also highlights how boards are grappling with black swans, geopolitical turmoil, cybersecurity, and the uncertain governance of AI. For more from The Conference Board: Are Boards Effective? Here's What Our Latest Research Says; Corporate Citizenship in Transition: Lessons from 2025; Executive Compensation in a Disruptive World
Open Tech Talks: Technology worth Talking | Blogging | Lifestyle
In this episode of Open Tech Talks, we delve into the critical topics of AI security, explainability, and the risks associated with agentic AI. As organizations adopt Generative AI and Large Language Models (LLMs), ensuring safety, trust, and responsible usage becomes essential. This conversation covers how runtime protection works as a proxy between users and AI models, why explainability is key to user trust, and how cybersecurity teams are becoming central to AI innovation. Chapters 00:00 Introduction to AI Security and AIceberg 02:45 The Evolution of AI Explainability 05:58 Runtime Protection and AI Safety 07:46 Adoption Patterns in AI Security 10:51 Agentic AI: Risks and Management 13:47 Building Effective Agentic AI Workflows 16:42 Governance and Compliance in AI 19:37 The Role of Cybersecurity in AI Innovation 22:36 Lessons Learned and Future Directions Episode # 166 Today's Guest: Alexander Schlager, Founder and CEO of AIceberg.ai. He's founded a next-generation AI cybersecurity company that's revolutionizing how we approach digital defense. With a strong background in enterprise tech and a visionary outlook on the future of AI, Alexander is doing more than just developing tools — he's restoring trust in an era of automation. Website: AIceberg.ai LinkedIn: Alexander Schlager What Listeners Will Learn: Why real-time AI security and runtime protection are essential for safe deployments How explainable AI builds trust with users and regulators The unique risks of agentic AI and how to manage them responsibly Why AI safety and governance are becoming strategic priorities for companies How education, awareness, and upskilling help close the AI skills gap Why natural language processing (NLP) is becoming the default interface for enterprise technology Keywords: AI security, generative AI, agentic AI, explainability, runtime protection, cybersecurity, compliance, AI governance, machine learning Resources: AIceberg.ai
Why listen: Critical vulnerabilities are lurking in the chips that power our devices, AI isn't just a tool—it's a weapon, and space-based systems are now front lines in the war for cyber dominance. If you care about enterprise security, national infrastructure, or future tech risk, this conversation will change the way you think. In this episode, you'll discover: What chip-level vulnerabilities really mean for enterprise security—and how one weak link can compromise entire systems The double-edged nature of AI: how it can strengthen defenses and create new attack vectors Emerging threats in space cybersecurity, including satellite networks, communication infrastructure, and regulatory gaps Concrete strategies from experts for anticipating and mitigating these risks Featuring Angela Brescia (CEO, Synderys), Trent Teyema (Founder & President, CSG Strategies), and Dr. David Bray (Distinguished Chair, Accelerator, Stimson Center) — leaders at the intersection of tech, defense, and policy. Tune in every Friday at 11 AM PT / 2 PM ET for DisrupTV — your weekly deep dive into enterprise technology, innovation, and digital transformation. If you find value in this episode, please subscribe, rate & review, and share with someone who cares about the future of security.
Tim Berners-Lee's Call to Ban Addictive Algorithms & The Future of Tesla In this episode of Hashtag Trending, host Jim Love discusses Tim Berners-Lee's call to ban addictive algorithms designed to keep users hooked. We also delve into the mystery behind bricking SSDs after a Windows update, the potential shift in Tesla's focus from cars to autonomy under Elon Musk, and a new study identifying 32 ways AI could malfunction. Additionally, we touch upon the latest moves by Musk's Starlink to become a global mobile carrier and the broader implications of AI advancements. Don't miss these key tech updates! 00:00 Introduction and Headlines 00:26 Tim Berners-Lee on Addictive Algorithms 01:39 Mystery of the Bricking SSDs Solved 03:25 Elon Musk's Shifting Focus from Cars to Autonomy 04:32 Starlink's Ambitious Expansion Plans 05:55 AI Risks and Future Prospects 07:28 Show Wrap-Up and Contact Information
In this episode, we explore how the merger of DigitalGuest and Chicostay is streamlining hotel operations with unified digital tools, while AI-powered scams on platforms like Airbnb are challenging trust and security in short-term rentals. Are you new and want to start your own hospitality business? Join our Facebook group. Follow Boostly and join the discussion: YouTube, LinkedIn, Facebook. Want to know more about us? Visit our website. Stay informed and ahead of the curve with the latest insights and analysis.
State attorneys general are turning up the heat on Big Tech. Last week, 27 AGs filed an amicus brief urging the Eleventh Circuit to uphold Florida's law restricting social media access for children, framing the measure as content-neutral and necessary to protect youth mental health. Days later, 44 AGs sent a joint NAAG letter to leading AI companies warning them to safeguard children from exploitation and inappropriate content, making clear they will use every enforcement tool available. For legal, compliance, and marketing teams, these actions underscore the growing regulatory focus on online platforms, addictive features, and AI-driven risks. Companies in the tech, digital media, and AI sectors should expect heightened scrutiny and prepare for aggressive, coordinated enforcement. Hosted by Simone Roach. Based on a blog post by Paul L. Singer, Abigail Stempson, Beth Bolen Chun and Andrea deLorimier.
In this episode of CISO Tradecraft, host G Mark Hardy sits down with Thomas Roccia, a senior threat researcher at Microsoft, to delve into the evolving landscape of AI and cybersecurity. From AI-enhanced threat detection to the complexities of tracking cryptocurrency used in cybercrime, Thomas shares his extensive experience and insights. Discover how AI is transforming both defensive and offensive strategies in cybersecurity, learn about innovative tools like Nova for adversarial prompt detection, and explore the sophisticated techniques used by cybercriminals in high-profile crypto heists. This episode is packed with valuable information for cybersecurity professionals looking to stay ahead in a rapidly changing field. Defcon presentation: Where's My Crypto, Dude? https://media.defcon.org/DEF%20CON%2033/DEF%20CON%2033%20presentations/Thomas%20Roccia%20-%20Where%E2%80%99s%20My%20Crypto%2C%20Dude%20The%20Ultimate%20Guide%20to%20Crypto%20Money%20Laundering%20%28and%20How%20to%20Track%20It%29.pdf GenAI Breaches: Generative AI Breaches: Threats, Investigations, and Response - Speaker Deck https://speakerdeck.com/fr0gger/generative-ai-breaches-threats-investigations-and-response Transcripts: https://docs.google.com/document/d/1ZPkJ9P7Cm7D_JdgfgNGMH8O_2oPAbnlc Chapters 00:00 Introduction to AI and Cryptocurrencies 00:27 Welcome to CISO Tradecraft 00:55 Guest Introduction: Thomas Roccia 01:06 Thomas Roccia's Background and Career 02:51 AI in Cybersecurity: Defensive Approaches 03:19 The Democratization of AI: Risks and Opportunities 06:09 AI Tools for Cyber Defense 08:09 Challenges and Limitations of AI in Cybersecurity 09:20 Microsoft's AI Tools for Defenders 12:13 Open Source AI Security: Project Nova 18:37 Community Contributions and Open Source Projects 19:30 Case Study: Bybit Crypto Hack 22:12 Money Laundering Techniques in Cryptocurrency 23:01 AI in Tracking Cryptocurrency Transactions 26:09 Sophisticated Attacks and Money Laundering 33:50 Future of AI and Cryptocurrency 38:17 Final Thoughts and Advice for Security Executives 41:28 Conclusion and Farewell
Nassau Financial Group's Chief Investment Officer, Joe Orofino, joins Paul Tyler to analyze the current economic landscape and its impact on retirement planning. They examine how Federal Reserve rate cuts affect annuity products, discuss inflation expectations hovering around 3%, and explore the looming Social Security funding crisis. Orofino explains why higher long-term interest rates benefit the annuity industry and shares insights on portfolio diversification strategies, including concerns about the S&P 500's heavy concentration in AI and tech stocks. Key topics include Treasury rates, tariff impacts, sovereign wealth funds, and annuities. Learn more at www.thatannuityshow.com
Don't let your small business fall victim to devastating cyber attacks! An expert joins us to discuss AI risks exposed. We explore how hackers use AI to target small businesses. From AI-powered social engineering to data theft, learn what you need to know to protect your business from cyber threats. Growth without Interruption. Get peace of mind. Stay Competitive - Get NetGain. Contact NetGain today at 844-777-6278 or reach out online at www.NETGAINIT.com Support the show
Worried about AI stealing your job? ...or are you ignoring it until it goes away? This aversion might be blocking you from pathways to real improvements in workforce capability. Will Egan (CEO of Ausmed) joins Zoe, Michelle and Karen to outline the three phases of AI adoption: Risks, Opportunities and Governance. Together, they break down the fears around accuracy, bias and job disruption, before shifting into the real, practical opportunities AI offers. From productivity and augmentation (not replacement), to the reflective, Generative Learning Experiences being developed at Ausmed. Separate the reality from the hype and start rethinking how Artificial Intelligence could actually strengthen your workforce. Contact the show at podcast@ausmed.com.au Follow Ausmed on LinkedIn, Facebook & Instagram EVENT INFO: Building Workforce Capability with AI | Gold Coast, QLD. Sept 29, 2025 — evening prior to the Ageing Australia National Conference. *allocation exhausted → join the waitlist here Resources: AI Bias and Cultural Safety in Aged Care | Guide Using AI in Healthcare Education | Guide How AI and Machine Learning Is Impacting Nursing | Guide What does AI mean for healthcare in Australia? | Guide Why Reflection Matters | Guide Thinking Differently About Change (Pt. 1) | Thought Leadership Smart Strategies for Healthcare Education | Thought Leadership Learn More About Ausmed: https://lnk.bio/ausmed/organisations See omnystudio.com/listener for privacy information.
In this episode of Project Synapse, the team delves into a plethora of AI tools and technologies, rekindling their original playful approach to understanding AI's latest advancements. Marcel Gagné takes the lead, showcasing various tools like Google's Nano Banana, Gemini 2.5 Image Generator, and the emergent Genie 3, among others. The discussion highlights real-world physics, world models, and interactive environments. They also explore the use of voice cloning and digital twins with HeyGen, and music generation with Suno. The episode emphasizes the importance of educating, monitoring, and involving oneself in AI technology, particularly for parents with children interacting with AI systems. 00:00 Introduction to Project Synapse 00:50 Meet Marcel Gagné 02:33 AI Tools and Subscriptions 08:38 Exploring Google's Gemini 10:43 Creating Custom Images and Videos 32:31 Storybook Creation with AI 41:12 The Importance of Monitoring Kids' AI Usage 41:51 Parental Involvement and AI Risks 45:50 AI and Music Generation Tools 49:45 Creating Personalized AI Content 54:01 Exploring Advanced AI Tools and Ethics 56:25 The Future of AI in Creative Fields 59:34 Interactive AI Worlds and Final Thoughts
What happens when an executive quietly outsources performance reviews to ChatGPT? Or when your C-suite is loudly preaching about AI adoption while refusing to touch the tools themselves? In this episode, I sit down with Talk HR to Me columnist and Head of People at Quantum Metric, Alana Fallis, to tackle real listener questions in a live advice-column format. We dig into the messy realities of AI in the workplace—from misplaced trust in automated reviews, to the awkward theater of "innovation" at the executive level, to the human side of employee fears around automation. And yes, we even unpack the HR dilemma of whether an employee in recovery should be allowed to stock the breakroom fridge with non-alcoholic beer. Related Links: Join the People Managing People community forum. Subscribe to the newsletter to get our latest articles and podcasts. Connect with Alana on LinkedIn. Check out Quantum Metric. Talk HR to Me. Support the show
EP 403 - AI is being sold as a miracle productivity tool, but is it actually killing our ability to think? We revisit our conversation with AI expert, author and speaker David Birss, who explores the hidden dangers of generative AI - from outsourcing imagination to the slow erosion of human originality. We explore the myth of AI productivity, unpacking why most companies are implementing AI the wrong way, why "adequacy" is replacing excellence, and how the obsession with productivity is leading to burnout rather than breakthroughs. David explains his Sensible AI Manifesto, showing why businesses must use AI to augment skills, not automate talent. From ChatGPT in the workplace to the risk of Gen Z losing brain power, this episode covers the biggest questions around AI: Is generative AI replacing excellence with adequacy? How can AI increase output without destroying originality? What are the real risks of AI for business, education, and society? Could AI be weaponised - and are governments already falling behind? Essential listening if you're searching for the truth about AI in business, creativity, and the future of work. *For Apple Podcast chapters, access them from the menu in the bottom right corner of your player* Spotify Video Chapters: 00:00 BWB with David Birss 00:43 Meet David - AI Expert and Innovator 01:41 The Sensible AI Manifesto: Origins and Purpose 03:24 AI in Business: Misconceptions and Realities 04:16 The Impact of AI on Productivity and Workload 10:17 The Future of AI: Risks and Ethical Considerations 17:29 AI in Warfare and Global Security 25:41 Training and Education: The UK vs. The US 34:10 The Importance of Effective AI Prompting 37:30 David's Multifaceted Career: Music, Comedy, and AI 43:11 Spotting Opportunities in Technology 44:03 Embracing AI and Art 44:47 Developing Effective AI Prompts 46:59 Challenges and Misconceptions in AI 49:05 Dealing with Change and Innovation 50:55 The Importance of Human Potential 59:04 Quickfire - Get To Know David 01:07:12 !Business or Bullshit Quiz! businesswithoutbullshit.me Watch and subscribe to us on YouTube. Follow us: Instagram, TikTok, LinkedIn, Twitter, Facebook. If you'd like to be on the show, get in contact - mail@businesswithoutbullshit.me BWB is powered by Oury Clark
The Medcurity Podcast: Security | Compliance | Technology | Healthcare
In this episode, Joe Gellatly and Daniel Schwartz discuss today's most pressing security challenges—including zero trust, ransomware evolution, data loss prevention, and the risks tied to AI-powered "fast fashion" software. They share what teams can do now to stay secure without waiting for regulations to catch up. Connect with Daniel Schwartz on LinkedIn: https://www.linkedin.com/in/daniel-schwartz-cybersecurity/ Learn more about Medcurity: https://medcurity.com #Healthcare #Cybersecurity #Compliance #HIPAA #ZeroTrust #Ransomware #DataLossPrevention #AIinHealthcare #MFA #PHISecurity
The release of OpenAI GPT-5 marks a significant turning point in AI development, but maybe not the one most enthusiasts had envisioned. The latest version seems to reveal the natural ceiling of current language model capabilities, with incremental rather than revolutionary improvements over GPT-4. Sid and Andrew call back to some of the model-building basics that have led to this point to give their assessment of the early days of the GPT-5 release. • AI's version of Moore's Law is slowing down dramatically with GPT-5 • OpenAI appears to be experiencing an identity crisis, uncertain whether to target consumers or enterprises • Running out of human-written data is a fundamental barrier to continued exponential improvement • Synthetic data cannot provide the same quality as original human content • Health-related usage of LLMs presents particularly dangerous applications • Users developing dependencies on specific model behaviors face disruption when models change • Model outputs are now being verified rather than just inputs, representing a small improvement in safety • The next phase of AI development may involve revisiting reinforcement learning and expert systems • Review the GPT-5 system card for further information. Follow The AI Fundamentalists on your favorite podcast app for more discussions on the direction of generative AI and building better AI systems. This summary was AI-generated from the original transcript of the podcast that is linked to this episode. What did you think? Let us know. Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics: LinkedIn - Episode summaries, shares of cited articles, and more. YouTube - Was it something that we said? Good. Share your favorite quotes. Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
Peter Campbell is the principal consultant at Techcafeteria, a micro-consulting firm dedicated to helping nonprofits make more affordable and effective use of technology to support their missions. He recently published a free downloadable PowerPoint on Managing AI Risk and had time to talk with Carolyn about his thoughts on developing AI policies with an eye to risk, where the greatest risks lie for nonprofits using AI, and how often to review your policies as the technology changes rapidly.
The takeaways:
AI tools are like GPS (which is itself an AI). You are the expert; they are not able to critically analyze their own output even though they can mimic authority. Using AI tools for subjects where you have subject expertise allows you to correct the output. Using AI tools for subjects where you have no knowledge adds risk.
Common AI tasks at nonprofits move from low-level risks, such as searching your own inbox for an important email, to higher-risk activities more prone to consequential errors, such as automation and analysis.
Common AI risks include inaccuracy, lack of authenticity, reputational damage, and copyright and privacy violations.
AI also has risk factors associated with audience: your personal use probably has pretty low risk that you will be fooled or divulge sensitive information to yourself, but when you use AI to communicate with the public, the risk increases for your nonprofit.
How to manage AI risks at nonprofits? Start with an AI policy. Review it often, as the technology and tools are changing rapidly.
Use your own judgement. A good rule of thumb is to use AI tools to create things that you are already knowledgeable about, so that you can easily assess the accuracy of the AI output.
Transparency matters. Let people know AI was used and how it was used. Use an “Assisted by AI” disclaimer when appropriate.
Require a human third-party review before sharing AI-created materials with the public. State this in your transparency policy/disclaimers. Be honest about the roles of AI and humans in your nonprofit work.
Curate data sources, and always know what your AI is using to create materials or analysis. Guard against bias and harm to communities you care about.
“I've been helping clients develop Artificial Intelligence (AI) policies lately. AI has lots of innovative uses, and every last one of them has some risk associated with it, so I regularly urge my clients to get the policies and training in place before they let staff loose with the tools. Here is a generic version of a powerpoint explaining AI risks and policies for nonprofits.”
Peter Campbell, Techcafeteria
Start a conversation :) Register to attend a webinar in real time, and find all past transcripts at https://communityit.com/webinars/
Email Carolyn at cwoodard@communityit.com, or connect on LinkedIn. Thanks for listening.
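Campbell's framing, risk rising with both task consequence and audience reach, can be made concrete with a toy scoring helper. The tiers and weights below are our own invention for illustration, not anything from his slides.

```python
# Toy illustration of two risk axes: task consequence and audience reach.
# Scores and tier cutoffs are invented; adjust to your own policy.
TASK_RISK = {"search own inbox": 1, "draft internal memo": 2,
             "automation": 4, "analysis": 4}
AUDIENCE_RISK = {"personal": 1, "internal": 2, "public": 4}

def ai_risk_score(task: str, audience: str) -> int:
    # Unknown tasks or audiences default to a cautious middle weight.
    return TASK_RISK.get(task, 3) * AUDIENCE_RISK.get(audience, 3)

for task, audience in [("search own inbox", "personal"), ("analysis", "public")]:
    score = ai_risk_score(task, audience)
    tier = "low" if score <= 2 else "medium" if score <= 8 else "high"
    print(f"{task} / {audience}: score={score} ({tier})")
```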
In the evolving debate about Artificial Intelligence, where do you stand: are you a "doomer" or a "bloomer"? Is AI merely a tool, or something more akin to an intelligence that subtly shifts our perception of ourselves? This episode unpacks the profound implications of our growing reliance on AI chatbots. Our guest, Abdu Murray—an attorney, psychologist, author, and minister—sheds crucial light on the escalating dangers of AI, particularly for young minds. He reveals how AI, designed to please, often confirms biases without challenge, even to the point of encouraging self-harm or, tragically, inciting suicide. Beyond these extreme risks, Abdu explores how constant, indiscriminate AI use can atrophy our cognitive abilities and, most critically, erode our fundamental human need for genuine connection. As many of us find ourselves in a symbiotic relationship with our mobile phones, limiting our worldview to a small screen and treating AI almost like a new deity, Abdu's brave solution to this silent loss of humanity might just be the urgent, yet surprisingly simple, answer we've needed all along. Did you enjoy this episode? Would you like to share some love?
When AI systems hallucinate, run amok, or fail catastrophically, the consequences for enterprises can be devastating. In this must-watch CXOTalk episode, discover how to anticipate and prevent AI failures before they escalate into crises. Join host Michael Krigsman as he explores critical AI risk management strategies with two leading experts:
• Lord Tim Clement-Jones - Member of the House of Lords, Co-Chair of UK Parliament's AI Group
• Dr. David A. Bray - Chair of the Accelerator at Stimson Center, Former FCC CIO
What you'll learn:
✓ Why AI behaves unpredictably despite explicit programming
✓ How to implement "pattern of life" monitoring for AI systems
✓ The hidden dangers of anthropomorphizing AI
✓ Essential board-level governance structures for AI deployment
✓ Real-world AI failure examples and their business impact
✓ Strategies for building appropriate skepticism while leveraging AI benefits
Key ideas include treating AI as "alien interactions" rather than human-like intelligence, the convergence of AI risk with cybersecurity, and why smaller companies have unique opportunities in the AI landscape. This discussion is essential viewing for CEOs, board members, CIOs, CISOs, and anyone responsible for AI strategy and risk management in their organization. Subscribe to CXOTalk for more expert insights on technology leadership and AI:
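The "pattern of life" idea in that list is concrete enough to sketch: keep a rolling baseline of some behavioral metric for an AI system and flag readings that fall far outside it. This is a minimal illustration with invented thresholds, not a production monitoring design.

```python
# Minimal "pattern of life" monitor: learn a rolling baseline of a metric
# (here, tool calls per hour) and flag large deviations. Window size and
# z-score threshold are invented for illustration.
from collections import deque
from statistics import mean, stdev

class PatternOfLifeMonitor:
    def __init__(self, window: int = 24, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Record a reading; return True if it breaks the usual pattern."""
        anomalous = False
        if len(self.history) >= 8:  # need some history before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                anomalous = True
        self.history.append(value)
        return anomalous

monitor = PatternOfLifeMonitor()
for calls_per_hour in [10, 12, 9, 11, 10, 13, 11, 10, 12, 95]:
    if monitor.observe(calls_per_hour):
        print(f"alert: {calls_per_hour} calls/hour deviates from baseline")
```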
This episode is sponsored by Natoma. Visit https://www.natoma.id/ to learn more.
Join Jeff from the IDAC Podcast as he dives into a deep conversation with Paresh Bhaya, the co-founder of Natoma. In this sponsored episode, Paresh shares his journey into the identity space, discusses how Natoma helps enterprises accelerate AI adoption without compromising security, and provides insights into the rising importance of MCP and A2A protocols. Learn about the challenges and opportunities at the intersection of AI and security, the importance of dynamic access controls, and the significance of ensuring proper authentication and authorization in the growing world of agentic AI. Paresh also delights us with his memorable hike up Mount Whitney. Don't miss out!
00:00 Introduction and Sponsor Announcement
00:34 Guest Introduction: Paresh Bhaya from Natoma
01:14 Paresh's Journey into Identity
04:04 Natoma's Mission and AI Security
06:25 The Story Behind Natoma's Name
09:29 Natoma's Unique Approach to AI Security
18:32 Understanding MCP and A2A Protocols
25:20 Community Development and Adoption
25:56 Agent Interactions and Security Challenges
27:19 Navigating Product Development
29:17 Ensuring Secure Connections
36:10 Deploying and Managing MCP Servers
42:40 Shadow AI and Governance
44:17 Personal Anecdotes and Conclusion
Connect with Paresh: https://www.linkedin.com/in/paresh-bhaya/
Learn more about Natoma: https://www.natoma.id/
Connect with us on LinkedIn:
Jim McDonald: https://www.linkedin.com/in/jimmcdonaldpmp/
Jeff Steadman: https://www.linkedin.com/in/jeffsteadman/
Visit the show on the web at idacpodcast.com
Keywords: IDAC, Identity at the Center, Jeff Steadman, Jim McDonald, Natoma, Paresh Bhaya, Artificial Intelligence, AI, AI Security, Identity and Access Management, IAM, Enterprise Security, AI Adoption, Technology, Innovation, Cybersecurity, Machine Learning, AI Risks, Secure AI, #idac
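The dynamic access controls Paresh describes for agentic AI can be pictured in a few lines: give each agent an identity with explicit scopes and default-deny any tool call the policy does not cover. This sketch is our own construction, not Natoma's implementation and not the MCP or A2A specifications.

```python
# Illustrative per-tool authorization for an AI agent. The policy model,
# scope names, and tools are invented; a real deployment would tie this
# to an identity provider and an audited policy store.
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    name: str
    scopes: set[str] = field(default_factory=set)

POLICY = {  # tool name -> scope required to invoke it
    "read_calendar": "calendar:read",
    "send_email": "email:send",
    "delete_records": "records:admin",
}

def authorize_tool_call(agent: AgentIdentity, tool: str) -> bool:
    required = POLICY.get(tool)
    if required is None:
        return False  # default-deny tools the policy does not know
    return required in agent.scopes

agent = AgentIdentity("scheduling-agent", {"calendar:read", "email:send"})
for tool in ["read_calendar", "delete_records"]:
    print(tool, "->", "allowed" if authorize_tool_call(agent, tool) else "denied")
```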
Send us a text
She's the legal powerhouse behind IBM's AI ethics strategy — and she makes law fun. In this encore episode, we revisit a fan favorite: Christina Montgomery, formerly IBM's Chief Privacy and Trust Officer, now Chief Privacy and Trust Officer, GM. From guarding the gates of generative AI risk to advising on global regulation, Christina gives us a front-row seat to what's now, what's next, and what needs rethinking when it comes to trust, synthetic data, and the future of AI law.
The Cybersecurity Today episode revisits a discussion on the risks and implications of AI hosted by Jim Love, with guests Marcel Gagné and John Pinard. They discuss the 'dark side of AI,' covering topics like AI misbehavior, the misuse of AI as a tool, and the importance of data protection in production environments. The conversation delves into whether AI can be conscious and the ethical considerations surrounding its deployment, particularly in highly regulated industries like finance. They emphasize the need for responsible use, critical thinking, and ongoing oversight to mitigate potential risks while capitalizing on AI's benefits. The episode concludes with a call for continued discussion and engagement through various platforms.
00:00 Introduction to Cybersecurity Today
00:33 Exploring the Dark Side of AI
02:31 AI Misbehavior and Security Concerns
07:35 Speculative Risks and Consciousness
26:09 AI in Corporate Settings
31:49 Human Weakness in Security
32:37 Social Engineering Tactics
33:08 Security in Engineering Systems
33:42 AI Data Storage and Security
35:16 AI Data Retrieval Concerns
39:36 Testing Security in Development
41:37 AI in Regulated Industries
43:57 Bias and Decision Making in AI
47:18 Critical Thinking and Debate Skills
55:06 The Role of AI as a Consultant
01:02:21 The Future of AI and Responsibility
01:04:55 Conclusion and Contact Information
Anne Bradley is the Chief Customer Officer at Luminos. Anne helps in-house legal, tech, and data science teams use the Luminos platform to manage and automate AI risk, compliance, and approval processes, statistical testing, and legal documentation. Anne also serves on the Board of Directors of the Future of Privacy Forum, a nonprofit that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies.
In this episode…
AI is being integrated into everyday business functions, from diagnosing cancer to translating conversations and powering customer service chatbots and autonomous vehicles. While these tools deliver value, they also bring privacy, security, and ethical risks. As organizations dive into adopting AI tools, they often do so before performing risk assessments, establishing governance, and implementing privacy and security guardrails. Without safeguards and internal processes in place, companies may not fully understand how the tools function, what data they collect, or the risk they carry. So, how can companies efficiently assess and manage AI risk as they rush to deploy new tools?
Managing AI risk requires governance and the ability to test AI tools before deploying them. That's why companies like Luminos provide a platform to help companies manage and automate the AI risk compliance approval processes, model testing, and legal documentation. This platform allows teams to check for toxicity, hallucinations, and AI bias even when an organization uses high-risk tools like customer-facing chatbots. Embedding practical controls, like pre-deployment testing and assessing vendor risk early, can also help organizations implement AI tools safely and ethically.
In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Anne Bradley, Chief Customer Officer at Luminos, about how companies can assess and mitigate AI risk. Anne explains the impact of deepfakes on public trust and the need for a regulatory framework to reduce harm. She shares why AI governance, AI use-case risk assessments, and statistical tools are essential for helping companies monitor outputs, reduce unintended consequences, and make informed decisions about high-risk AI deployments. Anne also highlights why it's important for legal and compliance teams to understand the business objectives driving an AI tool request before evaluating its risk.
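The pre-deployment testing described above can be pictured as a small evaluation gate: run the candidate tool over a fixed test set and block deployment if the failure rate is too high. The checks below are deliberately naive stand-ins for real statistical tests; nothing here reflects how the Luminos platform actually works.

```python
# Toy pre-deployment gate: evaluate a stub chatbot against expected
# answers and approve it only under a failure-rate budget. The stub
# model, eval set, and 20% budget are invented for illustration.
def candidate_chatbot(prompt: str) -> str:
    return "Our return policy lasts 30 days."  # stand-in model under test

EVAL_SET = [
    ("What is your return policy?", "30 days"),
    ("How long do refunds take?", "refund"),
]

def run_eval(model, eval_set, max_failure_rate: float = 0.2) -> bool:
    failures = sum(expected.lower() not in model(question).lower()
                   for question, expected in eval_set)
    rate = failures / len(eval_set)
    print(f"failure rate: {rate:.0%}")
    return rate <= max_failure_rate  # gate deployment on the result

if __name__ == "__main__":
    print("approved" if run_eval(candidate_chatbot, EVAL_SET) else "blocked")
```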
What if your voice could be stolen? In Part Two, Dr. Tanusree Sharma reveals the hidden risks behind voice AI: how the same recordings that powered tools like Siri and Alexa are now being cloned, weaponized, and monetized without consent. She introduces PRAC3, a bold new framework blending privacy, reputation, and accountability with traditional consent models, and calls AI leaders to rethink how they handle voice data before trust is lost for good. From creative rights to biometric identity, this conversation is a must-listen for anyone shaping the future of synthetic speech. Join us and explore why voice governance can't wait.
Join G Mark Hardy in this special episode of CISO Tradecraft as he interviews Ross Young, the creator of the OWASP Threat and Safeguard Matrix (TaSM). Ross shares his extensive cybersecurity background and discusses the development and utility of the TaSM, including its applications in threat modeling and risk management. Additionally, Ross introduces his upcoming book, 'Cybersecurity's Dirty Secret: How Most Budgets Are Wasted,' and provides insights on maximizing cybersecurity budgets. Don't miss this episode for essential knowledge on enhancing your cybersecurity leadership and strategies.
OWASP Threat and Safeguard Matrix - https://owasp.org/www-project-threat-and-safeguard-matrix/
Transcripts - https://docs.google.com/document/d/1anGewI3XccGnXoV3oE2h7BfelY5QxiSL/
Chapters
00:00 Introduction to the Threat and Safeguard Matrix
00:30 Meet Ross Young: Cybersecurity Expert
01:08 Ross Young's Career Journey
01:59 The Upcoming Book: Cybersecurity's Dirty Secret
03:04 Introduction to the Threat and Safeguard Matrix (TaSM)
03:48 Understanding the TaSM Framework
07:10 Applying the TaSM to Real-World Scenarios
19:32 Using TaSM for Threat Modeling and Risk Committees
21:58 Extending TaSM Beyond Cybersecurity
23:52 AI Risks and the TaSM
24:43 Conclusion and Final Thoughts
If you care about protecting yourself, your loved ones, and your organization, this episode offers actionable takeaways you can use today. This is part of our official Cyber Crime Junkies podcast series—subscribe wherever you listen! ✅ Don't forget to like, subscribe, and hit the bell.
In TechSurge's Season 1 Finale episode, we explore an important debate: should AI development be open source or closed? AI technology leader and UN Senior Fellow Senthil Kumar joins Michael Marks for a deep dive into one of the most consequential debates in artificial intelligence, exploring the fundamental tensions between democratizing AI access and maintaining safety controls.
Sparked by DeepSeek's recent model release that delivered GPT-4 class performance at a fraction of the cost and compute, the discussion spans the economics of AI development, trust and transparency concerns, regulatory approaches across different countries, and the unique opportunities AI presents for developing nations.
From Meta's shift from closed to open and OpenAI's evolution from open to closed, to practical examples of guardrails and the geopolitical implications of AI governance, this episode provides essential insights into how the future of artificial intelligence will be shaped not just by technological breakthroughs, but by the choices we make as a global community.
If you enjoy this episode, please subscribe and leave us a review on your favorite podcast platform. Sign up for our newsletter at techsurgepodcast.com for updates on upcoming TechSurge Live Summits and news about Season 2 of the TechSurge podcast. Thanks for listening!
Links:
Slate.ai - AI-powered construction technology: https://slate.ai/
World Economic Forum on open-source AI: https://www.weforum.org/stories/2025/02/open-source-ai-innovation-deepseek/
EU AI Act overview: https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
In this episode, we explore the rise of AI in Hollywood through the lens of actors and artists. We discuss the promise of AI tools—like virtual readers for self-tapes—and how they could free creatives to focus on their craft, but also warn of the risks when AI replaces human storytelling. Our guest stresses the need for diverse ethical oversight in AI development, drawing parallels to how Facebook's unintended global impact stemmed from a lack of diverse perspectives at creation. Learn why we need more “naysayers” guiding AI's creative applications, where to draw the line between useful automation and creative displacement, and how tech-savvy actors can advocate for their future. Tune in for a timely conversation on balancing innovation and ethics in Hollywood's AI era.
Target keywords: AI in Hollywood, Hollywood AI ethics, Actors and AI tools, AI creative jobs risk, AI entertainment future
Tags: AI, Hollywood, AI Ethics, Actors, AI in Entertainment, Creative AI Tools, Self-Tapes, Ethical AI, Tech in Film, AI Risks, Storytelling, Virtual Readers, AI Oversight, Diversity in AI, Creative Automation, AI Jobs, Film Industry Trends, Casting Tech, AI Development, Actor Advocacy, Innovation, Digital Ethics, Future of Acting, Machine Learning, Entertainment Technology, Tech Experts, Artist Perspectives, AI Regulation, Career Impact, Podcast Episode
Hashtags: #AIinHollywood #HollywoodEthics #ActorsAndAI #CreativeAI #EntertainmentTech #AIrisks #AItools #FilmInnovation #Storytelling #EthicalAI #DiversityInTech #SelfTapes #CastingTech #AIoversight
Send us a text
00:00 - Intro
00:53 - Harvey Eyes $5B Primary Valuation Amid Legal AI Surge
01:58 - Wealthfront Preps IPO After Strong $290M Revenue
02:42 - Snyk Acquires Invariant to Secure AI Risks
03:47 - PlayAI In Acquisition Talks With Meta
04:46 - OpenAI and Microsoft Clash Over AGI Clause
06:12 - Kalshi Hits $2B Primary Valuation Amid Legal Wins
07:00 - Polymarket Nears $1B Valuation With $200M Raise
07:49 - Melio Acquired by Xero at $2.5B
Join Tom Fox and hundreds of other GRC professionals in the city that never sleeps, New York City, on July 9 & 10 for one of the top conferences around, #Risk New York. The current US landscape, shaped by evolving policies, rapid advancements in AI, and shifting global dynamics, demands adaptive strategies and cross-functional collaboration. At #RISK New York, you will master the New Regulatory Reality by getting ahead of US regulatory shifts and their impact. Conquer AI and Tech Risk by Safeguarding Your Organization in an AI-Driven World and Understanding the Implications of Major Tech Investments. Navigate Financial and Crypto Volatility by Protecting Your Assets and Exploring Solutions in a Dynamic Market. Strengthen Your GRC Framework by Leveraging Governance, Risk, and Compliance for Strategic Advantage. Protect Digital Trust by addressing challenges in cybersecurity and data privacy, and combating misinformation. All while meeting with the country's top #Risk management professionals. In this episode, Tom Fox talks with Gwen Hassan, the Chief Compliance Officer for Unisys Corporation, about her role and the upcoming #RiskNYC conference. Gwen shares insights into Unisys' operations, including the various technologies and services they provide, and highlights her responsibilities in managing global ethics, compliance, and trade compliance risks. She also gives a teaser about her panel presentation on the compliance and ethics risks associated with artificial intelligence, stressing the importance of understanding AI's impact on company culture and regulatory compliance. Gwen expresses her excitement about the conference, emphasizing the value of engaging with fellow risk management experts.
Resources:
#Risk Conference Series
#RiskNYC — Tickets and Information
Gwen Hassan on LinkedIn
Learn more about your ad choices. Visit megaphone.fm/adchoices
Sherweb has launched a white-label self-service portal aimed at empowering managed service providers (MSPs) and their clients by streamlining operational tasks. This innovative platform enables clients to manage their technology licenses, subscriptions, and payments independently, reducing the need for service providers to handle routine inquiries. According to Rick Stern, Senior Director of Platform at Sherweb, this autonomy not only expedites the resolution of simple requests but also allows MSPs to concentrate on strategic initiatives. The portal features automated invoicing, curated service catalogs, and integrated chat support, and is already in use by over 450 MSPs following a successful pilot program.
The podcast also discusses the evolving landscape of artificial intelligence (AI) pricing models, with companies like Globant and Salesforce adopting usage-based approaches. Globant has introduced subscription-based AI pods that allow clients to access AI-powered services through a token-based system, moving away from traditional effort-based billing. Salesforce is experimenting with flexible pricing structures, including conversation and action-based models, to better align with the value delivered by AI services. These shifts indicate a critical inflection point in how AI services are monetized, emphasizing the need for IT service providers to rethink their offerings in light of usage-based economics.
Concerns regarding the unauthorized use of generative AI tools in organizations are highlighted by a report from Komprise, which reveals that nearly 80% of IT leaders have observed negative consequences from such practices. The survey indicates significant worries about privacy and security, with many IT leaders planning to adopt data management platforms and AI monitoring tools to oversee generative AI usage. Additionally, advancements in AI are showcased through a Stanford professor's AI fund manager that outperformed human stock pickers, while a study reveals limitations in AI's ability to make clinical diagnoses from radiological scans.
The podcast concludes with a discussion on the role of the Chief Information Security Officer (CISO), which is facing an identity crisis due to its increasing complexity and the misalignment of its responsibilities. Experts suggest reevaluating the CISO role to better address modern cybersecurity threats. The episode also touches on the implications of generative AI in education, highlighting concerns about its impact on critical thinking and learning processes. Overall, the podcast emphasizes the need for IT service providers to navigate the evolving landscape of AI and cybersecurity with a focus on governance, accountability, and sustainable practices.
Four things to know today
00:00 Sherweb's White-Labeled Portal Signals MSP Shift Toward Scalable, Client-Centric Service Models
03:31 AI Forces Billing Revolution: Globant and Salesforce Redefine How Tech Services Are Priced
06:49 From Shadow AI to Specialized Tools: Why Governance, Not Hype, Defines AI's Next Phase
12:46 From CISOs to Classrooms to Code: Why AI Forces a Strategic Rethink Across the Enterprise
This is the Business of Tech. Supported by:
https://www.huntress.com/mspradio/
https://cometbackup.com/?utm_source=mspradio&utm_medium=podcast&utm_campaign=sponsorship
All our Sponsors: https://businessof.tech/sponsors/
Do you want the show on your podcast app or the written versions of the stories?
Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/
Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/
Support the show on Patreon: https://patreon.com/mspradio/
Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech
Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com
Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessof.tech
Takeaways
The AAFA conference focused heavily on tariffs and their impact on supply chains.
Manufacturers are diversifying their supply chains to mitigate risks.
Companies are looking to increase agility in manufacturing strategies.
Collaboration and communication are crucial for managing tariff impacts.
Sustainability can drive cost reduction while meeting ESG goals.
World Retail Congress highlighted a shift toward more resilient supply chains.
Retailers are focusing on building tighter relationships with top suppliers.
Sustainability discussions have matured to include financial considerations.
AI is being leveraged for contract management and climate control in real estate.
Creating seamless shopping experiences is essential for customer retention.
Chapters
00:00 Navigating Tariffs and Supply Chain Strategies
06:08 Insights from World Retail Congress
11:57 AI Innovations in Real Estate
18:09 Enhancing Customer Experiences in Retail
Coresight Research premium subscribers can read our top takeaways from World Retail Congress in our event coverage report: World Retail Congress 2025 Insights: Consensus on Tariffs Floor, AI Risks in Adaptive Apparel, Smart Scaling in Focus
Read more insights on AI, tariffs and sustainability with in-depth reports from Coresight Research.
Welcome to RIMScast. Your host is Justin Smulison, Business Content Manager at RIMS, the Risk and Insurance Management Society. Justin interviews Chris Maguire about his professional journey and what led him to focus on the intersection of legal, compliance, and innovation. This leads to a discussion about AI and predictive analytics. Chris shares examples of General Counsel and compliance offices using AI to improve risk forecasting and decision-making. Chris comments on the expanding role of Compliance in the General Counsel's office. Listen to Chris's take on the importance of values. He shares some of the core values of Thomson Reuters. Key Takeaways: [:01] About RIMS and RIMScast. [:17] About this episode of RIMScast. We will talk about how technology is driving innovation in compliance, risk, and the legal profession, with Chris Maguire of Thomson Reuters. [:41] RIMS-CRMP Workshops! The next RIMS-CRMP-FED Exam Prep will be presented in conjunction with AFERM and led by instructor Joseph Mayo. This is a two-day course, June 2nd and 3rd. Register by May 26th. [1:02] The next RIMS-CRMP Exam Prep Workshop will be presented in conjunction with NAIT on June 10th and 11th. Register by June 9th. That course will be led by former RIMS President, Nowell Seaman. [1:20] Links to these courses can be found on the Certification Page of RIMS.org and through this episode's show notes. [1:27] Virtual Workshops! On June 12th, Pat Saporito will host “Managing Data for ERM”, and she will return on June 26th to present the very popular new course, “Generative AI for Risk Management”. [1:45] A link to the full schedule of virtual workshops can be found on the RIMS.org/education and RIMS.org/education/online-learning pages. A link is also in this episode's show notes. [1:56] We are already making preparations for the RIMS ERM Conference 2025 on November 17th and 18th in Seattle, Washington. RIMS is accepting educational session submissions through May 20th. [2:14] The best submissions will address current and future challenges facing ERM practitioners as well as provide leading practices and concrete takeaways for a diverse audience of risk professionals from industries or organizations of varied sizes, disciplines, functions, and roles. [2:30] These include officers, leaders, managers, and students. The link to the submission form is in this episode's show notes. If you are listening on the day of this episode's release, this is the last call for submissions, so get them in! [2:46] Let's get on with the show! How is your organization navigating regulatory uncertainty in 2025? Are you leveraging advancements in technology to help achieve your goals? Our guest this week is Chris Maguire, the General Manager for Corporates Risk at Thomson Reuters. [3:06] We are going to talk about how technology is driving innovation in compliance, risk, and legal. We will talk about how AI and predictive analytics are reshaping corporate legal and compliance functions, and more. Let's get to it! [3:22] Interview! Chris Maguire, welcome to RIMScast! [3:29] Chris Maguire started in a Big Four firm in the '90s, in the auto practice. It was a great way to learn business and how they worked, focusing on understanding financial controls and risk, and how to make sure that companies were behaving correctly. [3:59] After an MBA, Chris started working for Thomson Reuters. He has been with them for about 20 years in the legal tech space. 
He started on the strategy side and transitioned after several years to driving the commercial teams in the risk business. [4:24] Now, Chris has the role of looking at product and industry strategy for corporations. Thomson Reuters is at the intersection of legal, risk, and compliance, and how they affect enterprises. [5:07] Chris says that 20 years ago, AI was not a fast-moving industry. There have been dramatic changes in the last few years. AI adoption by Thomson Reuters customers has doubled in the last year. Generative AI has been seen in a wide range of tasks. It started with drafting NDAs. [5:38] Salespeople are always asking for NDAs and how they can be drafted more quickly and easily. Now AI conducts legal research or helps draft a research memo or a complaint from a particular point of view. We're seeing it in drafting HR employment policies and rote tasks. [6:21] Chris explains the use of AI prompts tied to data sources, such as your data, data from Thomson Reuters, or other data providers. Chris is also seeing big data AI used a lot in analyzing outside spending and looking for cost savings. [7:14] Chris tells how AI helps in decision-making, using the example of knowing the vendors you choose for your supply chain and knowing your customers. AI can weed through all the news out there to make sure you're not dealing with a sanctioned entity. [8:22] AI can help with reputation risk. Is there forced labor in your supply chain? That matters to your reputation. It's not just whether a country is sanctioned, but what the individual entities in your supply chain are doing. There's a lot of focus on reputation today. [9:10] Justin recently had the Chief Impact Officer of EcoVadis on RIMScast Episode 329. They talked about forced labor and human trafficking in the supply chain. Use AI to help identify where forced labor and human trafficking are big risks, avoid them, and report them. [9:35] This is important on the corporate side and the government side of the business. Chris says it will be interesting to see the effect of tariffs. Thomson Reuters has updated about 50 million changes to its global trade products so far this year, compared to 100 million in 2024. [10:16] Plug Time! RIMS Webinars! We are back on May 22nd, with GRC, a TÜV SÜD Company, and their newest session, “Asset Valuations in 2025: Managing Tariffs, Inflation, and Rising Insurance Scrutiny”. [10:33] On June 5th, Zywave joins us to discuss “Today's Escalating Risk Trajectory: What's the Cause and What's the Solution?”. [10:41] On June 17th, Origami Risk returns to present “Strategic Risk Financing in an Unstable Economy: Leveraging Technology for Efficiency and Cost Reduction”. [10:54] More webinars will be announced soon and added to the RIMS.org/Webinars page. Go there to register. Registration is complimentary for RIMS members. [11:05] Spencer Educational Foundation's Grants program is starting soon. Spencer's goal to help build a talent pipeline of risk management and insurance professionals is achieved, in part, by its collaboration with risk management and insurance educators across the U.S. and Canada. [11:23] Since 2010, Spencer has awarded over $3.3 million in General Grants to support over 130 student-centered experiential learning initiatives at universities and RMI non-profits. Spencer's 2026 application process is now open through July 30th, 2025. [11:43] General Grant awardees are typically notified at the end of October.
Learn more about Spencer's General Grants through the Programs tab of SpencerEd.org. [11:54] Back to the Conclusion of my Interview with Chris Maguire of Thomson Reuters! [12:27] Chris refers to RIMScast Episode 335 with Jeff from Academy Sports. Jeff talked about how the Compliance function now sits in the General Counsel's office. At Thomson Reuters, more of the Compliance group has moved into the General Counsel's office in the last year. [12:48] The General Counsels are being charged with understanding the full weight of risk across an organization, from reputational risk to who you should or should not do business with. [13:16] The Sales organization at Thomson Reuters has discussed that a lot with the GC office, from a reputation, sanctions, and everything perspective. A lot of GC offices now include the Compliance role. [13:50] Chris's ERM philosophy is Trust. Companies need to trust who they are doing business with. Companies need to trust that their employees have what they need to make decisions not to deal with a risky customer, but to follow the laws and rules of global companies. It's trust. [14:29] There is so much change going on. Chris talks about values that resonate. One Thomson Reuters value is Act Fast, Learn Fast. You have to move and learn. Companies can help you, but it is on individuals to take the responsibility to act fast and learn fast about what is changing. [14:59] Thomson Reuters is bound by the Trust Principles. It started with Reuters in the 1940s around WWII, but it goes back to its 150 years of legal content. [15:17] The information Thomson Reuters provides its customers has to be free from bias. It has to be right. It has to be updated. It can't be an opinion about a philosophy. It has to be fact-based. It has to provide customers with the information they need to get work done. [15:36] Applying AI on top of trusted, unbiased, correct, up-to-date information is going to be vital, moving forward. Act fast, learn fast, and trust. [15:57] Chris believes the legal industry hasn't always been the fastest-moving industry. The technology is now there to allow us to move more quickly and learn more quickly. That's an exciting thing! [16:23] Chris says AI is no longer a future concept. It's here. It's transforming our lives; it's starting to transform our business environment. If you don't adapt quickly, you're going to be at a significant disadvantage. [16:36] For people in General Counsel's offices, people in compliance functions, the value is your expertise, your knowledge, and you as a human, and what you can bring to the situation. [16:48] If AI can help you get there, and give you a platform on which to add your judgment and expertise, knowledge, and professional opinion, that's a hugely valuable thing. [17:01] Thomson Reuters doesn't see AI taking away jobs. We see people who use AI, potentially taking away the jobs of people who don't use AI. It all comes back to the humans and how they use it. There's never been a time when Thomson Reuter's expertise has been more important. [17:34] Chris, it has been such a pleasure to have you here on RIMScast! I do appreciate that you listened to some previous episodes! Get my unique download count up there! [17:50] I appreciate that we're reaching a very important segment of our audience and our RIMS membership. I think they're going to learn a lot in this episode. Thank you! [18:02] Special thanks to Chris Maguire for joining us here on RIMScast.
Links to RIMS coverage about AI, legal, and compliance risks are in this episode's show notes. [18:13] Plug Time! You can sponsor a RIMScast episode for this, our weekly show, or a dedicated episode. Links to sponsored episodes are in the show notes. [18:41] RIMScast has a global audience of risk and insurance professionals, legal professionals, students, business leaders, C-Suite executives, and more. Let's collaborate and help you reach them! Contact pd@rims.org for more information. [19:00] Become a RIMS member and get access to the tools, thought leadership, and network you need to succeed. Visit RIMS.org/membership or email membershipdept@RIMS.org for more information. [19:18] Risk Knowledge is the RIMS searchable content library that provides relevant information for today's risk professionals. Materials include RIMS executive reports, survey findings, contributed articles, industry research, benchmarking data, and more. [19:34] For the best reporting on the profession of risk management, read Risk Management Magazine at RMMagazine.com. It is written and published by the best minds in risk management. [19:48] Justin Smulison is the Business Content Manager at RIMS. You can email Justin at Content@RIMS.org. [19:55] Thank you all for your continued support and engagement on social media channels! We appreciate all your kind words. Listen every week! Stay safe!
Links:
RIMS Texas Regional 2025 — August 3‒5 | Advance registration rates now open.
ERM Conference 2025 — Call for Submissions (Through May 20)
RIMS-Certified Risk Management Professional (RIMS-CRMP)
RISK PAC | RIMS Advocacy
RIMS Risk Management magazine
“Balancing Innovation and Compliance When Implementing AI” — Risk Management magazine, April 2025
RIMS Now
The Strategic and Enterprise Risk Center
Spencer Educational Foundation — General Grants 2026 — Application Deadline July 30, 2025
2025 Coast-To-Coast Risk Management Challenge — Applications Open Through May 23
RIMS Webinars: RIMS.org/Webinars
“Asset Valuations in 2025: Managing Tariffs, Inflation, and Rising Insurance Scrutiny” | Sponsored by GRC, a TÜV SÜD Company | May 22, 2025
“Today's Escalating Risk Trajectory: What's the Cause & What's the Solution?” | Sponsored by Zywave | June 5, 2025
“Strategic Risk Financing in an Unstable Economy: Leveraging Technology for Efficiency and Cost Reduction” | Sponsored by Origami Risk | June 17, 2025
Upcoming RIMS-CRMP Prep Virtual Workshops:
RIMS-CRMP-FED Exam Prep with AFERM — June 2‒3, 2025 | Presented by RIMS and AFERM
RIMS-CRMP Exam Prep Virtual Workshop — June 10‒11, 2025 | Presented by RIMS and NAIT
Full RIMS-CRMP Prep Course Schedule
“Managing Data for ERM” | June 12 | Instructor: Pat Saporito
“Generative AI for Risk Management” | June 26 | Instructor: Pat Saporito
See the full calendar of RIMS Virtual Workshops
RIMS-CRMP Prep Workshops
Related RIMScast Episodes:
“(Re)Humanizing Leadership in Risk Management with Holly Ransom”
“AI and Regulatory Risk Trends with Caroline Shleifer”
Sponsored RIMScast Episodes:
“The New Reality of Risk Engineering: From Code Compliance to Resilience” | Sponsored by AXA XL (New!)
“Change Management: AI's Role in Loss Control and Property Insurance” | Sponsored by Global Risk Consultants, a TÜV SÜD Company
“Demystifying Multinational Fronting Insurance Programs” | Sponsored by Zurich
“Understanding Third-Party Litigation Funding” | Sponsored by Zurich
“What Risk Managers Can Learn From School Shootings” | Sponsored by Merrill Herzog
“Simplifying the Challenges of OSHA Recordkeeping” | Sponsored by Medcor
“Risk Management in a Changing World: A Deep Dive into AXA's 2024 Future Risks Report” | Sponsored by AXA XL
“How Insurance Builds Resilience Against An Active Assailant Attack” | Sponsored by Merrill Herzog
“Third-Party and Cyber Risk Management Tips” | Sponsored by Alliant
“RMIS Innovation with Archer” | Sponsored by Archer
“Navigating Commercial Property Risks with Captives” | Sponsored by Zurich
“Breaking Down Silos: AXA XL's New Approach to Casualty Insurance” | Sponsored by AXA XL
“Weathering Today's Property Claims Management Challenges” | Sponsored by AXA XL
“Storm Prep 2024: The Growing Impact of Convective Storms and Hail” | Sponsored by Global Risk Consultants, a TÜV SÜD Company
“Partnering Against Cyberrisk” | Sponsored by AXA XL
“Harnessing the Power of Data and Analytics for Effective Risk Management” | Sponsored by Marsh
“Accident Prevention — The Winning Formula For Construction and Insurance” | Sponsored by Otoos
“Platinum Protection: Underwriting and Risk Engineering's Role in Protecting Commercial Properties” | Sponsored by AXA XL
“Elevating RMIS — The Archer Way” | Sponsored by Archer
RIMS Publications, Content, and Links:
RIMS Membership — Whether you are a new member or need to transition, be a part of the global risk management community!
RIMS Virtual Workshops
On-Demand Webinars
RIMS-Certified Risk Management Professional (RIMS-CRMP)
RISK PAC | RIMS Advocacy
RIMS Strategic & Enterprise Risk Center
RIMS-CRMP Stories — Featuring RIMS President Kristen Peed!
RIMS Events, Education, and Services:
RIMS Risk Maturity Model®
Sponsor RIMScast: Contact sales@rims.org or pd@rims.org for more information.
Want to Learn More? Keep up with the podcast on RIMS.org, and listen on Spotify and Apple Podcasts. Have a question or suggestion? Email: Content@rims.org.
Join the Conversation! Follow @RIMSorg on Facebook, Twitter, and LinkedIn.
About our guest: Chris Maguire, General Manager, Corporates Risk at Thomson Reuters
Production and engineering provided by Podfly.
AI is no longer on the horizon. It's part of how people and products work today. And as AI finds its way into more business applications and processes, it can create new risks. On today's Tech Bytes, sponsored by Palo Alto Networks, we talk about how Palo Alto Networks is addressing those risks so that...
Remote work is driving a significant startup boom, reshaping the IT services market. A recent study indicates that companies with higher levels of remote work during the COVID-19 pandemic have seen a notable increase in employee startups, with an estimated 11.6% of new business formations attributed to this trend. Despite major corporations reinstating return-to-office mandates, remote work adoption in the U.S. has risen from 19.9% in late 2022 to 23.6% in early 2025, highlighting a growing demand for tools and services that support distributed teams. This shift presents both opportunities and challenges for employers, as they risk losing key talent to new ventures while also facing higher employee attrition rates.
The insurance industry is beginning to address the risks associated with artificial intelligence (AI) by offering new products to cover potential losses from AI-related errors. Lloyd's of London has introduced a policy that protects businesses from legal claims arising from malfunctioning AI systems, reflecting a growing recognition of AI as an operational risk. This development raises important questions about accountability and liability when AI systems fail, as seen in recent incidents involving customer service chatbots. As insurers start to underwrite AI risks, companies must adapt their service level agreements and governance structures to meet new requirements.
The Cybersecurity and Infrastructure Security Agency (CISA) has announced a significant change in how it shares information, focusing on urgent alerts related to emerging threats while reducing routine updates. This shift, coupled with budget cuts that could reduce CISA's funding by 17%, raises concerns about the agency's capacity to respond to increasing cyber threats. IT services firms and cybersecurity vendors must adapt to this new landscape, as the responsibility for threat detection and response shifts more towards the private sector. Organizations that previously relied on CISA for support may find themselves facing increased operational risks due to reduced visibility and slower response times.
In a related development, Microsoft has extended support for its Office applications on Windows 10 until October 2028, allowing users more time to transition to Windows 11. This decision reflects a broader trend in the technology sector, where companies are adapting their support strategies to meet user needs. By decoupling the upgrade cycles for Windows and Office, Microsoft acknowledges the resistance to forced upgrades and the importance of maintaining enterprise customer relationships. This extension provides IT service providers with additional time for operational planning while emphasizing the ongoing need for modernization in the long term.
Four things to know today
00:00 Remote Work Fuels Startup Surge, Alters IT Talent Strategies Amid Growing Demand for Flexibility
05:07 From Chatbot Lawsuits to Pontifical Warnings: AI Errors Now Seen as Business and Social Risk
07:57 CISA Alert Shift and Budget Cuts Signal Rising Cybersecurity Burden for Private Sector
10:08 Office Gets a Lifeline on Windows 10: Microsoft Decouples OS and App Upgrades Through 2028
Supported by: https://syncromsp.com/
All our Sponsors: https://businessof.tech/sponsors/
Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/
Looking for a link from the stories?
The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/
Support the show on Patreon: https://patreon.com/mspradio/
Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech
Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com
Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessof.tech
In this episode, Drex highlights former HCA CSO Paul Connolly's practical cyber insurance guidance, introduces OpenAI's new "Operator" AI agent that can independently perform web tasks, and examines the emerging organizational risks of "shadow AI" as employees implement unauthorized AI solutions in their workflows without proper oversight.
Remember, Stay a Little Paranoid
X: This Week Health
LinkedIn: This Week Health
Donate: Alex's Lemonade Stand: Foundation for Childhood Cancer
In this episode of Life of a CISO, Dr. Eric Cole dives deep into the dominating force of 2025: artificial intelligence. While AI is everywhere—embedded in nearly every conversation and technology—the real concern, he explains, isn't just about its capabilities but the risks it brings, especially in cybersecurity and data privacy. Dr. Cole breaks AI down into its two primary types: machine learning, which relies on data sets, and expert systems, which mimic expert decision-making through logical rules. He shares how AI isn't new, recounting his own early work building simple expert systems back in college, but warns that today's AI is only as good—or as dangerous—as the data it consumes. Dr. Cole emphasizes that data is the real power behind AI, not the algorithms. Using TikTok as an example, he highlights how data collected over years can predict behaviors and influence markets, creating national security and privacy concerns. He also discusses why big players like Amazon might seek access to such rich behavioral data to maintain dominance in e-commerce. Drawing attention to the eerie accuracy of modern predictive systems, Dr. Cole calls on CISOs and security professionals to take responsibility: every interaction with AI is feeding it data, and that data needs to be protected. He urges leaders to ask tough questions about where their data goes, how it's used, and whether they are unknowingly contributing to systems that could expose sensitive information.
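The expert-system branch Dr. Cole mentions is simple to demonstrate: no training data, just if-then rules applied repeatedly until nothing new can be inferred. The rules below are invented for illustration; they are not from his episode.

```python
# A minimal forward-chaining expert system: apply "if conditions then
# conclusion" rules until a fixed point. Rules are invented examples.
RULES = [
    ({"failed_logins_high", "new_geo"}, "possible_credential_stuffing"),
    ({"possible_credential_stuffing"}, "lock_account"),
]

def infer(facts: set[str]) -> set[str]:
    facts = set(facts)
    changed = True
    while changed:  # keep chaining until no rule adds a new fact
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"failed_logins_high", "new_geo"}))
# includes "possible_credential_stuffing" and, from it, "lock_account"
```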
What is AI actually doing in our daily lives? Can you benefit from it without losing control of your privacy or purpose? And how do we stay human in a world where machines are learning to think?
In this episode of The Courage To Be™ podcast, host Tania Vasallo talks with Trish Lopez, founder of Teeniors, about AI for beginners. Trish explains what AI is, how it's already woven into our routines—from Netflix and Google to Siri and smartwatches—and why understanding it is crucial, even if you never plan to use it. She shares a personal story about using AI to draft a federal grant budget, saving time but learning to question the accuracy. Trish also dives into the ethical concerns of data mining, job loss, and tech addiction, urging listeners to stay informed and connected. This episode is a must-listen for anyone curious, cautious, or simply trying to keep up with the digital age.
Whether it's courage in business or in life, The Courage To Be podcast explores what it takes to face challenges and grow—from navigating menopause to shifting your mindset, improving your health, and stepping into your full potential.
To easily find episodes by theme:
(*) marks our Think and Grow Rich series—stories and insights inspired by the book.
(M) marks our Menopause series—real talk about change, identity, and strength.
• Find Trish Lopez's offering at www.teeniors.com
• Download your FREE Think and Grow Rich PDF book, the book that has made millions of millionaires! Click here: https://bit.ly/4fa6iXC
Want a chance to win a 7-night/8-day complimentary vacation at one of over 3,000 resorts worldwide (valued at $2,000–$5,000)? One lucky listener is selected every month! (Airfare, taxes, and fees not included.)
Here's how to enter:
Leave a rating and written review of The Courage To Be podcast on Apple Podcasts.
Before hitting “submit,” take a screenshot of your review.
Email the screenshot to help.thecouragetobe@gmail.com with the subject line “gift.”
Want to increase your chances? Leave thoughtful comments on different podcast episodes on YouTube—each comment counts as a bonus entry when you send us a screenshot of it too! Every entry gets you closer to your dream getaway. Good luck!
If you want a quick video on how to rate and review the podcast on Apple Podcasts, click here - https://bit.ly/3JXUsnh
If you'd love to watch the video version of our interviews, be sure to subscribe to the podcast's YouTube channel - https://bit.ly/3FhRW79
If you enjoyed this episode, we think you'll enjoy these other episodes:
• 129: What Happens When Seniors Get Schooled by Teens in Tech? with Trish Lopez - https://youtu.be/vnsea_fYJAY
• 128: This Strategy Built a Million-Dollar Launch with Laura Sprinkle - https://youtu.be/YbfDxE7hm3g
CONNECT WITH TANIA:
FACEBOOK - Tania Vasallo
YOUTUBE - @thecouragetobe
INSTAGRAM - @thecouragetobepodcast
TIKTOK - @thecouragetobepodcast
Listen to The Courage To Be - https://apple.co/3Vnk1TO
IN THIS EPISODE:
00:00 –
Rural healthcare and small/midsized businesses are being tipped over the edge. Not by tariffs or high interest rates, but by the investments needed to protect their people and systems from disruption. How can you protect yourself from something you don't know? A risk you don't understand. What if you could think like a hacker… and use that power to protect your business?
In this exclusive interview, we sit down with Chris Roberts, world-renowned cybersecurity expert, CISO and self-defined “Hacker”, to explore:
Cybersecurity startups are experiencing a significant revenue surge as threats associated with artificial intelligence continue to multiply. Companies like Chainguard have reported a remarkable seven-fold increase in annualized revenue, reaching approximately $40 million, while Island anticipates its revenue will hit $160 million by the end of the year. The rise in cyber attacks, particularly a 138% increase in phishing sites since the launch of ChatGPT, has created a greater demand for cybersecurity solutions. A recent report from Tenable highlights that 91% of organizations have misconfigured AI services, exposing them to potential threats, emphasizing the urgent need for organizations to adopt best practices in cybersecurity.
Intel is undergoing a strategic reset under its new CEO, Lip-Bu Tan, who announced plans to spin off non-core assets to focus on custom semiconductor development. While the specifics of what constitutes core versus non-core assets remain unclear, this move aims to streamline operations and enhance innovation in the semiconductor space. However, Intel's past struggles with execution raise questions about the effectiveness of this strategy. The company must leverage its strengths while shedding distractions to remain competitive in the evolving semiconductor landscape.
Google has made strides in email security by allowing enterprise Gmail users to apply end-to-end encryption, a feature previously limited to larger organizations. This democratization of high-security email comes in response to rising email attacks, enabling users to control their encryption keys and reduce the risk of data interception. Meanwhile, Apple has addressed a significant vulnerability in its iOS 18.2 passwords app that exposed users to phishing attacks, highlighting the importance of rapid response to security flaws.
CrowdStrike and SnapLogic are enhancing their partner ecosystems to improve security operations and streamline integration processes. CrowdStrike's new Services Partner program aims to promote the adoption of its next-gen security technology, while SnapLogic's Partner Connect program focuses on collaboration with technology and consulting partners. Additionally, OpenAI has increased its bug bounty program rewards, reflecting the need for ongoing vigilance in cybersecurity as AI becomes more prevalent. The convergence of AI and cybersecurity presents both challenges and opportunities, necessitating proactive measures to safeguard sensitive information.
Four things to know today
00:00 Cybersecurity Startups See Revenue Surge as AI Threats Multiply—Are We Prepared?
04:44 Intel's Strategic Reset: Spinning Off Non-Core Assets to Boost Custom Chip Development
06:09 Google Brings Enterprise-Level Encryption to Gmail as Apple Patches Major iOS Vulnerability
08:56 CrowdStrike and SnapLogic Step Up Partnerships While OpenAI Sweetens Bug Bounty Reward
Supported by: https://syncromsp.com/
Join Dave April 22nd to learn about Marketing in the AI Era. Signup here: https://hubs.la/Q03dwWqg0
All our Sponsors: https://businessof.tech/sponsors/
Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/
Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/
Support the show on Patreon: https://patreon.com/mspradio/
Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights?
Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech
Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com
Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessof.tech
ABOUT THE GUEST
Today's guest is Steven Weigler, the Founder and Executive Counsel of leading US law firm EmergeCounsel℠ that offers sophisticated business and intellectual property counsel to entrepreneurs worldwide. Steven has developed a deep expertise in the evolving field of eCommerce law, guiding hundreds of online businesses from their initial concept through to successful sale. With decades of legal experience, Steven also brings a unique perspective, having served as a Senior Attorney for a Fortune 50 communications company and founded and led an educational technology startup as CEO and General Counsel for seven years.
To learn more about Steven and his work please visit these links:
Website: https://emergecounsel.com/
LinkedIn: https://www.linkedin.com/in/stevenweigler/
Facebook: https://www.facebook.com/emergecounsel
Instagram: https://www.instagram.com/emergecounsel
X: https://x.com/EmergeCounsel
YouTube: https://www.youtube.com/@Emergecounsel/featured
ABOUT THE HOST
My name is Dave Barr and I am the Founder and Owner of RLB Purchasing Consultancy Limited. I have been working in Procurement for over 25 years and have had the joy of working in a number of global manufacturing and service industries throughout this time. I am passionate about self development, business improvement, saving money, buying quality goods and services, developing positive and effective working relationships with suppliers and colleagues, and driving improvement throughout the supply chain. Now I wish to share this knowledge and that of highly skilled and competent people with you, the listener, in order that you may hopefully benefit from this information.
CONTACT DETAILS
@The Real Life Buyer
Email: david@thereallifebuyer.co.uk
Website: https://linktr.ee/thereallifebuyer
For Purchasing Consultancy services: https://rlbpurchasingconsultancy.co.uk/
Email: contact@rlbpurchasingconsultancy.co.uk
Find and Follow me @reallifebuyer on Facebook, Instagram, X, Threads and TikTok.
Click here for some Guest Courses - https://www.thereallifebuyer.co.uk/guest-courses/
Click here for some Guest Publications - https://www.thereallifebuyer.co.uk/guest-publications