POPULARITY
What if the hackers are actually the heroes? In this mind-blowing episode, host David Mauro sits down with Matt Toussain, an elite military cyber warrior, DEF CON speaker, and Founder of Open Security, to reveal how offensive security and real-world hacking tactics are helping businesses reduce risk, fight cybercrime, and stay ten steps ahead of threats.
In this episode of the Risk Management Show, we explore AI risks in accounting and how to avoid some common pitfalls with Mike Whitmire, CEO of FloQast, the leading accounting transformation platform powered by AI agents. Mike shares his journey from auditor to tech innovator and offers deep insights into the challenges and risks AI introduces in financial reporting, compliance, and accounting operations. We discuss critical topics, including the dangers of AI hallucinations, the limitations of generic AI tools, and what businesses must consider when deploying AI in accounting. Mike also highlights the evolving role of accountants, the talent gap, and how technology like FloQast empowers accounting professionals to automate workflows without compromising accuracy. Additionally, he provides actionable advice on adapting to AI-driven changes in the field. If you're interested in risk management, cyber security, sustainability, or want to hear from industry leaders like Mike, this episode is packed with valuable insights. If you want to be our guest or suggest someone, send your email to info@globalriskconsult.com with the subject line: "Podcast Guest Inquiry."
In this episode of GRC Chat, we discussed the transformative potential and pressing risks of artificial intelligence with Joe Sutherland, a technology consultant, founder of JL Sutherland & Associates, and director of the Center for AI Learning at Emory University. Joe shared insights into the challenges of regulating AI, the dangers of deepfakes, and the misconceptions businesses face when leveraging data and AI. Drawing from his expertise and his book "Analytics the Right Way" co-authored with Tim Wilson, Joe highlighted how businesses can adopt smarter AI strategies and the pivotal role of leadership in navigating technology's future. If you want to be our guest or suggest someone for the show, send your email to info@globalriskconsult.com with the subject line "Guest Suggestion." Join us as we explore the intersection of risk management, cyber security, and sustainability with industry leaders!
Cybersecurity startups are experiencing a significant revenue surge as threats associated with artificial intelligence continue to multiply. Companies like ChainGuard have reported a remarkable seven-fold increase in annualized revenue, reaching approximately $40 million, while Island anticipates its revenue will hit $160 million by the end of the year. The rise in cyber attacks, particularly a 138% increase in phishing sites since the launch of ChatGPT, has created greater demand for cybersecurity solutions. A recent report from Tenable highlights that 91% of organizations have misconfigured AI services, exposing them to potential threats and underscoring the urgent need for organizations to adopt best practices in cybersecurity.

Intel is undergoing a strategic reset under its new CEO, Lip-Bu Tan, who announced plans to spin off non-core assets to focus on custom semiconductor development. While the specifics of what constitutes core versus non-core assets remain unclear, this move aims to streamline operations and enhance innovation in the semiconductor space. However, Intel's past struggles with execution raise questions about the effectiveness of this strategy. The company must leverage its strengths while shedding distractions to remain competitive in the evolving semiconductor landscape.

Google has made strides in email security by allowing enterprise Gmail users to apply end-to-end encryption, a feature previously limited to larger organizations. This democratization of high-security email comes in response to rising email attacks, enabling users to control their encryption keys and reduce the risk of data interception. Meanwhile, Apple has addressed a significant vulnerability in its iOS 18.2 Passwords app that exposed users to phishing attacks, highlighting the importance of rapid response to security flaws.

CrowdStrike and SnapLogic are enhancing their partner ecosystems to improve security operations and streamline integration processes. CrowdStrike's new Services Partner program aims to promote the adoption of its next-gen security technology, while SnapLogic's Partner Connect program focuses on collaboration with technology and consulting partners. Additionally, OpenAI has increased its bug bounty program rewards, reflecting the need for ongoing vigilance in cybersecurity as AI becomes more prevalent. The convergence of AI and cybersecurity presents both challenges and opportunities, necessitating proactive measures to safeguard sensitive information.

Four things to know today
00:00 Cybersecurity Startups See Revenue Surge as AI Threats Multiply—Are We Prepared?
04:44 Intel's Strategic Reset: Spinning Off Non-Core Assets to Boost Custom Chip Development
06:09 Google Brings Enterprise-Level Encryption to Gmail as Apple Patches Major iOS Vulnerability
08:56 CrowdStrike and SnapLogic Step Up Partnerships While OpenAI Sweetens Bug Bounty Reward

Supported by: https://syncromsp.com/

Join Dave April 22nd to learn about Marketing in the AI Era. Sign up here: https://hubs.la/Q03dwWqg0

All our Sponsors: https://businessof.tech/sponsors/

Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/

Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/

Support the show on Patreon: https://patreon.com/mspradio/

Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights?
Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech

Want our stuff? Cool Merch? Wear "Why Do We Care?" - Visit https://mspradio.myspreadshop.com

Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessof.tech
ABOUT THE GUEST
Today's guest is Steven Weigler, the Founder and Executive Counsel of leading US law firm EmergeCounsel℠ that offers sophisticated business and intellectual property counsel to entrepreneurs worldwide. Steven has developed a deep expertise in the evolving field of eCommerce law, guiding hundreds of online businesses from their initial concept through to successful sale. With decades of legal experience, Steven also brings a unique perspective, having served as a Senior Attorney for a Fortune 50 communications company and having founded and led an educational technology startup as CEO and General Counsel for seven years.

To learn more about Steven and his work, please visit these links:
Website: https://emergecounsel.com/
LinkedIn: https://www.linkedin.com/in/stevenweigler/
Facebook: https://www.facebook.com/emergecounsel
Instagram: https://www.instagram.com/emergecounsel
X: https://x.com/EmergeCounsel
YouTube: https://www.youtube.com/@Emergecounsel/featured

ABOUT THE HOST
My name is Dave Barr and I am the Founder and Owner of RLB Purchasing Consultancy Limited. I have been working in Procurement for over 25 years and have had the joy of working in a number of global manufacturing and service industries throughout this time. I am passionate about self-development, business improvement, saving money, buying quality goods and services, developing positive and effective working relationships with suppliers and colleagues, and driving improvement throughout the supply chain. Now I wish to share this knowledge, and that of highly skilled and competent people, with you, the listener, so that you may benefit from this information.

CONTACT DETAILS
@The Real Life Buyer
Email: david@thereallifebuyer.co.uk
Website: https://linktr.ee/thereallifebuyer

For Purchasing Consultancy services:
https://rlbpurchasingconsultancy.co.uk/
Email: contact@rlbpurchasingconsultancy.co.uk

Find and follow me @reallifebuyer on Facebook, Instagram, X, Threads and TikTok.

Click here for some Guest Courses - https://www.thereallifebuyer.co.uk/guest-courses/
Click here for some Guest Publications - https://www.thereallifebuyer.co.uk/guest-publications
Tune in to this pivotal episode of The MindShare Podcast, where we explore the dynamic real estate market this spring with Ryan Gilmour, Broker of Record at RE/MAX Realty Enterprises Inc. From the impacts of tariffs to the innovative uses of AI in real estate, Ryan sheds light on how agents can thrive in a fluctuating market. Discover strategic insights and expert advice to excel in your real estate career. Don't miss this deep dive into what it takes to succeed amidst the challenges and opportunities of today's market conditions.

We're joined by Ryan Gilmour, a top broker with extensive experience and insights into the real estate industry.

[6:31] Spring Market Forecast: Ryan discusses his predictions for this spring's real estate market and what trends to watch.
[12:46] Tariff Impact Analysis: Understand how current tariffs are affecting the real estate market and what long-term effects we might expect.
[15:56] Agent Concerns and Guidance: Insights into the most pressing questions from agents at Ryan's brokerage and the advice he's giving.
[21:51] Emerging Opportunities: Key opportunities for buyers and sellers this season that you should not overlook.
[25:56] Industry-wide Effects: How current market trends are reshaping the broader real estate landscape.
[30:13] Day-to-Day Brokerage Insights: Ryan shares his daily priorities and the core activities that drive his brokerage business.
[34:05] Organizational Strategies: Tips on staying organized and effective in a hectic business environment.
[38:14] The Role of Relationships in Modern Real Estate: Evaluating the importance of personal connections in a digitally dominated market.
[42:05] Lead Generation and Conversion Tactics: Effective strategies that Ryan's office uses to nurture and convert leads.
[49:39] Real Estate and AI: Exploring current and future implications of artificial intelligence in the industry.
[52:24] AI Risks and Precautions: What professionals in the field need to be cautious about regarding AI.
[59:10] Traits of Top Performers: Unpacking the key characteristics that define successful, award-winning realtors.
[1:01:48] Overcoming Challenges: Ryan's advice to anyone facing obstacles in their career with a can-do attitude.
[1:05:03] Future Plans for RE/MAX Realty Enterprises Inc and Specialists: What's next for Ryan's companies.
[1:09:58] Measuring Success: How Ryan defines a successful day in the world of real estate.
[1:14:01] Final Expert Advice: Essential actions real estate agents should take right now to make the most of this spring market.

Thanks for tuning in to this episode of The MindShare PodCast with our special guest: Ryan Gilmour, Broker of Record at RE/MAX Realty Enterprises Inc., as we talked about the real estate market, the impact of tariffs, and marketing technology including lead gen and AI.

Stay connected and empowered in your real estate journey with insights from the experts. Subscribe, share, and rate us on your favorite podcast platform. Visit www.mindshare101.com for more resources and connect with us to elevate your real estate career.

Get your FREE gift on my homepage at www.mindshare101.com just for tuning in!

I'd also be really grateful if you could take a quick second to go to www.ratethispodcast.com/mindshare101 and rate the show for me.

And if we haven't connected yet, send me a message!

Facebook: facebook.com/mindshare101
Instagram: instagram.com/davidgreenspan101
YouTube: youtube.com/@DavidGreenspan
LinkedIn: linkedin.com/in/mindshare101
Can we create machines that think—and still protect what makes us human? In this eye-opening conversation, Dr. Christopher DiCarlo, acclaimed philosopher and author of Building a God: The Ethics of Artificial Intelligence and the Race to Control It, lays out the philosophical and practical roadmap for ethical AI development. As artificial intelligence accelerates past human expectations, Dr. DiCarlo explores the ethical dilemmas and regulatory voids we now face—and how we can build systems that align with our highest values, not just our fastest code. From "chain of thought" in large language models to the hidden implications of information theory, this episode is a must-listen for anyone concerned about the future of AI and the delicate balance between progress and precaution.

Join in now to discover:
Why AI governance is no longer optional—but urgent.
How philosophy and science intersect to decode human reasoning.
What "relation of systems" means for understanding superintelligence.
How to pursue AI innovation while managing existential risks.

Whether you're an AI researcher, tech visionary, or just AI-curious—this one's for you. To learn more about Dr. DiCarlo and his ongoing work, be sure to visit his website! Episode also available on Apple Podcasts: https://apple.co/38oMlMr
Summary: In this episode, Ellis Pratt explores the critical issue of data privacy for technical writers using AI tools and chatbots. He delves into the potential risks, from data leaks and copyright infringement to compliance violations and intellectual property concerns. The episode also provides practical solutions and strategies for mitigating these risks, empowering technical writers to leverage AI responsibly and ethically.

Key Discussion Points:

The Promise and Peril of AI: AI offers significant productivity gains for technical writers (content creation, first drafts, automation of tasks), but introduces critical privacy risks.

Potential Risks of Using AI:
Data Leaks: Inputted data becoming part of the AI model, accessible to others.
Copyright Infringement: AI generating content based on competitor data.
Data Breaches: Risk of AI providers being hacked.
Data Sovereignty: Data stored in different countries potentially conflicting with regulations.
Compliance Violations: Risks related to regulated industries (healthcare, finance).
Intellectual Property Rights: Ambiguity over who owns AI-generated content.

Practical Solutions and Mitigation Strategies:
Sanitising Content: Replace sensitive data (API keys, product names) with placeholders (see the sketch after these notes).
Generic Examples: Use generic rather than actual customer data.
Limiting Data Input: Provide only the minimum amount of data required.
Review and Redact: Carefully review content before inputting to AI.
Check Public Domain Status: Determine if the content is already publicly available.
AI Provider Privacy Policies: Review data retention policies and opt-out options.
Choosing Secure Tools: Select tools with better data deletion options (e.g., Google Gemini AI Studio, Claude).
Managing Data Controls: Understand how to control data collection settings (e.g., ChatGPT).
Private/Managed LLMs: Consider private, self-hosted, or managed AI models for sensitive data.
Develop Policies and Procedures: Create guidelines for team use of AI, with tiered approaches based on document sensitivity.
Content Filters: Implement filters to check for sensitive information.
Audits and Assessments: Engage IT security for impact assessments and security audits.

Actionable Takeaways:
Prioritise Data Sanitisation: Make it a core practice before using any AI tool.
Review Privacy Policies: Understand the data handling practices of your AI providers.
Implement Security Measures: Protect proprietary and confidential information through policies, technology, and human oversight.
Collaborate with Security and Legal: Engage relevant internal teams to ensure compliance and minimize risk.
Start Small and Stay Informed: Gradually introduce AI with low-risk documentation and keep up to date on the latest privacy risks and solutions.

Quotes:
"AI and chatbots offer in technical writing…a huge promise of a way to be more efficient and more effective in what we do. But…we do need to be aware that there is a privacy risk, and we need to address that."
"AI…is both a powerful productivity tool and a potential risk. So we need to think about those two aspects and manage it."
"So we're going to be on a tightrope, a privacy tightrope."

Want Help Improving Your Documentation? Cherryleaf specializes in fixing developer portals and technical documentation. If you're struggling with user feedback, contact us at info@cherryleaf.com for expert guidance.

CC Flickr image: Stock Catalog
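To make the sanitising step above concrete, here is a minimal sketch in Python of swapping secrets and internal names for placeholders before text is pasted into an AI tool. The key pattern, email pattern, and the names "AcmeCloud" and "Project Falcon" are illustrative assumptions for the example, not something from the episode, and this is nowhere near a complete redaction list.

```python
import re

# Illustrative sketch only: replace obvious secrets and internal names
# with placeholders before any text leaves your machine for an AI tool.
PATTERNS = {
    r"sk-[A-Za-z0-9]{20,}": "[API_KEY]",        # OpenAI-style secret keys (assumed format)
    r"[\w.+-]+@[\w-]+\.[\w.-]+": "[EMAIL]",     # email addresses
}
INTERNAL_NAMES = {
    "AcmeCloud": "[PRODUCT]",                   # hypothetical product name
    "Project Falcon": "[CODENAME]",             # hypothetical project codename
}

def sanitise(text: str) -> str:
    """Swap sensitive substrings for placeholders, per the strategy above."""
    for pattern, placeholder in PATTERNS.items():
        text = re.sub(pattern, placeholder, text)
    for name, placeholder in INTERNAL_NAMES.items():
        text = text.replace(name, placeholder)
    return text

draft = "Ask jo@example.com for the AcmeCloud key sk-abc123def456ghi789jkl012"
print(sanitise(draft))
# -> Ask [EMAIL] for the [PRODUCT] key [API_KEY]
```

The same idea scales up into the content filters mentioned in the takeaways; the important design point is that sanitisation happens before the text reaches the AI provider, not after.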
In this episode of the Campus Technology Insider podcast, host Rhea Kelly, editor in chief of Campus Technology, discusses the Educause 2025 AI Landscape Study with Senior Researcher Jenay Robert. They delve into the study's history, methodology, and key findings, including shifts in attitudes towards AI, policy impacts, and the emerging digital divide between larger and smaller institutions. Robert also highlights the importance of effective communication, AI literacy, and community engagement in higher education. Listen as they explore how institutions are adapting to AI's growing presence and the ethical responsibilities involved.

00:00 Introduction and Guest Welcome
00:29 Overview of the Educause AI Landscape Study
02:07 Key Findings and Comparisons
04:07 New Questions and Methodology
08:41 AI Costs and Budgeting
13:46 Perceptions of AI Risks and Benefits
23:00 Final Thoughts and Recommendations
28:48 Conclusion and Farewell

Resource links:
Educause 2025 AI Landscape Study
Educause 2024 AI Landscape Study
Educause Shop Talk podcast
Educause Library on AI

Music: Mixkit
Duration: 29 minutes
Transcript: Coming Soon
Welcome to Insurance Covered, the podcast that covers everything insurance. In this episode Peter is joined by Chris Moore, head of Apollo ibott 1971, to discuss the underwriting of new AI risks.

In this episode we cover:
The work Chris and Apollo ibott 1971 do.
New technology leads to new risks, which leads to new insurance.
Insurance must adapt to new risks generated by AI.
The insurance industry needs to embrace innovation and change.
The role insurtechs play in this evolution of insurance.
What the future of insurance will look like.

We hope you enjoyed this episode. If you did, please subscribe to be notified when new episodes release.

Hosted on Acast. See acast.com/privacy for more information.
In this episode of Cybersecurity Today with Jim Love, explore the growing concerns surrounding DeepSeek AI's censorship and lack of guardrails, the rise of 'Shadow AI' in workplaces, and how cybercriminals exploit major cloud providers like AWS and Azure. Learn about a phishing scam targeting Microsoft single sign-on that's been undetected for six years, and get insights into the critical measures needed to safeguard against these evolving threats.

00:00 Introduction to Cybersecurity Today
00:25 DeepSeek AI: Censorship and Security Concerns
02:56 Shadow AI: The Rise of Unauthorized Generative Tools
05:05 Cloud Providers Exploited by Cybercriminals
07:31 Phishing Scams Targeting Microsoft Single Sign-On
09:03 Conclusion and Listener Engagement
This post should not be taken as a polished recommendation to AI companies; it should instead be treated as an informal summary of a worldview. The content is inspired by conversations with a large number of people, so I cannot take credit for any of these ideas. For a summary of this post, see the thread on X.

Many people write opinions about how to handle advanced AI, which can be considered "plans." There's the "stop AI now" plan. On the other side of the aisle, there's the "build AI faster" plan. Some plans try to strike a balance with an idyllic governance regime. And others have a "race sometimes, pause sometimes, it will be a dumpster-fire" vibe.

---

Outline:
(02:33) The tl;dr
(05:16) 1. Assumptions
(07:40) 2. Outcomes
(08:35) 2.1. Outcome #1: Human researcher obsolescence
(11:44) 2.2. Outcome #2: A long coordinated pause
(12:49) 2.3. Outcome #3: Self-destruction
(13:52) 3. Goals
(17:16) 4. Prioritization heuristics
(19:53) 5. Heuristic #1: Scale aggressively until meaningful AI software R&D acceleration
(23:21) 6. Heuristic #2: Before achieving meaningful AI software R&D acceleration, spend most safety resources on preparation
(25:08) 7. Heuristic #3: During preparation, devote most safety resources to (1) raising awareness of risks, (2) getting ready to elicit safety research from AI, and (3) preparing extreme security
(27:37) Category #1: Nonproliferation
(32:00) Category #2: Safety distribution
(34:47) Category #3: Governance and communication
(36:13) Category #4: AI defense
(37:05) 8. Conclusion
(38:38) Appendix
(38:41) Appendix A: What should Magma do after meaningful AI software R&D speedups

The original text contained 11 images which were described by AI.

---

First published: January 29th, 2025

Source: https://www.lesswrong.com/posts/8vgi3fBWPFDLBBcAx/planning-for-extreme-ai-risks

---

Narrated by TYPE III AUDIO.

---

Images from the article:
In this Talk To Me About A&E podcast, we review the recently released ACEC Guidelines on the use of AI by Design Professional Firms. Dan Buelow is joined by Mark Blankenship, Director of Risk Management for WTW A&E and co-chair of the subcommittee on AI for the ACEC Risk Management Committee. Dan and Mark discuss the inherent risks of AI when it comes to the design profession and important guidelines firms can adopt in managing this evolving technology – specifically: Governance and Approval Process; Sources and Tools; and Guidelines for Practice.
Onstage at Outsolve's HR Gumbo Conference in New Orleans, Keith Sonderling, the former Commissioner of the EEOC, joins Chad & Cheese to discuss major trends in employment discrimination and the evolving role of AI in HR. He notes a significant spike in discrimination charges post-recession, particularly age discrimination, followed by increases in sexual harassment, equal pay, and racial discrimination claims due to various societal movements and events. Sonderling highlights the broad applicability of the Executive Order on Cybersecurity across all sectors and the challenges of managing discrimination claims, especially with the rise in religious exemptions post-COVID vaccine mandates. He also addresses the complexities of returning to office post-pandemic, disability discrimination, particularly mental health claims, and generational workplace dynamics. The conversation delves into the legal implications of AI in hiring, emphasizing the need for bias audits and the potential for AI to reduce traditional hiring biases if properly implemented. Lastly, the guys touch on the legislative landscape for AI in HR and the risks of fraud in emerging tech like the metaverse, concluding with the importance of clear policies and verification processes to ensure fairness and compliance.
Open Tech Talks: Technology worth Talking | Blogging | Lifestyle
Welcome to Open Tech Talks, where we explore AI, cybersecurity, and the evolving role of technology in businesses. In this episode, we speak with Terry Ziemniak, a Fractional CISO and Partner at TechCXO, who has worked with companies across the healthcare, retail, manufacturing, and government sectors. The discussion covers cybersecurity risks, strategies for companies of all sizes, the challenges of adopting Generative AI, and governance frameworks for mitigating AI-related risks. Whether you're a business leader, security professional, or AI enthusiast, this episode provides practical insights into managing security in today's rapidly changing digital landscape.

Before going further, let's understand what these fractional roles are: fractional leadership is an emerging trend that allows companies to hire high-level expertise without committing to a full-time executive role. For example, fractional CISOs provide the same security leadership as traditional CISOs, but on a part-time or project basis, making cybersecurity expertise accessible to businesses that may not have the budget for a full-time role. By following the best security and AI governance practices, organizations can protect themselves against threats while leveraging technology to drive growth.

Episode # 154

Today's Guest: Terry Ziemniak, Fractional CISO
Terry Ziemniak is a Partner in TechCXO's Product and Technology practice. He is a cybersecurity leader sought by boards, investors, and senior management teams to assist healthcare, service organizations, retail, and manufacturing companies—as well as government agencies and contractors—in his role as a fractional CISO.
Website: TechCXO
LinkedIn: Terry

What Listeners Will Learn:
Understanding Fractional Roles in Cybersecurity: How businesses leverage fractional Chief Information Security Officers (CISOs) for cost-effective security leadership.
Cybersecurity Landscape: Current threats, vulnerabilities, and trends shaping cybersecurity in different industries.
Security Practices in Large vs. Small Companies: The challenges faced by small and medium-sized businesses (SMBs) in implementing cybersecurity measures compared to larger organizations.
Challenges in Adopting Generative AI: Key obstacles companies face when integrating AI and large language models (LLMs) into their workflows.
Generative AI Risk and Governance: The importance of AI governance frameworks and how businesses can structure policies to ensure compliance and security.
Cybersecurity Strategies for SMBs: Practical steps for small businesses to strengthen security and mitigate risks without requiring large budgets.

Resources: TechCXO
Send us a text

Are you prepared to protect your finances in the digital age? In this eye-opening episode, I sit down with fraud prevention expert Robert Persichitte to uncover how AI and cryptocurrency are reshaping the world of scams and fraud.

We explore:
The rise of AI-powered scams and how they trick even the savviest individuals.
Lessons from major financial failures like Silicon Valley Bank and crypto collapses.
The critical difference between scams, rip-offs, and outright fraud.
Proven steps to secure your money, personal data, and passwords against modern threats.

If you've ever wondered how to stay one step ahead of fraudsters or navigate the risks of crypto and AI safely, this episode is a must-listen.

Connect With Robert:
Website: delagify.com
LinkedIn: https://www.linkedin.com/company/delagify/

Support the show

HOW TO SUPPORT THE WALK 2 WEALTH PODCAST:
1. Subscribe, Rate, & Review us on Apple Podcasts, Spotify, YouTube, or your favorite podcast platform.
2. Share Episodes with your family, friends, and co-workers.
3. Whether you're just starting your business or your business is established, ChatGPT can help you take your business to the next level. Get Instant Access To My List of Top 10 ChatGPT Prompts To Save You Time, Energy, & Money: HTTPS://WWW.STOPANDSTARE.MEDIA/AI
It's a compendium of highlights from just one season in the long-running, award-winning PEACE TALKS RADIO series. You'll hear clips from our series about "Bridging Political Division", as well as from our programs on "Solutions to Gun Violence", "Intergenerational Connection", "Healing through Psychedelics", "AI: Risks and Benefits for Peace", "When Digital Addiction Threatens Family Peace", and more.
In this episode of AI, Government, and the Future, host Marc Leh engages in an insightful conversation with Dr. Eva-Marie Muller-Stuler, Partner at EY and leader of the Data & AI practice for the Middle East and North Africa. Dr. Muller-Stuler brings her extensive experience in AI governance and data science to discuss the critical intersection of AI, ethics, and democracy.
A Note from James:
Money GPT. I mean, we've all heard about the incredible potential of AI, and I've shared my optimism about its future in many episodes. But today, we have Jim Rickards back on the show, and he's here to offer a more skeptical perspective. You might remember our earlier discussion where Jim laid out a masterclass on the economy, its history, and what might unfold over the next few years. Now, he's back with insights from his new book, Money GPT, diving into what we should watch out for when it comes to AI and its impact on the economy. Let's get into this compelling discussion with Jim Rickards.

Episode Description:
In this episode, James Altucher welcomes back bestselling author Jim Rickards to discuss his latest book, Money GPT. Jim delves into the transformative power of AI, highlighting both its immense benefits and the potential risks it poses, particularly to the global economy and financial markets. Drawing on his experience building AI models for the CIA, Jim explains how AI is reshaping industries and warns of its unintended consequences. The conversation spans the accelerating role of AI in finance, its vulnerabilities, and its parallels with nuclear decision-making processes. Whether you're optimistic or cautious about AI, this episode will challenge your perspective with fresh insights and historical context.

What You'll Learn:
How AI is amplifying financial market volatility and increasing systemic risks.
The concept of "cybernetics" as a solution to mitigate market crashes.
The differences between AI's success in music and its limitations in writing.
Why AI's self-referential feedback loops could worsen over time.
The parallels between AI in finance and its potential misuse in nuclear decision-making.

Timestamped Chapters:
[00:01:30] Introduction: Revisiting AI and its role in the economy.
[00:03:24] The dual nature of AI: Power and risk.
[00:06:50] GPT breakthroughs and the future of language models.
[00:11:34] Why AI excels in music but struggles with writing.
[00:19:46] The rise of passive investing and its dangers.
[00:23:14] Cybernetics: A strategy to stabilize financial markets.
[00:39:15] The risks of removing humans from critical decision-making chains.
[00:47:53] Will AI replace jobs faster than it creates new ones?

Additional Resources:
Jim Rickards' book, Money GPT.
Related episode: The History and Future of the Economy with Jim Rickards.
Exclusive interview with Perry Carpenter, a multi-award-winning author, podcaster, and cybersecurity expert. Perry's latest book, FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions, offers invaluable insights into navigating the complexities of AI-driven deception.

Send us a text

Have a Guest idea or Story for us to Cover? You can now text our Podcast Studio direct. Text direct (904) 867-446

Get peace of mind. Get Competitive-Get NetGain. Contact NetGain today at 844-777-6278 or reach out online at www.NETGAINIT.com

Imagine setting yourself apart from the competition because your organization is always secure, always available, and always ahead of the curve. That's NetGain Technologies – your total one source for cybersecurity, IT support, and technology planning.
Tresa Stevens, regional head of cyber at Allianz Commercial, discusses the surge in large cyber claims, attributing it to increased data breaches, evolving AI-powered threats, and the growing need for robust cyber hygiene practices.
On this episode of The Insightful Leader, why your trade secrets may not be safe, and other considerations.
AI has been integrated into almost every aspect of our lives, from the everyday software we use at work to the algorithms that determine what content is recommended to us at home. While extraordinary in its capabilities, it isn't infallible, and it opens everyone up to new and emerging risks. Legislation and regulation are finally catching up to the rapid adoption of this technology, with the EU AI Act and new best practice standards such as ISO 42001. For those looking to integrate AI in a safe and ethical manner, ISO 42001 may be the answer.

Today Rachel Churchman, Technical Director at Blackmores, explains what ISO 42001 is, why you should conduct an ISO 42001 gap analysis and what's involved in taking the first step towards ISO 42001 implementation.

You'll learn
· What is ISO 42001?
· What are the key principles of ISO 42001?
· Why is ISO 42001 important for companies either using or developing AI?
· Why conduct an ISO 42001 Gap Analysis?
· What should you be looking at in an ISO 42001 Gap Analysis?

Resources
· Register for our ISO 42001 Workshop
· Isologyhub

In this episode, we talk about:

[00:30] Join the isologyhub – To get access to a suite of ISO related tools, training and templates, simply head on over to isologyhub.com to either sign up or book a demo.

[02:05] Episode summary: Rachel Churchman joins Steph to discuss what ISO 42001 is, its key principles and the importance of implementing ISO 42001 regardless of whether you're developing AI or simply utilising it. Rachel will also explain the first step towards implementation – an ISO 42001 gap analysis.

[02:45] Upcoming ISO 42001 Workshop – We have an upcoming ISO 42001 workshop where you can learn how to complete an AI System Impact Assessment, a key tool to help you effectively assess the potential risks and benefits of utilising AI. Rachel Churchman, our Technical Director, will be hosting that workshop on the 5th December at 2pm GMT, but places are limited, so make sure you register your place sooner rather than later!

[03:20] The impact of AI – AI is everywhere, and until very recently it had largely outpaced any sort of regulation or legislation. Both are needed, as AI is like any other technology and brings its own risks, which is why a best practice Standard for AI management has been created. If you'd like a more in-depth breakdown of ISO 42001, check out our previous episodes: 166 & 173

[04:30] A brief summary of ISO 42001 – ISO 42001 is an internationally recognised Standard for developing an Artificial Intelligence Management System. It provides a comprehensive framework for organisations to establish, implement, maintain, and continually improve how they develop or consume AI in their business. It aims to ensure that AI risks are understood and mitigated, and that AI systems are developed or deployed in an ethical, secure, and transparent manner, taking a fully risk-based approach to responsible use of AI. Much like other ISO Standards, it follows the High-Level Structure and can therefore be integrated with existing ISO management systems, as many of the core requirements are very similar in nature.

[05:45] Why is ISO 42001 important for companies both developing and using AI? – AI is now commonplace in our world, and has been for some time. A good example is the use of Alexa or Siri – both of these are large language AI models that we all use routinely in our lives.
But AI is now being introduced into many technologies that we consume in our working lives – all designed to help make us more efficient and effective. Some examples being:
· Microsoft 365 Copilot
· GitHub Copilot
· Google Workspace
· Adobe Photoshop
· Search engines, i.e. Google

Organisations need to be aware of where they're consuming AI in their business, as it may have crept in without them being fully aware. Awareness and governance of AI is crucial for several reasons:

For companies using AI, they need to ensure they have assessed the potential risks of the AI, such as unintended consequences and negative societal impacts, or potential commercial data leakage. They also need to ensure that, if they are using AI to support decision making, decisions made or supported by AI systems are fair and unbiased. It's not all about risk – organisations can also use AI to streamline processes, helping them become more efficient and effective, or to support innovation in ways previously not considered.

For companies developing AI, the Standard promotes the ethical development and deployment of AI systems, ensuring they are fair, transparent, and accountable. It provides a structured approach to the risk assessment and governance associated with AI, covering issues such as bias, data privacy breaches, and security vulnerabilities.

And for all, using ISO 42001 as the best practice framework, organisations can ensure that their AI initiatives are aligned with ethical principles, legal requirements, and industry best practices. This will ultimately lead to more trustworthy, reliable, and beneficial AI systems for all.

[09:00] What are the key principles outlined in ISO 42001? –
· Fairness and Non-Discrimination – ensuring AI systems treat all individuals and groups fairly and without bias.
· Transparency and Explainability – making AI systems understandable and accountable by providing clear explanations of their decision-making processes.
· Privacy and Security – protecting personal data and privacy while ensuring the security of AI systems.
· Safety and Security – prioritising the safety and well-being of individuals and the environment by mitigating potential risks associated with AI systems.
· Environmental & Social – considering the impact of AI on the environment and society, promoting sustainable and responsible practices.
· Accountability and Human Oversight – maintaining human control and responsibility for AI systems, ensuring they operate within ethical and legal boundaries. You'll often hear the term 'human in the loop'. This is vital to ensure that AI output is sanity checked by a human, to ensure it hasn't hallucinated or 'drifted' in any way.

[10:00] Clause 7.4 Communication – The organisation shall determine the internal and external communications relevant to the system, including what should be communicated, when, and to whom.

[11:10] Why conduct an ISO 42001 Gap Analysis? What is the main aim? – Any gap analysis is a strategic planning activity to help you understand where you are, where you want to be and how you're going to get there. The ISO 42001 gap analysis will pinpoint areas where your AI practices fall short of the ISO 42001 requirements. It involves a systematic review of how your organisation uses or develops AI, assessing your current AI management practices against the requirements of the Standard.
This analysis will then help you to identify any "gaps" where your current practices do not fully meet the Standard's requirements. It also helps organisations to understand 'what good looks like' in terms of responsible use of AI. It will help you to prioritise improvement areas that may require immediate attention, and those that can be addressed in a phased approach. It will help you to understand and mitigate the risks associated with AI. It will also help you to develop a roadmap for compliance, with clear actions identified that can then be project managed through to completion, and, as with all ISO standards, it will support and enhance AI governance.

[13:15] Does an ISO 42001 gap analysis differ from gap analysis for other standards? – Ultimately, no. The ISO 42001 gap analysis doesn't differ massively from other ISO standard gap analyses, so anyone who already holds an ISO Standard and has been through the process will be familiar with it. In terms of likeness, ISO 42001 is similar in nature to ISO 27001, in as much as there is a supporting 'Annex' of controls and objectives that need to be considered by the organisation, so the questions being asked will extend beyond the standard High-Level Structure format. Now is probably a good time to note that the Standard itself is very informative and includes additional annex guidance, including:
· implementation guidance for the specific AI controls,
· an Annex of potential AI-related organisational objectives and risk sources,
· and an Annex that provides guidance on use of the AI management system across domains and sectors, and on integration with other management system standards.

[14:55] What should people be looking at in an ISO 42001 gap analysis? – The gap analysis will include areas such as the 'context' of your organisation, to better understand what it is that you do, the issues you are facing internally and externally in relation to AI – both now and in the reasonably foreseeable future – and how you currently engage with AI in your business. This will help to identify your role in terms of AI. It will also look at all the main areas typically captured within any ISO standard, including leadership and governance, policy, roles and responsibilities, AI risks and your approach to risk assessment and treatment, and AI system impact assessments. It also looks at AI objectives, the support resources you have in place to manage requirements, awareness within your business of AI best practice and use, through to KPIs, internal audit, management review and how you manage and track issues through to completion in your business.

The AI-specific controls look more in-depth at policies related to AI, your internal organisation in relation to key roles, responsibilities and the reporting of concerns, the resources for AI systems, how you assess the impacts of AI systems, the AI system lifecycle (AI development), data for AI systems, information provided to interested parties of AI systems, the use of AI systems, and third-party and customer relationships.

[18:10] Who should be involved in an ISO 42001 gap analysis? – An ISO 42001 gap analysis looks at AI from a number of different angles, from organisational governance (strategic plans, policies and risk management), through training and awareness of AI for all staff, to technical knowledge of how and where AI is either used or potentially developed within the organisation.
This means that it is likely that multiple roles will need to be involved over the duration of a gap analysis. At Blackmores we always provide a gap analysis 'agenda' that clearly defines what will be covered over the duration of the gap analysis, and who typically could be involved in the different sessions. We find this is the best way to help organisations plan the support needed to answer all the questions required. It's also important to treat the gap analysis as a 'drains up' review, to get the most benefit out of it. This will ensure that all gaps are identified, so that a plan can then be devised to help the organisation bridge them, putting it on the path to AI best practice for its business.

If you'd like to find out more about ISO 42001 implementation, register for our upcoming workshop on the 5th December 2024.

If you'd like to book a demo for the isologyhub, simply contact us and we'd be happy to give you a tour.

We'd love to hear your views and comments about the ISO Show. Here's how:
● Share the ISO Show on Twitter or LinkedIn
● Leave an honest review on iTunes or Soundcloud. Your ratings and reviews really help and we read each one.

Subscribe to keep up-to-date with our latest episodes: Stitcher | Spotify | YouTube | iTunes | Soundcloud | Mailing List
Episode 54 | AI, CIO Challenges

The Big Themes:

Community Summit NA: Wayne talks about attending Community Summit and how the event stands out by offering user-driven insights that focus on real-world experiences with Microsoft ecosystems, rather than vendor-led marketing. The Summit provides a platform where users share practical, firsthand knowledge, allowing attendees to gain a deeper understanding of how Microsoft technologies work in real business settings. By fostering a community of real-world users, the Summit helps businesses navigate the complexities of implementing new technologies.

The CIO's evolving role amid AI demands: There's growing pressure on CIOs to adopt AI-driven strategies, reflecting a shift where AI, in particular generative AI, is no longer just an optional tool but an expectation from executives and boards. This trend is reshaping industries differently. While sectors like finance can afford significant R&D investments in AI, others, such as retail, face tighter budgets and must be more strategic.

Balancing innovation with cost control in AI investments: They discuss the growing complexity and cost of AI investments, with Wayne urging CIOs to carefully evaluate the financial implications of adopting AI technologies. He points out that as AI tools become more integrated into business operations, their costs, including ongoing subscription fees, are likely to rise. CIOs must be strategic about which AI tools and platforms they adopt, ensuring that they do not overcommit financially.

The Big Quote: "We're drowning in information, we've got to move faster, and some of the GenAI and traditional AI stuff is taking a lot . . . away from us so we can move to operate at a higher level of reasoning, of thinking, of overseeing."
“Artificial Intelligence” gets all the buzz today.
In today's episode of The Banking & Payments Show podcast, we discuss the potential risks that AI poses to financial institutions. In the 'Headlines' segment, we examine an article from BBC.com titled "Could AI Trading Bots Transform the World of Investing?", which discusses risk-related issues such as AI bots making financial decisions autonomously. In the 'Rankings' segment, we rank the 5 AI risk categories that financial institutions must address in order of importance. Join the conversation as host Rob Rubin chats with analysts Jacob Bourne and Grace Broadbent.

Follow us on Instagram at: https://www.instagram.com/emarketer/
For sponsorship opportunities contact us: advertising@emarketer.com
For more information visit: https://www.emarketer.com/advertise/
Have questions or just want to say hi? Drop us a line at podcast@emarketer.com
For a transcript of this episode click here: https://www.emarketer.com/content/podcast-banking-payments-show-ai-risks-banks

© 2024 EMARKETER

TikTok for Business is a global platform designed to give brands and marketers the solutions to be creative storytellers and meaningfully engage with the TikTok community. With solutions that can deliver seamlessly across every marketing touchpoint, TikTok for Business offers brands an opportunity for rich storytelling through a portfolio of full-screen video formats that appear natively within the user experience. Visit tiktok.com/business for more information.
California is suing ExxonMobil over the oil giant's alleged "campaign of deception" to convince the public that recycling is a viable solution for plastic waste, when less than 10% of plastics are recycled. Also, to meet the tremendous energy needs of artificial intelligence, Microsoft has inked a major power purchase deal with the owners of Three Mile Island in Pennsylvania, where a nuclear power reactor underwent a partial meltdown in 1979. Its unaffected twin reactor operated until 2019 and could provide a carbon-free source of power for AI, if it can get past the hurdles of getting the plant back online. And for students and scientists who are transgender or gender nonconforming, field research can bring unique challenges and risks. We look at how institutions can help ensure field research settings are safer and more inclusive of trans people.

What issues are you most interested in having Living on Earth cover in the 2024 election season? Let us know by sending us a written or audio message at comments@loe.org.

Learn more about your ad choices. Visit megaphone.fm/adchoices
A recent survey reveals that while 80% of IT leaders express confidence in their recovery strategies post-ransomware attacks, nearly 70% have paid ransoms despite having policies against it. The episode emphasizes the importance of proactive defense strategies, as Tenable's research shows that only 3% of vulnerabilities pose significant risks, urging organizations to prioritize their cybersecurity efforts effectively.

Host Dave Sobel also addresses the alarming rise in ransomware incidents, which increased by 33% globally over the past year, with the U.S. and UK experiencing significant spikes. The discussion includes insights into the tactics employed by attackers, such as living-off-the-land techniques that allow them to evade detection. Additionally, the episode highlights the shift in scam operations towards smaller, more targeted schemes, reflecting a trend of increased efficiency and profitability for cybercriminals.

The episode further explores the U.S. Department of Labor's expanded cybersecurity guidance for employee benefit plans, emphasizing the fiduciary responsibility to mitigate risks. The new guidelines outline best practices for maintaining cybersecurity programs and conducting risk assessments. Sobel also discusses the launch of a new incident reporting portal by CISA, encouraging organizations to report cyber incidents to enhance community resilience against threats.

Finally, the episode delves into the findings of a Washington University study that uncovers significant data privacy risks associated with GPT applications in OpenAI's GPT store. The study reveals that a majority of these applications fail to disclose their data collection practices adequately, raising concerns about user data exposure. Sobel concludes by discussing vulnerabilities in AI platforms, such as Microsoft 365 Copilot, and the need for IT service providers to focus on AI-specific security strategies to ensure compliance and protect sensitive information.

Four things to know today
00:00 Ransomware Recovery Gaps Expose Overconfidence: Why IT Providers Must Focus on Real-World Incident Testing
04:51 CISA's Incident Reporting Portal and Expanded DOL Guidance: Why IT Providers Must Enhance Cybersecurity Services
08:26 Washington University Study Uncovers Data Privacy Risks in GPT Store
10:21 CrowdStrike and Intel Face Critical Moments

Supported by:
https://timezest.com/mspradio/
https://www.coreview.com/msp

Pulseway Event: https://www.pulseway.com/v2/land/webinar-nexus-msp?rfid=vendor/?partnerref=vendor

All our Sponsors: https://businessof.tech/sponsors/

Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/

Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/

Support the show on Patreon: https://patreon.com/mspradio/

Want our stuff? Cool Merch? Wear "Why Do We Care?" - Visit https://mspradio.myspreadshop.com

Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessoftech.bsky.social
Guest: Sander Schulhoff, CEO and Co-Founder, Learn Prompting [@learnprompting]
On LinkedIn | https://www.linkedin.com/in/sander-schulhoff/
____________________________
Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]
On ITSPmagazine | https://www.itspmagazine.com/sean-martin

View This Show's Sponsors
___________________________

Episode Notes

In this episode of Redefining CyberSecurity, host Sean Martin engages with Sander Schulhoff, CEO and Co-Founder of Learn Prompting and a researcher at the University of Maryland. The discussion focuses on the critical intersection of artificial intelligence (AI) and cybersecurity, particularly the role of prompt engineering in the evolving AI landscape. Schulhoff's extensive work in natural language processing (NLP) and deep reinforcement learning provides a robust foundation for this insightful conversation.

Prompt engineering, a vital part of AI research and development, involves creating effective input prompts that guide AI models to produce desired outputs. Schulhoff explains that the diversity of prompt techniques is vast and includes methods like chain of thought, which helps AI articulate its reasoning steps to solve complex problems. However, the conversation highlights that there are significant security concerns that accompany these techniques.

One such concern is the vulnerability of systems when they integrate user-generated prompts with AI models, especially those prompts that can execute code or interact with external databases. Security flaws can arise when these systems are not adequately sandboxed or otherwise protected, as demonstrated by Schulhoff through real-world examples like MathGPT, a tool that was exploited to run arbitrary code by injecting malicious prompts into the AI's input.

Schulhoff's insights into the AI Village at DEF CON underline the community's nascent but growing focus on AI security. He notes an intriguing pattern: many participants in AI-specific red teaming events were beginners, which suggests a gap in traditional red teamers' familiarity with AI systems. This gap necessitates targeted education and training, something Schulhoff is actively pursuing through initiatives at Learn Prompting.

The discussion also covers the importance of studying and understanding the potential risks posed by AI models in business applications. With AI increasingly integrated into various sectors, including security, the stakes for anticipating and mitigating risks are high. Schulhoff mentions that his team is working on Hack A Prompt, a global prompt injection competition aimed at crowdsourcing diverse attack strategies. This initiative not only helps model developers understand potential vulnerabilities but also furthers the collective knowledge base necessary for building more secure AI systems.

As AI continues to intersect with various business processes and applications, the role of security becomes paramount. This episode underscores the need for collaboration between prompt engineers, security professionals, and organizations at large to ensure that AI advancements are accompanied by robust, proactive security measures.
By fostering awareness and education, and through collaborative competitions like Hack A Prompt, the community can better prepare for the multifaceted challenges that AI security presents.

Top Questions Addressed
What are the key security concerns associated with prompt engineering?
How can organizations ensure the security of AI systems that integrate user-generated prompts?
What steps can be taken to bridge the knowledge gap in AI security among traditional security professionals?
___________________________
Sponsors
Imperva: https://itspm.ag/imperva277117988
LevelBlue: https://itspm.ag/attcybersecurity-3jdk3
___________________________
Watch this and other videos on ITSPmagazine's YouTube Channel
Redefining CyberSecurity Podcast with Sean Martin, CISSP playlist:
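As a rough illustration of the bug class behind the MathGPT example mentioned in the notes, here is a minimal sketch in Python; it is an assumption about how this kind of flaw typically looks, not MathGPT's actual code. The unsafe version hands model output straight to eval(), so a prompt-injected payload executes; the safer version parses the output and allows only arithmetic AST nodes.

```python
import ast
import operator

# Unsafe pattern (illustrative of the flaw class, not MathGPT's code):
# whatever the model returns is executed as Python, so prompt-injected
# output like "__import__('os').system('...')" would run on the server.
def unsafe_calc(model_output: str):
    return eval(model_output)

# One common mitigation: parse the output and whitelist arithmetic nodes.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_calc(model_output: str):
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("disallowed expression")  # anything non-arithmetic is rejected
    return walk(ast.parse(model_output, mode="eval"))

print(safe_calc("2 * (3 + 4)"))   # 14
print(safe_calc("-2 ** 3"))       # -8
# safe_calc("__import__('os')")   # raises ValueError instead of executing
```

Allowlisting or sandboxing of this kind is exactly the gap the episode flags: any system that combines user-shaped prompts with code execution has to treat the model's output as hostile input.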
How can CPA firms maximize the use of generative artificial intelligence (AI) while mitigating its risks? CNA Insurance's Sarah Ference discusses concerns about data privacy, bias, and the reliability of generative AI's content, as well as steps firms can take to ensure that these tools are used in a responsible and beneficial manner.

Resources:
AI/Automation Knowledge Hub
Generative AI and risks to CPA firms
CNA Insurance

The information, examples and suggestions presented in this podcast have been developed from sources believed to be reliable, but they should not be construed as legal or other professional advice. CNA accepts no responsibility for the accuracy or completeness of this podcast and recommends consultation with competent legal counsel and/or other professional advisors before applying this material in any particular factual situation. This material is for illustrative purposes and is not intended to constitute a contract. Please remember that only the relevant insurance policy can provide the actual terms, coverages, amounts, conditions and exclusions for an insured. All products and services may not be available in all states and may be subject to change without notice. "CNA" is a registered trademark of CNA Financial Corporation. Certain CNA Financial Corporation subsidiaries use the "CNA" trademark in connection with insurance underwriting and claims activities. Copyright © 2024 CNA. All rights reserved.
Building safe and capable models is one of the greatest challenges of our time. Can we make AI work for everyone? How do we prevent existential threats? Why is alignment so important? Join Professor Hannah Fry as she delves into these critical questions with Anca Dragan, lead for AI safety and alignment at Google DeepMind. For further reading, search "Introducing the Frontier Safety Framework" and "Evaluating Frontier Models for Dangerous Capabilities".

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw
Camera Director and Video Editor: Tommy Bruce
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind

Want to share feedback? Why not leave a review on your favorite streaming platform? Have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes.
Takeaways
Increased reliance on AI raises concerns about the potential loss of skills and job elimination or displacement.
Cost cutting and aggressive store closures are driven by the belief that AI can replace human talent.
Cyber attacks and deepfakes pose significant risks to organizations and individuals.
The environmental impact of AI's energy consumption is a growing concern.
The erosion of creativity and originality is a potential consequence of excessive reliance on AI.
Misinformation and biased decision-making are risks associated with AI-driven data analysis.
Reskilling employees and protecting intellectual property are crucial in the AI era.
Finding the right balance between AI and human critical thinking is essential for successful risk mitigation.

Chapters
00:00 Introduction and Overview
01:32 Concerns over Skills and Job Elimination
07:14 The Risks of Cyber Attacks and Deepfakes
08:40 The Environmental Impact of AI's Energy Consumption
12:46 The Erosion of Creativity and Originality
14:39 The Risks of Misinformation and Biased Decision-Making
15:35 Reskilling Employees and Protecting Intellectual Property
16:32 Balancing AI and Human Critical Thinking
17:30 Closing Remarks
Timestamps:
0:00 MadebyGoogle '24
0:17 Pixel 9 / 9 Pro / 9 Pro XL / 9 Pro Fold
1:10 Pixel Watch 3, Pixel Buds Pro 2
1:56 Pixel 9 AI features
2:54 Gemini Live
3:34 Ryzen 9 9950X reviews, other Ryzen leaks
5:04 Meta shuts down CrowdTangle
6:58 QUICK BITS INTRO
7:10 Grok-2 with image generator
7:53 Valve confirms SteamOS on ROG Ally
8:27 CUDA translation layer ZLUDA shut down
9:04 Deep-Live-Cam
9:45 MIT lab collects AI Risks like Pokémon
News Sources: https://lmg.gg/x2Wtu
Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode of AI, Government, and the Future, host Marc Leh engages in an insightful conversation with Dr. Eva-Marie Muller-Stuler, Partner at EY and leader of the Data & AI practice for the Middle East and North Africa. Dr. Muller-Stuler brings her extensive experience in AI governance and data science to discuss the critical intersection of AI, ethics, and democracy.
Worried about AI taking over your coaching practice? You're not alone. Cathy Smith uncovers the growing concerns around AI, sharing real-life examples of how relying too much on it can backfire, as it did for some attendees at a recent awards night. But before you ditch AI altogether, learn how to make it work for you without losing your authentic voice. Get insider tips on using AI as your secret weapon, without letting it take control. This episode is packed with practical advice and thought-provoking insights that every coach needs to hear. Curious? Episode 255 of Small Business Talk for Coaches is one you can't afford to miss.
Support the show: https://smallbusinesstalk.com.au
See omnystudio.com/listener for privacy information.
Each week, the leading journalists in legal tech choose their top stories of the week to discuss with our other panelists. This week's topics:
00:00 - Introductions
03:20 - The Last Roll: Reflecting On 4 Years Of Change (Selected by Niki Black)
17:03 - Generative AI Risk in Legal Research: Is the Fault in the Technology or in Ourselves? Answer is BOTH (Selected by Jean O'Grady)
24:30 - New Legal Ethics Opinion Cautions Lawyers: You 'Must Be Proficient' In the Use of Generative AI (Selected by Bob Ambrogi)
38:23 - Supreme Court on Clean Air and Water (Selected by Joe Patrice)
44:00 - AI benchmarking group to form (Selected by Niki Black)
Slowly but surely, AI is taking over. What does it mean to live in an age where we can outsource our thinking to machines? According to Tomas Chamorro-Premuzic, it's no less than a fundamental restructuring of what it means to be human and a questioning of our essence. Learn how to future-proof yourself and maintain what makes us human.

"If you want to future-proof yourself in the age of AI … the worst thing you can do is be lazy."

"If we are at the mercy of AI, free will isn't even an illusion anymore. It's just completely gone."
In this solo episode of AI, Government, and the Future, host Alan Pentz explores the critical intersection of AI and national security. He discusses recent developments in AI, highlighting the work of Leopold Aschenbrenner and the potential for an "intelligence explosion." Alan examines the growing importance of AI in national security, the merging of consumer and national security technologies, and the challenges of AI alignment. He emphasizes the need for a national project approach to AI development and the importance of maintaining a technological edge over competitors like China.
A new compliance cottage industry has grown up around artificial intelligence. We are at such an early stage of AI development that companies are still figuring out how to employ the technology. Some industries, however, such as financial institutions, have been using AI for fraud detection and related tasks, and these early adopters will likely set the tone for AI compliance practices. AI holds terrific promise, but much of the hype surrounding it is just that: hype. Until there is more certainty surrounding AI technology, we will witness a lot of bloviating. Even so, corporate boards, senior executives, and business developers need to pay attention until the dust settles. The AI industry is moving so fast that the sooner we start to focus, the nimbler our response will be.

Luckily, ethics and compliance principles are easily adaptable to AI risks. In this episode of Corruption, Crime, and Compliance, Michael Volkov discusses how the compliance profession is more than capable of building effective compliance programs around AI operations.

Financial institutions have been using AI for fraud detection and other issues. They are at the forefront of developing compliance practices around AI.
Companies need to embrace AI's promise and not get overwhelmed by the hype. Corporate boards, senior executives, and business developers must pay attention until the dust settles.
The AI industry is moving fast, and companies need to focus on what's happening. Compliance has to be nimble and quick, just like the technology.
Like every aspect of a business, any new technology presents risks, and AI certainly presents risks that need to be mitigated. This, in turn, leads to the necessary question: how should a company structure its AI risk and compliance program?
AI can be a very productive tool. It can reduce costs and increase efficiency, and more efficient companies help the economy expand and create new opportunities for growth.
Financial institutions, tech companies, pharmaceutical, medical device, and transportation logistics industries are likely to be significant users of AI technology.
Generative AI use may increase the risk of fraud, and companies will need to budget for risk mitigation costs and capabilities.
Compliance professionals have the intelligence, professional capabilities, and integrity to rise to the challenge of AI technology, much as they do when onboarding a third party.

Resources
Michael Volkov on LinkedIn | Twitter
The Volkov Law Group
Tom Bodrovics welcomes Tony Anscombe, ESET Chief Security Evangelist, to discuss cybersecurity in the mining sector. With over three decades in IT and cybersecurity, Anscombe stresses that security fundamentals remain crucial despite technological advancements. He highlights vulnerabilities stemming from remote locations, outdated technology, third parties, and activists and nation states. Mining companies face significant risks, including the potential for fatalities and financial losses. A comprehensive cybersecurity framework is necessary, along with advanced technologies like EDR systems. The global financial cost of cyber attacks could reach $14 trillion by 2027, affecting all industries, including mining. Companies must prioritize cybersecurity and hold third parties to their security policies. Anscombe also touches on the ethical implications of AI and the potential for international collaboration in its development.

Time Stamp References:
0:00 - Introduction
0:30 - Tony's Background
2:03 - Industrial Security
6:47 - Potential Risks
10:37 - Attack Vectors
12:32 - 3rd Party Liability
14:30 - AI & Cyber Security
17:30 - Practical Solutions
19:50 - Capable People
20:58 - Global Impacts & Costs
24:16 - Reporting & Regulations
27:02 - Technical Glitches?
30:04 - AI Risks & Benefits
33:57 - Restricting AI?
36:19 - Wrap Up

Talking Points From This Episode
Mining companies face significant cybersecurity risks due to remote locations, outdated technology, third parties, and activists/nation states.
A comprehensive cybersecurity framework and advanced technologies like EDR systems are necessary to mitigate mining sector risks.
The financial cost of cyber attacks could reach $14 trillion by 2027, emphasizing the importance of prioritizing cybersecurity for all industries.

Guest Links
https://www.welivesecurity.com/en/
https://twitter.com/TonyAtESET

Tony Anscombe is Chief Security Evangelist for ESET. With over 20 years of security industry experience, Anscombe is an established author, blogger and speaker on the current threat landscape, security technologies and products, data protection, privacy and trust, and Internet safety. His speaking portfolio includes the industry conferences RSA, Black Hat, VB, CTIA, MEF, Gartner Risk and Security Summit and the Child Internet Safety Summit (CIS). He is regularly quoted in cybersecurity, technology and business media, including the BBC, Dark Reading, the Guardian, the New York Times and USA Today, with broadcast appearances on Bloomberg, BBC, CTV, KRON and CBS. Anscombe is a current board member of the NCSA and FOSI. Tony is based in the USA and represents ESET globally.
Apple had a serious flaw in its Screen Time parental controls for years, Nvidia surpasses Apple's market value, a partnership between Apple and OpenAI is imminent, and our community's WWDC wishlist!

Try Setapp Today!
Get incredible apps like Downie, CleanShotX, Ulysses, Paste, and more, all with one monthly subscription of $10 at setapp.com < click that link and Primary Tech will earn a small commission when you sign up!

Watch on YouTube!
Subscribe and watch our weekly episodes plus bonus clips at: youtube.com/@primarytechshow

Join the Community
Discuss new episodes, start your own conversation, and join the Primary Tech community here: social.primarytech.fm

Support the show
Join our member community and get an ad-free version of the show, plus exclusive bonus episodes every week! Subscribe directly in Apple Podcasts or here: primarytech.memberful.com/join

Reach out:
Stephen's YouTube Channel
@stephenrobles on Threads
@stephenrobles on X
Stephen on Mastodon
Jason's Inc.com Articles
@jasonaten on Threads
@JasonAten on X
Jason on Mastodon

We would also appreciate a 5-star rating and review in Apple Podcasts and Spotify.
Podcast artwork with help from Basic Apple Guy.
Those interested in sponsoring the show can reach out to us at: podcast@primarytech.fm

Links from the show
@ayfondo Pencil Photos from Japan Apple Store • Threads
How Broken Are Apple's Parental Controls? It Took 3 Years to Fix an X-Rated Loophole. - WSJ
This Is the 1 Feature Apple Has to Fix in 2024 (It Has Nothing to Do With Green Bubbles) | Inc
Sonos Ace Headphones: The Vibes Are Off - YouTube
Instagram confirms test of 'unskippable' ads | TechCrunch
Apple put a Thread smart home radio into its newest Macs and iPads - The Verge
Humane Ai Pin Battery Warning - Twitter
Humane warns AI Pin owners to 'immediately' stop using its charging case - The Verge
Nvidia is now more valuable than Apple at $3.01 trillion - The Verge
Nvidia reveals H100 GPU for AI and teases 'world's fastest AI supercomputer' - The Verge
H100 Tensor Core GPU | NVIDIA
Why Is Apple (AAPL) Teaming Up With OpenAI? Both Companies Need Each Other - Bloomberg
Now you can keep talking to ChatGPT while multitasking on iPhone - 9to5Mac
A Right to Warn about Advanced Artificial Intelligence
An Electric New Era for Atlas | Boston Dynamics
WWDC 2024 Wishlist [Official] | Primary Technology
@stephenrobles • The most relatable • Threads

(00:00) - Intro
(04:18) - Pencil Tip Direction
(07:12) - Apple Screen Time Fail
(18:38) - Sonos Ace Headphones
(26:04) - Instagram Unskippable Ads
(32:35) - Apple Hid Thread in Devices
(35:44) - Humane Ai Pin Issue
(40:33) - Try Setapp Today!
(45:52) - Nvidia Overtakes Apple in Market
(53:23) - OpenAI X Apple Deal
(01:02:19) - Open Letter on AI Risks
(01:06:55) - WWDC Community Wishlist
(01:21:16) - Periods Instead of Spaces

★ Support this podcast ★
Welcome to the EMEA Core Credit Weekly podcast by Reorg. This episode covers:
The risks U.S. call center operator Foundever faces from AI advancements
An analysis of vehicle outsourcing firm Zenith
Recent changes in European sustainability-linked loans and bonds

Credits mentioned in this episode include Foundever and Zenith.

For more information on our latest events and webinars, visit reorg.com/resources/events-and-webinars/
Sign up for our weekly newsletter, Reorg on the Record: reorg.com/resources/reorg-on-the-record

We value your feedback to help us improve the podcast experience. Please take a moment to complete this short survey and let us know how we're doing: https://www.research.net/r/Reorg_podcast_survey
Our strategist says the Fed holding rates higher for longer makes him more bullish on near-term stock prices. He'll tell us why and give us two names he likes outside of tech. Plus, is the uptick in stock splits a positive sign for the market? We'll look at how these names typically perform after the split and which companies could be next. And our mystery stock of the day is getting an upgrade, sending shares up 9%. We'll reveal it and hear from the analyst who says the AI rewards outweigh the risks.
This presentation was recorded at MindFest, held at Florida Atlantic University's Center for the Future Mind, spearheaded by Susan Schneider.
Center for the Future Mind (Mindfest @ FAU): https://www.fau.edu/future-mind/
Please consider signing up for TOEmail at https://www.curtjaimungal.org

Support TOE:
- Patreon: https://patreon.com/curtjaimungal (early access to ad-free audio episodes!)
- Crypto: https://tinyurl.com/cryptoTOE
- PayPal: https://tinyurl.com/paypalTOE
- TOE Merch: https://tinyurl.com/TOEmerch

Follow TOE:
- *NEW* Get my 'Top 10 TOEs' PDF + Weekly Personal Updates: https://www.curtjaimungal.org
- Instagram: https://www.instagram.com/theoriesofeverythingpod
- TikTok: https://www.tiktok.com/@theoriesofeverything_
- Twitter: https://twitter.com/TOEwithCurt
- Discord Invite: https://discord.com/invite/kBcnfNVwqs
- iTunes: https://podcasts.apple.com/ca/podcast/better-left-unsaid-with-curt-jaimungal/id1521758802
- Pandora: https://pdora.co/33b9lfP
- Spotify: https://open.spotify.com/show/4gL14b92xAErofYQA7bU4e
- Subreddit r/TheoriesOfEverything: https://reddit.com/r/theoriesofeverything
IntelBroker claims to have breached a Europol online platform. The U.S. and China are set to discuss AI security. U.S. agencies warn against Black Basta ransomware operators. A claimed Russian group attacks British local newspapers. Cinterion cellular modems are vulnerable to malicious SMS attacks. A UK IT contractor allegedly failed to report a major data breach for months. Generative AI is a double-edged sword for CISOs. Reality Defender wins the RSA Conference's Innovation Sandbox competition. Our guest is Chris Betz, CISO of AWS, discussing how to build a strong culture of security. Solar storms delay the planting of corn.

Our 2024 N2K CyberWire Audience Survey is underway; make your voice heard and get in the running for a $100 Amazon gift card. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.

CyberWire Guest
Guest Chris Betz, CISO of AWS, discussing how to build a strong culture of security. In his blog, Chris writes about how AWS's security culture starts at the top and extends through every part of the organization.

Selected Reading
Europol confirms web portal breach, says no operational data stolen (Bleeping Computer)
US and China to Hold Discussions on AI Risks and Security (BankInfo Security)
CISA, FBI, HHS, MS-ISAC warn critical infrastructure sector of Black Basta hacker group; provide mitigations (Industrial Cyber)
'Russian' hackers deface potentially hundreds of local British news sites (The Record)
Cinterion IoT Cellular Modules Vulnerable to SMS Compromise (GovInfo Security)
MoD hack: IT contractor concealed major hack for months (Computing)
AI's rapid growth puts pressure on CISOs to adapt to new security risks (Help Net Security)
Reality Defender Wins RSAC Innovation Sandbox Competition (Dark Reading)
Solar Storms are disrupting farmer GPS systems during critical planting time (The Verge)

Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show.

Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info.

The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc.
With the Dow and S&P 500 looking to snap 3-day losing streaks after a record run-up for stocks, Carl Quintanilla and Jim Cramer engaged in a wide-ranging discussion about a question posed in the latest Delivering Alpha survey: Has the market run too far, too fast? Also in focus: Commerce and Treasury Departments' new recommendations on regulating AI, Apple's Q1 stock slump and J.P. Morgan's note on China iPhone sales, China's President Xi meets with U.S. CEOs, meme stocks back in the spotlight as Trump Media surges again and GameStop tumbles, Mark Zuckerberg and Jensen Huang's "jersey swap." Squawk on the Street Disclaimer
In this thought-provoking panel discussion, orthopedic surgeon Yoshihiro Katsuura and premedical students Kie Shidara, Maria Llose, and James Schmidt come together to examine the growing influence of artificial intelligence in the health care industry. From the potential benefits of AI-driven efficiency to the concerns surrounding job displacement among physicians, they navigate the delicate balance between technological advancement and patient-centered care. Drawing parallels between AI integration in health care and recent debates in the entertainment industry, the panel offers valuable insights on how health care professionals can advocate for ethical AI usage while prioritizing the well-being of their patients. As the health care landscape continues to evolve, it is crucial for stakeholders to engage in meaningful discussions about the future of medicine and the role of AI in delivering quality care. Join us as we explore the complex intersections of technology, efficiency, and compassion in health care, and discover how professionals can leverage AI to enhance, rather than replace, the human touch in patient care.

Yoshihiro Katsuura is an orthopedic surgeon and author of The Spine Encyclopedia: Everything You've Wanted to Know about Back and Neck Pain but Were Too Afraid to Ask. Kie Shidara, Maria Llose, and James Schmidt are premedical students and research coordinators. Katsuura discusses the KevinMD article, "What doctors can learn from actors about artificial intelligence."

Our presenting sponsor is Nuance, a Microsoft company. Do you spend more time on administrative tasks like clinical documentation than you do with patients? You're not alone. Clinicians report spending up to two hours on administrative tasks for each hour of care provided. Nuance, a Microsoft company, is committed to helping clinicians restore the balance with Dragon Ambient eXperience – or DAX for short. DAX is an AI-powered, voice-enabled solution that helps physicians cut documentation time in half. DAX Copilot combines proven conversational and ambient AI with the most advanced generative AI in a mobile application that integrates directly with your existing workflows. DAX Copilot can be easily enabled within the workflow of the Dragon Medical application to bring the power of ambient technology to more clinicians faster while leveraging the proven and powerful capabilities used by over 550,000 physicians. Explore DAX Copilot today. Visit https://nuance.com/daxinaction to see a 12-minute DAX Copilot demo. Discover clinical documentation that writes itself and reclaim your work-life balance.

VISIT SPONSOR → https://nuance.com/daxinaction
SUBSCRIBE TO THE PODCAST → https://www.kevinmd.com/podcast
RECOMMENDED BY KEVINMD → https://www.kevinmd.com/recommended
GET CME FOR THIS EPISODE → https://earnc.me/ibtiU3
Powered by CMEfy.