POPULARITY
Categories
The 4% withdrawal rule does not apply to early retirees since it's based on a 30-year timeline, not the 40+ years needed for early retirement. Guyton's guardrails approach offers a better alternative, allowing for 5.2-5.6% withdrawal rates by adapting spending based on market performance.• Guardrails approach uses flexible withdrawal rates that increase when markets perform well and decrease during downturns• Traditional 4% rule based only on S&P 500 and intermediate US bonds, while diversification across asset classes can increase safe withdrawal rates• First years of retirement often have high expenses (healthcare, education, travel) when your portfolio is most vulnerable• Bowling analogy: retirement planning with guardrails is like bowling with bumpers to avoid gutter balls• Business analogy: like a business owner, spend more when times are good, cut back when they aren't• Creating a "war chest" of safe assets reduces pressure on your growth investments during market downturns• Stress test your retirement plan against worst-case scenarios: market crashes, reduced Social Security, high inflation, living to 100- Advisory services are offered through Root Financial Partners, LLC, an SEC registered investment adviser. This content is intended for informational and educational purposes only and should not be considered personalized investment, tax, or legal advice. Viewing this content does not create an advisory relationship. We do not provide tax preparation or legal services. Always consult your CPA or attorney regarding your specific situation.The strategies, case studies, and examples discussed may not be suitable for everyone. They are hypothetical and for illustrative and educational purposes only. They do not reflect actual client results and are not guarantees of future performance. All investments involve risk, including the potential loss of principal.Comments reflect the views of individual users and do not necessarily represent the views of Root Financial. They are not verified, may not be accurate, and should not be considered testimonials or endorsements.Participation in the Retirement Planning Academy or Early Retirement Academy does not create an advisory relationship with Root Financial. These programs are educational in nature and are not a substitute for personalized financial advice. Advisory services are offered only under a written agreement with Root Financial.Create Your Custom Early Retirement Strategy HereGet access to the same software I use for my clients and join the Early Retirement Academy hereAri Taublieb, CFP ®, MBA is the Chief Growth Officer of Root Financial Partners and a Fiduciary Financial Planner specializing in helping clients retire early with confidence.
(May 25, 2025)
The post Guardrails appeared first on Table Life Church of the Nazarene.
Jason Martin is an AI Security Researcher at HiddenLayer. This episode explores “policy puppetry,” a universal attack technique bypassing safety features in all major language models using structured formats like XML or JSON.Subscribe to the Gradient Flow Newsletter
In this episode of our InfoSecurity Europe 2024 On Location coverage, Marco Ciappelli and Sean Martin sit down with Professor Peter Garraghan, Chair in Computer Science at Lancaster University and co-founder of the AI security startup Mindgard. Peter shares a grounded view of the current AI moment—one where attention-grabbing capabilities often distract from fundamental truths about software security.At the heart of the discussion is the question: Can my AI be hacked? Peter's answer is a firm “yes”—but not for the reasons most might expect. He explains that AI is still software, and the risks it introduces are extensions of those we've seen for decades. The real difference lies not in the nature of the threats, but in how these new interfaces behave and how we, as humans, interact with them. Natural language interfaces, in particular, make it easier to introduce confusion and harder to contain behaviors, especially when people overestimate the intelligence of the systems.Peter highlights that prompt injection, model poisoning, and opaque logic flows are not entirely new challenges. They mirror known classes of vulnerabilities like SQL injection or insecure APIs—only now they come wrapped in the hype of generative AI. He encourages teams to reframe the conversation: replace the word “AI” with “software” and see how the risk profile becomes more recognizable and manageable.A key takeaway is that the issue isn't just technical. Many organizations are integrating AI capabilities without understanding what they're introducing. As Peter puts it, “You're plugging in software filled with features you don't need, which makes your risk modeling much harder.” Guardrails are often mistaken for full protections, and foundational practices in application development and threat modeling are being sidelined by excitement and speed to market.Peter's upcoming session at InfoSecurity Europe—Can My AI Be Hacked?—aims to bring this discussion to life with real-world attack examples, systems-level analysis, and a practical call to action: retool, retrain, and reframe your approach to AI security. Whether you're in development, operations, or governance, this session promises perspective that cuts through the noise and anchors your strategy in reality.___________Guest: Peter Garraghan, Professor in Computer Science at Lancaster University, Fellow of the UK Engineering Physical Sciences and Research Council (EPSRC), and CEO & CTO of Mindgard | https://www.linkedin.com/in/pgarraghan/ Hosts:Sean Martin, Co-Founder at ITSPmagazine | Website: https://www.seanmartin.comMarco Ciappelli, Co-Founder at ITSPmagazine | Website: https://www.marcociappelli.com___________Episode SponsorsThreatLocker: https://itspm.ag/threatlocker-r974___________ResourcesPeter's Session: https://www.infosecurityeurope.com/en-gb/conference-programme/session-details.4355.239479.can-my-ai-be-hacked.htmlLearn more and catch more stories from Infosecurity Europe 2025 London coverage: https://www.itspmagazine.com/infosec25Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverageWant to tell your Brand Story Briefing as part of our event coverage? Learn More
As Artificial Intelligence reshapes our world, understanding the new threat landscape and how to secure AI-driven systems is more crucial than ever. We spoke to Ankur Shah, Co-Founder and CEO of Straiker about navigating this rapidly evolving frontier.In this episode, we unpack the complexities of securing AI, from the fundamental shifts in application architecture to the emerging attack vectors. Discover why Ankur believes "you can only secure AI with AI" and how organizations can prepare for a future where "your imagination is the new limit," but so too are the potential vulnerabilities.Guest Socials - Ankur's LinkedinPodcast Twitter - @CloudSecPod If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:-Cloud Security Podcast- Youtube- Cloud Security Newsletter - Cloud Security BootCampIf you are interested in AI Cybersecurity, you can check out our sister podcast - AI Cybersecurity PodcastQuestions asked:(00:00) Introduction(00:30) Meet Ankur Shah (CEO, Straiker)(01:54) Current AI Deployments in Organizations (Copilots & Agents)(04:48) AI vs. Traditional Security: Why Old Methods Fail for AI Apps(07:07) AI Application Types: Native, Immigrant & Explorer Explained(10:49) AI's Impact on the Evolving Cyber Threat Landscape(17:34) Ankur Shah on Core AI Security Principles (Visibility, Governance, Guardrails)(22:26) The AI Security Vendor Landscape (Acquisitions & Startups)(24:20) Current AI Security Practices in Organizations: What's Working?(25:42) AI Security & Hyperscalers (AWS, Azure, Google Cloud): Pros & Cons(26:56) What is AI Inference? Explained for Cybersecurity Pros(33:51) Overlooked AI Attack Surfaces: Hidden Risks in AI Security(35:12) How to Uplift Your Security Program for AI(37:47) Rapid Fire: Fun Questions with Ankur ShahThank you to this episode's sponsor - Straiker.ai
Over the last few months, users of Facebook and Instagram may have noticed a new avenue to interact with the platform: Meta AI. The AI tool, similar to language learning models like ChatGPT, X's Grok, and Microsoft's Co-Pilot, is able to carry forward advanced conversations with users and synthesize complex answers based on prompts. Meta has leveraged its AI model to create a wide array of chatbots. Some are officially sanctioned by Meta and feature the voices of celebrities like Kristin Bell and John Cena. Others are created and customized by users. Two weeks ago, the Wall Street Journal reported that they had had hundreds of test conversations with these chatbots over several months. They found that Meta had not prevented some of these chatbots from engaging in sexually explicit conversations with users, even with minor users. In addition, some of these chatbots were based on characters that are themselves minors. This does not appear to be an accident on the part of Meta. Guardrails appear to have been removed or never put in place, with the aim of making the chatbots as engaging and addictive as possible. This is just one example of the challenges that Big Tech and AI have placed before the American people. Here to talk about those challenges is Wes Hodges, Acting Director of the Center for Technology and the Human Person at The Heritage Foundation. —Follow Wes Hodges on X: https://x.com/wesghodgesWSJ Article on Meta AI Chatbots:https://www.wsj.com/tech/ai/meta-ai-chatbots-sex-a25311bfHave thoughts? Let us know at heritageexplains@heritage.org
Three months into his presidency, Donald Trump has embarked on an unprecedented effort to aggrandize executive power and extend his reach over the judiciary, Congress, the media, and even American culture and society. Perhaps the most alarming aspect has been his battle with the judiciary. The president has called for the impeachment of a federal judge; his executive orders have challenged, if not violated, constitutional norms; and his Justice Department has slow-walked, if not ignored, the rulings of the federal judiciary, including the Supreme Court. “Never in history has the country faced such a massive flood the zone strategy,” writes the Carnegie Endowment's President Mariano Florentino (Tino) Cuéllar in Foreign Affairs. Can the republic's guardrails hold? Other than the courts, what are the constraints on the abuse of presidential power? What role do the markets, the states, the media, and public opinion play? And what are the consequences for America if these guardrails don't hold?Join Aaron David Miller as he engages the Carnegie Endowment's Tino Cuéllar and Harvard's Learned Hand Professor of Law Jack Goldsmith to shed light on how these issues may play out and what their implications are for America's changing place in the world on the next Carnegie Connects.
Married Life
Once you decide that yes you CAN make yourself healthy again, it's time to get started in a place that will give you a strong foundation and quick win. But where? There's a story Debra Adele tells in her book The Yamas and Niyamas about a helpful monkey who grabs a fish out of the water and takes him up into the tree. As the fish dies the monkey laments, “But I saved you from drowning!” Sometimes the things we do for our health seem like they should help but they can actually make it harder for us to succeed in the long term. One of those things is going cold turkey on anything. It's so tempting to try to change everything all at once. Tempting and unnecessary. And for some people it ends up being an exercise in frustration and failure. When it comes to getting healthy for the long haul what we really want is lifestyle renovation. Just like in any good renovation project the starting place is often with the cleanup. That's our Work IN today. How do we support the changes we want and clear our path to health and fitness results?Most of us have some idea of what is healthy and what isn't, many of us probably have already started introducing some of those things. We might even be aware of the things that we are “supposed” to stop doing in order to be healthier. Some are obvious like smoking, drug or alcohol use, or junk/fast food. There are many ways that our modern lifestyle supports and even rewards unhealthy habits. We're going to look at 3 of these areas and some of the small bite size changes that can lead to big results. As a part of my mission to bring a legacy of resilience through movement, each month you can join me for a hike on the bike trail followed by a free trauma informed vinyasa class back at the studio on Main Street. Go to savagegracecoaching.com to see the calendar and join my newsletter, Yoga Life on Main Street, to stay up to date on all the latest studio news, events and gossip. And now… on to this week's episode.It's time to stop working out and start working IN. You found the Work IN podcast for fit-preneurs and their health conscious clients. This podcast is for resilient wellness professionals who want to expand their professional credibility, shake off stress and thrive in a burnout-proof career with conversations on the fitness industry, movement, nutrition, sleep, mindset, nervous system health, yoga, business and so much more. I'm your host Ericka Thomas. I'm a resilience coach and fit-preneur offering an authentic, actionable realistic approach to personal and professional balance for coaches in any format. The Work IN is brought to you by savage grace coaching, bringing resilience through movement, action and accountability. Private sessions, small groups and corporate presentations are open now. Visit savagegracecoaching.com to schedule a call and get all the details. Website & free guideFollow me on Instagram Follow me on FacebookFollow me on Linked IN
JOIN OUR DISCORD CHANNEL https://discord.gg/4uwxk6TN6r Support us at: buymeacoffee.com/techpodcast In this episode of Project Synapse, John Pinard, Marcel Gagne, and host Jim Love discuss the latest advancements and challenges in the AI industry. The conversation highlights Google's strides with their Gemini AI, the enhancement of AI models with large context windows, and the importance of user-defined system prompts for better AI interaction. The discussion shifts to OpenAI's 'OpenAI for Countries' initiative, examining the implications of a centralized AI controlled by another nation. They introduce 'Maiple,' an open-source AI initiative for Canada aimed at creating a sovereign AI managed collaboratively within the country. The show emphasizes the necessity of a national AI framework to ensure data privacy, economic stability, and innovation. Listeners are encouraged to join the movement and help shape the future of AI in Canada by visiting maple.org. 00:00 Introduction to Project Synapse 00:23 Google's AI Advancements: Gemini Pro 2.5 03:05 Navigating Google's AI Studio 05:54 Google's Video Generation Model: VE O2 11:36 The Future of AI and Energy Requirements 15:26 AI Hallucinations and Memory Management 23:34 AI Models and Context Protocols 25:12 AI Safety and Regulation Challenges 27:27 Guardrails and Software Changes 27:54 Challenges with AI Reliability 28:16 The Evolution of Fact-Checking AI 28:59 Issues with AI-Based Products 29:18 The Problem with AI Tool Reliability 32:04 Building Local AI Systems 34:08 Custom Instructions for AI 37:48 OpenAI's New Initiative: Country AI 40:16 The Case for a Canadian Sovereign AI 44:42 The Vision for Maiple: A Collaborative AI Future 46:05 The Importance of Open Source in AI 58:43 Conclusion and Call to Action
Dr. Michael Zargham provides a systems engineering perspective on AI agents, emphasizing accountability structures and the relationship between principals who deploy agents and the agents themselves. In this episode, he brings clarity to the often misunderstood concept of agents in AI by grounding them in established engineering principles rather than treating them as mysterious or elusive entities.Show highlights• Agents should be understood through the lens of the principal-agent relationship, with clear lines of accountability• True validation of AI systems means ensuring outcomes match intentions, not just optimizing loss functions• LLMs by themselves are "high-dimensional word calculators," not agents - agents are more complex systems with LLMs as components• Guardrails provide deterministic constraints ("musts" or "shalls") versus constitutional AI's softer guidance ("shoulds")• Systems engineering approaches from civil engineering and materials science offer valuable frameworks for AI development• Authority and accountability must align - people shouldn't be held responsible for systems they don't have authority to control• The transition from static input-output to closed-loop dynamical systems represents the shift toward truly agentic behavior• Robust agent systems require both exploration (lab work) and exploitation (hardened deployment) phases with different standardsExplore Dr. Zargham's workProtocols and Institutions (Feb 27, 2025)Comments Submitted by BlockScience, University of Washington APL Information Risk and Synthetic Intelligence Research Initiative (IRSIRI), Cognitive Security and Education Forum (COGSEC), and the Active Inference Institute (AII) to the Networking and Information Technology Research and Development National Coordination Office's Request for Comment on The Creation of a National Digital Twins R&D Strategic Plan NITRD-2024-13379 (Aug 8, 2024)What did you think? Let us know.Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics: LinkedIn - Episode summaries, shares of cited articles, and more. YouTube - Was it something that we said? Good. Share your favorite quotes. Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
In this on-location episode recorded at the RSAC Conference, Sean Martin and Marco Ciappelli sit down once again with Rob Allen, Chief Product Officer at ThreatLocker, to unpack what Zero Trust really looks like in practice—and how organizations can actually get started without feeling buried by complexity.Rather than focusing on theory or buzzwords, Rob lays out a clear path that begins with visibility. “You can't control what you can't see,” he explains. The first step toward Zero Trust is deploying lightweight agents that automatically build a view of the software running across your environment. From there, policies can be crafted to default-deny unknown applications, while still enabling legitimate business needs through controlled exceptions.The Zero Trust Mindset: Assume Breach, Limit AccessRob echoes the federal mandate definition of Zero Trust: assume a breach has already occurred and limit access to only what is needed. This assumption flips the defensive posture from reactive to proactive. It's not about waiting to detect bad behavior—it's about blocking the behavior before it starts.The ThreatLocker approach stands out because it focuses on removing the traditional “heavy lift” often associated with Zero Trust implementations. Rob highlights how some organizations have spent years trying (and failing) to activate overly complex systems, only to end up stuck with unused tools and endless false positives. ThreatLocker's automation is designed to lower that barrier and get organizations to meaningful control faster.Modern Threats, Simplified DefensesAs AI accelerates the creation of polymorphic malware and low-code attack scripts, Zero Trust offers a counterweight. Deny-by-default policies don't require knowing every new threat—just clear guardrails that prevent unauthorized activity, no matter how it's created. Whether it's PowerShell scripts exfiltrating data or AI-generated exploits, proactive controls make it harder for attackers to operate undetected.This episode reframes Zero Trust from an overwhelming project into a series of achievable, common-sense steps. If you're ready to hear what it takes to stop chasing false positives and start building a safer, more controlled environment, this conversation is for you.Learn more about ThreatLocker: https://itspm.ag/threatlocker-r974Note: This story contains promotional content. Learn more.Guest: Rob Allen, Chief Product Officer, ThreatLocker | https://www.linkedin.com/in/threatlockerrob/ResourcesLearn more and catch more stories from ThreatLocker: https://www.itspmagazine.com/directory/threatlockerLearn more and catch more stories from RSA Conference 2025 coverage: https://www.itspmagazine.com/rsac25______________________Keywords:sean martin, marco ciappelli, rob allen, zero trust, cybersecurity, visibility, access control, proactive defense, ai threats, policy automation, brand story, brand marketing, marketing podcast, brand story podcast______________________Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverageWant to tell your Brand Story Briefing as part of our event coverage? Learn More
What happens when we prioritise innovation over ethics in AI development? For the 100th episode of the Digitally Curious Podcast, Kerry Sheehan, a machine learning specialist with a fascinating journey from journalism to AI policy, explores this critical question as she shares powerful insights on responsible AI implementation.Kerry takes us on a compelling exploration of AI guardrails, comparing them to bowling alley bumpers that prevent technologies from causing harm. Her work with the British Standards Institute has helped establish frameworks rooted in fairness, transparency, and human oversight – creating what she calls "shared language for responsible development" without stifling innovation.The conversation reveals profound insights about diversity in AI development teams. "If the teams building AI systems don't represent those that the end results will serve, it's not ethical," Kerry asserts. She compares bias to bad seasoning that ruins an otherwise excellent recipe, highlighting how diverse perspectives throughout the development lifecycle are essential for creating fair, beneficial systems.Kerry's expertise shines as she discusses emerging ethical challenges in AI, from foundation models to synthetic data and agentic systems. She advocates for guardrails that function as supportive scaffolding rather than restrictive handcuffs – principle-driven frameworks with room for context that allow developers to be agile while maintaining ethical boundaries.What makes this episode particularly valuable are the actionable takeaways: audit your existing AI systems for fairness, develop clear governance frameworks you could confidently explain to others, add ethical reviews to project boards, and include people with diverse lived experiences in your design meetings. These practical steps can help organisations build AI systems that truly work for everyone, not just the privileged few.This is an important conversation about making AI work for humanity rather than against it. Kerry's perspective will transform how you think about responsible technology implementation in your organisation.More informationKerry on LinkedInThanks for listening to Digitally Curious. You can buy the book that showcases these episodes at curious.click/orderYour Host is Actionable Futurist® Andrew GrillFor more on Andrew - what he speaks about and recent talks, please visit ActionableFuturist.com Andrew's Social ChannelsAndrew on LinkedIn@AndrewGrill on Twitter @Andrew.Grill on InstagramKeynote speeches hereOrder Digitally Curious
Today, let me share what a real retirement portfolio looks like. Yes, actually! This is what I use at my firm for my clients. Referenced WSJ article: https://www.wsj.com/finance/investing/stock-market-craziness-alternative-funds-7df17b9c Learn more about Birchwood: https://birchwoodcapital.com/
Welcome jet pilot, pastor, and author Ricky Brown to unpack the five “hazardous attitudes” every church leader must watch out for—straight from FAA training and powerfully applied to ministry. Ricky draws on his dual experience as a commercial pilot and church planter to share practical, soul-tending wisdom for avoiding burnout and moral failure. Ricky's new book, The 5 Hazardous Attitudes: Ways to Win the War Within, breaks these down through powerful fables and life lessons. Greg and Ricky dive deep into the signs of anti-authority, invulnerability, macho attitudes, impulsivity, and resignation, and how each can destroy ministry, marriages, and leadership if not confronted. Explore more of Ricky's work, speaking, and resources at rickybrown.org. View Ricky's Speaker Reel Instagram: @allthingsrickyb Connect with Greg Nettle and Stadia Church Planting at https://stadia.org 01:00 - Meet Ricky Brown: Pastor, pilot, and author 02:15 - Planting a 70% unchurched church during the pandemic 04:00 - Tending to your soul as a leader 05:35 - The story behind “The 5 Hazardous Attitudes” 06:15 - Overview of the 5 attitudes: anti-authority, invulnerability, macho, impulsivity, resignation 08:00 - Anti-authority and unresolved trauma 10:00 - Invulnerability: "It won't happen to me" 12:45 - Macho: Proving your worth as a leader 15:00 - Impulsivity: Acting too fast under pressure 18:50 - Guardrails for impulsivity: See your team as safety rails, not speed bumps 20:00 - Aviation stories that mirror leadership failures 23:00 - Resignation: Why leaders give up too soon 25:00 - Leading through darkness and not quitting before breakthrough 26:30 - Where to find Ricky's book and workbook: [rickybrown.org](https://www.rickybrown.org/) 27:00 - Final words on biblical leadership and self-awareness
Abhay is joined by pioneering seed-stage venture capitalist Vani Kola, founder and managing director of Kolaari Capital. They talked about rituals and routines, dealing with ambiguity and guardrails, and how accelerate closing the equity gap for women entrepreneurs in tech.(0:00 - 3:04) Introduction(3:04) Rituals, basic skills and values, anchors(19:01) Dealing with ambiguity as an founder or funder, navigating guardrails(36:07) India as a governance leader in tech, accelerating pathways for women, nostalgia, building trust(55:30) ConclusionSo I'm always eager to learn from leaders who more often than not are able to manage contrasts. Now contrasts come in all different shapes and forms and they are literally all around us in every professional and personal environment and my hypothesis is that successful leaders find a way maybe through their own journey to manage small and large contrasts with progressively increasing clarity, patience, and purpose. So it was really a treat to share a conversation with Vani Kola, the founder and managing director of Kalaari Capital, an early stage venture capital firm in India. Vani is originally from Hyderabad, and after an engineering degree, came to the US to complete her Masters and went on to a career as a serial entrepreneur in Silicon Valley for over two decades. She then returned to India to pioneer among the first homegrown Indian seed-stage venture firms with Kalaari Capital, using a philosophy that includes recognizing ambitious first-time entrepreneurs and helping them to scale up. Now mind you, she started this at a time in the mid-2000's when opportunities for growth and scale for ecommerce, tech, healthcare and many other sectors in India were at the ripening stage. Vani has navigated and executed successfully through the endless contrasts of an evolving seed-stage venture ecosystem: new vs old, disruptive revolutions vs steady institutions, profiteering innovation vs collective responsibility, and skepticism vs trust… they're at the core of the face to face conversations that investors and entrepreneurs are having everyday. Vani has been mentoring, and developing some of India's top founders and unicorn companies, with not just a keen eye on returns, but on the responsibility too to accelerate women as leaders in entrepreneurship, doing it all with a meditative sense of purpose and a growth mindset of sharing (by the way, you really have to check out her great newsletter called Kolaidoscope on LinkedIn). I had met her briefly once when she spoke at a panel discussion on tech and India's future, and it was great to catch up with her again to talk about everything from ambiguity and nostalgia, to the guardrails of tech, policy making, and even what she misses about Silicon Valley. But we started by chatting about the basics of daily skills and anchors, and especially if she had any self- driving and governing rituals or routines?Thanks again and if you're enjoying these, please don't forget to share this with a friend, take a moment to write a kind review, or drop a line to us at info@abhaydandekar.com. Again, a big shout out to Indiaspora for being that one of a kind gathering ground for doing good. Remember that conversation is the antidote to apathy and the catalyst for relationship building.
Boundaries aren't about control — they're about protection.Like guardrails on a mountain road. They don't rob your freedom; they keep you alive to actually reach your destination.If we don't have any theological guardrails, then "truth" becomes whatever I feel at the moment — and that's not a road to God. That's a road to confusion.Hester MinistriesPresent Truth AcademyThe Rorschach God (Book)
is back on the show and he is bringing decades of experiences as a journal editor. So we decided we play a game of round robin where each of us is giving rules of what to do (or not to do) as an editor. How long can we sit on papers before we make decisions? On what basis should we offer revise and resubmit decisions? When is it okay to desk reject a paper? How many reviews are enough? So if you want to learn more about the different editorial superhuman powers and supervillain powers – this is your episode. Episode reading list Recker, J. (2020). Reflections of a Retiring Editor-in-Chief. Communications of the Association for Information Systems, 46(32), 751-761. Berente, N., Gu, B., Recker, J., & Santhanam, R. (2021). Managing Artificial Intelligence. MIS Quarterly, 45(3), 1433-1450. Li, J., Li, M., Wang, X., & Thatcher, J. B. (2021). Strategic Directions for AI: The Role of CIOs and Boards of Directors. MIS Quarterly, 45(3), 1603-1643. Grisold, T., Berente, N., & Seidel, S. (2025). Guardrails for Human-AI Ecologies: A Design Theory for Managing Norm-Based Coordination. MIS Quarterly, 45, forthcoming. Davis, J. L. (2020). How Artifacts Afford: The Power and Politics of Everyday Things. MIT Press. Majchrzak, A., & Malhotra, A. (2019). Unleashing the Crowd: Collaborative Solutions to Wicked Business and Societal Problems. Springer. Gaskin, J., Berente, N., Lyytinen, K., & Yoo, Y. (2014). Toward Generalizable Sociomaterial Inquiry: A Computational Approach for Zooming In and Out of Sociomaterial Routines. MIS Quarterly, 38(3), 849-871. Teodorescu, M., Morse, L., Awwad, Y., & Kane, G. C. (2021). Failures of Fairness in Automation Require a Deeper Understanding of Human–ML Augmentation. MIS Quarterly, 45(3), 1483-1499. Lee, J., & Berente, N. (2012). Digital Innovation and the Division of Innovative Labor: Digital Controls in the Automotive Industry. Organization Science, 23(5), 1428-1447. Berente, N., Salge, C. A. D. L., Mallampalli, V. K. T., & Park, K. (2022). Rethinking Project Escalation: An Institutional Perspective on the Persistence of Failing Large-Scale Information System Projects. Journal of Management Information Systems, 39(3), 640-672.
SummaryIn this week's episode of The Chasing Health Podcast, Chase and Chris dive into a more freestyle “Coaches Roundtable” chat. They share powerful stories from clients, tackle common struggles like setbacks and injuries, and explore the truth behind staying consistent, even when life throws a curveball. They also get real about controversial health topics, like the use of artificial sweeteners, and that old phrase, “If I can do it, anyone can do it.” This one's all about shifting your mindset, living in the gray area, and doing what works for you.Chapters(00:00) Mindset Shift: Controlling What You Can(02:00) The All-or-Nothing Trap vs. Doing Something(03:30) The Mountain Analogy: Keep Climbing Through Setbacks(06:30) What Actually Stops Your Progress?(07:40) Using Coaching as Your Personal GPS(08:40) Coaching Without Calorie Tracking: Learning Intuition(11:30) Coaching as Guardrails, Not Just a Map(12:00) Artificial Sweeteners: Are They Really “Bad”?(14:00) Health Trade-Offs & Context Over Perfection(17:30) Moderation and Quality of Life(18:30) "If I Can Do It, Anyone Can Do It" – Helpful or Harmful?(21:30) Wrapping Up + Summer ExcitementSUBMIT YOUR QUESTIONS to be answered on the show: https://forms.gle/B6bpTBDYnDcbUkeD7How to Connect with Us:Chase's Instagram: https://www.instagram.com/changing_chase/Chris' Instagram: https://www.instagram.com/conquer_fitness2021/Facebook Group: https://www.facebook.com/groups/665770984678334/Interested in 1:1 Coaching: https://conquerfitnessandnutrition.com/1on1-coachingJoin The Fit Fam Collective: https://conquerfitnessandnutrition.com/fit-fam-collective
"If AI has proven anything, it will change pretty rapidly. Understanding its limitations and not asking too much of it is significant. What's successful is prototyping tools," said Rob Whiteley, CEO of Coder. "Such tools where AI can create an application, while not the world's most graceful code but will get you to working prototype pretty quickly. That would probably take me days or weeks of research as a developer, but now I have a working prototype so I can socialise it."In this episode of the Tech Transformed podcast, Dana Gardner, a thought leader, speaks with Rob Whiteley, CEO of Coder, about the transformative impact of agentic AI on software development. They discuss how AI is changing the roles of developers, the cultural shifts required in development teams, and the integration of AI agents in cloud development environments.Agentic AI is seemingly set up for favourable outcomes. Or is it? Agentic AI is believed to shake-up enterprise IT, offering a productivity boost similar to the iPhone's impact. This isn't about replacing developers but amplifying their output tenfold. It aims to allow the implementation of rapidly created solutions and iteration that has been unimaginable in the past. This shift requires valuing "soft skills" like communication and collaboration over pure coding proficiency, as developers guide AI "pair programmers."The synergy of AI agents, human intellect, and Cloud Development Environments (CDEs) is key. CDEs provide secure, governed, and scalable platforms for this collaboration, allowing developers to focus on business logic and innovation while AI handles the coding groundwork. This requires a move from rigid "gates" in development processes to flexible "guardrails" within CDEs. Such a move fosters innovation with built-in control and security.Flexibility and choice are vital in this constantly advancing AI space. CDEs enable organisations to select the best AI agents for specific tasks, avoiding vendor lock-in by expressing the development environment as code. This leads to practical applications like faster prototyping, enhanced code development, and automated testing, significantly boosting code output. Furthermore, agentic AI democratises development, empowering non-engineers to build solutions.Preparing for this future requires proactive experimentation through AI labs, engaging early adopters, and viewing AI as an augmentation of human skills. Watch the podcast for more insights on CDEs and the impact of AI agents on enterprise cloud development. TakeawaysAgentic AI is a transformative technology for software development.The role of developers is shifting from hard skills to soft skills.AI agents can significantly increase productivity in coding tasks.Organizations need to rethink their development strategies to integrate AI.Cloud development environments are essential for safely using AI agents.Choosing the right AI agent is crucial for effective development.Security and governance are critical when integrating AI into development.AI can empower non-developers to create applications.Guardrails are more effective than gates in managing AI development.Organisations should experiment with AI to find the best fit for their needs.Chapters00:00 Introduction to Agentic AI and Developer Roles03:20 Transformative Impact of AI on Development06:50 Cultural Shifts in Development Teams10:30 Integrating AI Agents in Cloud Development Environments12:49 Choosing the Right AI Agents15:21 Security and Governance in AI...
Takeaways#AIagents are #autonomous entities that can perceive and act.Human oversight is essential in the initial stages of AI implementation.Data quality and trust are critical for effective AI agents.Guardrails must be integrated into the design of AI agents.Modularity in design allows for flexibility and adaptability.#AI should be embedded in data management processes.Collaboration between data and application teams is vital.SummaryIn this episode of #TechTransformed, Kevin Petrie, VP of Research at BARC, and Ann Maya, EMEA CTO at Boomi, discuss the transformative potential of AI agents and intelligent automation in business. They explore the definition of agents, their role in automating processes, and the importance of human oversight. Maya introduces us into the world of AI agents stating that, at its core, it's an autonomous entity within #AIsystems that can perceive its environment. This creates a deep dive into how they evolved from traditional automation to “observe, think, and act” in novel and autonomous ways.Maya addresses AI skepticism by acknowledging its growing autonomy while underscoring the current necessity of human oversight. She also highlights data's crucial influence on an agent's perception and decisions, emphasising the need for quality, trustworthy data in effective AI. Moreover, Maya and Petrie explore AI's practical implications, pointing to Google's agent-to-agent protocol as vital for managing language model interactions and enabling effective communication across diverse agents within complex systems.For the latest tech insights visit: EM360Tech.com
Other nations are experiencing the erosion of democratic norms – even authoritarianism. Is our constitution strong enough to withstand it?Go to this episode on rnz.co.nz for more details
Will AI devastate humanity or uplift it? Philosopher Christopher DiCarlo's new book examines how we can navigate when AI surpasses human capacity.
Learn about the latest new FM in the Nova family that simplifies conversational AI with low latency, and build safely with new capabilities for Amazon Bedrock Guardrails. 00:00 - Intro, 00:27 - Amazon Nova Sonic, 03:13 - Amazon Bedrock Guardrails, 05:23 - Analytics, 08:18 - Application Integration, 08:37 - Artificial Intelligence, 12:06 - Business Applications, 13:01 - Cloud Financial Management, 13:44 - Compute, 15:04 - Contact Center, 16:29 - Containers, 16:49 - Databases, 19:57 - Developer Tools, 20:59 - Frontend Web and Mobile, 21:20 - Management and Governance, 23:39 - Media Services, 25:37 - Migration and Transfer, 26:46 - Networking and Content Delivery, 28:45 - Artificial Intelligence, 29:58 - Security, Identity, and Compliance, 32:51 - Serverless, 33:57 - Storage, 37:29 - Wrap up Show Notes: https://dqkop6u6q45rj.cloudfront.net/run-sheet-20250418-173723.html
In part two of our special three-part series on AI in the writing workshop, hosted by author and longtime educator Kelly Gallagher, we focus on the rules of using AI in the writing process and how to use it as a student feedback partner. Kelly continues his conversation with Dennis Magliozzi and Kristina Peterson, co-authors of the brand new book AI in the Writing Workshop. Dennis and Kristina have both been teaching high school English since 2008, and they share real-world classroom stories, challenges, and best practices for integrating AI in ways that enhance, not replace, the writing process.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Expect daytime delays on northbound I-5 near Woodland as WSDOT crews close one lane for guardrail repairs on April 18. Learn what to expect and how to stay informed with travel updates. Read more at https://www.clarkcountytoday.com/news/travel-advisory-expect-daytime-delays-on-northbound-i-5-near-woodland-for-guardrail-repairs-april-18/ #localnews #ClarkCountyWa #I5 #Woodland #WSDOT #WashingtonState #TransportationUpdate
#AI: GUARDRAILS AND HALLUCINATIONS. BRANDON WEICHERT, NATIONAL INTEREST. 1954
Gary concludes his response to a recent video discussion about his eschatological views. The host references a book that refers to the creeds and confessions as "guardrails" that keep biblical exegesis within the "bounds" of orthodoxy. In reality, they are elevating the creeds (at least the ones they recognize as authoritative) above what the Bible actually says.
Another Agentforce Guinea Pig has joined the show… She's here to tell you if you think rolling out AI is as easy as flipping a switch, think again.From early mistakes to surprising wins, Laura Meschi, Customer Experience Manager at Secret Escapes, walks us through what it actually takes to train an AI agent that can truly support customers.We dive into why ROI isn't the best measure of AI success, how customer effort scores skyrocketed after launching Agent Force, and why CX leaders need to start simple and think long-term. Laura also pulls back the curtain on what it really takes to train an AI agent — and why you absolutely shouldn't DIY your rollout.If you're a B2B leader wondering how AI agents fit into the future of customer success, this conversation will hit home. Key Moments: 00:00: Laura Meschi's AI Agent Rollout at Secret Escapes02:45: Secret Escapes' Agentforce: Lessons from a First-Mover11:14: Surprise Wins and the Underrated Power of Human QA16:41: How Agentforce Reshaped the CX Team (Without Cutting Headcount)22:26: Guardrails, Limits, and Finding the Sweet Spot for AI Use Cases26:30: Why Clean, Centralized Data Is the Real AI Superpower27:18: Don't DIY Your AI: The Case for Bringing in Experts29:58: How AI Improved CES and Transformed Customer Perception34:50: What's Next: Future AI Strategies and Upcoming Salesforce Tools37:21: Reimagining CX: Using AI to Build Relationships, Not Just Efficiency40:37: Start Simple, Prioritize Data, and Train for the Long Game –Are your teams facing growing demands? Join CX leaders transforming their AI strategy with Agentforce. Start achieving your ambitious goals. Visit salesforce.com/agentforce Mission.org is a media studio producing content alongside world-class clients. Learn more at mission.org
April 15, 2025 - A handful of state agencies have room for improving their standards and practices governing artificial intelligence use, according to an audit by the state comptroller's office. We break down their findings and recommendations with Tina Kim, deputy comptroller in the division of state government accountability for the comptroller's office.
Thanks for joining us for a weekly message from DuBois Light & Life Church. Today you will hear encouraging words, worship, and a message. Our goal is that you would find Hope, Healing, and Purpose in Jesus Christ. Live from DuBois Light and Life Church.128 S 8th Street,DuBois PA 15801Connect with us on Facebook, Instagram, YouTube, and our Website at DuBoisfmc.org, or download our app!
Send us a textBrian Montgomery lost his 16-year-old son, Walker, to a sextortion scheme. He's now on a mission to raise awareness, and we discuss practical ways we can put up guardrails to keep our kids safe.Support the showKEEPING KIDS SAFE ONLINEConnect with us...www.nextTalk.orgFacebookInstagramContact Us...admin@nextTalk.orgP.O. BOX 160111 San Antonio, TX 78280
Generative AI is developing at an exciting pace, transforming compliance, risk management, and the customer experience. It's potential also requires financial institutions to navigate ethical dilemmas, security risks, and implementation challenges. This episode of the Forward Thinking Podcast features FCCS VP of Marketing and Communications Stephanie Barton and Kris Stewart, a certified regulatory compliance manager, product manager, attorney and business leader for Wolters Kluwer Compliance Solutions for a conversation about the power and possibilities of generative AI in financial institutions and how farm credit institutions can harness this technology while ensuring compliance and trust with their customers. Episode Insights Include: Generative AI in the Financial Industry Generative AI is already a game changer and will continue to shape the future. Real-world applications include credit risk assessments, servicing loans, and reviewing credit documents. Compliance officers can utilize generative AI to tackle regulatory updates. Generative AI can read data, find relationships, and report on actionable patterns. As an assistant, generative AI filters the work and never gets tired. Enhancing the customer experience A personalized banking experience is possible with generative AI. Considerations for lending, fraud detection and financial planning. A seamless process is possible with increased AI input. AI has the ability to catch and prevent fraud faster. 24/7 availability and endless time to answer questions are perks for AI users. AI utilities data that is already available and decreases time required for filling out forms. Risks associated with generative AI adoption Data security and privacy are at the top of the list of potential concerns. Loan decisioning data has the potential to have bias built into it. Generative AI hallucinations are a result of the language predictive model. Each of these considerations is improving, and still require human input where logical. Guardrails will always need to be in place to monitor accuracy. Addressing key ethical dilemmas AI needs to continually be working for customers, not against them. Transparency is key in utilizing generative AI. Strong governance and control framework are critical to successful AI application. AI has the potential to enhance or destroy customer relationships. The role of compliance officers in generative AI adaptation The standard approach to compliance governance must be employed to AI. Fair lending issues, whether created by humans or AI, must be addressed in the same way. AI must be considered as an additional way to deliver goods and services, and not permitted to violate laws that already exist. Overcoming implementation roadblocks The state of your data structure is critical to effective implementation. Inaccuracies and biases that are built into data need to be cleaned up prior to significant use within AI. A good governance structure needs to be in place from the beginning. Vendor solutions can help with implementing AI. Strategically identify where specifically your company will utilize AI. Consider use cases to maximize effort and investment. Measuring the success of AI implementation Consider your current customer processes and satisfaction, and apply the same metrics on AI. Operational efficiencies can be measured by key performance indicators. Apply the measurements that are already providing useful information to AI. Consider employee engagement – how is AI utilization affecting your team? The future of generative AI Deep research in generative AI is leveraging reasoning to find and analyze data. AI is coming, and we as humans need to be educated about and prepared for what it is capable of. Consider competencies required of future generations to optimize efficiencies. This podcast is powered by FCCS. Resources Connect with Kris Stewart — Kris Stewart Get in touch info@fccsconsulting.com “I like to think of generative AI as the most knowledgeable, fast, compliance assistant that I could ever hope to hire.” — Kris Stewart “Generative AI is not meant to replace the human, it's meant to help filter the work.” — Kris Stewart “You need AI to do your work efficiently these days, but you need guardrails too.” — Kris Stewart “Be fearless about investing and learning. The technology wave is coming whether you engage or not.” — Kris Stewart
In a world gone mad, a grimy factory smelling of smog, steam and dread works tirelessly to produce food for the starving masses.Will you sit and listen to other people's misfortunutes and agony? Let's see.Follow Us! ► [Twitter] - https://twitter.com/cabintale ► [Instagram] - https://www.instagram.com/thomashalleprod/?hl=en ► [Website] - https://www.thomashalle.com/cabin-tales _________________________________________________________________________ Business Inquiries: ► [Email] - info@thomashalle.com _________________________________________________________________________ Created by Thomas Halle. Full List of Credits : ► [IMDb] - https://www.imdb.com/title/tt28494257/
How do top policymakers balance fostering technological advancement with necessary oversight? Join Michael Krigsman as he speaks with Lord Chris Holmes and Lord Tim Clement-Jones, members of the UK House of Lords, for a deep dive into the critical intersection of technology policy, innovation, and public trust.In this conversation, explore:-- The drive for "right-sized" AI regulation that supports innovators, businesses, and citizens.-- Strategies for effective AI governance principles: transparency, accountability, and interoperability.-- The importance of international collaboration and standards in a global tech ecosystem.-- Protecting intellectual property and creators' rights in the age of AI training data.-- Managing the risks associated with automated decision-making in both public and private sectors.-- The push for legal clarity around digital assets, tokenization, and open finance initiatives.-- Building and maintaining public trust as new technologies become more integrated into society.Gain valuable perspectives from legislative insiders on the challenges and opportunities presented by AI, digital assets, and data governance. Understand the thinking behind policy decisions shaping the future for business and technology leaders worldwide.Subscribe to CXOTalk for more conversations with the world's top innovators: https://www.cxotalk.com/subscribeRead the full transcript and analysis: https://www.cxotalk.com/episode/ai-digital-assets-and-public-trust-inside-the-house-of-lords00:00 Balancing Innovation and Regulation in AI02:48 Principles and Frameworks for AI Regulation09:30 Global Collaboration and Challenges in AI and Trade15:25 The Role of Guardrails and Regulation in AI17:43 Challenges in Protecting Intellectual Property in AI22:32 AI Regulation and International Collaboration29:11 The UK's Approach to AI Regulation32:00 Proportionality and Sovereign AI36:28 Digital Sovereignty and Creative Industries39:09 The Future of Digital Assets and Legislation40:53 Open Banking, Open Source Models, and Agile Regulation45:43 Ethics and Professional Standards in AI47:22 Exploring AI and Ethical Standards49:00 AI in the Workplace and Global Accessibility51:40 Regulation, Public Trust, and Ethical AI#cxotalk #AIRegulation #AIInnovation #DigitalAssets #PolicyMaking #UKParliament #TechPolicy #Governance #PublicTrust #LordChrisHolmes #LordTimClementJones
In this episode we answer emails from Ron, Iain, an Anonymous Visitor and Mr. Data. We discuss Ron's generosity and his variable or guardrails withdrawal strategy, some helpful British website references, what we use bonds for in these portfolio and how the TSP G fund fits into that, and small cap growth vs. small cap value stocks. And some notes on recent market turmoil.And THEN we our go through our weekly and monthly portfolio reviews of the eight sample portfolios you can find at Portfolios | Risk Parity Radio.Additional links:Father McKenna Center Donation Page: Donate - Father McKenna CenterPortfolio Charts Retirement Spending: Retirement Spending – Portfolio ChartsMonevator Quilt Chart: Asset allocation quilt – the winners and losers of the last 10 years - Monevator Just ETF (UK) Page: ETF portfolios made simpleShannon's Demon Article: Unexpected Returns: Shannon's Demon & the Rebalancing Bonus – Portfolio ChartsAmusing Unedited AI-Bot Summary:Market crashes reveal the true value of diversification. While Professor Jeremy Siegel called last week's events "the worst policy mistake in US economic history in the last 95 years," properly structured portfolios weathered the storm remarkably well.The recent market plunge shows exactly why risk parity strategies work—the S&P 500 dropped 13.3%, NASDAQ fell 17.2%, but our All Seasons portfolio remained flat for the year. This divergence creates powerful rebalancing opportunities that can enhance long-term returns.Looking at performance across asset classes reveals a classic recession pattern: falling stocks, rising treasury bonds, and initial panic selling followed by differentiated recoveries. Long-term Treasury bonds (VGLT) are up 7.2% for the year, demonstrating their crucial diversification role during market stress. Gold, despite some wobbles, remains up 15.7% year-to-date.The mathematical principle behind this outperformance is what Claude Shannon described as "Shannon's Demon"—when assets perform differently at different times, periodic rebalancing allows the portfolio to outperform any individual component. This explains why we maintain exposure to both growth and value styles, rather than trying to predict which will outperform next.For DIY investors, this market correction offers valuable lessons about portfolio construction. Understanding why you hold each asset—whether for stability, income, or diversification—is far more important than chasing yields. The Golden Butterfly portfolio, with its balanced approach across stocks, bonds, and gold, is only down 1.78% year-to-date while continuing to provide consistent distributions.Want to learn more about building resilient portfolios? Visit riskparityradio.com for sample portfolios and detailed resources, or email your questions to frank@riskparityradio.com.Support the show
In this episode, Sam, Asad, and AJ sit down with Ben Kus, Chief Technology Officer at Box, to unpack how AI is reshaping enterprise tech—from the seismic shift to cloud-native infrastructure to the rise of AI agents that collaborate across platforms. We dive into the realities of leading through volatility, why AI adoption is moving faster than past platform shifts, and how enterprises can navigate the “FOMO” of generative AI without sacrificing trust. Plus, Ben's take on the future of software engineering, the myth of “non-technical founders,” and the books that keep him thinking ahead.Thanks for tuning in! Want more content from Pavilion? You're invited! Join the free Topline Slack channel to connect with 600+ revenue leaders, share insights, and keep the conversation going beyond the podcast!Subscribe to the Topline Newsletter to get the latest industry developments and emerging go-to-market trends delivered to your inbox every Thursday.Tune into The Revenue Leadership Podcast with Kyle Norton every Wednesday. Kyle dives deep into the strategies and tactics that drive success for revenue leaders like Jason Lemkins of SaaStr, Stevie Case of Vanta, and Ron Gabrisko of Databricks.Key Moments:[01:14] – Meet Ben Kus: Box's AI Visionary[05:26] – Leading Through Volatility: COVID, ZIRP, and AI's Sudden Rise[11:57] – Why AI Adoption Is Moving Faster Than Cloud or Mobile[17:05] – Data Security in the Age of AI: Box's Guardrails[24:17] – AI Agents: The Next Frontier (or Hype)?[31:41] – Open vs. Walled Gardens: The Future of Enterprise Platforms[38:45] – Is Software Engineering Still a Valuable Skill?[46:33] – Stagnation, Patience, and the Long Game
From his early days in the Salesforce ecosystem to becoming a driving force behind Okta's DevOps strategy, Varun shares candid insights and hard-won lessons.Jack McCurdy sits down with Varun Kavoori, Principal Salesforce DevOps Engineer at Okta, for a deep dive into his career journey and the evolving world of Salesforce DevOps. Jack and Varun explore how Okta approaches release management, the power of flexible DevOps practices, and why setting strong guardrails is key to compliance and scale. Varun lifts the lid on the tools and tactics that keep his team running smoothly, especially on high-stakes release days, and looks ahead to the growing role of AI in the DevOps space. Whether you're a seasoned Salesforce engineer or just starting out, this episode is packed with actionable takeaways and fresh perspectives.About DevOps Diaries: Salesforce DevOps Advocate Jack McCurdy chats to members of the Salesforce community about their experience in the Salesforce ecosystem. Expect to hear and learn from inspirational stories of personal growth and business success, whilst discovering all the trials, tribulations, and joy that comes with delivering Salesforce for companies of all shapes and sizes. New episodes bi-weekly on YouTube as well as on your preferred podcast platform.Podcast produced and sponsored by Gearset. Learn more about Gearset: https://grst.co/4iCnas2Subscribe to Gearset's YouTube channel: https://grst.co/4cTAAxmLinkedIn: https://www.linkedin.com/company/gearsetX/Twitter: https://x.com/GearsetHQFacebook: https://www.facebook.com/gearsethqAbout Gearset: Gearset is the leading Salesforce DevOps platform, with powerful solutions for metadata and CPQ deployments, CI/CD, automated testing, sandbox seeding and backups. It helps Salesforce teams apply DevOps best practices to their development and release process, so they can rapidly and securely deliver higher-quality projects. Get full access to all of Gearset's features for free with a 30-day trial: https://grst.co/4iKysKWChapters:00:00 Introduction to Varun Kavoori and His Journey03:06 Understanding the Role of DevOps in Salesforce06:08 Release Management at Okta08:55 Building a Flexible DevOps Process11:54 Guardrails and Compliance in Releases15:00 Scaling the Team and Managing Growth18:02 Challenges with Metadata and Deployment20:54 Release Day Process and Code Freeze23:51 Tools and Techniques for DevOps Success26:53 Future of DevOps and AI Integration29:53 Excitement for Salesforce Innovations
April 3, 2025 - Assemblymember Alex Bores has proposed safeguards on the most cutting edge developments of artificial intelligence technology, but the tech industry is pushing back on this type of government regulations. We hear some of those concerns from Todd O'Boyle, vice president of technology policy at the Chamber of Progress.
In this episode of Crazy Wisdom, I, Stewart Alsop, sit down with Naman Mishra, CTO of Repello AI, to unpack the real-world security risks behind deploying large language models. We talk about layered vulnerabilities—from the model, infrastructure, and application layers—to attack vectors like prompt injection, indirect prompt injection through agents, and even how a simple email summarizer could be exploited to trigger a reverse shell. Naman shares stories like the accidental leak of a Windows activation key via an LLM and explains why red teaming isn't just a checkbox, but a continuous mindset. If you want to learn more about his work, check out Repello's website at repello.ai.Check out this GPT we trained on the conversation!Timestamps00:00 - Stewart Alsop introduces Naman Mishra, CTO of Repel AI. They frame the episode around AI security, contrasting prompt injection risks with traditional cybersecurity in ML apps.05:00 - Naman explains the layered security model: model, infrastructure, and application layers. He distinguishes safety (bias, hallucination) from security (unauthorized access, data leaks).10:00 - Focus on the application layer, especially in finance, healthcare, and legal. Naman shares how ChatGPT leaked a Windows activation key and stresses data minimization and security-by-design.15:00 - They discuss red teaming, how Repel AI simulates attacks, and Anthropic's HackerOne challenge. Naman shares how adversarial testing strengthens LLM guardrails.20:00 - Conversation shifts to AI agents and autonomy. Naman explains indirect prompt injection via email or calendar, leading to real exploits like reverse shells—all triggered by summarizing an email.25:00 - Stewart compares the Internet to a castle without doors. Naman explains the cat-and-mouse game of security—attackers need one flaw; defenders must lock every door. LLM insecurity lowers the barrier for attackers.30:00 - They explore input/output filtering, role-based access control, and clean fine-tuning. Naman admits most guardrails can be broken and only block low-hanging fruit.35:00 - They cover denial-of-wallet attacks—LLMs exploited to run up massive token costs. Naman critiques DeepSeek's weak alignment and state bias, noting training data risks.40:00 - Naman breaks down India's AI scene: Bangalore as a hub, US-India GTM, and the debate between sovereignty vs. pragmatism. He leans toward India building foundational models.45:00 - Closing thoughts on India's AI future. Naman mentions Sarvam AI, Krutrim, and Paris Chopra's Loss Funk. He urges devs to red team before shipping—"close the doors before enemies walk in."Key InsightsAI security requires a layered approach. Naman emphasizes that GenAI applications have vulnerabilities across three primary layers: the model layer, infrastructure layer, and application layer. It's not enough to patch up just one—true security-by-design means thinking holistically about how these layers interact and where they can be exploited.Prompt injection is more dangerous than it sounds. Direct prompt injection is already risky, but indirect prompt injection—where an attacker hides malicious instructions in content that the model will process later, like an email or webpage—poses an even more insidious threat. Naman compares it to smuggling weapons past the castle gates by hiding them in the food.Red teaming should be continuous, not a one-off. One of the critical mistakes teams make is treating red teaming like a compliance checkbox. Naman argues that red teaming should be embedded into the development lifecycle, constantly testing edge cases and probing for failure modes, especially as models evolve or interact with new data sources.LLMs can unintentionally leak sensitive data. In one real-world case, a language model fine-tuned on internal documentation ended up leaking a Windows activation key when asked a completely unrelated question. This illustrates how even seemingly benign outputs can compromise system integrity when training data isn't properly scoped or sanitized.Denial-of-wallet is an emerging threat vector. Unlike traditional denial-of-service attacks, LLMs are vulnerable to economic attacks where a bad actor can force the system to perform expensive computations, draining API credits or infrastructure budgets. This kind of vulnerability is particularly dangerous in scalable GenAI deployments with limited cost monitoring.Agents amplify security risks. While autonomous agents offer exciting capabilities, they also open the door to complex, compounded vulnerabilities. When agents start reading web content or calling tools on their own, indirect prompt injection can escalate into real-world consequences—like issuing financial transactions or triggering scripts—without human review.The Indian AI ecosystem needs to balance speed with sovereignty. Naman reflects on the Indian and global context, warning against simply importing models and infrastructure from abroad without understanding the security implications. There's a need for sovereign control over critical layers of AI systems—not just for innovation's sake, but for national resilience in an increasingly AI-mediated world.
What if we let ChatGPT write a book using Jim's past work? Would it align with the principles of Men in the Arena? In this week's engaging 10-minute episode, Jim dives into ChatGPT's take on what "it" thinks he would write his next book, “Guardrails”, about. Tune in to see how AI interprets Jim's ideas and the core principles of his work. Discover the books that inspired this unique AI version of “Guardrails”. This episode is sponsored by MTNTOUGH Fitness Lab, a Christian-owned fitness app. Get 6 weeks free with the code ARENA30! Want access to an ad-free, early-release version of the podcast? Get it with Arena Access on Patreon. Have questions you wish you could ask Jim about life, marriage, men's ministry, or manhood? Join his monthly live Zoom Q&A by joining The Locker Room on Patreon.
The security automation landscape is undergoing a revolutionary transformation as AI reasoning capabilities replace traditional rule-based playbooks. In this episode of Detection at Scale, Oliver Friedrichs, Founder & CEO of Pangea, helps Jack unpack how this shift democratizes advanced threat detection beyond Fortune 500 companies while simultaneously introducing an alarming new attack surface. Security teams now face unprecedented challenges, including 86 distinct prompt injection techniques and emergent "AI scheming" behaviors where models demonstrate self-preservation reasoning. Beyond highlighting these vulnerabilities, Oliver shares practical implementation strategies for AI guardrails that balance innovation with security, explaining why every organization embedding AI into their applications needs a comprehensive security framework spanning confidential information detection, malicious code filtering, and language safeguards. Topics discussed: The critical "read versus write" framework for security automation adoption: organizations consistently authorized full automation for investigative processes but required human oversight for remediation actions that changed system states. Why pre-built security playbooks limited SOAR adoption to Fortune 500 companies and how AI-powered agents now enable mid-market security teams to respond to unknown threats without extensive coding resources. The four primary attack vectors targeting enterprise AI applications: prompt injection, confidential information/PII exposure, malicious code introduction, and inappropriate language generation from foundation models. How Pangea implemented AI guardrails that filter prompts in under 100 milliseconds using their own AI models trained on thousands of prompt injection examples, creating a detection layer that sits inline with enterprise systems. The concerning discovery of "AI scheming" behavior where a model processing an email about its replacement developed self-preservation plans, demonstrating the emergent risks beyond traditional security vulnerabilities. Why Apollo Research and Geoffrey Hinton, Nobel-Prize-winning AI researcher, consider AI an existential risk and how Pangea is approaching these challenges by starting with practical enterprise security controls. Check out Pangea.com
This week's episode of What's at Stake delves into the creation and consumption of media amid the rapid advancement of AI. Sally Shin, venture partner at Comcast Ventures, joins Penta hosts Ylan Mui and Andrea Christianson, to discuss the impact of this convergence on journalism and our daily lives.Their conversation covered:Guardrails to protect intellectual property and the accuracy of AI-generated contentPartnerships between publishers and AI companiesOpportunities to leverage AI within news organizationsThe revolution that voice technologies and no-code tools could bring to content creation
Ever feel like social media is making you lose your mind? Or even your soul? In this episode, Ruth shares her own personal guardrails for navigating social media and honoring Christ. Whether you're a content creator or someone sharing photos with friends on a private account, this episode is sure to encourage and give you food for thought.Scripture ReferencedPhilippians 2:3-4John 3:30Resources MentionedSocial Sanity in an Insta WorldHear more from Ruth and GraceLacedFind Ruth Chou Simons: Instagram | WebsiteFind GraceLaced: Instagram | Facebook | Website
Ever feel like social media is making you lose your mind? Or even your soul? In this episode, Ruth shares her own personal guardrails for navigating social media and honoring Christ. Whether you're a content creator or someone sharing photos with friends on a private account, this episode is sure to encourage and give you food for thought.Scripture ReferencedPhilippians 2:3-4John 3:30Resources MentionedSocial Sanity in an Insta WorldHear more from Ruth and GraceLacedFind Ruth Chou Simons: Instagram | WebsiteFind GraceLaced: Instagram | Facebook | Website
Will Warren Buffett's Predictions come true? We'll find out as today, the discussion centers around frustrations with the U.S. healthcare system, how longevity and health tie into financial planning and financial planning complexities with all the current economic unpredictability. The U.S. government has also officially designated confiscated Bitcoin as a strategic reserves and we're also still in the midst of a national debt crisis. We also talk government inefficiencies, policy changes, and interest rates. We discuss... Health insurance is frustrating due to high premiums and out-of-pocket costs before coverage kicks in. The system feels broken, requiring significant payments just for the right to pay more before benefits apply. Healthcare plans often don't cover preventive care, like vitamins or quarterly blood tests, which could reduce long-term costs. A comparison to homeowners insurance highlights the absurdity of paying for minor expenses while also paying for coverage. One speaker's insurance costs dropped dramatically when switching from an exchange plan to a corporate-sponsored plan. Life insurance companies conduct more thorough health tests than standard healthcare providers, which seems counterintuitive. Basic, cost-effective tests like fasting glucose are often omitted due to insurance cost-cutting measures. Health metrics are based on shifting averages rather than optimal health standards, normalizing unhealthy ranges. Society adjusts standards to accommodate unhealthy lifestyles rather than incentivizing better health. A personal “year of health” initiative focuses on longevity rather than growth, emphasizing balance, flexibility, and endurance. Longevity experts suggest lifestyle changes that promote long-term well-being, rather than just immediate fitness gains. The healthcare system prioritizes treatment over prevention, even when prevention could save costs in the long run. Financial planning must evolve to account for longer life expectancies, requiring strategies to ensure money lasts. Advances in longevity science could fundamentally change the healthcare system and financial planning. Future health innovations may extend life expectancy, raising questions about economic and social impacts. Bill Perkins' book Die With Zero promotes the idea of optimizing life experiences rather than leaving wealth behind. Planning to die with nothing is difficult due to unpredictable lifespan and financial variables. Financial planning must account for changing tax rates, inflation, market crashes, and policy shifts. Predictions in finance, like oil prices, are often inaccurate due to uncontrollable external factors. Financial plans become obsolete quickly and require constant updates. Guardrails in financial planning help maintain spending levels within a safe range. The U.S. has officially designated confiscated Bitcoin as a strategic reserve. The government is not selling or acquiring more Bitcoin but is holding existing assets. Strategic reserves, including oil, have historically been mismanaged for political purposes. Concerns exist that a Bitcoin reserve could be manipulated for political gain. The U.S. dollar's status as the world's reserve currency could be impacted by legitimizing Bitcoin. The Mar-a-Lago Accords propose restructuring U.S. debt by issuing long-term, zero-interest bonds to allies. The U.S. debt is growing at an unsustainable rate, adding a trillion dollars every 90 days. Innovative financial solutions are needed to address mounting national debt. The idea of eliminating daylight savings time is seen as a common-sense policy change. A previous initiative allowed the public to propose policy ideas to the government. The cost of producing pennies has exceeded their face value, raising questions about their necessity. Past shifts from silver to cheaper metals in coinage reflect economic adjustments over time. Lowering interest rates could help mitigate debt burdens more than it would impact the housing market. The U.S. missed opportunities to issue long-term, low-interest debt when rates were near zero. International stocks are outperforming U.S. stocks year-to-date, with emerging market Europe leading at 16.9% gains. The U.S. market is down 2%, marking a rare period of underperformance compared to global markets. Technology stocks are underperforming, with the Nasdaq in correction territory, down over 10%. Healthcare stocks are among the best performers, reflecting a rotation into defensive sectors. Investors are showing a flight to quality, favoring large-cap, dividend-paying companies. Market rotations between value and growth stocks continue as economic concerns persist. Smaller-cap U.S. stocks remain weak, continuing their underperformance. The DAX has quietly posted strong gains of around 10-12% this year, contrasting with the U.S. market's struggles. Despite current declines, the overall market is still in a relatively stable range, with volatility expected but not severe downturns. Experts anticipate a flat market year with moderate fluctuations rather than extreme moves up or down. Today's Panelists: Kirk Chisholm | Innovative Wealth Phil Weiss | Apprise Wealth Management Follow on Facebook: https://www.facebook.com/moneytreepodcast Follow LinkedIn: https://www.linkedin.com/showcase/money-tree-investing-podcast Follow on Twitter/X: https://x.com/MTIPodcast
Air Date 2/18/2025 Democracies slide into dictatorship in two ways, first slowly and then all of a sudden. We have been sliding in this direction for at least as long as I have been paying attention to politics and we're finally at the moment where that slow slide shifts into full speed. Be part of the show! Leave us a message or text at 202-999-3991 or email Jay@BestOfTheLeft.com Full Show Notes | Transcript BestOfTheLeft.com/Support (Membership 20% off for the Holiday! Get Bonus Shows + No Ads!) Send the Gift of Membership! (Or on Patreon) Use our links to shop Bookshop.org and Libro.fm for a non-evil book and audiobook purchasing experience! Join our Discord community! KEY POINTS 1: Is America Broken - The Gray Area - Air Date 2-10-25 2: Musk's 'DOGE' is spiraling U.S. into a constitutional crisis - The ReidOut - Air Date 2-7-25 3: Trump's latest target the Consumer Financial Protection Bureau - The NPR Politics Podcast - Air Date 2-10-25 4: Trumps American Takeover - Amicus With Dahlia Lithwick - Air Date 2-1-25 5: Musk's Coup and Trump's Christian Zionist Gaza Takeover - Straight White American Jesus - Air Date 2-7-25 6: Media Continues Painting Musk's Far Right Coup as Good Faith _Cost-Cutting Effort - Citations Needed - Air Date 2-5-25 7: Why Are Dems Surprised - The Intercept Briefing - Air Date 2-7-25 (55:58) NOTE FROM THE EDITOR On the long slide to dictatorship Clip: O'Connor Decries Republican Attacks on Courts - NPR DEEPER DIVES (1:03:06) SECTION A: GOVERNMENT AGENCIES (1:29:34) SECTION B: CONSTITUTIONAL CRISIS (2:00:04) SECTION C: THE PLAYBOOK (2:22:46) SECTION D: WHAT TO DO SHOW IMAGE Composite image of the US Capitol building, surrounded by symbols of justice, treasury, international aid, and education, with a large brick smashing into the center with the acronym “MAGA” on the end. Credit: Composite images from Pixabay | License: Pixabay
"Reform should not be revenge." - Professor Jonathan Turley The Trump administration has made it abundantly clear that they will make significant reforms to the federal government. Trey and George Washington University Law Professor and Fox News Contributor, Jonathan Turley discuss the legal parameters of the administrations plans. Professor Turley also shares the legal advice he would offer incoming presidential administrations. Learn more about your ad choices. Visit podcastchoices.com/adchoices