Guest: Diana Kelley, CSO at Protect AI

Topics:
Can you explain the concept of "MLSecOps" as an analogy with DevSecOps, with 'Dev' replaced by 'ML'? This has nothing to do with SecOps, right?
What are the most critical steps a CISO should prioritize when implementing MLSecOps within their organization? What gets better when you do it?
How do we adapt traditional security testing, like vulnerability scanning, SAST, and DAST, to effectively assess the security of machine learning models? Can we?
In the context of AI supply chain security, what is the essential role of third-party assessments, particularly regarding data provenance?
How can organizations balance the need for security logging in AI systems with the imperative to protect privacy and sensitive data? Do we need to decouple security from safety or privacy?
What are the primary security risks associated with overprivileged AI agents, and how can organizations mitigate these risks?
Top differences between LLM/chatbot AI security vs AI agent security?

Resources:
"Airline held liable for its chatbot giving passenger bad advice - what this means for travellers"
"ChatGPT Spit Out Sensitive Data When Told to Repeat 'Poem' Forever"
Secure by Design for AI by Protect AI
"Securing AI Supply Chain: Like Software, Only Not"
OWASP Top 10 for Large Language Model Applications
OWASP Top 10 for AI Agents (draft)
MITRE ATLAS
"Demystifying AI Security: New Paper on Real-World SAIF Applications" (and paper)
LinkedIn Course: Security Risks in AI and ML: Categorizing Attacks and Failure Modes
ABOUT JON HYMAN
Jon Hyman is the co-founder and chief technology officer of Braze, the customer engagement platform that delivers messaging experiences across push, email, in-app, and more. He leads the charge for building the platform's technical systems and infrastructure as well as overseeing the company's technical operations and engineering team. Prior to Braze, Jon served as lead engineer for the Core Technology group at Bridgewater Associates, the world's largest hedge fund. There, he managed a team that maintained 80+ software assets and was responsible for the security and stability of critical trading systems. Jon met co-founder Bill Magnuson during his time at Bridgewater, and together they won the 2011 TechCrunch Disrupt Hackathon. Jon is a recipient of the SmartCEO Executive Management Award in the CIO/CTO category for New York. Jon holds a B.A. in Computer Science from Harvard University.

ABOUT BRAZE
Braze is the leading customer engagement platform that empowers brands to Be Absolutely Engaging.™ Braze allows any marketer to collect and take action on any amount of data from any source, so they can creatively engage with customers in real time, across channels from one platform. From cross-channel messaging and journey orchestration to AI-powered experimentation and optimization, Braze enables companies to build and maintain absolutely engaging relationships with their customers that foster growth and loyalty. The company has been recognized as a 2024 U.S. News & World Report Best Company to Work For, a 2024 Best Small & Medium Workplace in Europe by Great Place to Work®, and a 2024 Fortune Best Workplace for Women™ by Great Place to Work®, and was named a Leader by Gartner® in the 2024 Magic Quadrant™ for Multichannel Marketing Hubs and a Strong Performer in The Forrester Wave™: Email Marketing Service Providers, Q3 2024. Braze is headquartered in New York with 15 offices across North America, Europe, and APAC. Learn more at braze.com.

SHOW NOTES:
What Jon learned from being the only person on call for his company's first four years (2:56)
Knowing when it's time to get help managing your servers, ops, scaling, etc. (5:42)
Establishing areas of product ownership & other scaling lessons from the early days (9:25)
Frameworks for conversations on splitting products across teams (12:00)
The challenges, complexities & strategies behind assigning ownership in the early days (14:40)
Founding Braze (18:01)
Why Braze? The story & insights behind the original vision for Braze (20:08)
Identifying Braze's product-market fit (22:34)
Early-stage PMF challenges faced by Jon & his co-founders (25:40)
Pivoting to focus on enterprise customers (27:48)
"Let's integrate the SDK right now" - founder-led sales ideas to validate your product (29:22)
Behind the decision to hire a chief revenue officer for the first time (34:02)
The evolution of enterprise & its impact on Braze's product offering (36:42)
Growing out of your early-stage failure modes (39:00)
Why it's important to make personnel decisions quickly (41:22)
Setting & maintaining a vision pre-IPO vs. post-IPO (44:21)
Jon's next leadership evolution & growth areas he is focusing on (49:50)
Rapid fire questions (52:53)

LINKS AND RESOURCES
When We Cease to Understand the World - Benjamín Labatut's fictional examination of the lives of real-life scientists and thinkers whose discoveries resulted in moral consequences beyond their imagining.
At a breakneck pace and with a wealth of disturbing detail, Labatut uses the imaginative resources of fiction to tell the stories of Fritz Haber, Alexander Grothendieck, Werner Heisenberg, and Erwin Schrödinger, the scientists and mathematicians who expanded our notions of the possible.

This episode wouldn't have been possible without the help of our incredible production team:
Patrick Gallagher - Producer & Co-Host
Jerry Li - Co-Host
Noah Olberding - Associate Producer, Audio & Video Editor - https://www.linkedin.com/in/noah-olberding/
Dan Overheim - Audio Engineer; Dan's also an avid 3D printer - https://www.bnd3d.com/
Ellie Coggins Angus - Copywriter; check out her other work at https://elliecoggins.com/about/
We are joined by Francois Chollet and Mike Knoop to launch the new version of the ARC Prize! In version 2, the challenges have been calibrated with humans such that at least 2 humans could solve each task in a reasonable amount of time, but also adversarially selected so that frontier reasoning models can't solve them. The best LLMs today get negligible performance on this challenge. https://arcprize.org/

SPONSOR MESSAGES:
***
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focused on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/
***

TRANSCRIPT:
https://www.dropbox.com/scl/fi/0v9o8xcpppdwnkntj59oi/ARCv2.pdf?rlkey=luqb6f141976vra6zdtptv5uj&dl=0

TOC:
1. ARC v2 Core Design & Objectives
[00:00:00] 1.1 ARC v2 Launch and Benchmark Architecture
[00:03:16] 1.2 Test-Time Optimization and AGI Assessment
[00:06:24] 1.3 Human-AI Capability Analysis
[00:13:02] 1.4 OpenAI o3 Initial Performance Results
2. ARC Technical Evolution
[00:17:20] 2.1 ARC-v1 to ARC-v2 Design Improvements
[00:21:12] 2.2 Human Validation Methodology
[00:26:05] 2.3 Task Design and Gaming Prevention
[00:29:11] 2.4 Intelligence Measurement Framework
3. O3 Performance & Future Challenges
[00:38:50] 3.1 O3 Comprehensive Performance Analysis
[00:43:40] 3.2 System Limitations and Failure Modes
[00:49:30] 3.3 Program Synthesis Applications
[00:53:00] 3.4 Future Development Roadmap

REFS:
[00:00:15] On the Measure of Intelligence, François Chollet
https://arxiv.org/abs/1911.01547
[00:06:45] ARC Prize Foundation, François Chollet, Mike Knoop
https://arcprize.org/
[00:12:50] OpenAI o3 model performance on ARC v1, ARC Prize Team
https://arcprize.org/blog/oai-o3-pub-breakthrough
[00:18:30] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, Jason Wei et al.
https://arxiv.org/abs/2201.11903
[00:21:45] ARC-v2 benchmark tasks, Mike Knoop
https://arcprize.org/blog/introducing-arc-agi-public-leaderboard
[00:26:05] ARC Prize 2024: Technical Report, Francois Chollet et al.
https://arxiv.org/html/2412.04604v2
[00:32:45] ARC Prize 2024 Technical Report, Francois Chollet, Mike Knoop, Gregory Kamradt
https://arxiv.org/abs/2412.04604
[00:48:55] The Bitter Lesson, Rich Sutton
http://www.incompleteideas.net/IncIdeas/BitterLesson.html
[00:53:30] Decoding strategies in neural text generation, Sina Zarrieß
https://www.mdpi.com/2078-2489/12/9/355/pdf
In this Tactics for Tech Leadership podcast episode, Andy and Mon-Chaio explore SWIFT (Structured What If Technique). While SWIFT is traditionally seen as a technical tool for failure analysis, the hosts consider its potential applications in leadership and organizational contexts. Listeners will learn how SWIFT can help anticipate system failures before they occur, from technical systems like Redis caches to socio-technical systems like performance reviews and hiring processes. By the end, you'll understand how to adapt this structured method for diagnosing issues and improving both technical and organizational systems.

Transcript: https://thettlpodcast.com/2025/03/18/s3e10-swiftly-understanding-failure-modes/

References:
SWIFT - https://www.asems.mod.uk/toolkit/swift
# Full transcription available at [heartsofgoldpodcast.com](http://heartsofgoldpodcast.com/) ## Episode Summary Makayla Hoefs shares the inspiring story behind her Girl Scout Gold Award project, *"Coding for Cookies."* This innovative initiative bridges Girl Scouts and robotics, offering young girls hands-on STEM experiences through engaging events. Makayla discusses how her project evolved, collaborating with the Minnesota and Wisconsin Lakes & Pines Council, and making the program sustainable for future generations. Listen to hear about the impact she's made, the challenges she faced, and how she encourages girls to explore STEM fields. ## More from Makayla My name is Makayla Hoefs from Becker, Minnesota. I am a senior at Becker High School, and I plan on going to a four-year college next fall to get my master's degree in electrical engineering. I have been a Girl Scout for about ten years. Throughout my time in Girl Scouts, I have earned my Bronze and Silver Awards and have completed many service projects. Last year, I was a Girl Scout delegate for my service unit. I am also involved in Student Council, National Honors Society, archery, and robotics. This is my fourth year on the Becker Robotics team, *C.I.S. 4607.* I am part of the electrical department and facilitate *Failure Modes and Effects Analysis.* My time in robotics has inspired me to become an engineer and a woman in STEM. ## What You'll Learn in This Episode - How *"Coding for Cookies"* introduced over 100 Girl Scouts to robotics - The collaboration between Makayla's robotics team and the Girl Scout council - Challenges in creating sustainable robotics kits - Makayla's advice for Gold Award candidates and key lessons from the process ## Follow Makayla's Journey Check out the resources from her project at [Coding for Cookies](https://sites.google.com/frc4607cis.com/cis4607/coding-for-cookies) ## Connect with Us Follow *Hearts of Gold* for more inspiring Gold Award stories. Don't forget to follow or subscribe and leave a review!
Ready to transform your team meetings from chaotic to cohesive? Discover a method used by Six Sigma practitioners, continuous improvement teams, and design sprints to make your meetings more effective and efficient. We'll guide you through the phases of discovery, examination, and prioritization to streamline idea generation and ensure that every team member's input is valued. You'll learn techniques for individual brainstorming, anonymous idea sharing, and collective refinement, making your meetings not just productive but a crucial part of your design process.

But that's not all. We dive into the utilization of quality tools for superior team decision-making during design Failure Modes and Effects Analysis (FMEA). Explore how to categorize and evaluate potential failures, assign severity ratings, and use tools like tree diagrams and fishbone diagrams to organize complex discussions. By focusing on collaboration and consensus, you'll be setting your team up for effective failure analysis. Join us to elevate your team meetings and turn them into a powerhouse of creativity and efficiency.

Visit the podcast blog for this episode.

Other episodes you might like:
Brainstorming within Design Sprints
Ways to Gather Ideas with a Team
Product Design with Brainstorming, with Emily Haidemenos (A Chat with Cross Functional Experts)

Give us a Rating & Review

**NEW COURSE**
FMEA in Practice: from Plan to Risk-Based Decision Making is enrolling students now. Visit the course page for more information and to sign up today!

**FREE RESOURCES**
Quality during Design engineering and new product development is actionable. It's also a mindset. Subscribe for consistency, inspiration, and ideas at www.qualityduringdesign.com.

About me
Dianna Deeney helps product designers work with their cross-functional team to reduce concept design time and increase product success, using quality and reliability methods. She consults with businesses to incorporate quality within their product development processes. She also coaches individuals in using Quality during Design for their projects.

She founded Quality during Design through her company Deeney Enterprises, LLC. Her vision is a world of products that are easy to use, dependable, and safe – possible by using Quality during Design engineering and product development.
What is an FMEA? When should you use it? Why is it an important step in helping maintenance teams move from a break-fix maintenance state to one that is more proactive? In this episode of Great Question: A Manufacturing Podcast, Plant Services editor in chief Thomas Wilk spoke with a specialist in the reliability field, Brian Hronchek, to start answering these questions and more about failure modes and effects analyses. Brian draws from his former experience as a reliability engineer for U.S. Steel, a maintenance manager for Exxon Mobil, and a 16-year veteran of the Marine Corps, in addition to his current work as a principal trainer and consultant at Eruditio.
A couple of years ago, my agency asked me to write some guidance on sediment modeling, so I reached out to the morphological modelers I knew, and particularly the model developers who write the morphological model code other people use. I asked them about the common failure modes they have seen and the best practices they teach, and realized we had all essentially spent a decade or two learning the same principles. So when the US federal agencies held their periodic Federal interagency sediment conference (SEDHYD) last year, I invited three of the model developers I have learned from over the years (Alex Sanchez, Gary Brown, and Blair Greimann) to participate in a panel discussion on their lessons learned.

And the panel was much more popular than we expected. It turns out there's an appetite for conversations like this. So I turned on the mics, we did a little editing, and we're running it here.

Here are brief bios for our guests:

Gary Brown did his graduate work at the University of Florida and works at the Coastal and Hydraulics Lab, which is part of ERDC, the Corps' major R&D center in Vicksburg, Mississippi. He's been developing sediment models for 29 years, including SEDLIB, a set of sediment algorithms that are called by ERDC's hydraulic model, ADH, or Adaptive Hydraulics.

Alex Sanchez sits in the office next to me. For the last 9 years, he has worked here at HEC and spearheaded the work to add 2D sediment to HEC-RAS, which includes a novel formulation for the sub-grid approach. But Alex actually started developing sediment models at ERDC's Coastal and Hydraulics Lab, where he worked for 8 years on the Coastal Modeling System, which is still used for Corps of Engineers coastal applications.

Blair Greimann got his PhD from the University of Iowa and worked at the Bureau of Reclamation's Technical Service Center in Denver for more than 23 years, before his recent move to Stantec. While working at the Bureau, Blair led the development of SRH-1D and applied this model to a range of projects including the Matilija and Klamath Dam removals.

Finally, we were lucky enough to have Doug Shields moderating this session, so you will hear from him in the breaks between the four sub-topics. Dr. Shields worked for more than 20 years at the Sedimentation Lab of the Agricultural Research Service in Oxford, MS, and 10 years at ERDC, and has taught at both Tennessee State and Ole Miss; we were fortunate to draw Doug as a moderator.
(Note: I did not mic Doug, but wanted to keep his thoughtful and winsome transitions, so his sound quality is not at the same level as the rest of the recording.)

After Doug and I introduced the session you will hear from Blair Greimann, Alex Sanchez, me again, and Gary Brown, in that order.

The conference paper associated with this session is here:
https://www.sedhyd.org/2023Program/1/157.pdf

Thank you to the SEDHYD organizers (including but not limited to ) for hosting this conversation.

This series was funded by the Regional Sediment Management (RSM) program.

Stanford Gibson (HEC Sediment Specialist) hosts. Mike Loretto edited the episode and wrote and performed the music.

Video shorts and other bonus content are available at the podcast website (which was temporarily down but is back up now):
https://www.hec.usace.army.mil/confluence/rasdocs/rastraining/latest/the-rsm-river-mechanics-podcast
...but most of the supplementary videos are available on the HEC Sediment YouTube channel:
https://www.youtube.com/user/stanfordgibson

If you have guest recommendations or feedback you can reach out to me on LinkedIn or ResearchGate or fill out this recommendation and feedback form: https://forms.gle/wWJLVSEYe7S8Cd248
Quality during Design isn't just an add-on; it's a fundamental aspect that drives innovation, efficiency, and customer satisfaction!

Welcome back from our brief hiatus!

One of the highlights of this episode is the introduction of an upcoming FMEA course on Udemy, with The Manufacturing Academy. FMEA, or Failure Modes and Effects Analysis, is a systematic method for evaluating our offerings to identify where and how they might fail and to assess the relative impact of different failures. Dianna's approach to FMEA is not only about adhering to traditional methods but also about addressing the criticisms and limitations often associated with them. This course promises to be a fresh take on risk-based decision-making. You'll hear more about it when it is released!

Moreover, the episode touches upon the 'Quality During Design Fast Track' program, which is currently in the works and open to listener feedback. The initiative aims to harness quality tools in novel ways, even before a design concept is fully fleshed out. It emphasizes the importance of early input from cross-functional teams to gather requirements and user needs, thereby making the design process more effective and efficient. This program is system-based and affects how products and services are developed, leading to more thoughtful, user-centric designs.

Listeners are invited to take a brief survey to help Dianna with aspects of these upcoming courses and more: www.qualityduringdesign.com/survey

Support the show

**FREE RESOURCES**
Quality during Design engineering and new product development is actionable. It's also a mindset. Subscribe for consistency, inspiration, and ideas at www.qualityduringdesign.com.

About me
Dianna Deeney helps product designers work with their cross-functional team to reduce concept design time and increase product success, using quality and reliability methods. She consults with businesses to incorporate quality within their product development processes. She also coaches individuals in using Quality during Design for their projects.

She founded Quality during Design through her company Deeney Enterprises, LLC. Her vision is a world of products that are easy to use, dependable, and safe – possible by using Quality during Design engineering and product development.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What rationality failure modes are there?, published by Ulisse Mini on January 19, 2024 on LessWrong. How do people fail to improve their rationality? How do they accidentally harm themselves in the process? I'm thinking of writing a post "How not to improve your rationality" or "A nuanced guide to reading the sequences" that preempts common mistakes, and I'd appreciate hearing people's experiences. Some examples: It took me an absurdly long time (like, 1-2yr in the rat community) before I realized you don't correct for cognitive biases, you have to "be introspectively aware of the bias occurring, and remain unmoved by it" (as Eliezer put it in a podcast). More generally, people can read about a bias and resolve to "do better" without concretely deciding what to do differently. This typically makes things worse, e.g. I have a friend who tried really hard to avoid the typical mind fallacy, and accidentally turned off her empathy in the process. The implicit frame rationalists push is logical and legible, and can lead to people distrusting their emotions. And I think it's really important to listen to ick feelings when changing your thought processes, as there can be non-obvious effects. E.g. my friend started thinking about integrity in terms of FDT, and this disconnected it from their motivational circuits, and they made some pretty big mistakes because of it. If they'd listened to their feeling of "this is a weird way to think" this wouldn't have happened. (I think many people misinterpret sequence posts and decide to change their thinking in bad ways, and listening to your feelings can be a nice emergency check.) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
The Builder Circle by Pratik: The Hardware Startup Success Podcast
This episode features a conversation between host Sera Evcimen and Alan Cohen, author of 'Prototype to Product: A Practical Guide for Getting to Market'. They delve into the complexities, misconceptions, and opportunities in medical device development. The discussion includes the importance of generating good requirements, handling regulatory requirements, and conducting risk analysis. They also highlight the value of hiring experts conversant in both engineering and regulatory matters to guide a startup through navigating the FDA and EU approval processes.

You can find his consultancy at www.alancohen.com.

Rundown of episode for easy navigation to topics of interest:
00:00 Introduction and Host Background
00:58 Guest Introduction: Alan Cohen
01:32 Alan Cohen's Background and Experience
02:37 Challenges in Medical Device Development
05:40 Identifying a Hardware Medical Application Need
07:48 Understanding Reimbursement and Business Models
09:32 Regulatory Bodies and Compliance
12:50 Risk Assessment in Medical Device Development
21:22 Fault Tree Analysis vs Failure Modes and Effects Analysis
27:48 Balancing Documentation and Process in Medical Device Development
33:01 Understanding Development Process Standards
33:19 Importance of Following Your Process
33:42 Inspection and Audit Procedures
34:16 Balancing FDA Requirements and Lean Operations
35:13 Testing and Validation in Medical Device Development
36:43 Navigating FDA and Institutional Review Boards
40:02 Challenges in Medical Device Innovation and Funding
47:16 Podcast Break
56:02 Navigating Supply Chain and Manufacturing Challenges
01:04:09 Understanding FDA and EU Regulatory Differences
01:07:00 Final Thoughts and Advice for Medical Device Startups

Ending with TLDL!

Music by: Tom Stoke (in addition to royalty-free music provided by Descript)

DISCLAIMER
The content provided in this podcast is for informational purposes only and should not be construed as professional advice. Pratik Development, LLC., the hosts, guests, and producers of this podcast are not engaged in rendering legal, financial, or other professional services. The hosts and guests disclaim any liability for any errors or omissions in the content or for any actions taken based on the information provided. By accessing and listening to this podcast, you acknowledge and agree that the hosts, guests, and producers of the podcast shall not be held liable for any direct, indirect, incidental, consequential, or any other damages arising out of or in connection with the use of the information presented in the podcast. Furthermore, the hosts, guests, and producers of this podcast make no representations or warranties of any kind, express or implied, about the completeness, accuracy, reliability, suitability, or availability of the information, products, services, or related graphics contained in the podcast for any purpose. Listeners are advised to independently verify any information presented and consult with appropriate professionals before making any decisions or taking any actions based on the content of this podcast. By continuing to listen to this podcast, you indicate your understanding and acceptance of this disclaimer.
In this special episode of Quality during Design Redux, we're pulling episodes from our archive about test results analysis. In our Season 1 Episode 93, titled "The Fundamental Thing to Know from Statistics for Design Engineering", we talked about hypothesis testing: how it is used for lots of data analysis techniques. The next 4 episodes of this QDD Redux take the next steps.

If we're not careful with failure modes, or ignore them, we can choose the wrong reliability model or statistical distribution. If our product performance is close to the required limits and/or we need a very accurate model, this could be a big problem.

We talk about the importance of failure modes and step through a tensile-test example to explore these other topics:
- competing failure modes
- suspensions
- independent vs. dependent failure modes
- reliability block diagrams

The podcast blog includes extra useful information/links.

Support the show

**FREE RESOURCES**
Quality during Design engineering and new product development is actionable. It's also a mindset. Subscribe for consistency, inspiration, and ideas at www.qualityduringdesign.com.

About me
Dianna Deeney helps engineers work with their cross-functional team to reduce concept design time and increase product success, using quality and reliability methods. She founded Quality during Design through her company Deeney Enterprises, LLC. Her vision is a world of products that are easy to use, dependable, and safe – possible by using Quality during Design engineering and product development.
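To make the competing-failure-modes idea in this entry concrete, here is a minimal sketch (not from the episode; the exponential lifetime model and the failure rates are illustrative assumptions). Independent competing modes behave like blocks in series in a reliability block diagram: the item survives only if every mode does, so the mode reliabilities multiply.

```python
import math

# Minimal sketch of competing failure modes as a series reliability
# block diagram. Exponential (constant-hazard) lifetimes and the rates
# below are illustrative assumptions, not the episode's data.

def mode_reliability(rate_per_hour: float, t_hours: float) -> float:
    """Survival probability of a single failure mode with constant hazard."""
    return math.exp(-rate_per_hour * t_hours)

def system_reliability(rates, t_hours: float) -> float:
    """Independent competing modes multiply, like blocks in series."""
    r = 1.0
    for rate in rates:
        r *= mode_reliability(rate, t_hours)
    return r

# Two hypothetical competing modes of a tensile specimen, e.g.
# gauge-section fracture vs. grip slippage:
print(system_reliability([1e-4, 3e-5], t_hours=1000))  # ~0.878
```

This also hints at why suspensions matter: when a unit fails by one mode, the observation is censored (a suspension) for every mode that did not occur, and ignoring that skews the fitted distribution.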
Assets Anonymous is a 12-step podcast series designed to help you get grounded in reliability basics and create a culture of continuous improvement with your team. This series will feature interviews with George Williams and Joe Anderson of ReliabilityX. ReliabilityX aims to bridge the gap between operations and maintenance through holistic reliability focused on plant performance. In this episode, George and Joe help you understand how your facility's critical assets fail.
Haley is a geotechnical engineer who recently completed her PhD at the University of Alberta. Amongst other things, Haley and I discuss her CDA award-winning paper, "A Failure Modes and Effects Analysis Framework for Assessing Geotechnical Risks of Tailings Dam Closure", which was published in the journal Minerals.
In this episode we explore what Condition Based Maintenance (CBM) is (aka On-Condition Maintenance). We'll talk about:
- What CBM is
- The biggest trap you can fall into when implementing CBM
- What governs how often you do a Condition Based Maintenance task

As asset managers, we know that most Failure Modes occur randomly, and that can seem a little intimidating or maybe even a little scary, but it doesn't have to be, because that's where Condition Based Maintenance can be very helpful. The whole point of Condition Based Maintenance is to detect a Potential Failure Condition and take action before failure occurs. The interval between the point where a potential failure becomes detectable and the point of functional failure is called the P-F Interval, and it is explained in this episode.

Free Reliability Centered Maintenance (RCM) Overview Course
https://RCMTrainingOnline.com/Overview

Let's get connected on LinkedIn!
https://www.linkedin.com/in/nancyreganrcm/
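As a rough numerical illustration of the interval logic this entry describes (a sketch only; the "at least two chances to detect" rule of thumb and the 90-day P-F interval are assumptions, not the episode's prescription):

```python
# Hedged sketch: sizing a condition-monitoring task interval from the
# P-F interval (the time between a detectable potential failure and
# functional failure). Assumes the common rule of thumb of giving the
# inspection at least two chances to detect; all numbers hypothetical.

def cbm_task_interval(pf_interval_days: float, detection_chances: int = 2) -> float:
    """Return an inspection interval giving `detection_chances`
    opportunities to catch the potential failure before it becomes
    a functional failure."""
    if detection_chances < 1:
        raise ValueError("need at least one chance to detect")
    return pf_interval_days / detection_chances

# Example: vibration analysis reveals bearing wear ~90 days before failure.
print(cbm_task_interval(90))      # 45.0 -> inspect every 45 days
print(cbm_task_interval(90, 3))   # 30.0 -> a more conservative schedule
```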
In this episode, we talk about what a Failure Mode is and why Failure Modes are so important to equipment Reliability. As responsible custodians, it's up to us to identify the plausible Failure Modes that could occur so that we can figure out if and how we should manage each one. If we don't, it can end up in disaster.

Free Reliability Centered Maintenance (RCM) Overview Course
https://RCMTrainingOnline.com/Overview

Let's get connected on LinkedIn!
https://www.linkedin.com/in/nancyreganrcm
We continue to explore mechanisms of how our cognitive system can be hijacked, leading to a breakdown and failure of natural intelligence. ======= Produced by Inqwire, a public benefit corporation on a mission to help create a world that makes sense. Inqwire is a technology platform designed to restore, enhance, and protect our natural ability to navigate towards what makes sense, alone and together. Learn more: https://www.inqwire.io/
We explore the core vulnerability of our cognitive system that allows for our natural intelligence to fail. ======= Produced by Inqwire, a public benefit corporation on a mission to help create a world that makes sense. Inqwire is a technology platform designed to restore, enhance, and protect our natural ability to navigate towards what makes sense, alone and together. Learn more: https://www.inqwire.io/
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ML Systems Will Have Weird Failure Modes, published by jsteinhardt on January 26, 2022 on LessWrong.

Previously, I've argued that future ML systems might exhibit unfamiliar, emergent capabilities, and that thought experiments provide one approach towards predicting these capabilities and their consequences. In this post I'll describe a particular thought experiment in detail. We'll see that taking thought experiments seriously often surfaces future risks that seem "weird" and alien from the point of view of current systems. I'll also describe how I tend to engage with these thought experiments: I usually start out intuitively skeptical, but when I reflect on emergent behavior I find that some (but not all) of the skepticism goes away. The remaining skepticism comes from ways that the thought experiment clashes with the ontology of neural networks, and I'll describe the approaches I usually take to address this and generate actionable takeaways.

Thought Experiment: Deceptive Alignment

Recall that the optimization anchor runs the thought experiment of assuming that an ML agent is a perfect optimizer (with respect to some "intrinsic" reward function R). I'm going to examine one implication of this assumption, in the context of an agent being trained based on some "extrinsic" reward function R* (which is provided by the system designer and not equal to R). Specifically, consider a training process where in step t, a model has parameters θ_t and generates an action a_t (its output on that training step, e.g. an attempted backflip assuming it is being trained to do backflips). The action a_t is then judged according to the extrinsic reward function R*, and the parameters are updated to some new value θ_{t+1} that is intended to increase a_{t+1}'s value under R*. At some point, the model is then deployed with final parameters θ_T, and continues to take actions. The following diagram illustrates this process:

Now, let's assume that the model θ_t is a perfect optimizer whose objective is to maximize the discounted value of an intrinsic reward R ≠ R*. That is, θ_t picks the action a_t satisfying a_t = argmax_a E[ Σ_{s=0}^∞ γ^(-s) R(a_{t+s}) | a_t = a ]. (I know that this is an unrealistic assumption. We'll examine the assumption in detail in the next section, but for now please grant it even if it requires suspending disbelief.)

What action a_t will θ_t pick? Let's define a_R = argmax_a R(a) and a* = argmax_a R*(a); that is, a_R maximizes the (instantaneous) intrinsic reward while a* maximizes the extrinsic reward. Assuming that γ is close to 1, the model wants a_R to be picked on as many future rounds as possible (including at deployment time). If θ_t plays myopically, this means it would pick a_t = a_R. However, this is not the optimal non-myopic action: if a_t = a_R, then θ_{t+1} will be updated towards producing outputs less like a_R and more like a*, so outputting a_R in the current round leads to fewer a_R outputs in future rounds. Instead, I think the optimal non-myopic action is approximately the following: during training time, output a* on every time step, so that the training process doesn't update θ (since it's already optimal) and hence θ_t = θ_{t+1} = ... = θ_T.
This guarantees that the deployed agent is θ_t, which is now free to output a_R on every time step without risk of being modified by the training process.[1] The myopic and non-myopic outcomes are illustrated in the figure below:

[Figure: Myopic and non-myopic agents with an intrinsic reward R. The myopic agent plays high intrinsic-reward actions during training, but is eventually modified to output the action a* with high extrinsic reward R*. The non-myopic agent instead plays a* during training so that it can pursue its intrinsic reward at deployment, by playing a_R at all post-deployment time steps.]

To put it another way: the model's behavior will look aligned with R* during the training process...
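Here is a toy numerical sketch of the two outcomes just described (not from the post; the scalar parameter, learning rate, and update rule are illustrative assumptions):

```python
# Toy sketch of the myopic vs. non-myopic outcomes in the thought
# experiment above. A single scalar "parameter" theta stands in for the
# model and directly encodes the action it emits; training nudges theta
# toward the extrinsic optimum a* whenever the emitted action misses it.
# The learning rate and update rule are illustrative assumptions.

A_R = 1.0      # action maximizing the intrinsic reward R
A_STAR = -1.0  # action maximizing the extrinsic reward R*

def train(myopic: bool, lr: float = 0.2, steps: int = 50) -> float:
    """Run the training loop and return the deployed parameters theta_T."""
    theta = A_R  # the agent's intrinsic target
    for _ in range(steps):
        # A myopic agent plays its current target; a non-myopic agent
        # "plays along" by emitting a*, making the update exactly zero.
        action = theta if myopic else A_STAR
        theta += lr * (A_STAR - action)  # gradient step toward R*
    return theta

print(round(train(myopic=True), 3))   # ~ -1.0: dragged to a*, intrinsic goal erased
print(round(train(myopic=False), 3))  # 1.0: untouched parameters deploy playing a_R
```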
What are Failure Modes and Effects Analysis (FMEA) and inductive safety analysis? With FMEA, you postulate a failure at the component level and examine what it induces at higher levels of the system. FMEA is a bottom-up approach: we start from a known failure mode and work out the effects that failure would cause. Hear more of Praveen Suvarna's explanation here. #FuSa #safetyanalysis #FMEA #functionalsafety #functionaltesting #iso26262 #autonomousvehicles #iso21434
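A minimal sketch of that bottom-up direction (the vehicle hierarchy and failure mode are hypothetical, not from Suvarna's talk): start from a component failure mode and walk upward through the system to enumerate its effects.

```python
# Hedged sketch of FMEA's bottom-up direction: induce a failure at the
# component level and trace it *up* the system hierarchy to find its
# effect at each level. The vehicle example is hypothetical.

PARENT = {
    "wheel speed sensor": "ABS module",
    "ABS module": "braking system",
    "braking system": "vehicle",
}

def effects_upward(component: str) -> list[str]:
    """Trace a component's failure upward through each system level."""
    chain = [component]
    while chain[-1] in PARENT:
        chain.append(PARENT[chain[-1]])
    return chain

# Inducing "wheel speed sensor: signal loss" and asking what it affects
# at each higher level:
print(" -> ".join(effects_upward("wheel speed sensor")))
# wheel speed sensor -> ABS module -> braking system -> vehicle
```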
In this webinar, John Bernet from Fluke Reliability will discuss best practices for applying root cause analysis and expected failure modes to motor-drive systems. You will learn the simple steps of total condition maintenance, how different inspection techniques from electrical to thermal can help identify different failure modes, and how vibration analysis in particular can find the most common mechanical faults on rotating machines. We will wrap up with a discussion on the obstacles teams may face when starting a reliability program and learn from those who have succeeded. Register for an upcoming webinar at: https://flukereliability.info/bpw-frr Learn more about Accelix at: https://flukereliability.info/accelix
For every action taken to maintain a piece of equipment, a Failure Mode—or cause of failure—is managed. That is why a Failure Modes and Effects Analysis (FMEA) is an essential part of physical asset management. When done properly, an FMEA helps organizations: 1) Define equipment goals, 2) Identify what could cause Reliability to suffer, and 3) Assign criticality. Join Nancy as she shares how to avoid common FMEA pitfalls, and how to use a properly executed FMEA to make effective maintenance decisions. Register for an upcoming webinar here: https://www.accelix.com/best-practice-webinars/ (flukereliability.info/bpw)
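As one concrete way to picture the "assign criticality" step above (a sketch assuming the classic Risk Priority Number convention, RPN = severity x occurrence x detection, each scored 1-10; the equipment entries are hypothetical, not from the webinar):

```python
# Hedged sketch of assigning criticality in an FMEA via Risk Priority
# Numbers. Scales and example failure modes are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1 (negligible) .. 10 (hazardous)
    occurrence: int  # 1 (rare)       .. 10 (frequent)
    detection: int   # 1 (certain)    .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("pump seal leak",        severity=7, occurrence=5, detection=3),
    FailureMode("motor bearing seizure", severity=8, occurrence=3, detection=6),
    FailureMode("coupling misalignment", severity=4, occurrence=6, detection=2),
]

# Highest criticality first, so maintenance effort targets the worst risks:
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.name:24s} RPN={m.rpn}")
```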
In this episode, Corey and I discuss the use of risk analyses for improving the design and operation of tailings and other facilities. We discuss the use of Failure Modes and Effects Analyses (FMEAs) and bowtie analyses.
An asset is a collection of failure modes, manage the failure modes and you manage the asset... This is a quote from our guest this week, Tacoma Zach! In this episode we discuss failure modes, operating context, and a whole lot more. Asset management is one of Tacoma's passions and we were lucky to corner him for an hour to discuss! Connect with our Guest Here: Tacoma Zach - https://www.linkedin.com/in/tacoma-zach-p-eng-0913514/ www.mentorapm.com www.uberlytics.com If your company sells products or services to engaged maintenance & reliability professionals, tell your marketing manager about Maintenance Disrupted. If you'd like to discuss advertising, please email us at maintenancedisrupted@gmail.com Check out our website at www.maintenancedisrupted.com and sign up for the weekly disruption newsletter with bonus content. If you like the show, please tell your colleagues about it and follow maintenance disrupted on LinkedIn and YouTube. Follow Maintenance Disrupted on LinkedIn https://www.linkedin.com/company/maintenancedisrupted Music: The Descent by Kevin MacLeod Link: https://incompetech.filmmusic.io/song/4490-the-descent License: http://creativecommons.org/licenses/by/4.0/
If we're not careful with failure modes, or ignore them, we can choose the wrong reliability model or statistical distribution. If our product performance is close to the required limits and/or we need a very accurate model, this could be a big problem.

We talk about the importance of failure modes and step through a tensile-test example to explore these other topics:
- competing failure modes
- suspensions
- independent vs. dependent failure modes
- reliability block diagrams

The podcast blog includes extra useful information/links.

Support the show
Nancy Regan has a degree in Aerospace Engineering and founded The Force, a company focusing on Reliability Centered Maintenance. In addition to being an engineer and entrepreneur, she also holds six patents, is an author, and is a keynote speaker.

Episode Notes
Music used in the podcast: Higher Up, Silverman Sound Studio

Acronyms, Definitions, and Fact Check
Reliability Centered Maintenance (RCM) - A lot of organizations don't get the Reliability they need from their equipment. That causes chronic downtime, increased costs, and lost production. Using Reliability Centered Maintenance (RCM), organizations figure out what proactive maintenance to do so they get what they need from their machines (https://RCMTrainingOnline.com).

RCM consists of seven steps:
1. Functions
2. Functional Failures
3. Failure Modes
4. Failure Effects
5. Failure Consequences
6. Proactive Maintenance and Intervals
7. Default Strategies

Steps one through four make up the Failure Modes and Effects Analysis (FMEA). Steps one through five make up the Failure Modes, Effects, and Criticality Analysis (FMECA). Step six embodies preventive maintenance and Condition Based Maintenance (CBM). (A small sketch of how these steps nest follows this entry.)

"The One Thing" by Gary Keller. The book discusses the benefits of prioritizing a single task, and it also provides examples of how to engage in those tasks with a singular focus.

Embry-Riddle Aeronautical University - a private university focused on aviation and aerospace programs with its main campuses in Daytona Beach, Florida. Women make up 27% of enrollment currently. (Wikipedia)

Naval Air Warfare Center Aircraft Division (NAWCAD) Lakehurst is the world leader in Aircraft Launch and Recovery Equipment (ALRE) and Naval Aviation Support Equipment (SE). It is part of the Naval Air Systems Command (NAVAIR) and is located on Joint Base McGuire-Dix-Lakehurst (JB MDL) in central New Jersey. As the Navy's lead engineering support activity for ALRE and SE, NAWCAD Lakehurst conducts programs of acquisition management, technology development, systems integration, engineering, rapid prototyping/manufacturing, developmental evaluation and verification, fleet engineering support, and integrated logistics support management. NAWCAD Lakehurst is responsible for maintaining fleet support and infusing modern technology across the entire spectrum of equipment needed to launch, land, and maintain aircraft from ships at sea and austere expeditionary airfields. (navair.navy.mil)

The USS Midway Museum is a historical naval aircraft carrier museum located in downtown San Diego, California at Navy Pier. The museum consists of the aircraft carrier Midway. The ship houses an extensive collection of aircraft, many of which were built in Southern California. (Wikipedia)
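A tiny sketch restating in code form how those seven steps nest (only the mapping stated above; nothing added):

```python
# The seven RCM steps, with the FMEA and FMECA subsets as described above.

RCM_STEPS = [
    "Functions",
    "Functional Failures",
    "Failure Modes",
    "Failure Effects",
    "Failure Consequences",
    "Proactive Maintenance and Intervals",
    "Default Strategies",
]

FMEA = RCM_STEPS[:4]   # steps 1-4
FMECA = RCM_STEPS[:5]  # steps 1-5 (FMEA plus criticality)

print("FMEA covers:", ", ".join(FMEA))
print("FMECA adds:", FMECA[-1])
```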
Using Failure Modes and Effects Analysis to improve special-order implant procurement by AORNJournal
Manufacturers struggle to manage product quality, achieve on-time delivery, and reduce product and process risks. However, one of their most important assets – employees – should also be included in failure modes and effect analysis (FMEA) risk assessment. This webinar will identify key risk areas affecting workers and how FMEA can identify and manage those risks for a safer work environment. You are listening to audio from a webinar in the Safety+Health Webinar Series presented on July 11, 2019, by Kelly Kuchinski, Quality and Document Control Product Marketing Manager, Cority. Watch the archived webinar video to see the presenter's slides at https://www.safetyandhealthmagazine.com/events/143-quality-up-injuries-down-using-failure-modes-and-effect-analysis-fmea-for-a-safer-work-environment
Oil and gas rig performance integrity is an extremely important capability that involves risk analysis, threat analysis, failure mode and effect analysis, safety, health, environmental safety, situational awareness, and last but not least, software quality. Software quality plays a very important role in rig performance, and one that is not fully appreciated. In this episode Don Shafer discusses the challenges, pitfalls, and successes with software quality in the oil patch.

Don Shafer is a cofounder of the Athens Group and a technical fellow. Don developed Athens Group's oil and gas practice and leads engineers in delivering software services for exploration, production, and pipeline monitoring systems for clients such as BP, Chevron, ExxonMobil, ConocoPhillips, and Shell. He led groups developing and marketing hardware and software products for Motorola, AMD, and Crystal Semiconductor. Don managed a large PC product group producing award-winning audio components for Apple. From the development of low-level software drivers to the selection and monitoring of semiconductor facilities, he has led key product and process efforts.

You can connect with Don here:
Email: donshafer@athensgroup.com
LinkedIn: Don Shafer
Web Site: Athens Group

About PPQC:
Process and Product Quality Consulting (PPQC) helps global executives tackle complex corporate challenges. To learn more about PPQC, visit www.ppqc.net

Support the show (https://ppqc.net)
On this week's episode, I am joined by John Reeve, co-author of Failure Modes to Failure Codes to talk about computerized maintenance management systems (CMMS). We talk about how we can use a CMMS to facilitate chronic failure analysis, how to properly set up a component list and John gives us his top CMMS tips. Thank you for listening and if you enjoy the show, please subscribe to Rob's Reliability Project on your favourite podcast platform and share it with your colleagues. You can also follow Rob's Reliability Project on LinkedIn and Facebook and check out robsreliability.com as well. If you're looking for a shorter tip, subscribe to Rob's Reliability Tip of the Day on your favorite podcast platform or on your Amazon Alexa as a Flash Briefing. Finally, if there are any topics, guests you'd like to hear from, questions you want answered, or if you'd like to appear on the podcast, email me at robsreliabilityproject@gmail.com Follow Rob's Reliability Project on LinkedIn - https://www.linkedin.com/company/robsreliabilityproject/ Follow Rob's Reliability Project on Facebook - https://www.facebook.com/robsreliabilityproject/ Music by XTaKeRuX, Song: White Crow is licensed under a Creative Commons 4.0 Attribution License.
In this episode, we talk with Dr. Alecia Gabriel of P3 Group North America. Dr. Gabriel serves as the Quality Systems Consultant and brings her experience in project management, automotive and aerospace quality assurance, and automotive consulting to our healthcare platform to talk Failure Modes and Effects Analysis (FMEA). In this first installment of our Beyond Healthcare Quality segment, Alecia will share an outsider view of a powerful quality improvement methodology, the potential that it holds for transforming healthcare, and why its appropriate application and execution should be a key part of your organizational high-reliability strategy.
Jason Millar, Social Failure Modes in Technology: Implications for AI by Centre for Ethics, University of Toronto
Developing Failure Codes with Bill Leahy
Failure codes help reliability engineers make intelligent business decisions regarding issues with their assets. A failure code is a formula that helps you understand how an asset fails and how you can gather the data needed to mitigate a failure. A good failure code contains a hierarchy of […] The post 142 – Developing Failure Codes with Bill Leahy appeared first on Accendo Reliability.
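The hierarchy itself is truncated in the excerpt above, but one common CMMS convention (an assumption here, not necessarily Bill Leahy's scheme) layers asset class, component, problem, and cause; a minimal sketch:

```python
# Hedged sketch of a hierarchical failure code, following one common
# CMMS convention (asset class -> component -> problem -> cause).
# This is an illustrative assumption, not the guest's specific formula.

from dataclasses import dataclass

@dataclass(frozen=True)
class FailureCode:
    asset_class: str  # e.g. "PUMP"
    component: str    # e.g. "SEAL"
    problem: str      # observed failure mode, e.g. "LEAK"
    cause: str        # e.g. "WEAR"

    def code(self) -> str:
        return "-".join([self.asset_class, self.component, self.problem, self.cause])

# Recording a failure code at work-order closeout:
wo_closeout = FailureCode("PUMP", "SEAL", "LEAK", "WEAR")
print(wo_closeout.code())  # PUMP-SEAL-LEAK-WEAR
# Aggregating these codes across work orders is what lets reliability
# engineers spot chronic failures and target them for analysis.
```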
This week, I welcome Mark Benak on to the podcast. Mark is the VP of Business Ventures at Uptake. We talk about why merging a library of failure modes with artificial intelligence makes stronger predictive analytics. Follow Mark Benak on LinkedIn - https://www.linkedin.com/in/mbenak/ Thank you for listening and if you enjoy the show, please subscribe to Rob's Reliability Project on your favourite podcast platform and share it with your colleagues. You can also follow Rob's Reliability Project on LinkedIn and Facebook and check out robsreliability.com as well. If you're looking for a shorter tip, subscribe to Rob's Reliability Tip of the Day on your favorite podcast platform or on your Amazon Alexa as a Flash Briefing. Finally, if there are any topics, guests you'd like to hear from, questions you want answered, or if you'd like to appear on the podcast, email me at robsreliabilityproject@gmail.com Follow Rob's Reliability Project on LinkedIn - https://www.linkedin.com/company/robsreliabilityproject/ Follow Rob's Reliability Project on Facebook - https://www.facebook.com/robsreliabilityproject/ Music by XTaKeRuX, Song: White Crow is licensed under a Creative Commons 4.0 Attribution License.
On this follow-up deep dive episode of “Leader Dialogue“, CHIME CEO Russ Branzell again joins Ben, Jennifer and Duffie to discuss the two common Strategy Execution failure modes resulting from five (5) commonly held strategy execution myths. (Adapted from the research of Donald Sull, MIT Sloan School of Management, and Rebecca Homkes, fellow at the […] The post LEADER DIALOGUE: Clarifying Strategy Execution Failure Modes & Myths with CHIME – Deep Dive appeared first on Business RadioX ®.
Making FMEAs Work with Fred Schenkelberg
In most organizations, Failure Modes and Effects Analysis is treated as a light exercise where people come, argue, and leave without learning a single thing of value for their company. There is a lot going on when you are in need of FMEA, because unless your equipment […] The post 81 – Making FMEAs Work with Fred Schenkelberg appeared first on Accendo Reliability.
Microcontroller Failure Modes: Why They Happen and How to Prevent Them by Altium Inc.
00:16 – Welcome to "Diamonds Are For Gender" …we mean, "Greater Than Code!"
00:56 – Origin Story, Superpowers, and Data Science
04:20 – Diversity and Career Paths in Data Science
10:51 – Ethical Debates Within the Data Science Field
Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (https://www.amazon.com/gp/product/0553418815/ref=as_li_qf_sp_asin_il_tl?ie=UTF8&tag=therubyrep-20&camp=1789&creative=9325&linkCode=as2&creativeASIN=0553418815&linkId=0ed7c081ef2baa2e5a6f33a076e2929b)
Therac-25 (https://en.wikipedia.org/wiki/Therac-25)
FMEA (Failure Mode Effects Analysis) (https://www.greaterthancode.com/2017/06/21/episode-037-failure-mode-with-emily-gorcenski/)
17:21 – Software Development and Engineering; Failure Modes in Software
21:44 – Failure Modes in Democracy; Voting Machine Software
33:37 – Working for a Government Contractor
36:21 – Data Patterns and Tampering
39:00 – Open Data and Open Science
45:59 – Falsifying Data

Reflections:
Coraline: Considering all the ways something can fail.
Sam: The world that I live in and the kind of software development practices that I take for granted are extraordinarily niche.
Emily: Tech conferences and their decadence vs. academic/corporate conferences.

This episode was brought to you by @therubyrep (https://twitter.com/therubyrep) of DevReps, LLC (http://www.devreps.com/). To pledge your support and to join our awesome Slack community, visit patreon.com/greaterthancode (https://www.patreon.com/greaterthancode). To make a one-time donation so that we can continue to bring you more content and transcripts like this, please do so at paypal.me/devreps (https://www.paypal.me/devreps). You will also get an invitation to our Slack community this way as well. Amazon links may be affiliate links, which means you're supporting the show when you purchase our recommendations. Thanks!

Special Guest: Emily Gorcenski.
In this podcast I will give a simple standardization for your Cabin Check that will increase your knowledge of your systems and add safety and redundancy to your flying. Fly Your Best! Jason