Podcasts about system design

  • 254 PODCASTS
  • 382 EPISODES
  • 41m AVG DURATION
  • 1 WEEKLY EPISODE
  • Mar 7, 2025 LATEST


Best podcasts about system design

Latest podcast episodes about system design

Progress, Potential, and Possibilities
Dr. Craig L. Katz, MD - Professor of Psychiatry, Medical Education, and System Design and Global Health - Icahn School of Medicine at Mount Sinai - Disaster Psychiatry & Global Mental Health

Progress, Potential, and Possibilities

Play Episode Listen Later Mar 7, 2025 50:55


Dr. Craig Katz, MD ( https://profiles.mountsinai.org/craig-l-katz ) is Clinical Professor of Psychiatry, Medical Education, and System Design and Global Health at the Icahn School of Medicine at Mount Sinai, and the founder and director of Mount Sinai's Program in Global Mental Health ( https://icahn.mssm.edu/education/medical/global-health-education/global-mental-health ), an interest that grew out of his experience in organizing and providing psychiatric services to disaster-affected communities since 1998 through an organization he co-founded and led: Disaster Psychiatry Outreach. As part of Disaster Psychiatry Outreach, Dr. Katz organized the psychiatric response to 9/11 in New York City, including founding and directing the World Trade Center Mental Health Screening and Treatment Program for 9/11 responders for a number of years. Dr. Katz is also currently the special advisor to the Center for Stress, Resilience, and Personal Growth ( https://icahn.mssm.edu/research/center-stress-resilience-personal-growth ), Mount Sinai's system-wide program for addressing healthcare workers' mental health issues arising from COVID-19. Dr. Katz's honors include the Medical Society of the State of New York's 2022 David B. L. Meza, III, M.D. Award for Excellence in Emergency Preparedness and Disaster/Terrorism Response. Dr. Katz has written and co-edited a number of books and papers in the fields of disaster psychiatry, human rights, medical education, and global psychiatry, including A Guide to Global Mental Health Practice: Seeing the Unseen (Routledge) and his most recent, Unseen: Field Notes of a Global Psychiatrist ( https://www.amazon.com/Unseen-Field-Notes-Global-Psychiatrist/dp/9815129422 ). After attending Harvard College, Dr. Katz received his medical degree at Columbia University, where he completed his psychiatric residency training and served as chief resident in psychiatry. He subsequently completed a fellowship in forensic psychiatry at NYU. Dr. Katz also has a private practice in general and forensic psychiatry in Manhattan and is a former President of the New York County District Branch of the American Psychiatric Association as well as a Distinguished Fellow of the APA. He is currently the Vice Chair of the Medical Society of New York's Committee on Emergency Preparedness and also serves as the National Trauma Consultant to Advanced Recovery Systems and the International Association of Firefighters Center of Behavioral Excellence. #CraigKatz #Psychiatry #GlobalHealth #IcahnSchoolOfMedicine #MountSinai #MentalHealth #Stress #Resilience #Trauma #Disasters #Psychodynamics #Telepsychiatry #TeleMentalHealth #ProgressPotentialAndPossibilities #IraPastor #Podcast #Podcaster #ViralPodcast #STEM #Innovation #Technology #Science #Research

Book Overflow
Acing the System Design Interview - System Design Interview by Alex Xu

Book Overflow

Play Episode Listen Later Mar 3, 2025 62:21


This week Carter and Nathan discuss the first half of System Design Interview by Alex Xu. Join them as they discuss Alex's excellent newsletter Byte Byte Go, how systems design interviews reflect actual jobs, and what tips and tricks Alex offers to ace your interviews! Byte Byte Go: https://bytebytego.com/

Books Mentioned in this Episode (Note: As an Amazon Associate, we earn from qualifying purchases.):
System Design Interview – An insider's guide by Alex Xu https://amzn.to/3EXFYUa (paid link)
Team Topologies by Matthew Skelton and Manuel Pais https://amzn.to/4kgfH3F (paid link)

00:00 Intro
01:33 About the Book
03:08 Thoughts on the Book
11:57 What is a Systems Design Interview?
22:15 Why focus on Systems Design Interview?
27:26 Our Experience with System Design Interviews
36:09 Strategies, Approach, and Expertise
40:20 Importance of Back of the Envelope Calculations
45:39 Learning through building
57:02 Final Thoughts

Spotify: https://open.spotify.com/show/5kj6DLCEWR5nHShlSYJI5L
Apple Podcasts: https://podcasts.apple.com/us/podcast/book-overflow/id1745257325
X: https://x.com/bookoverflowpod
Carter on X: https://x.com/cartermorgan
Nathan's Functionally Imperative: www.functionallyimperative.com

Book Overflow is a podcast for software engineers, by software engineers dedicated to improving our craft by reading the best technical books in the world. Join Carter Morgan and Nathan Toups as they read and discuss a new technical book each week! The full book schedule and links to every major podcast player can be found at https://www.bookoverflow.io

Scaling UP! H2O
409 A Western Refinery's Journey to Improved Water Efficiency

Scaling UP! H2O

Play Episode Listen Later Feb 28, 2025 41:52


"Water is a limited resource, and in this refinery, every gallon saved is a win for sustainability." – Juan Meneses Water is a critical resource in industrial operations, and improving efficiency is a top priority for many companies. In this episode of Scaling UP! H2O, returning guest Juan Meneses, District Manager at Nalco Water, an Ecolab Company, discusses how a Western refinery optimized its water footprint using advanced treatment technologies.  This episode is packed with insights on water conservation strategies, sustainability goals, and the role of advanced monitoring technologies like 3D TRASAR in maximizing operational efficiency while minimizing environmental impact.  Key Topics Covered:  The Challenges of Water Usage in Refineries  This refinery faced rising water costs and increasing sustainability pressures. With water sourced from the city, costs were projected to rise by 5% annually, with wastewater discharge costs climbing even faster. Finding ways to reduce water consumption while maintaining efficiency was a top priority.  Optimizing Cooling Towers for Maximum Water Efficiency  Cooling towers presented a key opportunity for conservation. The team aimed to increase cycles of concentration to reduce water waste without compromising system integrity. By using 3D TRASAR technology, they monitored real-time conditions, allowing precise adjustments to prevent scaling and corrosion.  Implementing Smart Water Treatment Strategies  To sustain higher cycles, the refinery introduced dual cathodic inhibitors and high-charge polymers, enhancing corrosion and scale control. pH adjustments ensured effective biocide performance while maintaining system reliability. This strategic shift allowed for significant reductions in water and chemical use.  Results and Lessons Learned  By increasing cycles from 5.5 to 9.3, the refinery saved 52 million gallons of water annually while cutting wastewater discharge and chemical consumption. The biggest takeaway? Real-time monitoring and proactive pH control are essential for maintaining efficiency at higher cycles.  Best Practices for Industrial Water Optimization  Collaboration between plant operators and water treatment professionals is key. Regular monitoring, data-driven decision-making, and advanced automation tools can help refineries maximize efficiency while meeting sustainability goals. Water conservation is good for business and the environment. Could your facility save millions of gallons? Explore advanced water treatment strategies today. Learn more at ScalingUpH2O.com. Stay engaged, keep learning, and continue scaling up your knowledge!  Timestamps   02:14 – Upcoming Events for Water Treatment Professionals   08:02 – Water You Know with James McDonald  10:20 – Interview with returning guest Juan Meneses, District Manager at Nalco Water about Western Refinery Water Efficiency   11:06 – The biggest water challenges faced by the refinery  12:11 – Strategies for optimizing water footprint and sustainability goals  14:07 – How 3D TRASAR and modeling software improve water efficiency  24:50 – Water savings achieved: 52 million gallons saved  29:21 – Best practices for communicating water optimization goals  Quotes  “The way that we can reduce water in the cooling tower is to increase cycle of concentration.” – Juan Meneses "A good implementation of this project, if you can, with good and advanced monitoring and automation. 
You can optimize your chemical treatment by modeling the condition.” - Juan Meneses “About the teamwork, foster collaboration and communication with the customer are key component of that and focus on sustainability.” - Juan Meneses   Connect with Juan Meneses  Phone: 337.309.9619  Email: jmeneses@ecolab.com  Website: Reinventing the Way Water is Managed | Nalco Water  LinkedIn: Juan A. Meneses | LinkedIn  Click HERE to Download Episode's Discussion Guide    Guest Resources Mentioned   CH 2029 Western Refinery Optimizes Water Footprint Using 3D Trasar Technology for Cooling Modeling Tools paper  Crucial Conversations Tools for Talking When Stakes Are High, Second Edition: Kerry Patterson, Joseph Grenny, Ron McMillan, Al Switzler   Good Profit: How Creating Value for Others Built One Of The World's Most Successful Company by Charles Koch  The 4 Disciplines of Execution: Achieving Your Wildly Important Goals by Sean Covey (Author), Chris McChesney (Author), Jim Huling (Author)      Scaling UP! H2O Resources Mentioned  AWT (Association of Water Technologies)   IWC (International Water Conference)      Scaling UP! H2O Academy video courses  Submit a Show Idea  The Rising Tide Mastermind   405 Cooling Water Innovation: Harnessing Wastewater for Sustainability 164 The One With Chris McChesney    Water You Know with James McDonald  Question: What effect will the water temperature have on softener backwash during regeneration?    2025 Events for Water Professionals  Check out our Scaling UP! H2O Events Calendar where we've listed every event Water Treaters should be aware of by clicking HERE.   
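
For readers curious how "increase cycles of concentration" translates into saved water, here is a rough sketch using the standard cooling-tower water balance (blowdown = evaporation / (cycles − 1), makeup = evaporation × cycles / (cycles − 1)); the 1,000 gpm evaporation rate is an illustrative assumption, not data from the refinery in the episode.

```python
# A rough sketch of the standard cooling-tower water balance behind the
# "increase cycles of concentration" strategy discussed above:
#   blowdown = evaporation / (cycles - 1)
#   makeup   = evaporation * cycles / (cycles - 1)
# The 1,000 gpm evaporation rate is a made-up placeholder, not a figure
# from the refinery in the episode.
def makeup_water(evaporation_gpm: float, cycles: float) -> float:
    """Makeup water demand (gallons per minute) at a given cycles of concentration."""
    return evaporation_gpm * cycles / (cycles - 1)

MINUTES_PER_YEAR = 60 * 24 * 365

evaporation = 1000.0  # gpm, illustrative assumption only
before = makeup_water(evaporation, cycles=5.5)
after = makeup_water(evaporation, cycles=9.3)
saved_gallons_per_year = (before - after) * MINUTES_PER_YEAR

print(f"Makeup drops from {before:,.0f} to {after:,.0f} gpm")
print(f"Roughly {saved_gallons_per_year / 1e6:,.1f} million gallons saved per year")
```

With that assumed evaporation rate, the difference works out to roughly 100 gpm, on the order of 50 million gallons a year, which is the same order of magnitude as the savings described in the episode.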

Stop Doing What You Hate
Financial System Design Flaws

Stop Doing What You Hate

Play Episode Listen Later Feb 27, 2025 26:01


Today, Harold shares insights on navigating the flawed financial systems that can hinder your path to early retirement. Harold will address the misconceptions around cash flow, stock markets, and investments to help you make informed decisions. It is not about depending on the bank, instead, it is about becoming your own banker.  Join us to learn more about beating the banking systems and protecting your wealth.  Show Highlights: Here is how you can understand humility [02:48] Is the financial system rigged? [04:34] Do you know the reality of the banking systems? [06:18] Is life insurance a bad investment? [10:38] The misconceptions about the stock market [14:45] Why do we misunderstand risk tolerance? [16:29] Navigating the emotional aspect of risk tolerance [18:08] Know about the limits and restrictions in 401k plans [21:00]

Storage Unpacked Podcast
Storage Unpacked 266 – Architectural Choices in Storage System Design

Storage Unpacked Podcast

Play Episode Listen Later Feb 24, 2025 37:13


In this episode, Chris discusses the options available to storage system vendors when building modern storage appliances, with Bill Basinas, Senior Director, Product Marketing at Infinidat. The conversation derives from an observation on architectural choices, following the move to AMD processors from Intel for the latest G4 systems built by Infinidat. AMD offers a greater core count per processor compared to Intel, allowing Infinidat to move to single socket designs, while gaining improvements from PCIe 5.0 and DDR5 memory. Ultimately, this discussion highlights how modern storage system design can take standardised components and build flexible architectures, implementing most features in software. For Infinidat, that could mean expanding its range of solutions for smaller enterprise requirements, or building out products specifically for Edge use cases. Although Bill did not reveal any future plans, the implication is clear - watch this space for future evolution of the InfiniBox architecture to a wider and more varied set of hardware configurations. Elapsed Time: 00:37:13
Timeline
00:00:00 - Intros
00:01:15 - How do vendors choose the hardware components for storage systems?
00:02:30 - What are the main (storage) technology challenges for customers?
00:04:08 - Customers want predictable data features
00:05:55 - Capacity demand continues to grow relentlessly
00:07:30 - Infinidat features are built into software
00:09:35 - Most AI requirements will run on existing performance storage
00:11:20 - Modern hardware provides significant flexibility for system design
00:15:00 - AMD gives access to single and high core-count processors
00:16:10 - PCIe 5.0 provides for faster SSDs and power efficiency
00:18:46 - Infinidat has introduced smaller form-factor solutions
00:21:32 - Multiple cores will always get used!
00:25:53 - Infinidat G4 architecture provides for in-place controller upgrades
00:28:22 - Storage arrays should become more “virtual”
00:34:10 - Data services implementations are very different between vendors
00:35:55 - Hybrid architecture still has value in the Infinidat world
00:36:20 - Wrap Up
Related Podcasts & Blogs
Storage Unpacked 258 - Introducing Infinidat G4, InfuzeOS 8 and InfiniSafe ACP
#202 - Enterprise Storage Consolidation with Phil Bullinger from Infinidat
Infinidat adds customer value with SSA Express and improved SSA capacity
Copyright (c) 2016-2025 Unpacked Network. No reproduction or re-use without permission. Podcast episode #e4dr

UBC News World
Friendswood Yard Drainage System Design & Installation: Call Top Local Team

UBC News World

Play Episode Listen Later Feb 24, 2025 2:18


If you're having yard flooding issues in Friendswood, Clear Lake, Bacliff, or San Leon, contact the team of experts at League City Drainage & Irrigation (409-572-0824) for a water flow management solution. Go to https://drainmyyardleaguecity.com for more information. League City Drainage and Irrigation. City: El Lago. Address: 400 Lakeshore Dr. Website: https://drainmyyardleaguecity.com

Talking about Platforms
Platform and Innovation System Design with Kevin Boudreau

Talking about Platforms

Play Episode Listen Later Feb 12, 2025 44:32


In this episode, Kevin Boudreau discusses his research on platforms, from his early perspectives to his current view of platforms as governed digital spaces. He also delves into the impact of AI and data on platforms, and how business schools can play an active role in helping companies adapt to the rapidly changing landscape.

SemiWiki.com
Podcast EP272: An Overview How AI is Changing Semiconductor and System Design with Dr. Sailesh Kumar

SemiWiki.com

Play Episode Listen Later Jan 31, 2025 21:55


Daniel Nenni is joined by Dr. Sailesh Kumar, CEO of Baya Systems. With over two decades of experience, Sailesh is a seasoned expert in SoC, fabric, I/O, memory architecture, and algorithms. Previously, Sailesh founded NetSpeed Systems and served as its Chief Technology Officer until its successful acquisition by Intel. Sailesh…

Learn System Design
Mastering System Design Interview: Creating a Scalable Parking Garage System

Learn System Design

Play Episode Listen Later Jan 28, 2025 38:37 Transcription Available


The Learn System Design podcast delves into the intricacies of designing a parking lot system, a topic often encountered in technical interviews, especially at large tech companies. The host, Ben Kitchell, begins by providing context from previous discussions, particularly regarding the importance of atomicity and redundancy in system design. He emphasizes the need for a reliable and scalable architecture that can handle real-time reservations and payments, illustrating the challenges of maintaining consistency in a distributed environment. The episode outlines critical functional requirements such as user authentication, reservation capabilities, and payment processing, while also addressing non-functional requirements like security and latency. Throughout the discussion, Ben explores the CAP theorem, highlighting the trade-offs between consistency and availability. He advocates for prioritizing consistency in this specific use case (parking reservations) because allowing multiple users to occupy the same spot would lead to significant user dissatisfaction. The episode also covers capacity estimates, proposing a realistic user base and discussing storage needs, which ultimately lead to considerations for database modeling. Ben suggests utilizing a relational database for its inherent relationships between users, vehicles, and reservations, ensuring data integrity and efficient querying. Furthermore, the podcast dives into the technical architecture of the system, advocating for a modular approach with dedicated services for user management, vehicle handling, parking spot management, and payment processing. Ben proposes the use of Redis for distributed locking to manage concurrency effectively, ensuring that users cannot double-book parking spots. He concludes with a discussion on scaling strategies and the importance of designing systems that can evolve with changing demands. This episode serves as a comprehensive guide for engineers looking to deepen their understanding of system design while preparing for real-world application and technical interviews.
Takeaways: Building a parking lot system requires a focus on core functional requirements like user reservation and payment. Using a distributed locking mechanism, such as Redis, can help maintain consistency in concurrent transactions. Non-functional requirements such as security and low latency are critical for user satisfaction. Estimating capacity for the system is important; 100,000 users a month is a realistic start. A structured database model with tables for users, vehicles, reservations, and spots is essential for functionality. Designing for scalability involves separating services and using load balancers to manage traffic effectively.
Companies mentioned in this episode: Amazon, AWS, DynamoDB, Zookeeper, Stripe, Raising Cane's, Chick Fil A.
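
For readers who want to see the Redis-based locking idea in code, here is a minimal sketch assuming the redis-py client and a local Redis server; the key names, TTL, and reserve_spot() flow are illustrative guesses, not the design from the episode.

```python
# A minimal sketch of distributed locking with Redis for spot reservations.
# Assumes a local Redis server and the redis-py client; key names, timeouts,
# and the reservation step are illustrative placeholders.
import uuid
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def reserve_spot(spot_id: str, user_id: str, ttl_ms: int = 5000) -> bool:
    """Try to reserve a parking spot by acquiring a short-lived lock."""
    token = str(uuid.uuid4())
    lock_key = f"lock:spot:{spot_id}"
    # SET with NX + PX: succeeds only if nobody else currently holds the lock.
    if not r.set(lock_key, token, nx=True, px=ttl_ms):
        return False  # another user is reserving this spot right now
    try:
        # ... persist the reservation row for user_id here ...
        return True
    finally:
        # Release only if we still own the lock (compare-and-delete via Lua).
        release_script = """
        if redis.call('get', KEYS[1]) == ARGV[1] then
            return redis.call('del', KEYS[1])
        else
            return 0
        end
        """
        r.eval(release_script, 1, lock_key, token)
```

The lock's TTL guards against a crashed client holding a spot forever, while the compare-and-delete release avoids removing a lock that has already expired and been taken by someone else.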

Machine Learning Street Talk
Nicholas Carlini (Google DeepMind)

Machine Learning Street Talk

Play Episode Listen Later Jan 25, 2025 81:15


Nicholas Carlini from Google DeepMind offers his view of AI security, emergent LLM capabilities, and his groundbreaking model-stealing research. He reveals how LLMs can unexpectedly excel at tasks like chess and discusses the security pitfalls of LLM-generated code. SPONSOR MESSAGES: *** CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/ Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events? Goto https://tufalabs.ai/ *** Transcript: https://www.dropbox.com/scl/fi/lat7sfyd4k3g5k9crjpbf/CARLINI.pdf?rlkey=b7kcqbvau17uw6rksbr8ccd8v&dl=0 TOC: 1. ML Security Fundamentals [00:00:00] 1.1 ML Model Reasoning and Security Fundamentals [00:03:04] 1.2 ML Security Vulnerabilities and System Design [00:08:22] 1.3 LLM Chess Capabilities and Emergent Behavior [00:13:20] 1.4 Model Training, RLHF, and Calibration Effects 2. Model Evaluation and Research Methods [00:19:40] 2.1 Model Reasoning and Evaluation Metrics [00:24:37] 2.2 Security Research Philosophy and Methodology [00:27:50] 2.3 Security Disclosure Norms and Community Differences 3. LLM Applications and Best Practices [00:44:29] 3.1 Practical LLM Applications and Productivity Gains [00:49:51] 3.2 Effective LLM Usage and Prompting Strategies [00:53:03] 3.3 Security Vulnerabilities in LLM-Generated Code 4. Advanced LLM Research and Architecture [00:59:13] 4.1 LLM Code Generation Performance and O(1) Labs Experience [01:03:31] 4.2 Adaptation Patterns and Benchmarking Challenges [01:10:10] 4.3 Model Stealing Research and Production LLM Architecture Extraction REFS: [00:01:15] Nicholas Carlini's personal website & research profile (Google DeepMind, ML security) - https://nicholas.carlini.com/ [00:01:50] CentML AI compute platform for language model workloads - https://centml.ai/ [00:04:30] Seminal paper on neural network robustness against adversarial examples (Carlini & Wagner, 2016) - https://arxiv.org/abs/1608.04644 [00:05:20] Computer Fraud and Abuse Act (CFAA) – primary U.S. federal law on computer hacking liability - https://www.justice.gov/jm/jm-9-48000-computer-fraud [00:08:30] Blog post: Emergent chess capabilities in GPT-3.5-turbo-instruct (Nicholas Carlini, Sept 2023) - https://nicholas.carlini.com/writing/2023/chess-llm.html [00:16:10] Paper: “Self-Play Preference Optimization for Language Model Alignment” (Yue Wu et al., 2024) - https://arxiv.org/abs/2405.00675 [00:18:00] GPT-4 Technical Report: development, capabilities, and calibration analysis - https://arxiv.org/abs/2303.08774 [00:22:40] Historical shift from descriptive to algebraic chess notation (FIDE) - https://en.wikipedia.org/wiki/Descriptive_notation [00:23:55] Analysis of distribution shift in ML (Hendrycks et al.) - https://arxiv.org/abs/2006.16241 [00:27:40] Nicholas Carlini's essay “Why I Attack” (June 2024) – motivations for security research - https://nicholas.carlini.com/writing/2024/why-i-attack.html [00:34:05] Google Project Zero's 90-day vulnerability disclosure policy - https://googleprojectzero.blogspot.com/p/vulnerability-disclosure-policy.html [00:51:15] Evolution of Google search syntax & user behavior (Daniel M. 
Russell) - https://www.amazon.com/Joy-Search-Google-Master-Information/dp/0262042878 [01:04:05] Rust's ownership & borrowing system for memory safety - https://doc.rust-lang.org/book/ch04-00-understanding-ownership.html [01:10:05] Paper: “Stealing Part of a Production Language Model” (Carlini et al., March 2024) – extraction attacks on ChatGPT, PaLM-2 - https://arxiv.org/abs/2403.06634 [01:10:55] First model stealing paper (Tramèr et al., 2016) – attacking ML APIs via prediction - https://arxiv.org/abs/1609.02943

Machine Learning Street Talk
Subbarao Kambhampati - Do o1 models search?

Machine Learning Street Talk

Play Episode Listen Later Jan 23, 2025 92:13


Join Prof. Subbarao Kambhampati and host Tim Scarfe for a deep dive into OpenAI's O1 model and the future of AI reasoning systems. * How O1 likely uses reinforcement learning similar to AlphaGo, with hidden reasoning tokens that users pay for but never see * The evolution from traditional Large Language Models to more sophisticated reasoning systems * The concept of "fractal intelligence" in AI - where models work brilliantly sometimes but fail unpredictably * Why O1's improved performance comes with substantial computational costs * The ongoing debate between single-model approaches (OpenAI) vs hybrid systems (Google) * The critical distinction between AI as an intelligence amplifier vs autonomous decision-maker SPONSOR MESSAGES: *** CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/ Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events? Goto https://tufalabs.ai/ *** TOC: 1. **O1 Architecture and Reasoning Foundations** [00:00:00] 1.1 Fractal Intelligence and Reasoning Model Limitations [00:04:28] 1.2 LLM Evolution: From Simple Prompting to Advanced Reasoning [00:14:28] 1.3 O1's Architecture and AlphaGo-like Reasoning Approach [00:23:18] 1.4 Empirical Evaluation of O1's Planning Capabilities 2. **Monte Carlo Methods and Model Deep-Dive** [00:29:30] 2.1 Monte Carlo Methods and MARCO-O1 Implementation [00:31:30] 2.2 Reasoning vs. Retrieval in LLM Systems [00:40:40] 2.3 Fractal Intelligence Capabilities and Limitations [00:45:59] 2.4 Mechanistic Interpretability of Model Behavior [00:51:41] 2.5 O1 Response Patterns and Performance Analysis 3. **System Design and Real-World Applications** [00:59:30] 3.1 Evolution from LLMs to Language Reasoning Models [01:06:48] 3.2 Cost-Efficiency Analysis: LLMs vs O1 [01:11:28] 3.3 Autonomous vs Human-in-the-Loop Systems [01:16:01] 3.4 Program Generation and Fine-Tuning Approaches [01:26:08] 3.5 Hybrid Architecture Implementation Strategies Transcript: https://www.dropbox.com/scl/fi/d0ef4ovnfxi0lknirkvft/Subbarao.pdf?rlkey=l3rp29gs4hkut7he8u04mm1df&dl=0 REFS: [00:02:00] Monty Python (1975) Witch trial scene: flawed logical reasoning. https://www.youtube.com/watch?v=zrzMhU_4m-g [00:04:00] Cade Metz (2024) Microsoft–OpenAI partnership evolution and control dynamics. https://www.nytimes.com/2024/10/17/technology/microsoft-openai-partnership-deal.html [00:07:25] Kojima et al. (2022) Zero-shot chain-of-thought prompting ('Let's think step by step'). https://arxiv.org/pdf/2205.11916 [00:12:50] DeepMind Research Team (2023) Multi-bot game solving with external and internal planning. https://deepmind.google/research/publications/139455/ [00:15:10] Silver et al. (2016) AlphaGo's Monte Carlo Tree Search and Q-learning. https://www.nature.com/articles/nature16961 [00:16:30] Kambhampati, S. et al. (2023) Evaluates O1's planning in "Strawberry Fields" benchmarks. https://arxiv.org/pdf/2410.02162 [00:29:30] Alibaba AIDC-AI Team (2023) MARCO-O1: Chain-of-Thought + MCTS for improved reasoning. https://arxiv.org/html/2411.14405 [00:31:30] Kambhampati, S. (2024) Explores LLM "reasoning vs retrieval" debate. https://arxiv.org/html/2403.04121v2 [00:37:35] Wei, J. et al. (2022) Chain-of-thought prompting (introduces last-letter concatenation). https://arxiv.org/pdf/2201.11903 [00:42:35] Barbero, F. 
et al. (2024) Transformer attention and "information over-squashing." https://arxiv.org/html/2406.04267v2 [00:46:05] Ruis, L. et al. (2023) Influence functions to understand procedural knowledge in LLMs. https://arxiv.org/html/2411.12580v1 (truncated - continued in shownotes/transcript doc)

Learn System Design
Mastering System Design Interview: From Concept to Scale Building Efficient URL Shorteners

Learn System Design

Play Episode Listen Later Jan 7, 2025 33:40 Transcription Available


URL Shortener Designs
Unlock the secrets to designing a high-performing URL shortener system in our latest episode with Ben at the helm! Get ready to master the essentials that transform a simple idea into a robust tool, essential for platforms with strict character limits like Twitter. We'll walk you through the core elements of creating effective short URLs, from ensuring seamless redirects to setting expiration dates tailored for various industries like marketing firms. Discover the importance of high availability in read-heavy systems, and learn how to craft a service that not only meets but anticipates user demands. Dive into the architectural complexities of building a URL shortener that can scale to billions of requests. Ben breaks down the nitty-gritty of data model structuring and the strategic benefits of non-relational databases like Cassandra for horizontal scaling. Learn to harness the power of character hashes and explore innovative ways to keep your URLs unique and efficient. We'll reveal the architectural tactics like adding database replicas and load balancers to maintain system availability and performance. Tune in for a wealth of strategies and insights that promise to elevate your system design skills to new heights!
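
As a concrete illustration of the short-code idea, here is one common technique: assign each long URL a unique integer ID (for example from a counter) and encode it in base 62. This is a sketch of the general approach, not necessarily the exact scheme discussed in the episode.

```python
# A minimal sketch of base-62 short-code generation, assuming each URL
# already has a unique integer ID (e.g., from a database counter).
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encode_base62(n: int) -> str:
    """Turn a unique integer ID into a short base-62 code."""
    if n == 0:
        return ALPHABET[0]
    chars = []
    while n > 0:
        n, rem = divmod(n, 62)
        chars.append(ALPHABET[rem])
    return "".join(reversed(chars))

def decode_base62(code: str) -> int:
    """Recover the integer ID from a short code."""
    n = 0
    for ch in code:
        n = n * 62 + ALPHABET.index(ch)
    return n

# Seven base-62 characters already cover 62**7 (about 3.5 trillion) URLs.
code = encode_base62(123_456_789)
print(code, decode_base62(code))
```

Because the code is derived from a unique ID rather than a hash of the URL, collisions never occur, at the cost of needing a coordinated counter (or pre-allocated ID ranges) across servers.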

BookClub DotNet
[S02E16] EOF | BookClub DOTNET

BookClub DotNet

Play Episode Listen Later Dec 26, 2024 9:13


Hosts: Roman Gashkov, Grigory Kuzmin, Roman Shcherbakov. Design and illustrations: Serafima Lebedeva.
Episode on YouTube: https://www.youtube.com/watch?v=0nSIEPepbxM&list=PLbxr_aGL4q3TUK_LvjiGIbbxc58O4ZuJv&index=17&pp=gAQBiAQBsAQB
Episode on mave: https://bookclub-dotnet.mave.digital/ep-35
Book club Telegram channel: https://t.me/bookclubdotnet
Book club website: https://bookclub.dotnet.ru
Book: System Design. Подготовка к сложному интервью (https://www.piter.com/product/system-design-podgotovka-k-slozhnomu-intervyu)
The following music was used for this media project: Music: Ambient Corporate by WinnieTheMoog. Free download: https://filmmusic.io/song/6188-ambient-corporate. License (CC BY 4.0): https://filmmusic.io/standard-license
Keywords: architecture, system design, algorithms, patterns, programming, interview, book, book club

BookClub DotNet
[S02E15] Designing Google Drive | BookClub DOTNET

BookClub DotNet

Play Episode Listen Later Dec 21, 2024 34:35


Hosts: Roman Gashkov, Grigory Kuzmin, Roman Shcherbakov. Design and illustrations: Serafima Lebedeva.
Episode on YouTube: https://www.youtube.com/watch?v=X1k-Z0oB6xw&list=PLbxr_aGL4q3TUK_LvjiGIbbxc58O4ZuJv&index=16&pp=gAQBiAQB
Episode on mave: https://bookclub-dotnet.mave.digital/ep-34
Book club Telegram channel: https://t.me/bookclubdotnet
Book club website: https://bookclub.dotnet.ru
Book: System Design. Подготовка к сложному интервью (https://www.piter.com/product/system-design-podgotovka-k-slozhnomu-intervyu)
The following music was used for this media project: Music: Ambient Corporate by WinnieTheMoog. Free download: https://filmmusic.io/song/6188-ambient-corporate. License (CC BY 4.0): https://filmmusic.io/standard-license
Keywords: architecture, system design, algorithms, patterns, programming, interview, book, book club

Learn System Design
Mastering System Design Interviews: Building Scalable Web Crawlers

Learn System Design

Play Episode Listen Later Dec 17, 2024 32:14 Transcription Available


Web Crawler Designs
Can a simple idea like building a web crawler teach you the intricacies of system design? Join me, Ben Kitchell, as we uncover this fascinating intersection. Returning from a brief pause, I'm eager to guide you through the essential building blocks of a web crawler, from queuing seed URLs to parsing new links autonomously. These basic functionalities are your gateway to creating a minimum viable product or acing that system design interview. You'll gain insights into potential extensions like scheduled crawling and page prioritization, ensuring a strong foundation for tackling real-world challenges. Managing a billion URLs a month is no small feat, and scaling such a system requires meticulous planning. We'll break down the daunting numbers into digestible pieces, exploring how to efficiently store six petabytes of data annually. By examining different database models, you'll learn how to handle URLs, track visit timestamps, and keep data searchable. The focus is on creating a robust system that not only scales but does so in a way that meets evolving demands without compromising on performance. Navigating the complexities of designing a web crawler means making critical decisions about data storage and system architecture. We'll weigh the benefits of using cloud storage solutions like AWS S3 and Azure Blob Storage against maintaining dedicated servers. Discover the role of REST APIs in seamless user and service interactions, and explore search functionalities using Cassandra, Amazon Athena, or Google's BigQuery. Flexibility and foresight are key as we build systems that adapt to future needs. Thank you for your continued support. Let's keep learning and growing on this exciting system design journey together.
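
To make the seed-queue-and-parse loop concrete, here is a toy single-threaded sketch using only the Python standard library; a production crawler would add robots.txt handling, politeness delays, URL normalization at scale, and durable storage, none of which are shown here.

```python
# A tiny sketch of the crawl loop described above: take seed URLs from a
# queue, fetch each page, extract links, and enqueue unseen links.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urldefrag
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collects the href targets of <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_urls, max_pages=50):
    queue, visited = deque(seed_urls), set()
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except Exception:
            continue  # skip pages that fail to fetch or decode
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute, _fragment = urldefrag(urljoin(url, link))
            if absolute not in visited:
                queue.append(absolute)
    return visited
```

Swapping the in-memory deque and set for a message queue and a shared "visited" store is what turns this toy into the distributed design the episode discusses.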

Giant Robots Smashing Into Other Giant Robots
551: System Design is a Team Sport with Tom Johnson

Giant Robots Smashing Into Other Giant Robots

Play Episode Listen Later Nov 21, 2024 37:06


If system design is a team sport, then you need to make sure that your team has the tools they need to work together. In this episode, entrepreneur, CTO, and co-founder Tom Johnson joins us to discuss Multiplayer, a collaborative tool streamlining system design and documentation for developers. Multiplayer is often likened to “Figma for developers,” as it allows teams to map, document, and debug distributed systems visually and collaboratively. Tom shares his experience building this tool, drawing on years of backend development challenges, from debugging to coordinating across teams. We also discuss the business side of startups before learning about the AI features that they have planned for Multiplayer and how it will benefit users, including eliminating time-consuming “grunt work”. Join us to learn how Multiplayer is revolutionizing system design and get a sneak peek into the exciting AI-powered features on the horizon! Key Points From This Episode: Introducing Tom Johnson, co-founder of Multiplayer. An overview of Multiplayer and how it helps developers work on distributed systems. The teams and developers that will get the most use out of Multiplayer. Details on Multiplayer's debugging and auto-documentation tools. A breakdown of what distributed systems are in modern software development. Why Tom sees contemporary systems design as a team sport. Multiplier's whiteboard-type space and how it allows teams to collaborate. Tom's back-end developer experience and how it helped him create Multiplayer. How Tom co-founded Multiplayer with his wife, Steph Johnson, and her role as CEO. Why solving a problem you've personally experienced is a good starting point for startups. What you need to have before fundraising: a minimum viable product (MVP). How they used the open-source software, YJS, for virtual, real-time collaboration. Insights into Multiplayer's upcoming AI-powered features. Links Mentioned in Today's Episode: 
Thomas Johnson on LinkedIn (https://www.linkedin.com/in/tomjohnson3/) Thomas Johnson on X (https://x.com/tomjohnson3) Thomas Johnson on Threads (https://www.threads.net/@tomjohnson3?hl=en)
 Steph Johnson on LinkedIn (https://www.linkedin.com/in/steph-johnson-14355b3/)
 Multiplayer (https://www.multiplayer.app/)
 YJS (https://github.com/yjs/yjs)
 Figma (https://www.figma.com/) Chad Pytel on LinkedIn (https://www.linkedin.com/in/cpytel/) Chad Pytel on X (https://x.com/cpytel) thoughtbot (https://thoughtbot.com) thoughtbot on LinkedIn (https://www.linkedin.com/company/150727/) thoughtbot on X (https://twitter.com/thoughtbot) Giant Robots Smashing Into Other Giant Robots Podcast (https://podcast.thoughtbot.com/) Giant Robots Smashing Into Other Giant Robots Email (hosts@giantrobots.fm) Support Giant Robots Smashing Into Other Giant Robots (https://github.com/sponsors/thoughtbot)

BookClub DotNet
[S02E14] Designing YouTube | BookClub DOTNET

BookClub DotNet

Play Episode Listen Later Nov 20, 2024 32:56


Hosts: Roman Gashkov, Grigory Kuzmin, Roman Shcherbakov. Design and illustrations: Serafima Lebedeva.
Episode on YouTube: https://www.youtube.com/watch?v=eS0yyQ3f2NM&list=PLbxr_aGL4q3TUK_LvjiGIbbxc58O4ZuJv&index=15&pp=gAQBiAQB
Episode on mave: https://bookclub-dotnet.mave.digital/ep-33
Book club Telegram channel: https://t.me/bookclubdotnet
Book club website: https://bookclub.dotnet.ru
Book: System Design. Подготовка к сложному интервью (https://www.piter.com/product/system-design-podgotovka-k-slozhnomu-intervyu)
The following music was used for this media project: Music: Ambient Corporate by WinnieTheMoog. Free download: https://filmmusic.io/song/6188-ambient-corporate. License (CC BY 4.0): https://filmmusic.io/standard-license
Keywords: architecture, system design, algorithms, patterns, programming, interview, book, book club

Learn System Design
Mastering System Design Interview: Navigating Database Models, Entity Relationships, and Key Attributes for Robust Systems

Learn System Design

Play Episode Listen Later Nov 19, 2024 27:39 Transcription Available


Unlock the secrets of database models and elevate your system design skills with Ben Kitchell on the Learn System Design podcast. What if mastering the art of database modeling could transform your approach to system design interviews and real-world applications? Explore the intricacies of relational data models, where the simplicity of a 2D matrix of rows and columns meets the complexity of larger datasets. Discover how primary and foreign keys form the backbone of relational databases, using practical examples like user and address tables. With the flexibility to integrate seamlessly with API models, you're set to gain insights into using Entity Relationship Diagrams (ERDs) for crafting efficient systems. Navigate through the hierarchical structure of a restaurant's organizational model to understand complex data relationships better. Learn how to identify and connect entities within systems, illustrated by the example of Spotify. Embrace the iterative planning process as we emphasize the significance of key attributes like IDs and timestamps, allowing you to adapt your database models as new elements emerge. This episode promises a foundational understanding crucial for anyone aspiring to perfect their database modeling skills, ensuring you design systems that are robust and future-proof. Join us for practical tips and strategic insights that will empower your system design journey.
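
A small, self-contained sketch of the primary-key/foreign-key relationship described above, using the user and address tables as the example; the schema is illustrative and uses Python's built-in sqlite3 rather than any specific database from the episode.

```python
# A minimal sketch of a users/addresses relationship: addresses carries a
# foreign key pointing back to the users primary key.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("""
    CREATE TABLE users (
        id         INTEGER PRIMARY KEY,
        name       TEXT NOT NULL,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    )""")
conn.execute("""
    CREATE TABLE addresses (
        id      INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(id),  -- foreign key to users
        street  TEXT,
        city    TEXT
    )""")
conn.execute("INSERT INTO users (name) VALUES ('Ada')")
conn.execute(
    "INSERT INTO addresses (user_id, street, city) VALUES (1, '1 Main St', 'Springfield')"
)

# Join across the relationship defined by the primary and foreign keys.
for row in conn.execute(
    "SELECT u.name, a.city FROM users u JOIN addresses a ON a.user_id = u.id"
):
    print(row)
```

The ID and timestamp columns mirror the "key attributes" the episode recommends adding early, since almost every entity ends up needing them.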

Design Systems Podcast
#122 - Creating Cohesion from Chaos: The Role of Language and System Design with Ben Callahan

Design Systems Podcast

Play Episode Listen Later Nov 12, 2024 44:37


Language shapes the way we think and structure the world we build. On this episode of The Design Systems Podcast, Chris Strahl sits down with Ben Callahan, co-founder of Sparkbox, to explore the critical role of language and communication in team dynamics, problem-solving, and system structures. Ben shares insights on how linguistic choices shape product creation and drive organization-wide cohesion. He argues that while common ground is essential, design systems should balance order with controlled "quenched disorder" to foster innovation, using flexible, layered structures that adapt to unique team needs and create scalable, culturally embedded solutions. Listen to the full episode to gain a deeper understanding of the intricate relationship between language, design systems, and organizational culture, as well as practical insights on fostering system adoption and cross-team collaboration. View the transcript of this episode. Check out our upcoming events.
Guest: Ben Callahan is a developer, designer, and founder of Sparkbox, known for his engaging presentations, workshops, and his interactive series The Question. With a passion for sharing knowledge, Ben uses his journey to inspire others, bridging the gap between technical insights and the human challenges within design and development.
Host: Chris Strahl is co-founder and CEO of Knapsack, host of @TheDSPod, DnD DM, and occasional river guide. You can find Chris on Twitter as @chrisstrahl and on LinkedIn.
Sponsor: Sponsored by Knapsack, the design system platform that brings teams together. Learn more at knapsack.cloud.

Learn System Design
Mastering System Design Interview: Unlocking API Design, Crafting Routes, and Real-Time Data Transfer Techniques

Learn System Design

Play Episode Listen Later Nov 5, 2024 34:41 Transcription Available


Unlock the secrets of API design and elevate your system design skills with our latest episode featuring me, Benny Kitchell. Explore the pivotal role APIs play in system design interviews and real-world development, where they act like the seamless communication between waiters, cooks, and customers in a restaurant. Learn how to craft APIs that are tailored to both internal and external developers by understanding their specific needs and objectives, ensuring a smooth and efficient user experience. We also shine a light on the critical aspects of designing API routes. Understanding user needs and addressing core problems are the bedrock of effective API design. By focusing on functional and non-functional requirements, you'll be equipped to create API routes that meet real-world demands. Discover the importance of API versioning through our Spotify example, where future-proofing your design becomes crucial in maintaining user satisfaction and facilitating seamless updates. Finally, we delve into the world of real-time data transfer, examining both synchronous and asynchronous communication methods. From the traditional request-response model to the innovative use of WebSockets for instantaneous data exchanges, we break down the strengths and limitations of each approach. Equip yourself with the knowledge to choose the best method for your client-server interactions, ensuring your system design is robust, flexible, and ready for any challenge.
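
To illustrate the versioning point, here is a minimal sketch of versioned routes for a playlist endpoint; Flask and the specific route and field names are assumptions made for the example, not the episode's actual API.

```python
# A minimal sketch of API versioning: v1 keeps its original response shape,
# v2 is free to evolve without breaking existing clients.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/v1/playlists/<playlist_id>")
def get_playlist_v1(playlist_id):
    # Original contract: existing clients continue to work unchanged.
    return jsonify({"id": playlist_id, "tracks": []})

@app.route("/v2/playlists/<playlist_id>")
def get_playlist_v2(playlist_id):
    # Newer contract: adds cursor-based pagination without touching v1.
    return jsonify({"id": playlist_id, "tracks": [], "next_cursor": None})

if __name__ == "__main__":
    app.run(port=8000)
```

The same idea applies whether the version lives in the path (as here), in a header, or in a query parameter; what matters is that old clients keep a stable contract while new behavior ships under a new version.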

DEVNAESTRADA
DNE 439 - System Design Interview Versão Brasileira

DEVNAESTRADA

Play Episode Listen Later Oct 18, 2024 46:25


Edu and Emílio get together to explain how this kind of assessment works, which companies in Brazil have been adopting as part of the hiring process for development roles. Personal experiences, valuable tips, and how to do well in this kind of interview stage. Join us for this enlightening conversation!

Learn System Design
Mastering System Design Interview: Capacity Estimates, Scaling Challenges, and Strategic Insights

Learn System Design

Play Episode Listen Later Oct 15, 2024 25:37 Transcription Available


Master the art of system design as Benny Kitchell guides us through the essential skills every senior tech candidate needs to excel, starting with capacity estimates. By the end of this episode, you'll be able to navigate the complexities of bandwidth and data size without getting bogged down in unnecessary arithmetic. We explore how to think like industry leaders at Netflix, Google, and Instagram, focusing on rough estimates, worst-case scenarios, and the use of metric prefixes to simplify calculations. This episode is not just about numbers; it's about understanding the larger picture and harnessing the power of strategic thinking. Our discussion doesn't stop at capacity. Join us as we tackle the challenges of large-scale systems, offering insights into handling billions of users and managing enormous data streams. Learn to focus on the core components of a system, such as video content for a streaming giant, and balance cost with hardware efficiency. Plus, get a sneak peek at our upcoming special episodes and discover ways to support and engage with our community, from sending feedback to joining us on Patreon. This isn't just a lesson in system design: it's a call to action for aspiring tech leaders to think big and design even bigger.
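
In the spirit of the episode's rough-estimate approach, here is a small worked example; the user counts and payload sizes are invented round numbers chosen for easy arithmetic, not figures from the show.

```python
# A back-of-the-envelope sketch: round numbers, worst-case padding, and
# metric prefixes instead of precise arithmetic. All inputs are made up.
daily_active_users = 100_000_000          # 100 M DAU, a round assumption
requests_per_user_per_day = 10
seconds_per_day = 100_000                 # ~86,400, rounded up for easy math

avg_qps = daily_active_users * requests_per_user_per_day / seconds_per_day
peak_qps = avg_qps * 2                    # assume peak traffic is ~2x average

avg_payload_mb = 5                        # per request, assumed
daily_transfer_tb = (daily_active_users * requests_per_user_per_day
                     * avg_payload_mb) / 1_000_000

print(f"~{avg_qps:,.0f} QPS on average, ~{peak_qps:,.0f} QPS at peak")
print(f"~{daily_transfer_tb:,.0f} TB transferred per day")
```

The point is not precision: rounding 86,400 seconds up to 100,000 makes the division trivial while keeping the estimate within the same order of magnitude.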

AFT Construction Podcast
Sustainable Water Solutions for Homes with Water Service Elite

AFT Construction Podcast

Play Episode Listen Later Oct 6, 2024 58:28


Sponsors:
• Visit Buildertrend to get a 60-day money-back guarantee on your Buildertrend account!
• Pella Windows & Doors
• Sub-Zero Wolf Cove Showroom Phoenix
Connect with Water Service Elite: https://www.waterserviceelite.com
Connect with Brad Leavitt: Website | Instagram | Facebook | Houzz | Pinterest | YouTube

Learn System Design
Mastering System Design Interview: Essential System Design Interview Principles and Techniques

Learn System Design

Play Episode Listen Later Oct 1, 2024 29:02 Transcription Available


Can a simple delay really cost a company millions? We kick off season two of the Learn System Design podcast by exploring this and more. I'm Benny Kitchell, your host, and after a refreshing hiatus, I'm excited to bring you a fresh take on system design interviews and real-world applications. We start with the fundamentals of functional requirements using a relatable example: a music streaming app like Spotify. Discover how to align core functionalities such as song playback, playlist creation, and music recommendations with stakeholder expectations, setting the stage for effective system design. This episode also delves into the intricacies of caching systems, the critical role of TTL (Time To Live), and the balancing act required by the CAP theorem. We address the importance of understanding both functional and non-functional requirements, emphasizing stakeholder input to ensure a robust design. Key concepts like latency, durability, and partition tolerance are unpacked, highlighting their impact on user experience and system stability. Tune in to gain valuable insights that will not only prepare you for system design interviews but also enhance your technical prowess in the field. Thank you for joining us on this journey; your support means the world!
https://www.cloudflare.com/learning/privacy/what-are-fair-information-practices-fipps/
https://mashable.com/article/myspace-data-loss
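
As a companion to the TTL discussion, here is a toy cache sketch in which entries expire after a fixed lifetime; the song key and the 60-second TTL are hypothetical placeholders, not anything from the episode.

```python
# A toy sketch of TTL-based caching: each entry remembers when it expires,
# and expired entries are treated as cache misses so fresh data gets fetched.
import time

class TTLCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._store[key]   # expired: behave like a miss
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

cache = TTLCache(ttl_seconds=60)
song = cache.get("song:42")
if song is None:
    song = {"id": 42, "title": "example"}   # pretend this came from the database
    cache.set("song:42", song)
```

Choosing the TTL is the trade-off the episode alludes to: a short TTL keeps data fresh but sends more traffic to the database, while a long TTL saves load at the cost of serving staler results.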

The GeekNarrator
System Design the formal way with FizzBee

The GeekNarrator

Play Episode Listen Later Sep 22, 2024 76:22


In this video I talk to Jayaprabhakar Kadarkarai aka JP who is the founder of FizzBee. FizzBee is a design specification language and model checker to help developers verify their design before writing even a single line of implementation code. We have discussed where it is applicable, what are the benefits, how does it work and many other interesting challenges with examples. Chapters: 00:00 Introduction 01:13 Challenges in Designing Distributed Systems 03:13 Understanding Design Specification Languages 04:00 The Value of Structured Design Documents 09:00 When to Use Design Specification Languages 21:27 Modeling a Travel Booking System 22:51 Ensuring Atomicity in Distributed Systems 26:09 Handling Failures and Consistency 34:45 Refinement in System Design 35:38 Balancing Abstraction and Implementation 37:53 Common Pitfalls in Modeling and Implementation 40:02 Challenges in System Design and Implementation 40:12 Two-Way Feedback in System Design 41:01 Performance Considerations in Implementation 41:36 Importance of Solid Design Blueprints 41:56 Model-Based Testing and Continuous Integration 43:27 Updating Design Documentation 44:38 Simulation Testing vs. Model Checking 45:32 Design Issues and Formal Verification 49:51 Applying Formal Verification to Existing Systems 55:35 Common Design Problems and Solutions 01:07:57 Future Enhancements in Design Specification Tools 01:12:50 Getting Started with FizzBee FizzBee : https://fizzbee.io/ Get in touch with JP: https://www.linkedin.com/in/jayaprabhakar Like building stuff? Try out CodeCrafters and build amazing real world systems like Redis, Kafka, Sqlite. Use the link below to signup and get 40% off on paid subscription. https://app.codecrafters.io/join?via=geeknarrator If you like this episode, please hit the like button and share it with your network. Also please subscribe if you haven't yet. Database internals series: https://youtu.be/yV_Zp0Mi3xs Popular playlists: Realtime streaming systems: https://www.youtube.com/playlist?list=PLL7QpTxsA4se-mAKKoVOs3VcaP71X_LA- Software Engineering: https://www.youtube.com/playlist?list=PLL7QpTxsA4sf6By03bot5BhKoMgxDUU17 Distributed systems and databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4sfLDUnjBJXJGFhhz94jDd_d Modern databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4scSeZAsCUXijtnfW5ARlrsN Stay Curios! Keep Learning! #distributedsystems #systemdesign #formalmethods

TheCase.Report
BONUS: Prehospital System Design with Prof Junaid Abdul Razzak

TheCase.Report

Play Episode Listen Later Sep 19, 2024 17:29


Season 5 is just around the corner! We'll be kicking off the new season October 7th. Mark your calendars! In the meantime, Mohammed is off to Pakistan this week to PakTraumaCon 2024. Being the insufferable nerd that he is, he's not satisfied annoying his colleagues and new acquaintances only at the conference, but has also been after them for chats before even getting on the plane... Someone needs to stop him! Joining him on this bonus is Prof Junaid Abdul Razzak, also a speaker at PakTraumaCon this year. Prof Abdul Razzak is Vice Chair of Research at Weill Cornell Medicine in New York, and is also the Director of the Centre of Excellence for Trauma and Emergencies at the Aga Khan University in Karachi. Right! Let's get to it.

Amelia's Weekly Fish Fry
Operation eVTOL: System Design Challenges for Electric Vertical Take-Off and Landing Aircraft

Amelia's Weekly Fish Fry

Play Episode Listen Later Sep 13, 2024 18:59


This week's podcast is all about electric vertical take-off and landing aircraft! Matt McAlonis from TE Connectivity and I explore the biggest design challenges for eVTOLs, strategies for managing power effectively within eVTOL systems, complexities of complying with new eVTOL certification requirements, and what Matt thinks it will take for eVTOL aircraft to become mainstream. 

Develpreneur: Become a Better Developer and Entrepreneur
User Stories Unveiled: A Developer's Guide to Capturing the Full Narrative

Develpreneur: Become a Better Developer and Entrepreneur

Play Episode Listen Later Aug 15, 2024 17:45


In this episode of the developer podcast, the hosts explore user stories, a crucial tool in gathering effective software requirements. Using a creative analogy comparing user stories to movie ratings, the episode explains how to create detailed and valuable user stories that go beyond the basics. What Are User Stories? User stories are the foundation of understanding how users interact with a system to achieve their goals. At their simplest, these stories capture these interactions in a narrative form, providing insight into the user's experience and needs. For example, consider an office manager using a back-office system: their stories might include actions like entering customer information, processing payments, or looking up employee records. Each of these actions represents a distinct user story, offering a snapshot of the user's journey within the system. The “Happy Path”: Your G-Rated User Story The podcast introduces a unique analogy to explain the concept of user stories: movie ratings. The “happy path” in a user story is akin to a G-rated movie. This scenario represents the ideal situation where everything works perfectly—data is correct, the system functions as expected, and the user easily achieves their goal. The happy path is the most straightforward user story, focusing on the best-case scenario where nothing goes wrong. Expanding the Story: From PG to R-Rated Scenarios But just like in movies, real-world systems rarely stick to the happy path. The analogy progresses to PG-rated scenarios, where minor issues start to appear. These might include small errors like a typo in a phone number or a data entry mistake. In these cases, stories must account for how the system will handle such deviations. Will it alert the user, automatically correct the error, or flag the issue for review? Addressing these scenarios ensures the system is robust and user-friendly. As we move into PG-13 and R-rated scenarios, the complexity increases. Now, user stories must consider more serious problems—such as incorrect data formats, missing information, or system errors. For example, what happens if a user enters an invalid zip code or tries to complete a transaction without sufficient funds? These stories require the system to have validation checks, error handling, and fail-safes to prevent or mitigate these issues. The Extreme Cases: Rated X User Stories The analogy reaches its peak with “Rated X” scenarios—extreme cases where the user might try to break or exploit the system. These could involve malicious activities like SQL injection or simply entering nonsensical data to see how the system reacts. While these scenarios might seem far-fetched, they are critical when developing stories. Addressing these edge cases ensures the system is secure, resilient, and able to withstand unexpected challenges. Deepening User Stories: Peeling Back the Layers To create truly effective stories, it's essential to go beyond surface-level narratives. This means asking “what if” questions and exploring different possibilities that could arise. The host likens this process to peeling an onion, revealing deeper layers of complexity within the user's experience. By considering a wide range of scenarios—from the happy path to the edge cases—developers can create comprehensive and detailed stories that lead to more valuable requirements. The Art of Listening: Capturing the Full Story A critical point emphasized in the episode is the importance of actively listening to the user when gathering stories. 
Developers often make the mistake of jumping to technical solutions without fully understanding the user's narrative. It's vital to remember that a user story is not about the technology—it's about the user's journey. Developers need to focus on understanding the story itself, ensuring they capture the full picture before diving into the technical implementation. Evolving User Stories: Building on the Narrative User stories are not static—they evolve over time as the user's needs change. The initial story might be simple, like needing a basic payroll system. However, as the user's needs expand, new stories emerge, requiring additional features and functionalities. These new stories can be seen as sequels in a movie series, building on the original narrative to create a more complex and feature-rich system. Recognizing this evolution helps developers design systems that are flexible and capable of adapting to changing requirements. Crafting Comprehensive User Stories This episode of the developer podcast provides a fresh perspective on user stories, using a movie analogy to illustrate the different levels of complexity in requirements gathering. By understanding user stories as evolving narratives and focusing on the user's journey, developers can craft software that meets and exceeds user expectations, leading to more successful and satisfying outcomes. Stay Connected: Join the Developreneur Community We invite you to join our community and share your coding journey with us. Whether you're a seasoned developer or just starting, there's always room to learn and grow together. Contact us at info@develpreneur.com with your questions, feedback, or suggestions for future episodes. Together, let's continue exploring the exciting world of software development. Additional Resources How to write effective user stories in agile development? The Importance of Properly Defining Requirements Changing Requirements – Welcome Them For Competitive Advantage Creating Your Product Requirements Creating Use Cases and Gathering Requirements The Developer Journey Videos – With Bonus Content
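
To ground the "PG through R-rated" scenarios in something concrete, here is a tiny sketch of input validation for a customer record; the specific fields, the ten-digit phone rule, and the US-style zip-code pattern are illustrative assumptions, not requirements from the episode.

```python
# A small sketch of turning "unhappy path" user stories into checks:
# validate the raw input and report every problem rather than trusting it.
import re

def validate_customer(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the happy path."""
    errors = []
    if not record.get("name", "").strip():
        errors.append("name is required")
    phone = re.sub(r"\D", "", record.get("phone", ""))
    if len(phone) != 10:
        errors.append("phone must have 10 digits")
    if not re.fullmatch(r"\d{5}(-\d{4})?", record.get("zip", "")):
        errors.append("zip code is not valid")
    return errors

print(validate_customer({"name": "Pat", "phone": "555-867-5309", "zip": "ABCDE"}))
# -> ['zip code is not valid']
```

Each rule here corresponds to a non-happy-path story: what the system should do when the data is missing, malformed, or deliberately nonsensical.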

Purrfect.dev
4.17 - AI for System Design and Architecture Documentation

Purrfect.dev

Play Episode Listen Later Jul 10, 2024 39:51


Learn how AI transforms system design and architecture with insights from Multiplayer's CTO. Explore new tools and collaborative features for developers. https://codingcat.dev/podcast/ai-for-system-design-and-architecture-documentation 00:00 Introduction 00:18 Meet Thomas Johnson 00:54 What is Multiplayer? 02:55 Early Tech Journey 05:12 Career Transition 07:04 VOIP Industry Insights 08:56 System Design Importance 12:47 Multiplayer Demo 24:53 Collaborative Features 26:42 AI in System Design 29:36 Consulting Potential 35:21 Perfect Picks 39:30 Conclusion --- Support this podcast: https://podcasters.spotify.com/pod/show/codingcatdev/support

Effective Altruism Forum Podcast
“Detecting Genetically Engineered Viruses With Metagenomic Sequencing” by Jeff Kaufman

Effective Altruism Forum Podcast

Play Episode Listen Later Jun 30, 2024 14:08


This is a link post. This represents work from several people at the NAO. Thanks especially to Dan Rice for implementing the duplicate junction detection, and to @Will Bradshaw and @mike_mclaren for editorial feedback. Summary: If someone were to intentionally cause a stealth pandemic today, one of the ways they might do it is by modifying an existing virus. Over the past few months we've been working on building a computational pipeline that could flag evidence of this kind of genetic engineering, and we now have an initial pipeline working end to end. When given 35B read pairs of wastewater sequencing data it raises 14 alerts for manual review, 13 of which are quickly dismissible false positives and one is a known genetically engineered sequence derived from HIV. While it's hard to get a good estimate before actually going and doing it, our best guess is that [...] --- Outline: (00:22) Summary, (01:14) System Design, (02:34) Evaluation, (02:49) Simulation, (05:27) Real World Evaluation, (08:28) System Sensitivity, (11:32) Future Work --- First published: June 27th, 2024 Source: https://forum.effectivealtruism.org/posts/da6iKGxco8hjwH4nv/detecting-genetically-engineered-viruses-with-metagenomic --- Narrated by TYPE III AUDIO.

GearSource Geezers of Gear
#247 - Joe Casanova

GearSource Geezers of Gear

Play Episode Listen Later Jun 5, 2024 96:07


Joe started back in the '80s on the rental and production side of Century Music Systems Inc. and worked as a staff engineer prepping tours, repairing equipment, training new road staff, and then engineering shows for Showco. Some of the artists for Showco were Prince, Vince Gill, Eric Clapton, James Taylor, The Rolling Stones, and Paul McCartney. For nearly 17 years, Joe was the Senior National Account Executive and co-founder of the Audio Division at VER. Joe is now Managing Partner and co-founder of SAV Entertainment. SAV Entertainment specializes in Audio, RF Coordination, Communications, System Design and Engineering, Box Rentals, and much more. This episode is brought to you by ACT Entertainment and GearSource. --- Send in a voice message: https://podcasters.spotify.com/pod/show/geezersofgear/message

The eVTOL Insights Podcast
Episode 139: Ben Chiswick and Lee Rogers of Drive System Design

The eVTOL Insights Podcast

Play Episode Listen Later May 29, 2024 29:06


In this episode, Ben and Lee introduce us to Drive System Design (DSD) and what the company is working on in the Advanced Air Mobility market. As DSD is helping to fast-track the development of electric aircraft, we learn more about this work, as well as one of the company's products: the AePOP (Aerospace Electrified Powertrain Optimisation Process) project. While there are many challenges to solve in this market, we ask Ben and Lee what their top three would be and how DSD is best suited to resolve them.

Tech Lead Journal
#176 - Acing the System Design Interview - Zhiyong Tan

Tech Lead Journal

Play Episode Listen Later May 27, 2024 48:24


“Always remember that system design interview is not about perfection. It is about trade-offs and being able to communicate them clearly and concisely.” Zhiyong Tan is the author of “Acing the System Design Interview”. In this episode, he joins me in demystifying the system design interview process. He shares insights into what to expect, how to tackle common challenges like time management, anxiety, and knowledge gaps, and reveals the core principles that guide successful system design interviews. Zhiyong dives deep into common pitfalls, offering advice on handling tricky topics like requirements gathering, data consistency, scaling problems, and service design. He also provides practical tips on how to learn and grow from system design interview failures, turning setbacks into stepping stones towards success. Whether you're a seasoned engineer or just starting your tech career, this episode offers valuable insights and actionable advice to help you ace your next system design interview. Listen out for: Career Journey - [00:01:43] System Design Interview - [00:05:03] Trade-offs - [00:07:36] Managing the Time - [00:09:51] Handling What You Don't Know - [00:13:27] Managing Anxiety - [00:15:40] System Design Interview Principles - [00:18:32] Non-Functional Requirements - [00:21:22] Data Consistency - [00:25:11] Database Scaling Problem - [00:28:41] Distributed Transactions - [00:33:09] Functional Requirements & API Design - [00:36:31] Failing System Design Interview - [00:38:38] 3 Tech Lead Wisdom - [00:42:02] _____ Zhiyong Tan's Bio: Zhiyong Tan is the author of Acing the System Design Interview. He is the founder of Tingxie, an app for learning Chinese as a second language. Previously, he was an Engineering Manager and Staff Engineer at PayPal, a senior software engineer at Uber, and a software and data engineer at various startups. Follow Zhiyong: LinkedIn – linkedin.com/in/zytan Acing the System Design Interview – https://www.manning.com/books/acing-the-system-design-interview Tingxie (iOS) – https://apps.apple.com/us/app/%E5%90%AC%E5%86%99-chinese-spelling-dictation/id6462944919 Tingxie (Android) – https://play.google.com/store/apps/details?id=com.zhiyong.tingxie Jointgoals.com – https://www.jointgoals.com/ Manning forum – https://livebook.manning.com/forum _____ Our Sponsors Enjoy an exceptional developer experience with JetBrains. Whatever programming language and technology you use, JetBrains IDEs provide the tools you need to go beyond simple code editing and excel as a developer. Check out FREE coding software options and special offers on jetbrains.com/store/#discounts. Make it happen. With code. Manning Publications is a premier publisher of technical books on computer and software development topics for both experienced developers and new learners alike. Manning prides itself on being independently owned and operated, and for paving the way for innovative initiatives, such as early access book content and protection-free PDF formats that are now industry standard. Get a 45% discount for Tech Lead Journal listeners by using the code techlead45 for all products in all formats. Like this episode? Show notes & transcript: techleadjournal.dev/episodes/176. Follow @techleadjournal on LinkedIn, Twitter, and Instagram. Buy me a coffee or become a patron.

PreAccident Investigation Podcast
PAPod 499 - Navigating Confusion: A Lesson in System Design

PreAccident Investigation Podcast

Play Episode Listen Later May 25, 2024 26:58 Transcription Available


In this episode of the Pre-Accident Investigation Podcast, host Todd Conklin delves into a real-life scenario that underscores the importance of effective system design. Through an engaging story about a confusing airport security line, Todd explores the concept of high reliability and the pitfalls of blaming individuals for systemic failures. Join Todd as he reflects on the principles of highly reliable organizations and how they apply to everyday situations. This episode offers valuable insights into recognizing weak links in systems and emphasizes the need for clear, user-friendly design to prevent unnecessary confusion and errors. Tune in for an enlightening discussion that not only entertains but also provides practical takeaways for improving system reliability and user experience in any organization.

ITSPmagazine | Technology. Cybersecurity. Society
Integrating Human Factors Engineering in Cybersecurity | Human-Centered Cybersecurity Series with Co-Host Julie Haney and Guest Calvin Nobles | Redefining CyberSecurity Podcast with Sean Martin

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later May 21, 2024 43:36


Guests: Julie Haney, Computer scientist and Human-Centered Cybersecurity Program Lead at National Institute of Standards and Technology [@NISTcyber]
On LinkedIn | https://www.linkedin.com/in/julie-haney-037449119/
On Twitter | https://x.com/jmhaney8?s=21&t=f6qJjVoRYdIJhkm3pOngHQ
Dr. Calvin Nobles, Ph.D., Portfolio Vice President / Dean, School of Cybersecurity and Information Technology, University of Maryland Global Campus [@umdglobalcampus]
On LinkedIn | https://www.linkedin.com/in/calvinnobles/
____________________________
Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]
On ITSPmagazine | https://www.itspmagazine.com/sean-martin
View This Show's Sponsors
___________________________
Episode Notes
In a recent episode of the Human-Centered Cybersecurity Series on the Redefining CyberSecurity podcast, co-hosts Sean Martin and Julie Haney dive into the intriguing world of human-centered cybersecurity with their guest, Dr. Calvin Nobles, Dean of the School of Cybersecurity and Information Technology at the University of Maryland Global Campus. The episode provided a wealth of knowledge, not only about the significance of human factors in cybersecurity but also about how organizations can better integrate these considerations into their cybersecurity strategies.
The conversation illuminated the critical role of human factors, a field born out of experimental psychology and foundational to related subfields such as human-computer interaction and usability. Dr. Nobles' insights shed light on the need for cybersecurity systems to be designed with human limitations and strengths in mind, thus optimizing user performance and reducing the risk of errors. It's a call to move from technology-centered designs to ones that place humans at their core. A significant point of discussion revolved around the common misunderstandings surrounding human factors in cybersecurity. Dr. Nobles clarified the definition of human factors, pointing out its systematic approach to optimizing human performance. By fitting the system to the user, rather than forcing the user to adapt, cybersecurity can become more intuitive and less prone to human error.
The episode also touched on the concerning gap in current cybersecurity education and practice. Dr. Nobles and Haney highlighted the sparse incorporation of human factors into cybersecurity curricula across universities, stressing the urgency for integrated education that aligns with real-world needs. This gap points to a broader issue within organizations—the lack of focused human factors programs to address the human element comprehensively.
Practical advice was shared for organizations aspiring to incorporate human factors into their cybersecurity efforts. Identifying 'human friction areas' at work, such as fatigue, resource shortages, and a lack of prioritization, can guide initiatives to mitigate these challenges. Moreover, the suggestion to provide cybersecurity professionals with education in human factors underlines the need for a well-rounded skill set that goes beyond technical expertise.
This episode serves as a beacon for the cybersecurity community, emphasizing the necessity of integrating human factors into cybersecurity education, practice, and policies. By doing so, the field can advance towards a more effective, human-centered approach that enhances both security and user experience.
Top Questions Addressed
What is the definition of human factors in cybersecurity?
How can organizations integrate human factors into their cybersecurity strategies?
What role does education play in bridging the gap between current cybersecurity practices and the need for a human-centered approach?
___________________________
Watch this and other videos on ITSPmagazine's YouTube Channel
Redefining CyberSecurity Podcast with Sean Martin, CISSP playlist:

The Bike Shed
424: The Spectrum of Automated Processes for Your Dev Team

The Bike Shed

Play Episode Listen Later Apr 30, 2024 36:47


Joël shares his experience with the dry-rb suite of gems, focusing on how he's been using contracts to validate input data. Stephanie relates to Joël's insights with her preparation for RailsConf, discussing her methods for presenting code in slides and weighing the aesthetics and functionality of different tools like VS Code and Carbon.sh. She also encounters a CI test failure that prompts her to consider the implications of enforcing specific coding standards through CI processes. The conversation turns into a discussion on managing coding standards and tools effectively, ensuring that automated systems help rather than hinder development. Joël and Stephanie ponder the balance between enforcing strict coding standards through CI and allowing developers the flexibility to bypass specific rules when necessary, ensuring tools provide valuable feedback without becoming obstructions. Transcript: AD: We're excited to announce a new workshop series for helping you get that startup idea you have out of your head and into the world. It's called Vision to Value. Over a series of 90-minute working sessions, you'll work with a thoughtbot product strategist and a handful of other founders to start testing your idea in the market and make a plan for building an MVP. Join for all seven of the weekly sessions, or pick and choose the ones that address your biggest challenge right now. Learn more and sign up at tbot.io/visionvalue. STEPHANIE: Hello and welcome to another episode of the Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: I've been working on a project that uses the dry-rb suite of gems. And one of the things we're doing there is we're validating inputs using this concept of a contract. So, you sort of describe the shape and requirements of this, like hash of attributes that you get, and it will then tell you whether it's valid or not, along with error messages. We then want to use those to eventually build some other sort of value object type things that we use in the app. And because there's, like, failure points at multiple places that you have to track, it gets a little bit clunky. And I got to thinking a little bit about, like, forget about the internal machinery. What is it that I would actually like to happen here? And really, what I want is to say, I've got this, like, bunch of attributes, which may or may not be correct. I want to pass them into a method, and then either get back a value object that I was hoping to construct or some kind of error. STEPHANIE: That sounds reasonable to me. JOËL: And then, thinking about it just a little bit longer, I was like, wait a minute, this idea of, like, unstructured input goes into a method, you get back something more structured or an error, that's kind of the broad definition of parsing. I think what I'm looking for is a parser object. And this really fits well with a style of processing popularized in the functional programming community called parse, don't validate the idea that you use a parser like this to sort of transform data from more loose to more strict values, values where you can have more assumptions. And so, I create an object, and I can take a contract. I can take a class and say, "Attempt to take the following attributes. If they're valid according to the construct, create this classroom." 
And it, you know, does a bunch of error handling and some...under the hood, dry-rb does all this monad stuff. So, I handled that all inside of the object, but it's actually really nice. STEPHANIE: Cool. Yeah, I had a feeling that was where you were going to go. A while back, we had talked about really impactful articles that we had read over the course of the year, and you had shared one called Parse, Don't Validate. And that heuristic has actually been stuck in my head a little bit. And that was really cool that you found an opportunity to use it in, you know, previously trying to make something work that, like, you weren't really sure kind of how you wanted to implement that. JOËL: I think I had a bit of a light bulb moment as I was trying to figure this out because, in my mind, there are sort of two broad approaches. There's the parse, don't validate where you have some inputs, and then you transform them into something stricter. Or there's more of that validation approach where you have inputs, you verify that they're correct, and then you pass them on to someone else. And you just say, "Trust me, I verified they're in the right shape." Dry-rb sort of contracts feel like they fit more under that validation approach rather than the parse, don't validate. Where I think the kind of the light bulb turned on for me is the idea that if you pair a validation step and an object construction step, you've effectively approximated the idea of parse, don't validate. So, if I create a parser object that says, in sort of one step, I'm going to validate some inputs and then immediately use them if they're valid to construct an object, then I've kind of done a parse don't validate, even though the individual building blocks don't follow that pattern. STEPHANIE: More like a parse and validate, if you will [laughs]. I have a question for you. Like, do you own those inputs kind of in your domain? JOËL: In this particular case, sort of. They're coming from a form, so yes. But it's user input, so never trust that. STEPHANIE: Gotcha. JOËL: I think you can take this idea and go a little bit broader as well. It doesn't have to be, like, the dry-rb-related stuff. You could do, for example, a JSON schema, right? You're dealing with the input from a third-party API, and you say, "Okay, well, I'm going to have a sort of validation JSON schema." It will just tell you, "Is this data valid or not?" and give you some errors. But what if you paired that with construction and you could create a little parser object, if you wanted to, that says, "Hey, I've got a payload coming in from a third-party API, validate it against this JSON schema, and attempt to construct this shopping cart object, and give me an error otherwise." And now you've sort of created a nice, little parse, don't validate pipeline which I find a really nice way to deal with data like that. STEPHANIE: From a user perspective, I'm curious: Does this also improve the user experience? I'm kind of wondering about that. It seems like it could. But have you explored that? JOËL: This is more about the developer experience. STEPHANIE: Got it. JOËL: The user experience, I think, would be either identical or, you know, you can play around with things to display better errors. But this is more about the ergonomics on the development side of things. It was a little bit clunky to sort of assemble all the parts together. And sometimes we didn't immediately do both steps together at the same time. 
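(For readers following along: a rough sketch of the parser-object shape Joël describes, written here in Python rather than Ruby because the episode's dry-rb code isn't shown. The class names, fields, and error type below are invented for illustration only.)

from dataclasses import dataclass

@dataclass(frozen=True)
class Registration:
    email: str
    age: int

class ParseError(Exception):
    def __init__(self, errors: dict):
        super().__init__(str(errors))
        self.errors = errors

class RegistrationParser:
    # Validation and construction are bundled: callers either get back a
    # trustworthy value object or a single, explicit error.
    def parse(self, params: dict) -> Registration:
        errors = {}
        email = str(params.get("email", "")).strip()
        if "@" not in email:
            errors["email"] = "must look like an email address"
        try:
            age = int(params.get("age", ""))
        except (TypeError, ValueError):
            errors["age"] = "must be a whole number"
            age = None
        if errors:
            raise ParseError(errors)
        return Registration(email=email, age=age)

A caller would write something like RegistrationParser().parse(form_params) and either receive a Registration or handle one well-defined ParseError, which is the "validate and construct in one step" move being discussed.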
So, you might sort of have parameters that we're like, oh, these are totally good, we promise. And we pass them on to someone else, who passes them on to someone else. And then, they might try to do something with them and hope that they've got the data in the right shape. And so, saying, let's co-locate these two things. Let's say the validation of the inputs and then the creation of some richer object happen immediately one after another. We're always going to bundle them together. And then, in this particular case, because we're using dry-rb, there's all this monad stuff that has to happen. That was a little bit clunky. We've sort of hidden that in one object, and then nobody else ever has to deal with that. So, it's easier for developers in terms of just, if you want to turn inputs into objects, now you're just passing them into one object, into one, like, parser, and it works. But it's a nicer developer experience, but also there's a little bit more safety in that because now you're sort of always working with these richer objects that have been validated. STEPHANIE: Yeah, that makes sense. It sounds very cohesive because you've determined that these are two things that should always happen together. The problems arise when they start to actually get separated, and you don't have what you need in terms of using your interfaces. And that's very nice that you were able to bundle that in an abstraction that makes sense. JOËL: A really interesting thing I think about abstractions is sometimes thinking of them as the combination of multiple other things. So, you could say that the combination of one thing and another thing, and all of a sudden, you have a new sort of combo thing that you have created. And, in this case, I think the combination of input validation and construction, and, you know, to a certain extent, error handling, so maybe it's a combination of three things gives you a thing you can call a parser. And knowing that that combination is a thing you can put a name on, I think, is really powerful, or at least it felt really powerful to me when that light bulb turned on. STEPHANIE: Yeah, it's kind of like the whole is greater than the sum of its parts. JOËL: Yeah. STEPHANIE: Cool. JOËL: And you and I did an episode on Specialized Vocabulary a while back. And that power of naming, saying that, oh, I don't just have a bunch of little atomic steps that do things. But the fact that the combination of three or four of them is a thing in and of itself that has a name that we can talk about has properties that we're familiar with, all of a sudden, that is a really powerful way to think about a system. STEPHANIE: Absolutely. That's very exciting. JOËL: So, Stephanie, what's new in your world? STEPHANIE: So, I am plugging away at my RailsConf talk, and I reached the point where I'm starting to work on slides. And this talk will be the first one where I have a lot of code that I want to present on my slides. And so, I've been playing around with a couple of different tools to present code on slides or, I guess, you know, just being able to share code outside of an editor. And the two tools I'm trying are...VS Code actually has a copy with syntax functionality in its command palette. And so, that's cool because it basically, you know, just takes your editor styling and applies it wherever you paste that code snippet. JOËL: Is that a screenshot or that's, like, formatted text that you can paste in, like, a rich text editor? STEPHANIE: Yeah, it's the latter. JOËL: Okay. 
STEPHANIE: That was nice because if I needed to make changes in my slides once I had already put them there, I could do that. But then the other tool that I was giving a whirl is Carbon.sh. And that one, I think, is pretty popular because it looks very slick. It kind of looks like a little Mac window and is very minimal. But you can paste your code into their text editor, and then you can export PNGs of the code. So, those are just screenshots rather than editable text. And I [chuckles] was using that, exported a bunch of screenshots of all of my code in various stages, and then realized I had a typo [laughs]. JOËL: Oh no! STEPHANIE: Yeah, so I have not got around to fixing that yet. That was pretty frustrating because now I would have to go back and regenerate all of those exports. So, that's kind of where I'm at in terms of exploring sharing code. So, if anyone has any other tools that they would use and recommend, I am all ears. JOËL: How do you feel about balancing sort of the quantity of code that you put on a slide? Do you tend to go with, like, a larger code slide and then maybe, like, highlight certain sections? Do you try to explain ideas in general and then only show, like, a couple of lines? Do you show, like, maybe a class that's got ten lines, and that's fine? Where do you find that balance in terms of how much code to put on a slide? Because I feel like that's always the big dilemma for me. STEPHANIE: Yeah. Since this is my first time doing it, like, I really have no idea how it's going to turn out. But what I've been trying is focusing more on changes between each slide, so the progression of the code. And then, I can, hopefully, focus more on what has changed since the last snippet of code we were looking at. That has also required me to be more fiddly with the formatting because I don't want essentially, like, the window that's containing the code to be changing sizes [laughs] in between slide transitions. So, that was a little bit finicky. And then, there's also a few other parts where I am highlighting with, like, a border or something around certain texts that I will probably pause and talk about, but yeah, it's tough. I feel like I've seen it done well, but it's a lot harder to and a lot more effort to [laughs] do in practice, I'm finding. JOËL: When someone does it well, it looks effortless. And then, when somebody does it poorly, you're like, okay, I'm struggling to connect with this talk. STEPHANIE: Yep. Yep. I hear that. I don't know if you would agree with this, but I get the sense that people who are able to make that look effortless have, like, a really deep and thorough understanding of the code they're showing and what exactly they think is important for the audience to pay attention to and understand in that given moment in their talk. That's the part that I'm finding a lot more work [laughs] because just thinking about, you know, the code I'm showing from a different lens or perspective. JOËL: How do you sort of shrink it down to only what's essential for the point that you're trying to make? And then, more broadly, not just the point you're trying to make on this one slide, but how does this one slide fit into the broader narrative of the story you're trying to tell? STEPHANIE: Right. So, we'll see how it goes for me. I'm sure it's one of those things that takes practice and experience, and this will be my first time, and we'll learn something from it. JOËL: That's exciting. So, this is RailsConf in Detroit this year, I believe, May 7th through 9th. 
STEPHANIE: Yep. That's right. So, recently on my client work, I encountered a CI failure on a PR of mine that I was surprised by. And basically, I had introduced a new association on a model, and this CI failure was saying like, "Hey, like, we see that you introduced this association. You should consider adding this to the presenter for this model." And I hadn't even known that that presenter existed [laughs]. So, it was kind of interesting to get a CI failure nudging me to consider if I need to be, like, making a different, you know, this other change somewhere else. JOËL: That's a really fun use of CI. Do you think that was sort of helpful for you as a newer person on that codebase? Or was it more kind of annoying and, like, okay, this CI is over the top? STEPHANIE: You know, I'm not sure [laughs]. For what it's worth, this presenter was actually for their admin dashboard, essentially. And so, the goal of what this workflow was trying to do was help folks who are using the admin dashboard have, like, all of the capabilities they need to do that job. And it makes sense that as you add behavior to your app, sometimes those things could get missed in terms of supporting, you know, not just your customers but developers, support product, you know, the other users of your app. So, it was cool. And that was, you know, something that they cared enough to enforce. But yeah, I think there maybe is a bit of a slippery slope or at least some kind of line, or it might even be pretty blurry around what should our test failures really be doing. JOËL: And CI is interesting because it can be a lot more than just tests. You can run all sorts of things. You can run a linter that fails. You could run various code quality tools that are not things like unit tests. And I think those are all valid uses of the CI process. What's interesting here is that it sounds like there were two systems that needed to stay in sync. And this particular CI check was about making sure that we didn't accidentally introduce code that would sort of drift apart in those two places. Does that sound about right? STEPHANIE: Yeah, that does sound right. I think where it gets a little fuzzy, for me, is whether that kind of check was for code quality, was for a standard, or for a policy, right? It was kind of saying like, hey, like, this is the way that we've enforced developers to keep those two things from drifting. Whereas I think that could be also handled in different ways, right? JOËL: Yeah. I guess in terms of, like, keeping two things in sync, I like to do that at almost, like, a code level, if possible. I mean, maybe you need a single source of truth, and then it just sort of happens automatically. Otherwise, maybe doing it in a way that will yell at you. So, you know, maybe there's a base class somewhere that will raise an error, and that will get caught by CI, or, you know, when you're manually testing and like, oh yeah, I need to keep this thing in sync. Maybe you can derive some things or get fancy with metaprogramming. And the goal here is you don't have a situation where someone adds a new file in one place and then they accidentally break an admin dashboard because they weren't aware that you needed these two files to be one-to-one. 
If I can't do it just at a code level, I have done that before at, like, a unit test level, where maybe there's, like, a constant somewhere, and I just want to assert that every item in this constant array has a matching entry somewhere else or something like that, so that you don't end up effectively crashing the site for someone else because that is broken behavior. STEPHANIE: Yeah, in this particular case, it wasn't necessarily broken. It was asking you "Hey, should this be added to the admin presenter?" which I thought was interesting. But I also hear what you're saying. It actually does remind me of what we were talking about earlier when you've identified two things that should happen, like mostly together and whether the code gives you affordances to do that. JOËL: So, one of the things you said is really interesting, the idea that adding to the presenter might have been optional. Does that mean that CI failed for you but that you could merge anyway, or how does that work? STEPHANIE: Right. I should have been more clear. This was actually a test failure, you know, that happened to be caught by CI because I don't run [laughs] the whole test suite locally. JOËL: But it's an optional test failure, so you're allowed to let that test fail. STEPHANIE: Basically, it told me, like, if I want this to be shown in the presenter, add it to this method, or if not, add it to...it was kind of like an allow list basically. JOËL: I see. STEPHANIE: Or an ignore list, yeah. JOËL: I think that kind of makes sense because now you have sort of, like, a required consistency thing. So, you say, "Our system requires you...whenever you add a file in this directory, you must add it to either an allow list or an ignore list, which we have set up in this other file." And, you know, sometimes you might forget, or sometimes you're new, and it's your first time adding a file in this directory, and you didn't remember there's a different place where you have to effectively register it. That seems like a reasonable check to have in place if you're relying on these sort of allow lists for other parts of the system, and you need to keep them in sync. STEPHANIE: So, I think this is one of the few instances where I might disagree with you, Joël. What I'm thinking is that it feels a bit weird to me to enforce a decision that was so far away from the code change that I made. You know, you're right. On one hand, I am newer to this codebase, maybe have less of that context of different features, things that need to happen. It's a big app. But I almost think this test reinforces this weird coupling of things that are very far away from each other [laughs]. JOËL: So, it's maybe not the test itself you object to rather than the general architecture where these admin presenters are relying on these other objects. And by you introducing a file in a totally different part of the app, there's a chance that you might break the admin, and that feels weird to you. STEPHANIE: Yeah, that does feel weird to me. And then, also that this implementation is, like, codified in this test, I guess, as opposed to a different kind of, like, acceptance test, rather than specifying specifically like, oh, I noticed, you know, you didn't add this new association or attribute to either the allow list or the ignore list. Maybe there is a more, like, higher level test that could steer us in keeping the features consistent without necessarily dictating, like, that it needs to happen in these particular methods. 
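(A concrete, hypothetical version of the unit-test idea mentioned above: a Python, pytest-style check that every model file is registered in either the admin's allow list or its ignore list. The paths and list names are invented; the project in the episode would express the same thing in Ruby and RSpec.)

from pathlib import Path

MODELS_DIR = Path("app/models")                # invented location
ADMIN_PRESENTED = {"customer", "invoice"}      # models shown in the admin
ADMIN_IGNORED = {"audit_log"}                  # models deliberately hidden

def test_every_model_is_registered_for_the_admin():
    model_files = {p.stem for p in MODELS_DIR.glob("*.py") if p.stem != "__init__"}
    registered = ADMIN_PRESENTED | ADMIN_IGNORED
    missing = sorted(model_files - registered)
    stale = sorted(registered - model_files)
    assert not missing, f"models missing from the admin allow/ignore lists: {missing}"
    assert not stale, f"registry entries with no matching model file: {stale}"

Run in CI, or even at boot, a check like this fails fast when the two places drift apart, which is the kind of invariant the hosts go on to discuss.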
JOËL: So, you're talking something like doing an integration test rather than a unit test? Or are you talking about something entirely different? STEPHANIE: I think it could be an integration test or a system test. I'm not sure exactly. But I am wondering what options, you know, are out there for helping keeping standards in place without necessarily, like, prescribing too much about, like, how it needs to be done. JOËL: So, you used the word standard here, which I tend to think about more in terms of, like, code style, things like that. What you're describing here feels a little bit less like a standard and more of what I would call a code invariant. STEPHANIE: Ooh. JOËL: It's sort of like in this architecture the way we've set up, there must always be sort of one-to-one matching between files in this directory and entries in this array. Now, that's annoying because they're sort of, like, two different places, and they can vary independently. So, locking those two in sync requires you to do some clunky things, but that's sort of the way the architecture has been designed. These two things must remain one-to-one. This is an invariant we want in the app. STEPHANIE: Can you define invariant for me [laughs], the way that you're using it here? JOËL: Yeah, so something that is required to be true of all elements in this class of things, sort of a rule or a law that you're applying to the way that these particular bits of code need to behave. So, in this case, the invariant is every file in this directory must have a matching entry in this array. There's a lot of ways to enforce that. The sort of traditional idea is sort of pushing a lot of that checking...they'll sometimes talk about pushing errors to the left. So, if you can handle this earlier in the sort of code execution pipeline, can you do it maybe with a type system if you're in a type language? Can you do it with some sort of input validation at runtime? Some languages have the concept of contracts, so maybe you enforce invariants using that. You could even do something really ad hoc in Ruby, where you might say, "Hey, at boot time, when we load this particular array for the admin, just load this directory. Make sure that the entries in the array match the entries in the directory, and if they don't, raise an error." And I guess you would catch that probably in CI just because you tried to run your test suite, and you'd immediately get this boot error because the entries don't match. So, I guess it kind of gets [inaudible 22:36] CI, but now it's not really a dedicated test anymore. It's more of, like, a property of the system. And so, in this case, I've sort of shifted the error checking or the checking of this invariant more into the architecture itself rather than in, like, things that exercise the architecture. But you can go the other way and say, "Well, let's shift it out of the architecture into tests," or maybe even beyond that, into, like, manual QA or, you know, other things that you can do to verify it. STEPHANIE: Hmm. That is very compelling to me. JOËL: So, we've been talking so far about the idea of invariants, but the thing about invariants is that they don't vary. They're always true. This is a sort of fundamental rule of how this system works. The class of problems that I often struggle with how to deal with in these sorts of situations are rules that you only sometimes want to apply. They're not consistent. Have you ever run into things like that? STEPHANIE: Yeah, I have. 
And I think that's what was compelling to me about what you were sharing about code invariance because I wasn't totally convinced this particular situation was a very clear and absolute rule that had been decided, you know, it seemed a little bit more ambiguous. When you're talking about, like, applying rules that sometimes you actually don't want to apply, I think of things like linters, where we want to disable, you know, certain rules because we just can't get around implementing the way we want to while following those standards. Or maybe, you know, sometimes you just have to do something that is not accessible [laughs], not that that's what I would recommend, but in the case where there aren't other levers to change, you maybe want to disable some kind of accessibility check. JOËL: That's always interesting, right? Because sometimes, you might want, like, the idea of something that has an escape hatch in it, but that immediately adds a lot of complexity to things as well. This is getting into more controversial territory. But I read a really compelling article by Jeroen Engels about how being able to, like, locally disable your linter for particular methods actually makes your code, but also the linter itself, a worse tool. And it really kind of made me rethink a little bit of how I approach linters as a tool. STEPHANIE: Ooh. JOËL: And what makes sense in a linter. STEPHANIE: What was the argument for the linter being a worse tool by doing that? JOËL: You know, it's funny that you ask because now I can't remember, and it's been a little while since I've read the article. STEPHANIE: I'll have to revisit it after the show [laughs]. JOËL: Apparently, I didn't do the homework for this episode, but we'll definitely link to that article in the show notes. STEPHANIE: So, how do you approach either introducing a new rule to something like a linter or maybe reconsidering an existing rule? Like, how would you go about finding, like, consensus on that from your team? JOËL: That varies a lot by organizational culture, right? Some places will do it top-down, some of them will have a broader conversation and come to a consensus. And sometimes you just straight up don't get a choice. You're pulling in a tool like standard rb, and you're saying, "Look, we don't want to have a discussion about every little style thing, so whatever, you know, the community has agreed on for the standard rb linter is the style we're using. There are no discussions. Do what the linter tells you." STEPHANIE: Yeah, that's true. I think I have to adapt to whatever, you know, client culture is like when I join new projects. You know, sometimes I do see people being like, "Hey, I think it's kind of weird that we have this," or, "Hey, I've noticed, for example, oh, we're merging focused RSpec tests. Like, let's introduce a rule to make sure that that doesn't happen." I also think that a different approach is for those things not to be enforced at all by automation, but we, you know, there are still guidelines. I think the thoughtbot guides are an example of pretty opinionated guidelines around style and syntax. But I don't think that those kinds of things would, you know, ever be, like, enforced in a way that would be blocking. JOËL: Those are kind of hard because they're not as consistent as you would think, so it's not a rule you can apply every time. It's more of a, here's some things to maybe keep in mind. 
Or if you're writing code in this way, think about some of the edge cases that might happen, or don't default to writing it in this way because things might go wrong. Make sure you know what you're doing. I love the phrase, "Must be able to justify this," or sometimes, "Must convince your pair that this is okay." So, default to writing in style A, avoid style B unless you can have a compelling reason to do so and can articulate that on your PR or, you know, convince your pair that that's the right way to go. STEPHANIE: Interesting. It's kind of like the honor system, then [laughs]. JOËL: And I think that's sort of the general way when you're working with developers, right? There's a lot of areas where there is ambiguity. There is no single best way to do it. And so, you rely on people's expertise to build systems that work well. There are some things where you say, look, having conversations about these things is not useful. We want to have some amount of standardization or uniformity about certain things. Maybe there's invariance you want to hold. Maybe there's certain things we're, like, this should never get to production. Whenever you've got these, like, broad sweeping statements about things should be always true or never true, that's a great time to introduce something like a linting rule. When it's more up to personal judgment, and you just want to nudge that judgment one way or another, then maybe it's better to have something like a guide. STEPHANIE: Yeah, what I'm hearing is there is a bit of a spectrum. JOËL: For sure. From things that are always true to things that are, like, sometimes true. I think I'm sort of curious about the idea of going a level beyond that, though, beyond things like just code style or maybe even, like, invariance you want to hold or something, being able to make suggestions to developers based off the code that is written. So, now you're applying more like heuristics, but instead of asking a human to apply those heuristics at code review time and leave some comments, maybe there's a way to get automated feedback from a tool. STEPHANIE: Yeah, I think we had mentioned code analysis tools earlier because some teams and organizations include those as part of their CI builds, right? And, you know, even Brakeman, right? Like, that's an analysis tool for security. But I can't recall if I've seen an organization use things like Flog metrics which measure code complexity in things like that. How would you feel if that were a check that was blocking your work? JOËL: So, I've seen things like that be used if you're using, like, the Code Climate plugin for GitHub. And Code Climate internally does effectively flog and other things that are fancier on your code quality. And so, you can set a threshold to say, hey, if complexity gets higher than a certain amount, fail the build. You can also...if you're doing things via GitHub, what's nice is that you can do effectively non-blocking comments. So, instead of failing CI to say, "Hey, this method looks really complex. You cannot merge until you have made this method less complex," maybe the sort of, like, next step up in ambiguity is to just leave a comment on a PR from a tool and say, "Hey, this method here is looking really complex. Consider breaking it up." STEPHANIE: Yeah, there is a tool that I've seen but not used called Danger, and its tagline is, Stop saying, "You forgot to..." in code review [laughs]. And it basically does that, what you were saying, of, like, leaving probably a suggestion. 
I can imagine it's blocking, but a suggestive comment that just automates that rather than it being a manual process that humans have to remember or notice. JOËL: And there's a lot of things that could be specific to your organization or your architecture. So, you say, "Hey, you introduced a file here. Would you consider also making an entry to this presenter file so that it's editable on the admin?" And maybe that's a better place to handle that. Just a comment. But you wouldn't necessarily want every code reviewer to have to think about that. STEPHANIE: So, I do think that I am sometimes not necessarily suspicious, but I have also seen tools like that end up just getting in the way, and it just becomes something you ignore. It's something you end up always using the escape hatch for, or people just find ways around it because they're harming more than they're helping. Do you have any thoughts about how to kind of keep those things in check and make sure that the tools we introduce genuinely are kind of helping the organization do the right thing rather than kind of being these perhaps arbitrary blockers? JOËL: I'm going to throw a fancy phrase at you. STEPHANIE: Ooh, I'm ready. JOËL: Signal-to-noise ratio. STEPHANIE: Whoa, uh-huh. JOËL: So, how often is the feedback from your tool actually helpful, and how often is it just noise that you have to dismiss, or manually override, or things like that? At some point, the ratio becomes so much that you lose the signal in all the noise. And so, maybe you even, like, because you're always just ignoring the feedback from this tool, you accidentally start overriding things that would be genuinely helpful. And, at that point, you've got the worst of both worlds. So, sort of keeping track on what that ratio is, and there's not, like, a magic number. I'm not going to tell you, "Oh, this is an 80/20 principle. You need to have, you know, 80% of the time it's useful and only 20% of the time it's not useful." I don't have a number to give you, but keeping track on maybe, you know, is it more often than not useful? Is your team getting to the point where they're just ignoring feedback from this tool? And thinking in terms of that signal versus that noise, I think is useful—to go back to that word again, heuristic for managing whether a tool is still helpful. STEPHANIE: Yeah. And I would even go on to say that, you know, I always appreciate when people in leadership roles keep an eye on these things. And they're like, "Oh, I've been hearing that people are just totally numb to this tool [laughs]" or, you know, "There's no engagement on this. People are just ignoring those signals." Any developer impacted by this, it is valid to bring it up if you're getting frustrated by it or just finding yourself, you know, having all of these obstacles getting in the way of your development process. JOËL: Sometimes, this can be a symptom that you're mixing too many classes of problems together in one tool. So, maybe there are things that are, like, really dangerous to your product to go live with them. Maybe it's, you know, something like Brakeman where you're doing security checks, and you really, ideally, would not go to production with a failing security check. And then, you've got some random other style things in there, and you're just like, oh yeah, whatever, it's this tool because it's mostly style things but occasionally gives you a security problem. And because you ignore it all the time, now you accidentally go to production with a security problem. 
So, splitting that out and say, "Look, we've got blocking and unblocking because we recognize these two classes of problems can be a helpful solution to this problem." STEPHANIE: Joël, did you just apply an object-oriented design principle to an organizational system? [laughter] JOËL: I may be too much of a developer. STEPHANIE: Cool. Well, I really appreciate your input on this because, you know, I was just kind of mulling over, like, how I felt about these kinds of things that I encounter as a developer. And I am glad that we got to kind of talk about it. And I think it gives me a more expanded vocabulary to, you know, analyze or reflect when I encounter these things on different client organizations. JOËL: And every organization is different, right? Like, you've got to learn the culture, learn the different elements of that software. What are the things that are invariant? What are the things that are dangerous that we don't want to ship without? What are the things that we're doing just for consistency? What are things which are, like, these are culturally things that we'd like to do? There's all these levels, and it's a lot to pick up. STEPHANIE: Yeah. At the end of the day, I think what I really liked about the last thing you said was being able to identify the problem, like the class of problem, and applying the right tool for the right job. It helps me take a step back and perhaps even think of different solutions that we might not have thought about earlier because we had just gotten so used to the one way of enforcing or checking things like that. JOËL: On that note, shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at: referrals@thoughtbot.com with any questions.

AMA Marketing / And with Bennie F. Johnson
Value of the Platform and Business Re-Engineering

AMA Marketing / And with Bennie F. Johnson

Play Episode Listen Later Apr 24, 2024 31:13


Ted Moser, Senior Partner at Prophet, joins AMA's Bennie F. Johnson to talk about his new book, Winning Through Platforms, why the platform is a great company equalizer, and finding the thing you're passionate about.

AMA Marketing / And with Bennie F. Johnson
Innovation Journey and Finding Patience in Tech

AMA Marketing / And with Bennie F. Johnson

Play Episode Listen Later Apr 24, 2024 34:11


Elav Horwitz, the Executive Vice President, Global Head of Applied Innovation, Gen AI Lead of McCann Worldgroup, joins AMA's Bennie F. Johnson to talk about creativity and the human experience, the need for experimentation, and trends in technology.

AMA Marketing / And with Bennie F. Johnson
Creative Endeavors and System Design

AMA Marketing / And with Bennie F. Johnson

Play Episode Listen Later Apr 24, 2024 45:40


Sean Adams, the Dean of Visual Art and Communication at the ArtCenter College of Design, joins AMA's Bennie F. Johnson for a discussion about pushing the limits, being committed to your profession, and how branding is changing.

AMA Marketing / And with Bennie F. Johnson
Strategic Intent and Driving Enduring Change

AMA Marketing / And with Bennie F. Johnson

Play Episode Listen Later Apr 24, 2024 40:31


Dr. Sylvia Long-Tolbert, the Founder of Know More Marketing, joins AMA's Bennie F. Johnson to explore why a mental model can help transform old brands into new ones, the need to integrate data into our work, and the incredible power of empathy.

AMA Marketing / And with Bennie F. Johnson
Marketing to Children and Privacy in Advertising

AMA Marketing / And with Bennie F. Johnson

Play Episode Listen Later Apr 24, 2024 34:55


Katie Goldstein, the Global Head of Policy and Regulatory Affairs at SuperAwesome, joins AMA's Bennie F. Johnson to talk about what marketers need to know when marketing to children, keeping kids safe online, and what we all need to know about online privacy.

AMA Marketing / And with Bennie F. Johnson
Value of Saying Yes and Strategic Approaches

AMA Marketing / And with Bennie F. Johnson

Play Episode Listen Later Apr 24, 2024 42:42


Zontee Hou, the Founder of Brooklyn-based digital marketing agency Media Volery and Managing Director for Convince & Convert, joins AMA's Bennie F. Johnson to discuss why we should invest in our teams, the culture of curiosity, and privacy and personalization.

AMA Marketing / And with Bennie F. Johnson
Human-driven Innovation and Career Journeys

AMA Marketing / And with Bennie F. Johnson

Play Episode Listen Later Apr 24, 2024 33:05


Paul M. Rand, the Vice President of Communications at the University of Chicago, joins AMA's Bennie F. Johnson to talk about the value of human connections, innovation and exploring new ways of doing things, and entrepreneurial journeys.

Learn System Design
5. Crafting Resilient Architectures with Messaging Brokers

Learn System Design

Play Episode Listen Later Apr 23, 2024 30:32 Transcription Available


Send us a Text Message. Discover the secret sauce that makes software systems scalable and robust as we dissect the architecture that powers modern applications. You're about to get a masterclass in software design, where we unravel the complex world of monolithic and microservice architectures. Say goodbye to confusion around when to stick with the simplicity of a monolith and when to break free into the modular world of microservices. We'll walk you through the need for message brokers and queues in this digital maze, ensuring your journey in software scalability is as smooth as possible. Then, fasten your seatbelts as we delve into the messaging protocols that keep the world connected. MQTT and AMQP are more than just acronyms; they're the backbone of reliable communication in distributed systems. We'll demystify MQTT's retained messages and 'Last Will' features, and break down AMQP's advanced message handling that's critical in sectors like banking. This episode is jam-packed with insights that promise to elevate your understanding of the intricate dance of message exchange types and delivery processes. Tune in and equip yourself with the knowledge to architect resilient and flexible software systems.
Show Notes:
You can follow Diego's language podcast here: https://diversilingua.com/
Support the Show.
Dedicated to the memory of Crystal Rose.
Email me at LearnSystemDesignPod@gmail.com
Join the free Discord
Consider supporting us on Patreon
Special thanks to Aimless Orbiter for the wonderful music.
Please consider giving us a rating on iTunes or wherever you listen to new episodes.
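The retained-message and 'Last Will' features mentioned above are easiest to see in a few lines of client code. This is a minimal sketch assuming the Python paho-mqtt library (1.x client API); the broker hostname and topic names are made up for the example.

import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="kitchen-sensor")  # paho-mqtt 1.x constructor

# Last Will: if this client disappears without a clean disconnect, the broker
# publishes "offline" on its behalf so subscribers learn about the failure.
client.will_set("sensors/kitchen/status", payload="offline", qos=1, retain=True)

client.connect("broker.example.com", 1883, keepalive=60)

# Retained messages: the broker stores the latest value on each topic and
# delivers it immediately to any new subscriber, even one that connects later.
client.publish("sensors/kitchen/status", payload="online", qos=1, retain=True)
client.publish("sensors/kitchen/temperature", payload="21.5", qos=1, retain=True)

client.loop_start()

Pointing the connect call at a local Mosquitto broker instead of the invented hostname is enough to try this end to end.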

OnTrack with Judy Warner
Revolutionary System Design: How Valispace is Changing Engineering

OnTrack with Judy Warner

Play Episode Listen Later Mar 19, 2024 33:50


In this episode of The OnTrack Podcast, Tech Consultant Zach Peterson chats with the co-founders of Valispace, Louise Lindblad and Marco Witzmann. Valispace is revolutionizing the engineering world by transforming system design tools, making them more efficient, connected, and capable of handling the complex requirements of today's engineering challenges. Learn how, from satellite engineering to the intricate demands of aerospace and beyond, Valispace's unique approach empowers engineers to build better, more innovative products by bridging the gap between requirements, design, and implementation.

Key Highlights:
Louise & Marco's Backgrounds: Introduction of Louise and Marco, highlighting their professional backgrounds and expertise.
Valispace Background & Intent: Overview of the origins of Valispace, including the initial purpose and intentions behind its development.
Spreadsheet Pains: Discussion on the challenges and limitations associated with using spreadsheets for engineering and project management tasks.

Further Resources:
Connect with Louise and Marco on LinkedIn
Watch this webinar to learn more about the Altium 365 requirements manager powered by Valispace
Exclusive 15 Days Free Altium Designer Access

Design Thinking 101
Talk to the Elephant: Design Learning for Behavior Change with Julie Dirksen — DT101 E131

Design Thinking 101

Play Episode Listen Later Mar 14, 2024 65:58


Julie Dirksen is the author of the books Design for How People Learn and Talk to the Elephant: Design Learning for Behavior Change. She is a learning strategy consultant with a focus on incorporating behavioral science into learning interventions. Julie was my guest for episode 42 of the show. In this episode, we talk about her latest book, ways to motivate learners and workshop participants, designing learning experiences for skill development, and more.

Listen to learn about:
>> Julie's latest book, Talk to the Elephant: Design Learning for Behavior Change
>> Behavior change challenges
>> The biggest challenge when creating virtual learning experiences
>> Motivating and engaging learners
>> AI in education

Our Guest
Julie Dirksen is the author of the books Design for How People Learn and Talk to the Elephant: Design Learning for Behavior Change. She is a learning strategy consultant with a focus on incorporating behavioral science into learning interventions. Her MS degree is in Instructional Systems Technology from Indiana University. She's been an adjunct faculty member at the Minneapolis College of Art and Design and is a Learning Guild Guildmaster. She is happiest when she gets to learn something new, and you can find her at usablelearning.com.

Show Highlights
[02:02] Julie gives a quick summary of her first book and how Talk to the Elephant is its natural sequel.
[02:42] The new book tackles the challenges in actually changing behavior.
[04:26] On learning experiences.
[05:21] Julie is starting to organize a third book, which will be on skill acquisition.
[05:34] The evolution of behavioral design.
[06:21] The COVID-19 pandemic is the biggest behavior change experiment in the history of the world.
[07:06] The book's audience is those in the learning and development field — people who design learning experiences.
[08:00] The Change Ladder.
[08:54] Julie offers one case study she uses in the book to demonstrate the challenges around behavior change.
[14:17] The importance of communicating and working with the people you serve when it comes to changing behaviors.
[14:58] Julie tells a story illustrating the importance of talking to and understanding the people you serve and their needs.
[17:57] It's important for people to participate in their own behavioral design.
[20:15] Creating the conditions for learners to motivate themselves.
[21:22] Making things as easy as possible for someone to do.
[22:42] A Miro Moment.
[25:27] Creating learning experiences that engage learners.
[26:14] The biggest challenge in designing virtual workshops.
[27:55] Why Julie is interested in Virtual Reality.
[29:34] The top two challenges Julie sees in almost every behavior change.
[34:55] Immediate impact and immediate rewards help learners stay motivated.
[37:21] Helping learners see what they will be able to do with this new skill or new knowledge.
[42:53] Julie shows appreciation for how video games onboard players as a great example of guiding people along the learning curve.
[45:11] Designing learning experiences to make your learner feel smart and capable as they acquire new skills and knowledge.
[48:42] Julie talks about research on self-directed learning by Catherine Lombardozzi.
[49:20] Julie and Catherine will be doing a webinar on the key behaviors seen in good self-directed learners.
[52:05] Julie ponders how systems thinking and design fits into behavior change.
[52:54] Dawan and Julie talk about AI and its role in education.
Links
Julie on LinkedIn
Usable Learning
Designing for how people learn

Book Recommendations
Design for How People Learn, by Julie Dirksen
Talk to the Elephant: Design Learning for Behavior Change, by Julie Dirksen
Thinking, Fast and Slow, by Daniel Kahneman
Nudge: The Final Edition, by Richard Thaler and Cass Sunstein
How Change Happens, by Cass Sunstein
Misbelief: What Makes Rational People Believe Irrational Things, by Dan Ariely
Predictably Irrational, Revised and Expanded Edition: The Hidden Forces That Shape Our Decisions, by Dan Ariely

DT 101 Episodes
Learning Design + Designing for How People Learn with Julie Dirksen — DT101 E42
Learning Design with Yianna Vovides — DT101 E58
Adding System Awareness to System Design to Your Innovation Stack with Julie Guinn — DT101 E43

The Money Pit's Calls & Answers
Septic System Design for High Water Table

The Money Pit's Calls & Answers

Play Episode Listen Later Mar 6, 2024 4:22


Septic systems must have a functioning leach drain field to work properly. Find out how to drain a septic tank when a high water table is present. Learn more about your ad choices. Visit podcastchoices.com/adchoices

BSD Now
542: Retro and Futuro

BSD Now

Play Episode Listen Later Jan 18, 2024 53:11


8 Open Source Trends to Keep an Eye Out for in 2024, System Design for Advanced Beginners, 2024 plans and 2023 retrospective, Upgrading from NetBSD 5.1 to 10_RC1, FreeBSD has a new C compiler: Oracle Developer Studio 12.6, Ctrl+Alt Museum

NOTES
This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow)

Headlines
8 Open Source Trends to Keep an Eye Out for in 2024 (https://klarasystems.com/articles/8-open-source-trends-to-keep-an-eye-out-for-in-2024/)
System Design for Advanced Beginners (https://robertheaton.com/2020/04/06/systems-design-for-advanced-beginners/)

News Roundup
2024 plans and 2023 retrospective (https://dataswamp.org/~solene/2024-01-09-plans-for-2024.html)
Upgrading from NetBSD 5.1 to 10_RC1 (https://www.idatum.net/upgrading-from-netbsd-51-to-10_rc1.html)
FreeBSD has a new C compiler: Oracle Developer Studio 12.6 (https://briancallahan.net/blog/20240101.html)
Ctrl+Alt Museum (https://photos.google.com/share/AF1QipMTsm7-LbZ-EiFh4xctppvVbBg_IhOPLTu4ej3fc7gWNgg6nHAUlBEK67-AD_tTsA?pli=1&key=N3dLRWlWVUpUY0RfNU1nb2VxYWUzRDdNek5DU2hn)

Beastie Bits
Taylor's Hackerstation (https://hackerstations.com/setups/taylor_town/)
An Empirical Study of the Reliability of UNIX Utilities (https://sigwait.org/~alex/blog/2022/09/11/fuzz.pdf)
BSD on Windows: Things I wish I knew existed (https://virtuallyfun.com/2023/12/08/bsd-on-windows-things-i-wish-i-knew-existed/)

Tarsnap
This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups.

Feedback/Questions
Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv)
Join us and other BSD Fans in our BSD Now Telegram channel (https://t.me/bsdnow)