In this short podcast episode, Bryan walks through some common thermostatic expansion valve (TXV) troubleshooting scenarios. Many of the same principles apply to troubleshooting electronic expansion valves (EEVs). These dynamic metering devices maintain a constant superheat. Troubleshooting does NOT start and end with the TXV. First, you need to inspect components (especially filters, ductwork, and filter-driers) and confirm the airflow and charge. You can use measureQuick to monitor superheat, subcooling, static pressure, and other key measurements, and the TrueFlow grid can give you a true idea of the CFM your system is moving. Keep in mind that superheat and subcooling values can vary by system. Airflow problems and filter-drier restrictions may mimic failed TXV conditions. Ideally, the liquid line filter-drier will be located indoors, and you can check for a pressure drop across it by looking for temperature differentials. You need a full column of liquid going into the filter-drier, and you can use a thermal imaging camera to see the desuperheating, condensing, and subcooling phases inside the condenser coil. The TXV has a bulb that can be loose, improperly mounted, or improperly insulated; when there is an issue with the bulb, there will likely be low superheat. The bulb should be on a clean and [ideally] horizontal portion of the suction line, and it should be strapped with copper or stainless steel straps. Insulating the bulb is especially important when it's externally located and when low superheat or flood back is a concern. Have a question that you want us to answer on the podcast? Submit your questions at https://www.speakpipe.com/hvacschool. Purchase your tickets or learn more about the 6th Annual HVACR Training Symposium at https://hvacrschool.com/symposium. Subscribe to our podcast on your iPhone or Android. Subscribe to our YouTube channel. Check out our handy calculators here or on the HVAC School Mobile App for Apple and Android
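Since a TXV is judged largely by the superheat it maintains, here is a minimal sketch of the two calculations behind the readings discussed above. The function names and example numbers are ours, not from the episode; in the field the saturation temperatures come from a P-T chart or an app such as measureQuick for the actual refrigerant.

```python
# Hedged sketch: superheat and subcooling from gauge readings.
# Saturation temperatures are looked up from a P-T chart for the
# refrigerant in use; the values below are illustrative placeholders.

def superheat(suction_line_temp_f, sat_temp_at_suction_pressure_f):
    """Superheat = suction line temperature minus saturation temperature."""
    return suction_line_temp_f - sat_temp_at_suction_pressure_f

def subcooling(sat_temp_at_liquid_pressure_f, liquid_line_temp_f):
    """Subcooling = saturation temperature minus liquid line temperature."""
    return sat_temp_at_liquid_pressure_f - liquid_line_temp_f

# Example placeholder readings:
sh = superheat(55.0, 45.0)    # 10 F of superheat
sc = subcooling(100.0, 90.0)  # 10 F of subcooling
print(f"Superheat: {sh:.1f} F, Subcooling: {sc:.1f} F")
```

A low superheat reading (flooding) is what you would expect from the bulb problems described above, which is why the bulb check comes before condemning the valve.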
In this episode of the HVAC Know It All Podcast, host Gary McCreadie talks with Glen Schwarzman from Copeland about compressors and refrigerants. Glen explains why using the correct refrigerant is crucial to avoid compressor damage and warranty issues, breaks down the differences between single-stage, two-stage, and variable speed compressors, and covers why electrical protections like phase monitors matter. He also shares tips on keeping oil flowing in modulating compressors, explains how vapor injection boosts heat pump performance in cold climates, and encourages techs to embrace new HVAC technology and use resources like Copeland Mobile to stay informed. The episode is full of helpful info for anyone working with modern HVAC systems.

Expect to Learn:
Why using the right refrigerant matters for compressors.
The differences between single-stage, two-stage, and variable speed compressors.
How electrical protections like phase monitors keep systems safe.
Tips for oil return and vapor injection in modulating compressors.
Why learning modern HVAC tech is important for techs today.
Episode Highlights:
[00:33] - Intro to Glen Schwarzman
[01:55] - Why using the right refrigerant matters
[03:47] - Compressor types: single-stage, two-stage, variable speed
[06:09] - Electrical protection and phase monitors explained
[08:29] - How variable speed compressors handle oil return
[12:02] - Importance of Electrical Protection for Variable Speed Compressors
[13:31] - Importance of learning modern HVAC technology
[16:34] - Tips for diagnosing compressor issues
[17:11] - Vapor injection and cold climate heat pumps
[18:26] - What to check before replacing a compressor
[23:19] - Is a Compressor Really a Pump? Glen Explains the Difference
[24:09] - How to Diagnose Compressor Issues and Avoid False Failures

This Episode is Kindly Sponsored by:
Master: https://www.master.ca/
Cintas: https://www.cintas.com/
Supply House: https://www.supplyhouse.com/
Cool Air Products: https://www.coolairproducts.net/

Follow the Guest Glen Schwarzman on:
Copeland: https://www.linkedin.com/company/copeland/

Follow the Host:
LinkedIn: https://www.linkedin.com/in/gary-mccreadie-38217a77/
Website: https://www.hvacknowitall.com
Facebook: https://www.facebook.com/people/HVAC-Know-It-All-2/61569643061429/
Instagram: https://www.instagram.com/hvacknowitall1/
A paint booth is essentially a large air mover. The more freely air can move in and out, the better off you'll be. But if a booth is imbalanced, the air can be turbulent and dirty, and that creates more work to sand out and polish dirt nibs. In this CollisionCast episode, sponsored by Accudraft Paint Booths, Editor-in-Chief Jay Sicht sits down with Jeremy Winters, who handles content and marketing at Accudraft, to talk about the importance of having the right amount of airflow for each refinish coating.
This was a fun discussion! Back in the tangent cube at last, we cover some of the highlights of our long trip through Egypt and Turkey, from Cappadocia and Derinkuyu to Gobekli Tepe and Karahan Tepe, the Pyramids and the Osireion, ancient tools and lost vaults of knowledge. We talk about the mythology connected with the structures, and focus a lot of time on the Osiris Myth. Thank you all for your patience as we travel, and your continued support! We are going to Peru in October of 2025!! Sign up now and join us, Ben from UnchartedX, and Yousef Aywan from the Khemet School on an epic journey through the highlands of Peru: https://unchartedx.com/2025peru2/ Join us, Ben from UnchartedX, Adam Young, and Karoly Poka for an afternoon at The Metropolitan Museum of Art in New York where we will peruse their collection of Ancient Egyptian artifacts, then we will move to the Explorer's Club for dinner and presentations from us and Ben! https://eveningattheexplorersclub.eventbrite.com/ Join our Patreon, support the show, get extra content and early access!
https://www.patreon.com/brothersoftheserpent
Support the show with a paypal donation: https://paypal.me/snakebros

Chapters
00:00 Welcome Back and Reflections on the Journey
02:53 Exploring Cappadocia's Underground Cities
05:42 The Role of Special Permissions in Archaeology
08:51 Airflow and Structural Integrity of Ancient Tunnels
11:39 Connections to Ancient Myths and Stories
14:45 The Evolution of Gobekli Tepe's Structures
22:02 Symbolism of the Lion's Gate and Sphinxes
26:52 Samson, Gilgamesh, and the Cycle of Civilizations
33:35 The Significance of Hair and Statues in Ancient Egypt
45:36 Exploring Ancient Symbolism and Mythology
47:06 The Osiris Myth: Variations and Interpretations
49:40 Experiencing the Osireion: Personal Reflections
51:32 Architectural Insights: The Construction Techniques of Ancient Egypt
54:38 Theories on Ancient Tools and Techniques
57:41 The Connection Between the Temple and the Osireion
01:00:21 Excavation Insights: The History of the Osireion
01:02:56 The Alignment and Purpose of Ancient Structures
01:06:26 The Osiris Myth: A Foundation of Civilization
01:25:06 The Evolution of Myths and Civilizations
01:27:15 The Eye of Horus and Lunar Symbolism
01:29:26 The Sabians: Pilgrims of Knowledge
01:31:46 Hermeticism and Ancient Astronomers
01:35:08 Exploring Gobekli Tepe and Its Mysteries
01:39:02 Lithics and Their Connection to Ancient Cultures
01:42:57 Contrasting Technologies: Lithics vs. Megaliths
01:48:22 The Evolution of Craftsmanship in Ancient Civilizations
02:01:17 Midden Accumulation and Cultural Practices
02:05:48 Future Discoveries in Neolithic Archaeology
In this engaging session, David Richardson breaks down the concept of high-performance HVAC, offering a clear roadmap for industry professionals looking to elevate their craft. Richardson argues that the HVAC industry has long been focused on equipment rather than complete systems, leading to widespread inefficiencies. The average system delivers only about 57% of its rated capacity into buildings, while even code-approved systems barely reach 63%. By implementing high-performance HVAC principles, contractors can achieve up to 88% efficiency while improving safety, health, comfort, and energy performance. Richardson presents a practical framework using the acronym "PATH" - Pressure, Airflow, Temperature, and Heat (BTUs) - as a step-by-step approach to implementing high-performance HVAC. He emphasizes starting with static pressure testing, which he calls "the foundation of airflow" and one of the most misunderstood principles in the industry. Just as doctors check blood pressure as a vital sign during every visit, Richardson advocates for measuring static pressure on every call, or at minimum, when encountering "red flag" issues like repeated equipment failures. From there, professionals can progress to measuring airflow, temperature, and finally BTU delivery to create complete system diagnostics. The presentation offers a journey-based approach, acknowledging that implementation takes time and requires breaking old habits. Richardson introduces the "one degree principle," suggesting that change happens incrementally, with small improvements eventually leading to breakthrough moments. He urges contractors to apply this methodology not just to equipment, but to extend testing into duct systems and even the building envelope. By making these changes visible through measurement, contractors can prove value to themselves, their teams, and ultimately their customers, transforming the way HVAC work is perceived and delivered. 
Key Topics Covered:
The definition of high-performance HVAC: getting back to craftsmanship, challenging the status quo, and confirming work through measurement
The industry problem: focusing on equipment instead of complete systems, resulting in just 57% of rated BTU capacity reaching conditioned spaces
The PATH framework: Pressure, Airflow, Temperature, and Heat as building blocks for system diagnostics
How to implement static pressure testing as the foundation for airflow diagnostics
The importance of measuring at both equipment and register/grille locations
Breaking down implementation into three areas: equipment, ducts, and building envelope
STEPS approach: Show, Teach, Equip, Promote, with application to yourself, your team, and your customers
The "one degree principle" for making incremental changes that lead to breakthrough results
Common obstacles to implementation and how to overcome resistance to change
How measurements make your work transparent and lead to better performance
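The final "Heat" step of PATH boils down to the standard sensible heat formula, Q = 1.08 × CFM × ΔT. The sketch below (our own illustration, not from the session) shows how a measured airflow and temperature drop translate into delivered capacity; note the 1.08 factor assumes standard air density, and comparing sensible-only delivery against total rated capacity is a simplification.

```python
# Sketch of the sensible-heat check behind the "BTU delivery" step of PATH.
# Q (BTU/h) = 1.08 x CFM x deltaT; 1.08 assumes standard air density.

def sensible_btuh(cfm, delta_t_f):
    """Sensible heat moved by an airstream, in BTU/h."""
    return 1.08 * cfm * delta_t_f

def delivered_fraction(measured_btuh, rated_btuh):
    """Fraction of rated capacity actually reaching the space."""
    return measured_btuh / rated_btuh

# Hypothetical 3-ton (36,000 BTU/h rated) system moving 1000 CFM
# with a 19 F temperature drop measured at the registers:
q = sensible_btuh(1000, 19)          # 20,520 BTU/h sensible
pct = delivered_fraction(q, 36000)   # ~57%, the industry average cited above
print(f"Delivered: {q:.0f} BTU/h ({pct:.0%} of rated)")
```

Run with these hypothetical numbers, the system lands right at the ~57% average delivery figure Richardson cites, which is the kind of "make it visible through measurement" result the session argues for.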
In this podcast episode, HVAC professionals Chris Hughes and Adam Mufich discuss the intricate challenges technicians face when commissioning modern inverter-based heating and cooling systems. Unlike traditional single-stage or two-stage HVAC equipment, inverter systems introduce a new level of complexity that can leave even experienced technicians feeling uncertain about proper installation and startup procedures. The presentation highlights a fundamental shift in how HVAC systems operate, moving from straightforward single-stage systems to sophisticated inverter-based technologies that dynamically modulate compressor speed, fan speed, and refrigerant flow. This technological evolution creates significant challenges for technicians, who previously could rely on simple, consistent commissioning processes. The speakers emphasize that modern inverter systems require a much more nuanced approach, with specific temperature ranges, wait times, and verification methods that are not always clearly documented in manufacturer manuals. Recognizing the industry-wide need for clarity, Chris and Adam have developed a comprehensive spreadsheet resource that consolidates commissioning information across multiple HVAC brands. Their goal is to empower technicians by providing accessible, standardized guidance for working with these complex systems. By sharing their research and encouraging collaboration, they aim to address what they see as a critical gap in manufacturer support and technical documentation. The podcast serves as both an educational resource and a call to action for HVAC professionals and manufacturers alike. Chris and Adam argue that the industry needs more transparency, better documentation, and a collective effort to standardize inverter system commissioning practices.
Their work represents a significant step towards demystifying these advanced HVAC technologies and ensuring that technicians can confidently and effectively install and service modern heating and cooling equipment.

Key Topics Covered:
Differences between single-stage, two-stage, and inverter HVAC systems
Commissioning challenges with modern inverter technologies
Critical factors in proper system startup, including:
- Outdoor and indoor temperature requirements
- Wait times for system stabilization
- Refrigerant charging methods
- Airflow measurement and verification
The importance of precise refrigerant charging (superheat and subcooling)
Challenges with manufacturer documentation and technical support
The need for standardized commissioning procedures across HVAC brands
Strategies for verifying system performance during commissioning
The speakers' collaborative effort to create a comprehensive inverter system commissioning guide
Thibault Lefèvre is Global Data & AI Product Manager for the Manufacturing and Supply scope at Sanofi, the world-leading pharmaceutical group. He talks to us about the three pillars of their Data and AI strategy. We discuss:
This podcast contains a handful of visuals that we thought would be helpful, so we've published a video version of this podcast at https://youtu.be/NZ2qp06oET8. The Testo 605i that Mark mentioned can be found at https://amzn.to/41TYFjs. To find the chart that Mark referenced, go to https://efficientcomfort.net/charts/. Check this link to IEB Unite: https://events.iebcoaching.com/IEBUnite2025. You can find Mark at https://besttampainspector.com.

Reuben Saltzman, Tessa Murry, and Mark Cramer delve into the intricacies of air conditioning testing, focusing on how home inspectors can improve their methods. They discuss the importance of understanding temperature splits, the role of humidity, and the need for advanced measurement techniques. Mark emphasizes the limitations of basic thermometers and advocates for more accurate tools to assess air conditioning performance. The discussion also covers real-world examples, practical applications, and the significance of airflow in HVAC systems. In this conversation, the speakers delve into the intricacies of HVAC measurement techniques, focusing on the use of advanced tools like the measureQuick app. They discuss the importance of accurate temperature readings, the role of humidity in system performance, and the shift toward non-invasive testing methods.
The conversation highlights the challenges faced by HVAC professionals in adapting to new technologies and the implications of energy efficiency on system performance.

Takeaways
Air conditioning is crucial for comfort, especially in humid climates.
Home inspectors often rely on basic thermometers, which may not provide accurate readings.
Temperature splits in air conditioning can vary significantly based on humidity levels.
Understanding latent heat is essential for accurate air conditioning assessments.
Advanced measurement tools can provide more precise data than traditional methods.
Humidity plays a critical role in determining the effectiveness of air conditioning systems.
Real-world examples illustrate the importance of proper testing techniques.
Airflow issues are a common problem in HVAC systems that can affect performance.
Using technology like hygrometers can enhance the accuracy of air conditioning evaluations.
The ideal temperature split for air conditioning systems typically falls between 18-20 degrees.
Using two probes allows for the simultaneous measurement of return and supply air.
The measureQuick app simplifies the process of HVAC measurements.
Accurate temperature readings are crucial for assessing system performance.
Non-invasive methods are becoming the preferred approach in HVAC inspections.
Humidity levels significantly impact the efficiency of air conditioning systems.
High-efficiency systems may struggle with humidity control despite their performance.
Understanding airflow and duct conditions is essential for accurate HVAC assessments.
Investing in advanced measurement tools can enhance inspection accuracy.
The HVAC industry is gradually shifting away from traditional gauge methods.
Education and resources are vital for HVAC professionals to stay updated.
Chapters
00:00 Introduction to Air Conditioning Testing
09:14 Advanced Measurement Techniques for Air Conditioning
18:01 Understanding Temperature Differential and Humidity
31:31 Understanding Measurement Techniques in HVAC Systems
43:12 Cost and Accessibility of HVAC Measurement Tools
48:13 Key Factors Affecting HVAC Performance
56:18 Resources for Further Learning in HVAC
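The two-probe temperature-split check discussed in the episode can be sketched in a few lines. The 18-20 degree "ideal" range is the dry-climate rule of thumb from the conversation above; as Mark notes, the real target shifts with return-air humidity, so treat the fixed range here as a simplification (function names and example readings are ours).

```python
# Minimal sketch of a temperature-split check for a home inspection.
# Target range of 18-20 F is the rule of thumb cited in the episode;
# a humidity-adjusted chart gives a more accurate target in practice.

def temperature_split(return_temp_f, supply_temp_f):
    """Split = return air temperature minus supply air temperature."""
    return return_temp_f - supply_temp_f

def split_in_range(split_f, low=18.0, high=20.0):
    """True when the measured split falls inside the target range."""
    return low <= split_f <= high

# Two probes measuring return and supply air simultaneously:
split = temperature_split(75.0, 56.0)
print(f"Split: {split:.1f} F, within 18-20 F target: {split_in_range(split)}")
```

A split far outside the range points toward charge, airflow, or duct problems rather than proving the equipment itself has failed, which is why the episode pairs this check with humidity and airflow measurements.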
In this episode of the HVAC Know It All Podcast, host Gary McCreadie continues his conversation with Tim De Stasio, the owner and president of Comfort Science Solutions. In Part 2, Tim takes a closer look at the issues in HVAC caused by profit-focused decisions instead of customer care. He also discusses how company culture and investor pressure can affect service quality and ethical standards in the industry. Tim explains how some technicians are pressured to sell rather than fix real customer problems, and points out how this approach harms both the industry and customers in the long run. He emphasizes the need to focus on quality service instead of quick profits to create a more honest and sustainable business, advocates for a culture that values technical skills and real problem-solving, and explores how ethical business practices can improve service quality and strengthen a company's reputation.

Expect to Learn:
How company culture influences HVAC technicians' approach to sales and service.
The impact of venture capital on ethical practices within HVAC companies.
Strategies for HVAC companies to adopt more ethical sales practices.
The role of technician training in promoting problem-solving over sales-first approaches.
How the industry can shift towards more sustainable and customer-focused practices.
Episode Highlights:
[00:33] – Introduction to the Second Part of the Episode with Tim De Stasio
[02:19] – Transitioning from Comfort Advisors to Sales Technicians
[05:24] – Analysis of venture capital's role in shaping company culture and ethical dilemmas
[11:46] – Exploring the balance between making profits and providing ethical HVAC services
[18:00] – Tim's take on the real-world impacts of prioritizing sales over service
[21:22] – Promoting Ethical Training and Diagnostics in HVAC

This Episode is Kindly Sponsored by:
Master: https://www.master.ca/
Cintas: https://www.cintas.com/
Supply House: https://www.supplyhouse.com/
Cool Air Products: https://www.coolairproducts.net/
Lambert Insurance Services: https://www.lambert-ins.com/

Follow the Guest Tim De Stasio on:
LinkedIn: https://www.linkedin.com/in/tim-de-stasio-0618824a/
Facebook: https://www.facebook.com/timothy.destasio
Instagram: https://www.instagram.com/timdestasiohvac/
YouTube: https://www.youtube.com/@timdestasiohvac
Comfort Science LP: https://www.instagram.com/comfortsciencehvac/

Follow the Host:
LinkedIn: https://www.linkedin.com/in/gary-mccreadie-38217a77/
Website: https://www.hvacknowitall.com
Facebook: https://www.facebook.com/people/HVAC-Know-It-All-2/61569643061429/
Instagram: https://www.instagram.com/hvacknowitall1/
Talks:
Goodbye GIL: Exploring the Free-threaded mode in Python 3.13 - Adarsh Divakaran (https://youtu.be/7NvgI3jDprg)
Unlocking Concurrency and Performance in Python with ASGI and Async I/O - Allen Y, M Aswin Kishore (https://youtu.be/s5UGRvdrb_Q)
Quantifying Nebraska - Adam Harvey (https://youtu.be/vH9xOxryqW8)
Error Culture - Ryan Cheley (https://youtu.be/FBMg2Bp4I-Q)
Mono-repositories in Python - Avik Basu (https://youtu.be/VIlcodf9Wrg)
You Should Build a Robot (MicroPython) (https://youtu.be/UygK5W3txTM)
As easy as breathing: manage your workflows with Airflow! - Madison Swain-Bowden (https://youtu.be/dWZSVY79-SM)
Optimal Performance Over Basic as a Perfectionist with Deadlines - Velda Kiara (https://youtu.be/dvxzJDk6x9Q)

Where to find us:
1. Telegram: https://t.me/proConf
2. YouTube: https://www.youtube.com/c/proconf
3. SoundCloud: https://soundcloud.com/proconf
4. iTunes: https://podcasts.apple.com/by/podcast/podcast-proconf/id1455023466
5. Spotify: https://open.spotify.com/show/77BSWwGavfnMKGIg5TDnLz
Join me, Tony Mormino, as I delve into the groundbreaking world of airflow control technology at Antec Controls. In this episode, I'm at the AHR Expo in Orlando, sharing insights from my experience at the Antec booth. Curtis, an expert from Antec, joins me to discuss how their advanced control valves are transforming critical healthcare spaces. We'll explore why these systems are vital for maintaining sterility and safety, from isolation rooms to clean labs, and how they differ significantly from traditional VAV boxes. If you're interested in the intersection of HVAC technology and healthcare safety, this episode will equip you with the knowledge of cutting-edge solutions that ensure cleaner, safer environments where health and wellness advancements are made. Tune in to discover how precise airflow control can make a significant difference in protecting both patients and healthcare workers.
In this episode of the HVAC Know It All Podcast, host Gary McCreadie welcomes Dr. Mark Modera, the inventor of Aeroseal, Professor at the University of California, Davis, Visiting Faculty at Berkeley Lab, and former Vice President at Carrier HVAC. They talk about why sealing ducts is important for HVAC performance, how new methods like Aeroseal improve on older ones, and why fixing duct leaks saves energy and improves comfort. Dr. Modera also explains the science behind duct leaks, how they affect system performance, and how modern sealing technology makes the job easier. This discussion gives HVAC professionals useful tips on improving airflow, making homes more efficient, and using better duct sealing methods.

Dr. Modera discusses the problems caused by duct leaks in HVAC systems and why old sealing methods don't work as well as new solutions like Aeroseal. He explains why it's important to check and measure duct leaks, the need for ongoing learning in building science, and how technology is improving duct sealing and system performance. They also discuss how clear communication, better diagnostics, and advanced sealing techniques can help HVAC professionals improve performance and reduce energy waste. This episode is packed with practical HVAC tips, industry challenges, and real solutions to help technicians understand duct leaks, boost system performance, and adopt better sealing methods.

Expect to Learn:
Why duct leakage is a major issue and how it impacts HVAC system performance.
The limitations of traditional duct sealing methods and the benefits of Aeroseal.
How verifying and measuring duct leakage can improve energy efficiency.
Common misconceptions about duct sealing and their impact on home comfort.
How modern technology is transforming the way HVAC professionals approach duct sealing.

Episode Highlights:
[00:00] – Introduction to Dr. Mark Modera
[01:30] – Understanding Aeroseal: How It Works & How It Differs from Air Barrier Sealing
[03:46] – The Impact of Duct Leakage on Energy Bills, Airflow & Home Comfort
[07:24] – The Invention of Aeroseal: Dr. Mark Modera's Breakthrough & Impact on Duct Sealing & Energy Savings
[11:12] – The Evolution of Heat Pump Technology & the Need for Better Duct Sealing Solutions
[14:11] – How Aeroseal Was Developed & Its Impact on Basement Duct Leakage and Air Distribution
[15:56] – Why Aeroseal is a Game-Changer for Hard-to-Reach Duct Leaks
[18:01] – How Long Does Aeroseal Take? Setup, Application & Real-Time Leakage Monitoring

This Episode is Kindly Sponsored by:
Master: https://www.master.ca/
Cintas: https://www.cintas.com/
Supply House: https://www.supplyhouse.com/
Cool Air Products: https://www.coolairproducts.net/
Lambert Insurance Services: https://www.lambert-ins.com/

Follow the Guest Dr. Mark Modera on:
LinkedIn: https://www.linkedin.com/in/mark-modera-94432212/
Aeroseal: https://www.linkedin.com/company/aeroseal-llc/about/
University of California: https://www.linkedin.com/school/uc-davis/
Berkeley Lab: https://www.linkedin.com/company/lawrence-berkeley-national-laboratory/
Carrier HVAC: https://www.linkedin.com/company/carrierhvac/
Website: Aeroseal: https://aeroseal.com/ Carrier HVAC: https://www.carrier.com/carrier/en/worldwide/

Follow the Host:
LinkedIn: https://www.linkedin.com/in/gary-mccreadie-38217a77/
Website: https://www.hvacknowitall.com
Facebook: https://www.facebook.com/people/HVAC-Know-It-All-2/61569643061429/
Instagram: https://www.instagram.com/hvacknowitall1/
In this podcast episode, we talked with Adrian Brudaru about the past, present and future of data engineering.

About the speaker:
Adrian Brudaru studied economics in Romania but soon got bored with how creative the industry was, and chose to go instead for the more factual side. He ended up in Berlin at the age of 25 and started a role as a business analyst. At the age of 30, he had had enough of startups and decided to join a corporation, but quickly found out that it did not provide the challenge he wanted. As going back to startups was not a desirable option either, he decided to postpone his decision by taking freelance work and has never looked back since. Five years later, he co-founded a company in the data space to try new things. This company is also looking to release open source tools to help democratize data engineering.

0:00 Introduction to DataTalks.Club
1:05 Discussing trends in data engineering with Adrian
2:03 Adrian's background and journey into data engineering
5:04 Growth and updates on Adrian's company, DLT Hub
9:05 Challenges and specialization in data engineering today
13:00 Opportunities for data engineers entering the field
15:00 The "Modern Data Stack" and its evolution
17:25 Emerging trends: AI integration and Iceberg technology
27:40 DuckDB and the emergence of portable, cost-effective data stacks
32:14 The rise and impact of dbt in data engineering
34:08 Alternatives to dbt: SQLMesh and others
35:25 Workflow orchestration tools: Airflow, Dagster, Prefect, and GitHub Actions
37:20 Audience questions: Career focus in data roles and AI engineering overlaps
39:00 The role of semantics in data and AI workflows
41:11 Focusing on learning concepts over tools when entering the field
45:15 Transitioning from backend to data engineering: challenges and opportunities
47:48 Current state of the data engineering job market in Europe and beyond
49:05 Introduction to Apache Iceberg, Delta, and Hudi file formats
50:40 Suitability of these formats for batch and streaming workloads
52:29 Tools for streaming: Kafka, SQS, and related trends
58:07 Building AI agents and enabling intelligent data applications
59:09 Closing discussion on the place of tools like dbt in the ecosystem
Claude Plays Pokémon - A Conversation with the Creator // MLOps Podcast #294 with Alexa Griffith, Senior Software Engineer at Bloomberg.

Join the Community: https://go.mlops.community/YTJoin
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
Alexa shares her journey into software engineering, from early struggles with Airflow and Kubernetes to leading open-source projects like the Envoy AI Gateway. She and Demetrios discuss AI model deployment, tooling differences across tech roles, and the importance of abstraction. They highlight aligning technical work with business goals and improving cross-team communication, offering key insights into MLOps and AI infrastructure.

// Bio
Alexa Griffith is a Senior Software Engineer at Bloomberg, where she builds scalable inference platforms for machine learning workflows and contributes to open-source projects like KServe. She began her career at Bluecore working in data science infrastructure, and holds an honors degree in Chemistry from the University of Tennessee, Knoxville. She shares her insights through her podcast, Alexa's Input (AI), technical blogs, and active engagement with the tech community at conferences and meetups.

// Related Links
Website: https://alexagriffith.com/
KubeCon Keynote about Envoy AI Gateway: https://www.youtube.com/watch?v=do1viOk8nok

~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Alexa on LinkedIn: /alexa-griffith
In this episode of the Aging Well Podcast, Dr. Jeff Armstrong and Corbin Bruton explore the trending topic of mouth taping and its efficacy in managing obstructive sleep apnea. They discuss recent research findings from Brigham and Women's Hospital and Harvard Medical School, revealing that mouth taping might worsen the condition for some individuals. The conversation underscores the complexity of managing sleep apnea, emphasizing personalized approaches and professional medical advice. Corbin shares his personal journey with sleep apnea and the benefits of using a CPAP machine while highlighting lifestyle changes that can improve sleep quality. This episode serves as a critical guide for those dealing with sleep apnea or considering viral trends for sleep solutions.

https://www.medpagetoday.com/pulmonology/sleepdisorders/112246?xid=nl_mpt_DHE_2024-10-03&mh=b4bce701259919425f7ab5e844f1878e&utm_source=Sailthru&utm_medium=email&utm_campaign=Daily%20Headlines%20Evening%202024-10-03&utm_term=NL_Daily_DHE_dual-gmail-defini
Mouth Closure and Airflow in Patients With Obstructive Sleep Apnea: A Nonrandomized Clinical Trial
Should Mouth Taping and Obstructive Sleep Apnea Therapies Be Regulated?
Cultivation Elevated - Indoor Farming, Cannabis Growers & Cultivators - Pipp Horticulture
In this episode of the HVAC Know It All Podcast, host Gary McCreadie sits down with Adam Mufich, an expert in airflow diagnostics and technical trainer at the National Comfort Institute (NCI). They dive deep into the critical role of airflow in HVAC systems, why static pressure doesn't equal airflow, and how technicians can improve system performance with better diagnostics. Adam shares insights on the TrueFlow Grid, a revolutionary tool for measuring airflow accurately, and explains how it integrates with measureQuick and NCI workflows to help technicians troubleshoot and optimize HVAC systems more efficiently. They also discuss common airflow mistakes, the importance of proper system sizing, and the impact of filter selection on performance. Expect to Learn: 1. Why airflow is the backbone of HVAC system performance. 2. How the TrueFlow Grid simplifies airflow measurement. 3. The difference between static pressure and airflow and why it matters. 4. How improper system sizing leads to airflow issues. 5. Why deeper pleated filters outperform one-inch filters. Episode Highlights: [00:33] – Introduction to the Episode with Adam Mufich [02:23] – How Important Is Airflow? Adam Explains Why It's a 10/10 [03:35] – The Right Way to Sell HVAC Services: Solution-Based Selling & the Role of Airflow Measurement [06:19] – The TrueFlow Grid: Accurate Airflow Measurement Beyond Ductwork Limitations [10:56] – Static Pressure vs. Airflow: Understanding the Key Differences [13:29] – TrueFlow Grid & NCI: Optimizing Airflow with Fan Law 2 [21:28] – Undersized Ducts or Oversized Equipment? 
The Key to Proper Airflow [26:03] – The Deep Pleat Filter Advantage: More Surface Area = Better Airflow [29:25] – Can Better Filtration Reduce White Slime? [31:45] – UV Lights, Drain Pans & Biofilm: Do UV Lights Really Help? [33:38] – Final Thoughts: How to Improve Your Airflow Game. This episode is kindly sponsored by: Master: https://www.master.ca/ Cintas: https://www.cintas.com/ Supply House: https://www.supplyhouse.com Cool Air Products: https://www.coolairproducts.net Lambert Insurance Services: https://www.lambert-ins.com Follow Adam Mufich on: LinkedIn: https://www.linkedin.com/in/adam-mufich-5225055a/ National Comfort Institute: https://www.linkedin.com/company/national-comfort-institute/ Master HVAC diagnostics with measureQuick & the TrueFlow Grid!
Did you know that adding a simple Code Interpreter took o3 from 9.2% to 32% on FrontierMath? The Latent Space crew is hosting a hack night Feb 11th in San Francisco focused on CodeGen use cases, co-hosted with E2B and Edge AGI; watch E2B's new workshop and RSVP here! We're happy to announce that today's guest Samuel Colvin will be teaching his very first Pydantic AI workshop at the newly announced AI Engineer NYC Workshops day on Feb 22! 25 tickets left. If you're a Python developer, it's very likely that you've heard of Pydantic. Every month, it's downloaded >300,000,000 times, making it one of the top 25 PyPI packages. OpenAI uses it in its SDK for structured outputs, it's at the core of FastAPI, and if you've followed our AI Engineer Summit conference, Jason Liu of Instructor has given two great talks about it: “Pydantic is all you need” and “Pydantic is STILL all you need”. Now, Samuel Colvin has raised $17M from Sequoia to turn Pydantic from an open source project into a full stack AI engineer platform with Logfire, their observability platform, and PydanticAI, their new agent framework. Logfire: bringing OTEL to AI. OpenTelemetry recently merged Semantic Conventions for LLM workloads, which provides standard definitions to track performance like gen_ai.server.time_per_output_token. In Sam's view at least 80% of new apps being built today have some sort of LLM usage in them, and just like web observability platforms got replaced by cloud-first ones in the 2010s, Logfire wants to do the same for AI-first apps. If you're interested in the technical details, Logfire migrated away from Clickhouse to Datafusion for their backend. 
We spent some time on the importance of picking open source tools you understand and that you can actually contribute to upstream, rather than the more popular ones; listen in ~43:19 for that part. Agents are the killer app for graphs. Pydantic AI is their attempt at taking a lot of the learnings that LangChain and the other early LLM frameworks had, and putting Python best practices into it. At an API level, it's very similar to the other libraries: you can call LLMs, create agents, do function calling, do evals, etc. They define an “Agent” as a container with a system prompt, tools, structured result, and an LLM. Under the hood, each Agent is now a graph of function calls that can orchestrate multi-step LLM interactions. You can start simple, then move toward fully dynamic graph-based control flow if needed. “We were compelled enough by graphs once we got them right that our agent implementation [...] is now actually a graph under the hood.” Why Graphs? * More natural for complex or multi-step AI workflows. * Easy to visualize and debug with mermaid diagrams. * Potential for distributed runs, or “waiting days” between steps in certain flows. In parallel, you see folks like Emil Eifrem of Neo4j talk about GraphRAG as another place where graphs fit really well in the AI stack, so it might be time for more people to take them seriously. Full Video Episode: Like and subscribe! Chapters: * 00:00:00 Introductions * 00:00:24 Origins of Pydantic * 00:05:28 Pydantic's AI moment * 00:08:05 Why build a new agents framework? * 00:10:17 Overview of Pydantic AI * 00:12:33 Becoming a believer in graphs * 00:24:02 God Model vs Compound AI Systems * 00:28:13 Why not build an LLM gateway? * 00:31:39 Programmatic testing vs live evals * 00:35:51 Using OpenTelemetry for AI traces * 00:43:19 Why they don't use Clickhouse * 00:48:34 Competing in the observability space * 00:50:41 Licensing decisions for Pydantic and LogFire * 00:51:48 Building Pydantic.run * 00:55:24 Marimo and the future of Jupyter notebooks * 
00:57:44 London's AI scene. Show Notes: * Sam Colvin * Pydantic * Pydantic AI * Logfire * Pydantic.run * Zod * E2B * Arize * Langsmith * Marimo * Prefect * GLA (Google Generative Language API) * OpenTelemetry * Jason Liu * Sebastian Ramirez * Bogomil Balkansky * Hood Chatham * Jeremy Howard * Andrew Lamb. Transcript: Alessio [00:00:03]: Hey, everyone. Welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI. Swyx [00:00:12]: Good morning. And today we're very excited to have Sam Colvin join us from Pydantic AI. Welcome. Sam, I heard that Pydantic is all we need. Is that true? Samuel [00:00:24]: I would say you might need Pydantic AI and Logfire as well, but it gets you a long way, that's for sure. Swyx [00:00:29]: Pydantic basically needs no introduction. It's almost 300 million downloads in December. And obviously, in the previous podcasts and discussions we've had with Jason Liu, he's been a big fan and promoter of Pydantic and AI. Samuel [00:00:45]: Yeah, it's weird because obviously I didn't create Pydantic originally for uses in AI, it predates LLMs. But it's like we've been lucky that it's been picked up by that community and used so widely. Swyx [00:00:58]: Actually, maybe we'll hear it right from you: what is Pydantic, and maybe a little bit of the origin story? Samuel [00:01:04]: The best name for it, which is not quite right, is a validation library. And we get some tension around that name because it doesn't just do validation, it will do coercion by default. We now have strict mode, so you can disable that coercion. But by default, if you say you want an integer field and you get in a string of 1, 2, 3, it will convert it to 123, and a bunch of other sensible conversions. And as you can imagine, the semantics around it, exactly when you convert and when you don't, are complicated, but because of that, it's more than just validation. 
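The default coercion Samuel describes can be imitated in a few lines of plain Python. The toy below also uses type hints as the schema, in the spirit of Pydantic, but it is a sketch of the idea only, not the real library's implementation:

```python
from typing import get_type_hints

class Model:
    """Toy schema-from-type-hints validator with Pydantic-style lax coercion."""
    def __init__(self, **data):
        for field, annotated_type in get_type_hints(type(self)).items():
            value = data[field]  # missing fields raise KeyError in this toy
            if not isinstance(value, annotated_type):
                # Coerce, e.g. the string "123" -> the int 123 ("lax" mode);
                # a strict mode would raise a validation error here instead.
                value = annotated_type(value)
            setattr(self, field, value)

class User(Model):
    id: int
    name: str

user = User(id="123", name="Samuel")
print(user.id, type(user.id).__name__)  # 123 int
```

Real Pydantic does far more (nested models, unions, careful coercion rules, Rust-backed validation), but the type-hints-as-schema shape is the same.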
Back in 2017, when I first started it, the different thing it was doing was using type hints to define your schema. That was controversial at the time. It was genuinely disapproved of by some people. I think the success of Pydantic and libraries like FastAPI that build on top of it means that today that's no longer controversial in Python. And indeed, lots of other people have copied that route, but yeah, it's a data validation library. It uses type hints for the most part and obviously does all the other stuff you want, like serialization on top of that. But yeah, that's the core. Alessio [00:02:06]: Do you have any fun stories on how JSON schemas ended up being kind of like the structured output standard for LLMs? And were you involved in any of these discussions? Because I know OpenAI was, you know, one of the early adopters. So did they reach out to you? Was there kind of like a structured output conversation in open source that people were talking about, or was it just random? Samuel [00:02:26]: No, very much not. So I originally didn't implement JSON schema inside Pydantic, and then Sebastian Ramirez of FastAPI came along, and like the first I ever heard of him was over a weekend. I got like 50 emails from him as he was committing to Pydantic, adding JSON schema long pre version one. So the reason it was added was for OpenAPI, which is obviously closely akin to JSON schema. And then, yeah, I don't know why it was JSON schema that got picked up and used by OpenAI. It was obviously very convenient for us, because it meant that not only can you do the validation, but because Pydantic will generate you the JSON schema, it can kind of be one source of truth for structured outputs and tools. Swyx [00:03:09]: Before we dive in further on the AI side of things, something I'm mildly curious about: obviously, there's Zod in JavaScript land. 
Every now and then there is a new sort of in vogue validation library that takes over for quite a few years, and then maybe something else comes along. Is Pydantic done? Like, the core Pydantic? Samuel [00:03:30]: I've just come off a call where we were redesigning some of the internal bits. There will be a v3 at some point, which will not break people's code half as much as v2 did; v2 was the massive rewrite into Rust, but also fixing all the stuff that was broken back from like version zero point something that we didn't fix in v1 because it was a side project. We have plans to basically store the data in Rust types after validation. Not completely. So we're still working to design the Pythonic version of it, in order for it to be able to convert into Python types. So then if you were doing like validation and then serialization, you would never have to go via a Python type. We reckon that can give us another three to five times speed up. That's probably the biggest thing. Also, like changing how easy it is to basically extend Pydantic and define how particular types, like for example, NumPy arrays, are validated and serialized. But there's also stuff going on. And for example, Jiter, the JSON library in Rust that does the JSON parsing, has a SIMD implementation at the moment only for AMD64. So we can add that. We need to go and add SIMD for other instruction sets. So there's a bunch more we can do on performance. I don't think we're going to go and revolutionize Pydantic, but it's going to continue to get faster, and continue, hopefully, to allow people to do more advanced things. We might add a binary format like CBOR for serialization for when you just want to put the data into a database and probably load it again from Pydantic. 
So there are some things that will come along, but for the most part, it should just get faster and cleaner. Alessio [00:05:04]: From a focus perspective, I guess, as a founder too, how did you think about the AI interest rising? And then how do you kind of prioritize, okay, this is worth going into more, and we'll talk about Pydantic AI and all of that. What was maybe your early experience with LLMs, and when did you figure out, okay, this is something we should take seriously and focus more resources on? Samuel [00:05:28]: I'll answer that, but I'll answer what I think is a kind of parallel question, which is Pydantic's weird, because Pydantic existed, obviously, before I was starting a company. I was working on it in my spare time, and then beginning of '22, I started working on the rewrite in Rust. And I worked on it full-time for a year and a half, and then once we started the company, people came and joined. And it was a weird project, because that would never go away. You can't get it signed off inside a startup: like, we're going to go off and three engineers are going to work full-on for a year in Python and Rust, writing like 30,000 lines of Rust, just to release a free, open-source Python library. The result of that has been excellent for us as a company, right? As in, it's made us remain entirely relevant. And it's like, Pydantic is not just used in the SDKs of all of the AI libraries, but, I can't say which one, but one of the big foundational model companies, when they upgraded from Pydantic v1 to v2, their number one internal model performance metric, time to first token, went down by 20%. So you think about all of the actual AI going on inside, and yet at least 20% of the CPU, or at least the latency inside requests, was actually Pydantic, which shows like how widely it's used. So we've benefited from doing that work, although it would never have made financial sense in most companies. 
In answer to your question about like, how do we prioritize AI, I mean, the honest truth is we've spent a lot of the last year and a half building good general purpose observability inside Logfire and making Pydantic good for general purpose use cases. And the AI has kind of come to us. Like, not that we want to get away from it, but the appetite, both in Pydantic and in Logfire, to go and build with AI is enormous, because it kind of makes sense, right? Like if you're starting a new greenfield project in Python today, what's the chance that you're using GenAI? 80%, let's say; globally, obviously it's like a hundred percent in California, but even worldwide, it's probably 80%. Yeah. And so everyone needs that stuff. And there's so much yet to be figured out, so much space to do things better in the ecosystem, in a way that, like, to go and implement a database that's better than Postgres is a Sisyphean task, whereas building tools that are better for GenAI than some of the stuff that's about now is not very difficult. Putting the actual models themselves to one side. Alessio [00:07:40]: And then at the same time, you released Pydantic AI recently, which is, you know, an agent framework. And early on, I would say everybody, like Langchain, gave Pydantic kind of like first class support; a lot of these frameworks were trying to use you to be better. What was the decision behind 'we should do our own framework'? Were there any design decisions that you disagreed with, any workloads that you think people didn't support well? Samuel [00:08:05]: It wasn't so much like design and workflow, although I think there were some things we've done differently. Yeah. I think looking in general at the ecosystem of agent frameworks, the engineering quality is far below that of the rest of the Python ecosystem. 
There's a bunch of stuff that we have learned how to do over the last 20 years of building Python libraries and writing Python code that seems to be abandoned by people when they build agent frameworks. Now I can kind of respect that, particularly in the very first agent frameworks, like Langchain, where they were literally figuring out how to go and do this stuff. It's completely understandable that you would basically skip some stuff. Samuel [00:08:42]: I'm shocked by the quality of some of the agent frameworks that have come out recently from well-respected names, which just seems to be opportunism, and I have little time for that. But the early ones, like, I think they were just figuring out how to do stuff, and just as lots of people have learned from Pydantic, we were able to learn a bit from them. I think the gap we saw and the thing we were frustrated by was the production readiness. And that means things like type checking, even if type checking makes it hard. Like Pydantic AI, I will put my hand up now and say it has a lot of generics, and it's probably easier to use if you've written a bit of Rust and you really understand generics. We're not claiming that makes it the easiest thing to use in all cases; we think it makes it good for production applications in big systems where type checking is a no-brainer in Python. But there's also a bunch of stuff we've learned from maintaining Pydantic over the years that we've gone and done. So every single example in Pydantic AI's documentation is run as part of tests, and every single print output within an example is checked during tests. So it will always be up to date. 
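The practice he's describing, running every documentation example and checking each print output, is the same idea that Python's standard-library doctest module automates; the following is a minimal stand-in, not Pydantic AI's actual test harness:

```python
import doctest

def add_numbers(a: int, b: int) -> int:
    """Add two integers.

    The examples below are executed and their output compared verbatim,
    so the docs can never silently drift out of date:

    >>> add_numbers(2, 3)
    5
    >>> add_numbers(-1, 1)
    0
    """
    return a + b

# Collect the examples embedded in the docstring and run them.
finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner()
for t in finder.find(add_numbers, name="add_numbers", module=False,
                     globs={"add_numbers": add_numbers}):
    runner.run(t)
print(runner.failures, runner.tries)  # 0 2
```

If someone changes `add_numbers` so an example's printed output no longer matches, the run fails, which is exactly the guarantee Samuel wants for documentation.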
And then a bunch of things that, like I say, are standard best practice within the rest of the Python ecosystem, but are, surprisingly, not followed by some AI libraries: coverage, linting, type checking, et cetera, et cetera, where I think these are no-brainers, but like weirdly they're not followed by some of the other libraries. Alessio [00:10:04]: And can you just give an overview of the framework itself? I think there's kind of like the LLM calling frameworks, there are the multi-agent frameworks, there's the workflow frameworks; like, what does Pydantic AI do? Samuel [00:10:17]: I glaze over a bit when I hear all of the different sorts of frameworks, but I will tell you, when I built Pydantic, when I built Logfire and when I built Pydantic AI, my methodology is not to go and research and review all of the other things. I kind of work out what I want and I go and build it, and then feedback comes and we adjust. So the fundamental building block of Pydantic AI is agents. The exact definition of agents and how you want to define them is obviously ambiguous, and our things are probably sort of agent-lite; not that we would want to go and rename them to agent-lite, but like the point is you probably build them together to build something most people will call an agent. So an agent in our case has, you know, things like a prompt, like a system prompt, and some tools, and a structured return type if you want it. That covers the vast majority of cases. There are situations where you want to go further, and the most complex workflows are where you want graphs, and I resisted graphs for quite a while. I was sort of of the opinion you didn't need them and you could use standard Python flow control to do all of that stuff. I had a few arguments with people, but I basically came around to, yeah, I can totally see why graphs are useful. 
But then we have the problem that by default, they're not type safe, because if you have a like add_edge method where you give the names of two different edges, there's no type checking, right? Even if you go and do some... not all the graph libraries are AI specific. So there's a graph library that does like a basic runtime type checking, ironically using Pydantic to try and make up for the fact that fundamentally graphs are not type safe. Well, I like Pydantic, but that's not a real solution, having to go and run the code to see if it's safe. There's a reason that static type checking is so powerful. And so we kind of, from a lot of iteration, eventually came up with a system of using normally data classes to define nodes, where you return the next node you want to call, and where we're able to go and introspect the return type of a node to basically build the graph. And so the graph is, yeah, inherently type safe. And once we got that right, I'm incredibly excited about graphs. I think there's like masses of use cases for them, both in gen AI and other development, but also software's all going to have to interact with gen AI, right? It's going to be like web. There's no longer like a web department in a company; it's just that all the developers are building for web, building with databases. The same is going to be true for gen AI. Alessio [00:12:33]: Yeah. I see on your docs, you call an agent a container that contains a system prompt, function tools, structured result, dependency type, model, and then model settings. Are the graphs in your mind different agents? Are they different prompts for the same agent? What are like the structures in your mind? Samuel [00:12:52]: So we were compelled enough by graphs once we got them right, that we actually merged the PR this morning. 
That means our agent implementation without changing its API at all is now actually a graph under the hood as it is built using our graph library. So graphs are basically a lower level tool that allow you to build these complex workflows. Our agents are technically one of the many graphs you could go and build. And we just happened to build that one for you because it's a very common, commonplace one. But obviously there are cases where you need more complex workflows where the current agent assumptions don't work. And that's where you can then go and use graphs to build more complex things.Swyx [00:13:29]: You said you were cynical about graphs. What changed your mind specifically?Samuel [00:13:33]: I guess people kept giving me examples of things that they wanted to use graphs for. And my like, yeah, but you could do that in standard flow control in Python became a like less and less compelling argument to me because I've maintained those systems that end up with like spaghetti code. And I could see the appeal of this like structured way of defining the workflow of my code. And it's really neat that like just from your code, just from your type hints, you can get out a mermaid diagram that defines exactly what can go and happen.Swyx [00:14:00]: Right. Yeah. You do have very neat implementation of sort of inferring the graph from type hints, I guess. Yeah. Is what I would call it. Yeah. I think the question always is I have gone back and forth. I used to work at Temporal where we would actually spend a lot of time complaining about graph based workflow solutions like AWS step functions. And we would actually say that we were better because you could use normal control flow that you already knew and worked with. Yours, I guess, is like a little bit of a nice compromise. Like it looks like normal Pythonic code. But you just have to keep in mind what the type hints actually mean. 
And that's what we do with the quote unquote magic that the graph construction does. Samuel [00:14:42]: Yeah, exactly. And if you look at the internal logic of actually running a graph, it's incredibly simple. It's basically call a node, get a node back, call that node, get a node back, call that node. If you get an end, you're done. We will add in soon support for, well, basically storage, so that you can store the state between each node that's run. And then the idea is you can then distribute the graph and run it across computers. And also, I mean, the other bit that's really valuable is across time. Because it's all very well if you look at like lots of the graph examples that like Claude will give you. If it gives you an example, it gives you this lovely enormous mermaid chart of like the workflow, for example, managing returns if you're an e-commerce company. But what you realize is some of those lines are literally one function calls another function. And some of those lines are wait six days for the customer to print their like piece of paper and put it in the post. And if you're writing like your demo project or your like proof of concept, that's fine, because you can just say, and now we call this function. But when you're building in real life, that doesn't work. And now how do we manage that concept, to basically be able to start somewhere else in our code? Well, this graph implementation makes it incredibly easy, because you just pass the node that is the start point for carrying on the graph and it continues to run. 
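The run loop Samuel describes ("call a node, get a node back... if you get an end, you're done") and the return-type introspection can be sketched in plain Python. The node names here (Greet, Shout) are invented for illustration; this is a sketch of the idea, not Pydantic AI's actual graph API:

```python
from dataclasses import dataclass
from typing import get_type_hints

@dataclass
class End:
    """Terminal marker carrying the final result."""
    result: str

@dataclass
class Shout:
    text: str
    def run(self) -> End:
        return End(result=self.text.upper())

@dataclass
class Greet:
    name: str
    def run(self) -> Shout:
        return Shout(text=f"hello {self.name}")

def run_graph(node):
    # Call a node, get a node back, call that node; an End means we're done.
    while not isinstance(node, End):
        node = node.run()
    return node.result

def edge(node_cls):
    # Introspect the return type of `run` to recover the outgoing edge;
    # this is what lets a static checker (or the library) build the graph
    # without ever executing it.
    return get_type_hints(node_cls.run)["return"]

print(run_graph(Greet(name="sam")))  # HELLO SAM
print(edge(Greet).__name__)          # Shout
```

Because each node's successor is declared in its return annotation, a type checker can verify the whole graph, which is the "inherently type safe" property he's after.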
So it's things like that where I was like, yeah, I can just imagine how things I've done in the past would be fundamentally easier to understand if we had done them with graphs. Swyx [00:16:07]: You say imagine, but like right now, can Pydantic AI actually resume, you know, six days later, like you said, or is this just like a theoretical thing for someday? Samuel [00:16:16]: I think it's basically Q&A. So there's an AI that's asking the user a question, and effectively you then call the CLI again to continue the conversation. And it basically instantiates the node and calls the graph with that node again. Now, we don't have the logic yet for effectively storing state in the database between individual nodes; that we're going to add soon. But like the rest of it is basically there. Swyx [00:16:37]: It does make me think that not only are you competing with Langchain now and obviously Instructor, but now you're going into sort of the more orchestrated things like Airflow, Prefect, Dagster, those guys. Samuel [00:16:52]: Yeah, I mean, we're good friends with the Prefect guys, and Temporal have the same investors as us. And I'm sure that my investor Bogomil would not be too happy if I was like, oh, yeah, by the way, as well as trying to take on Datadog, we're also going off and trying to take on Temporal and everyone else doing that. Obviously, we're not doing all of the infrastructure of deploying that right now, at least. We're, you know, we're just building a Python library. And like what's crazy about our graph implementation is, sure, there's a bit of magic in like introspecting the return type, you know, extracting things from unions, stuff like that. But the actual calls, as I say, it's literally call a function and get back a thing and call that. It's incredibly simple and therefore easy to maintain. The question is, how useful is it? Well, I don't know yet. I think we have to go and find out. 
We've had a slew of people joining our Slack over the last few days and saying, tell me how good Pydantic AI is. How good is Pydantic AI versus Langchain? And I refuse to answer. That's your job to go and find that out. Not mine. We built a thing. I'm compelled by it, but I'm obviously biased. The ecosystem will work out what the useful tools are. Swyx [00:17:52]: Bogomil was my board member when I was at Temporal. And I think just generally, also having been a workflow engine investor and participant in this space, it's a big space. Like, everyone needs different functions. I think the one thing that I would say about yours, you know, as a library, is you don't have that much control over the infrastructure. I do like the idea that each new agent, or whatever unit of work you call it, should spin up within its own sort of isolated boundaries. Whereas in yours, I think, everything runs in the same process. But you ideally want to sort of spin out its own little container of things. Samuel [00:18:30]: I agree with you a hundred percent. And we will. It would work now, right? As in, in theory, as long as you can serialize the calls to the next node, all of the different containers basically just have to have the same code. I mean, I'm super excited about Cloudflare Workers running Python and being able to install dependencies. And if Cloudflare could only give me my invitation to the private beta of that, we would be exploring that right now, because I'm super excited about that as a compute level for some of this stuff, where exactly what you're saying, basically, you can run everything as an individual, like, worker function and distribute it. And it's resilient to failure, et cetera, et cetera. Swyx [00:19:08]: And it spins up like a thousand instances simultaneously. You know, you want it to be sort of truly serverless at once. 
Actually, I know we have some Cloudflare friends who are listening, so hopefully they'll get you to the front of the line. Samuel [00:19:19]: I was in Cloudflare's office last week shouting at them about other things that frustrate me. I have a love-hate relationship with Cloudflare. Their tech is awesome. But because I use it the whole time, I then get frustrated. So, yeah, I'm sure I will. I will get there soon. Swyx [00:19:32]: There's a side tangent on Cloudflare. Is Python supported in full? I actually wasn't fully aware of what the status of that thing is. Samuel [00:19:39]: Yeah. So Pyodide, which is Python running inside the browser, is supported now by Cloudflare. They basically, they're having some struggles working out how to manage, ironically, dependencies that have binaries, in particular, Pydantic. Because with these workers, where you can have thousands of them on a given metal machine, you don't want each to have a different copy; you basically want to be able to have shared memory for all the different Pydantic installations, effectively. That's the thing they're working out. But Hood, who's my friend, who is the primary maintainer of Pyodide, works for Cloudflare. And that's basically what he's doing: working out how to get Python running on Cloudflare's network. Swyx [00:20:19]: I mean, the nice thing is that your binary is really written in Rust, right? Yeah. Which also compiles to WebAssembly. Yeah. So maybe there's a way that you'd build... you have just a different build of Pydantic and that ships with whatever your distro for Cloudflare Workers is. Samuel [00:20:36]: Yes, that's exactly what... so Pyodide has builds for Pydantic Core and for things like NumPy and basically all of the popular binary libraries. Yeah. And you're doing exactly that, right? You're using Rust to compile to WebAssembly and then you're calling that shared library from Python. 
And it's unbelievably complicated, but it works. Okay. Swyx [00:20:57]: Staying on graphs a little bit more, and then I wanted to go to some of the other features that you have in Pydantic AI. I see in your docs, there are sort of four levels of agents. There's single agents, there's agent delegation, programmatic agent handoff. That seems to be what OpenAI Swarm would be like. And then the last one, graph-based control flow. Would you say that those are sort of the mental hierarchy of how these things go? Samuel [00:21:21]: Yeah, roughly. Okay. Swyx [00:21:22]: You had some expression around OpenAI Swarm. Well. Samuel [00:21:25]: And indeed, OpenAI have got in touch with me and basically, maybe I'm not supposed to say this, but basically said that Pydantic AI looks like what Swarm would become if it was production ready. So, yeah. I mean, like, yeah, which makes sense. Awesome. Yeah. I mean, in fact, it was specifically asking how we can give people the same feeling that they were getting from Swarm that led us to go and implement graphs. Because my, like, 'just call the next agent with Python code' was not a satisfactory answer to people. So it was like, okay, we've got to go and have a better answer for that, and that's what led us to graphs. Yeah. Swyx [00:21:56]: I mean, it's a minimal viable graph in some sense. What are the shapes of graphs that people should know? So the way that I would phrase this is, I think Anthropic did a very good public service and also kind of surprisingly influential blog post, I would say, when they wrote Building Effective Agents. We actually have the authors coming to speak at my conference in New York, which I think you're giving a workshop at. Yeah. Samuel [00:22:24]: I'm trying to work it out. But yes, I think so. Swyx [00:22:26]: Tell me if you're not. 
Yeah, I mean, like, that was the first, I think, authoritative view of, like, what kinds of graphs exist in agents and let's give each of them a name so that everyone is on the same page. So I'm just kind of curious if you have community names or top five patterns of graphs.Samuel [00:22:44]: I don't have top five patterns of graphs. I would love to see what people are building with them. But like, it's only been a couple of weeks. And part of the point is that because they're relatively unopinionated about what you can go and do with them, you can go and do lots and lots of things with them, but they don't have the structure to go and have, like, specific names as much as perhaps some other systems do. I think what our agents are (they have a name and I can't remember what it is) is basically this system of, like, decide what tool to call, go back to the center, decide what tool to call, go back to the center, and then exit. That's one form of graph. And as I say, our agents are effectively one implementation of a graph, which is why under the hood they are now using graphs. And it'll be interesting to see over the next few years whether we end up with these, like, predefined graph names or graph structures, or whether it's just like, yep, I built a graph, or whether graphs just turn out not to match people's mental image of what they want and die away. We'll see.Swyx [00:23:38]: I think there is always appeal. Every developer eventually gets graph religion and goes, oh, yeah, everything's a graph. And then they probably over-rotate and go too far into graphs. And then they have to learn a whole bunch of DSLs. And then they're like, actually, I didn't need that. I need this. And they scale back a little bit.Samuel [00:23:55]: I'm at the beginning of that process. I'm currently a graph maximalist, although I haven't actually put any into production yet.
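The "decide what tool to call, go back to the center, then exit" shape Samuel describes can be sketched as a tiny hand-rolled loop. Everything here (the tool names, the `decide()` policy) is invented for illustration; this is not the Pydantic AI implementation.

```python
def decide(state):
    """The 'center' node: pick the next tool, or None to exit."""
    if "weather" not in state:
        return "get_weather"
    if "summary" not in state:
        return "summarize"
    return None  # nothing left to do

# Toy tools: each takes the state and returns an updated copy.
TOOLS = {
    "get_weather": lambda s: {**s, "weather": "sunny"},
    "summarize": lambda s: {**s, "summary": f"It is {s['weather']}."},
}

def run_agent(state):
    # Center -> tool -> center -> ... -> exit: one shape of graph.
    while (tool := decide(state)) is not None:
        state = TOOLS[tool](state)
    return state

print(run_agent({}))  # -> {'weather': 'sunny', 'summary': 'It is sunny.'}
```

The point of the sketch is that this familiar tool-calling loop is already a graph with one central node, which is why an agent can be implemented on top of a graph runtime.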
But yeah.Swyx [00:24:02]: This has a lot of philosophical connections with other work coming out of UC Berkeley on compound AI systems. I don't know if you know of or care. This is the Gartner world of things where they need some kind of industry terminology to sell it to enterprises. I don't know if you know about any of that.Samuel [00:24:24]: I haven't. I probably should. I should probably do it because I should probably get better at selling to enterprises. But no, not right now.Swyx [00:24:29]: The argument is really that instead of putting everything in one model, you have more control and maybe more observability if you break everything out into composing little models and chaining them together. And obviously, then you need an orchestration framework to do that. Yeah.Samuel [00:24:47]: And it makes complete sense. And one of the things we've seen with agents is they work well when they work well. But when they don't, even if you have the observability through LogFire so that you can see what was going on, if you don't have a nice hook point to say, hang on, this has all gone wrong, you have a relatively blunt instrument of basically erroring when you exceed some kind of limit. But what you need to be able to do is effectively iterate through these runs so that you can have your own control flow where you're like, OK, we've gone too far. And that's where one of the neat things about our graph implementation is you can basically call next in a loop rather than just running the full graph. And therefore, you have this opportunity to break out of it. But yeah, basically, it's the same point, which is, like, if you have too big a unit of work, to some extent, whether or not it involves gen AI (but obviously it's particularly problematic in gen AI), you only find out afterwards, when you've spent quite a lot of time and/or money, that it's gone off and done the wrong thing.Swyx [00:25:39]: Oh, drop on this.
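The "call next in a loop" idea can be sketched like this: instead of running the whole graph, you step it node by node and impose your own stopping rule. The `GraphRun` class and `step()` API here are invented for illustration; they are not the actual Pydantic AI graph API.

```python
class GraphRun:
    """A toy stepwise graph runner: each node returns (next_node, cost)."""

    def __init__(self, nodes, start):
        self.nodes, self.current = nodes, start
        self.cost = 0  # simulated spend so far

    def step(self):
        """Run one node; advance to (and return) the next node name."""
        nxt, cost = self.nodes[self.current]()
        self.cost += cost
        self.current = nxt
        return nxt

# Toy nodes that would loop forever if left unchecked.
nodes = {
    "plan": lambda: ("call_llm", 1),
    "call_llm": lambda: ("plan", 5),
}

run = GraphRun(nodes, "plan")
while run.current is not None:
    if run.cost > 10:          # our own control flow: bail out early
        print("budget exceeded, stopping")
        break
    run.step()
```

Because the caller owns the loop, "we've gone too far" can be any condition you like (cost, wall-clock time, a bad intermediate result) rather than a single hard-coded limit inside the framework.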
We're not going to resolve this here, but I'll drop this and then we can move on to the next thing. This is the common way that we developers talk about this. And then the machine learning researchers look at us and laugh and say, that's cute. And then they just train a bigger model and they wipe us out in the next training run. So I think there's a certain amount of we are fighting the bitter lesson here. We're fighting AGI. And, you know, when AGI arrives, this will all go away. Obviously, on Latent Space, we don't really discuss that because I think AGI is kind of this hand-wavy concept that isn't super relevant. But I think we have to respect that. For example, you could do chain of thought with graphs and you could manually orchestrate a nice little graph that does, like, reflect, think about whether you need more inference-time compute (you know, that's the hot term now), and then think again and, you know, scale that up. Or you could train Strawberry and DeepSeek R1. Right.Samuel [00:26:32]: I saw someone saying recently, oh, they were really optimistic about agents because models are getting faster exponentially. And it, like, took a certain amount of self-control not to point out that it wasn't exponential. But my main point was: if models are getting faster as quickly as you say they are, then we don't need agents and we don't really need any of these abstraction layers. We can just give our model, you know, access to the Internet, cross our fingers and hope for the best. Agents, agent frameworks, graphs, all of this stuff is basically making up for the fact that right now the models are not that clever. In the same way that if you're running a customer service business and you have loads of people sitting answering telephones, the less well trained they are, the less that you trust them, the more that you need to give them a script to go through.
Whereas, you know, if you're running a bank and you have lots of customer service people who you don't trust that much, then you tell them exactly what to say. If you're doing high net worth banking, you just employ people who you think are going to be charming to other rich people and set them off to go and have coffee with people. Right. And the same is true of models. The more intelligent they are, the less we need to structure what they go and do and constrain the routes they take.Swyx [00:27:42]: Yeah. Yeah. Agree with that. So I'm happy to move on. So the other parts of Pydantic AI that are worth commenting on, and this is like my last rant, I promise. So obviously, every framework needs to do its sort of model adapter layer, which is, oh, you can easily swap from OpenAI to Claude to Groq. You also have, which I didn't know about, Google GLA, which I didn't really know about until I saw this in your docs, which is the Generative Language API. I assume that's AI Studio? Yes.Samuel [00:28:13]: Google don't have good names for it. So Vertex is very clear. That seems to be the API that, like, some of the things use, although it returns 503 about 20% of the time. So... Vertex? No. Vertex, fine. But the... Oh, oh. GLA. Yeah. Yeah.Swyx [00:28:28]: I agree with that.Samuel [00:28:29]: So we have, again, another example of where I think we go the extra mile in terms of engineering: we run, on every commit, at least every commit to main, tests against the live models. Not lots of tests, but like a handful of them. Oh, okay. And we had a point last week where, with GLA, one of the tests was failing on every single run. And I think we might even have commented that one out at the moment. So, like, all of the models fail more often than you might expect, but that one seems to be particularly likely to fail.
But Vertex is the same API, but much more reliable.Swyx [00:29:01]: My rant here is that, you know, versions of this appear in Langchain and every single framework has to have its own little version of that. I would put to you, and then, you know, this can be agree to disagree, this is not needed in Pydantic AI. I would much rather you adopt a layer like LiteLLM or, what's the other one in JavaScript, Portkey. And that's their job. They focus on that one thing and they normalize APIs for you. All new models are automatically added and you don't have to duplicate this inside of your framework. So for example, if I wanted to use DeepSeek, I'm out of luck because Pydantic AI doesn't have DeepSeek yet.Samuel [00:29:38]: Yeah, it does.Swyx [00:29:39]: Oh, it does. Okay. I'm sorry. But you know what I mean? Should this live in your code or should it live in a layer that's kind of your API gateway, that's a defined piece of infrastructure that people have?Samuel [00:29:49]: And I think if a company who are well known, who are respected by everyone, had come along and done this at the right time, maybe a year and a half ago, and said, we're going to be the universal AI layer, that would have been a credible thing to do. I've heard varying reports of LiteLLM, is the truth. And it didn't seem to have exactly the type safety that we needed. Also, as I understand it, and again, I haven't looked into it in great detail, part of their business model is proxying the request through their own system to do the generalization. That would be an enormous put-off to an awful lot of people. Honestly, the truth is I don't think it is that much work unifying the models. I get where you're coming from. I kind of see your point. I think the truth is that everyone is centralizing around OpenAI's API as the one to do. So DeepSeek support that. Grok, with a K, support that. Ollama also does it.
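The "everyone is centralizing around OpenAI's API" point can be illustrated without sending any requests: the same chat-completions payload works against different providers by swapping the base URL. The base URLs and model names below are assumptions for illustration only; nothing is actually sent.

```python
# One request shape, many providers: the OpenAI chat-completions format
# has become the de facto wire protocol.
payload = {
    "messages": [{"role": "user", "content": "Hello!"}],
}

# (base_url, model) pairs are illustrative assumptions, not verified values.
providers = {
    "openai": ("https://api.openai.com/v1", "gpt-4o"),
    "deepseek": ("https://api.deepseek.com/v1", "deepseek-chat"),
    "ollama": ("http://localhost:11434/v1", "llama3.2"),
}

endpoints = []
for name, (base_url, model) in providers.items():
    req = {**payload, "model": model}
    # Every provider exposes the same path under its own base URL.
    endpoint = f"{base_url}/chat/completions"
    endpoints.append(endpoint)
    print(name, "POST", endpoint, req["model"])
```

This is why "unifying the models" is less work than it sounds: the adapter layer mostly amounts to a base URL, an auth header, and a model name.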
I mean, if there is that library right now, it's more or less the OpenAI SDK. And it's very high quality. It's well type checked. It uses Pydantic. So I'm biased. But I mean, I think it's pretty well respected anyway.Swyx [00:30:57]: There's different ways to do this. Because also, it's not just about normalizing the APIs. You have to do secret management and all that stuff.Samuel [00:31:05]: Yeah. And there's also Vertex and Bedrock, which to one extent or another effectively host multiple models, but they don't unify the API. But they do unify the auth, as I understand it. Although we're halfway through doing Bedrock, so I don't know it that well. But they're kind of weird hybrids because they support multiple models. But like I say, the auth is centralized.Swyx [00:31:28]: Yeah, I'm surprised they don't unify the API. That seems like something that I would do. You know, we can discuss all this all day. There's a lot of APIs. I agree.Samuel [00:31:36]: It would be nice if there was a universal one that we didn't have to go and build.Alessio [00:31:39]: And I guess the other side of, you know, routing models and picking models is evals. How do you actually figure out which one you should be using? I know you have one. First of all, you have very good support for mocking in unit tests, which is something that a lot of other frameworks don't do. So, you know, my favorite Ruby library is VCR because it just lets me store the HTTP requests and replay them. That part I'll kind of skip. I think you have this TestModel, which is just pure Python: you try and figure out what the model might respond without actually calling the model. And then you have the FunctionModel where people can kind of customize outputs. Any other fun stories maybe from there? Or is it just what you see is what you get, so to speak?Samuel [00:32:18]: On those two, I think what you see is what you get.
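The idea behind mocking a model in unit tests can be sketched with a hand-rolled stand-in: in tests, replace the real LLM with an object that satisfies the same interface. The `Agent` and `FunctionModel` classes below are simplified stand-ins invented for illustration, not the real Pydantic AI API.

```python
class FunctionModel:
    """Wraps a plain Python function as a 'model': prompt -> text."""

    def __init__(self, fn):
        self.fn = fn

    def complete(self, prompt):
        return self.fn(prompt)


class Agent:
    """A toy agent that delegates every run to its model."""

    def __init__(self, model):
        self.model = model

    def run(self, prompt):
        return self.model.complete(prompt)


# In tests, customize outputs without ever calling a real model:
fake = FunctionModel(lambda p: f"echo: {p}")
agent = Agent(fake)
assert agent.run("2 + 2?") == "echo: 2 + 2?"
print("test passed, no model called")
```

The design point is dependency injection: because the agent only depends on the model interface, tests are fast, deterministic, and free.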
On the evals, I think watch this space. I think it's something that, like, again, I was somewhat cynical about for some time. I still have my cynicism about some of it. Well, it's unfortunate that so many different things are called evals. It would be nice if we could agree what they are and what they're not. But look, I think it's a really important space. I think it's something that we're going to be working on soon, both in Pydantic AI and in LogFire, to try and support better, because it's an unsolved problem.Alessio [00:32:45]: Yeah, you do say in your docs that anyone who claims to know for sure exactly how your evals should be defined can safely be ignored.Samuel [00:32:52]: We'll delete that sentence when we tell people how to do their evals.Alessio [00:32:56]: Exactly. I was like, we need a snapshot of this today. And so let's talk about evals. So there's kind of like the vibe. Yeah. So you have evals, which is what you do when you're building, right? Because you cannot really test it that many times to get statistical significance. And then there's the production eval. So you also have LogFire, which is kind of like your observability product, which I tried before. It's very nice. What are some of the learnings you've had from building an observability tool for LLMs? And yeah, as people think about evals, even, what are the right things to measure? What is the right number of samples that you need to actually start making decisions?Samuel [00:33:33]: I'm not the best person to answer that, is the truth. So I'm not going to come in here and tell you that I think I know the answer on the exact number. I mean, we can do some back-of-the-envelope statistics calculations to work out that, like, having 30 probably gets you most of the statistical value of having 200 for, you know, by definition, 15% of the work. But the exact, like, how many examples do you need?
For example, that's a much harder question to answer because it's, you know, deep within how models operate. In terms of LogFire, one of the reasons we built LogFire the way we have, and we allow you to write SQL directly against your data, and we're trying to build the, like, powerful fundamentals of observability, is precisely because we know we don't know the answers. And so allowing people to go and innovate on how they're going to consume that stuff and how they're going to process it is, we think, valuable. Because even if we come along and offer you an evals framework on top of LogFire, it won't be right in all regards. And we want people to be able to go and innovate, and being able to write their own SQL, connected to the API, and effectively query the data like it's a database, allows people to innovate on that stuff. And that's what allows us to do it as well. I mean, we do a bunch of, like, testing what's possible by basically writing SQL directly against LogFire as any user could. I think the other really interesting bit that's going on in observability is OpenTelemetry centralizing around semantic attributes for GenAI. So it's a relatively new project. A lot of it's still being added at the moment. But basically the idea is that they unify how both SDKs and/or agent frameworks send observability data to any OpenTelemetry endpoint. And so, again, having that unification allows us to go and, like, basically compare different libraries, compare different models much better. That stuff's in a very early stage of development. One of the things we're going to be working on pretty soon is, basically, I suspect Pydantic AI will be the first agent framework that implements those semantic attributes properly. Because, again, we control it and we can say this is important for observability, whereas most of the other agent frameworks are not maintained by people who are trying to do observability.
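The "write SQL directly against your observability data" idea can be sketched like this, with SQLite standing in for the real backend. The `spans` table schema, span names, and attribute keys here are invented for illustration; this is not LogFire's actual schema.

```python
import json
import sqlite3

# A toy spans table: name, duration, and a JSON blob of attributes.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE spans (name TEXT, duration_ms REAL, attributes TEXT)"
)
rows = [
    ("chat gpt-4o", 1800.0, json.dumps({"prompt_tokens": 120})),
    ("db query", 3.2, json.dumps({})),
    ("chat gpt-4o", 5200.0, json.dumps({"prompt_tokens": 900})),
]
db.executemany("INSERT INTO spans VALUES (?, ?, ?)", rows)

# Any consumer can innovate on top of the raw data; here, "find slow
# LLM calls" as a plain SQL query.
slow = db.execute(
    "SELECT name, duration_ms, attributes FROM spans "
    "WHERE name LIKE 'chat%' AND duration_ms > 2000"
).fetchall()
for name, dur, attrs in slow:
    print(name, dur, json.loads(attrs).get("prompt_tokens"))
```

The design choice being argued for: expose the data as a queryable table rather than a fixed dashboard, so users can build eval-style analyses the vendor never anticipated.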
With the exception of Langchain, where they have the observability platform, but they chose not to go down the OpenTelemetry route. So they're, like, plowing their own furrow. And, you know, they're even further away from standardization.Alessio [00:35:51]: Can you maybe just give a quick overview of how OTel ties into the AI workflows? There's kind of like the question of what is a trace, and is a span, like, an LLM call? Is it the agent? Is it, kind of like, the broader thing you're tracking? How should people think about it?Samuel [00:36:06]: Yeah, so they have a PR that I think may have now been merged from someone at IBM talking about remote agents and trying to support this concept of remote agents within GenAI. I'm not particularly compelled by that because I don't think that that's actually by any means the common use case. But like, I suppose it's fine for it to be there. The majority of the stuff in OTel is basically defining how you would instrument a given call to an LLM. So basically the actual LLM call: what data you would send to your telemetry provider, how you would structure that. Apart from this slightly odd stuff on remote agents, most of the, like, agent-level consideration is not yet implemented, is not yet decided effectively. And so there's a bit of ambiguity. Obviously, what's good about OTel is you can in the end send whatever attributes you like. But yeah, there's quite a lot of churn in that space and exactly how we store the data. I think that one of the most interesting things, though, is that if you think about observability, traditionally, sure, everyone would say our observability data is very important, we must keep it safe. But actually, companies work very hard to basically not have anything that sensitive in their observability data. So if you're a doctor in a hospital and you search for a drug for an STI, the SQL might be sent to the observability provider. But none of the parameters would.
It wouldn't have the patient number or their name or the drug. With GenAI, that distinction doesn't exist because it's all just mixed up in the text. If you have that same patient asking an LLM what drug they should take or how to stop smoking, you can't extract the PII and not send it to the observability platform. So the sensitivity of the data that's going to end up in observability platforms is going to be, like, basically a different order of magnitude to what you would normally send to Datadog. Of course, you can make a mistake and send someone's password or their card number to Datadog. But that would be seen as a mistake. Whereas in GenAI, a lot of sensitive data is going to be sent. And I think that's why companies like Langsmith are trying hard to offer observability on-prem, because there's a bunch of companies who are happy for Datadog to be cloud hosted, but want self-hosting for this observability stuff with GenAI.Alessio [00:38:09]: And are you doing any of that today? Because I know in each of the spans you have, like, the number of tokens, you have the context, you're just storing everything. And then you're going to offer kind of like a self-hosting for the platform, basically. Yeah. Yeah.Samuel [00:38:23]: So we have scrubbing roughly equivalent to what the other observability platforms have. So if we, you know, if we see password as the key, we won't send the value. But like I said, that doesn't really work in GenAI. So we're accepting we're going to have to store a lot of data, and then we'll offer self-hosting for those people who can afford it and who need it.Alessio [00:38:42]: And then this is, I think, the first time that most of the workload's performance is depending on a third party. You know, like if you're looking at Datadog data, usually it's your app that is driving the latency and, like, the memory usage and all of that.
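Key-based scrubbing, as Samuel describes it, can be sketched in a few lines: redact values whose keys look sensitive. The key list is an assumption for illustration. Note how it does nothing for PII buried in free text, which is exactly the GenAI problem he's pointing at.

```python
# Keys treated as sensitive (an illustrative assumption, not any
# particular platform's list).
SENSITIVE_KEYS = {"password", "card_number", "auth_token"}

def scrub(attributes):
    """Redact values whose key matches the sensitive list."""
    return {
        k: "[REDACTED]" if k.lower() in SENSITIVE_KEYS else v
        for k, v in attributes.items()
    }

span = {
    "password": "hunter2",
    "prompt": "I'm patient 4421, what drug should I take?",
}
print(scrub(span))
# The password is caught, but the PII inside 'prompt' sails through,
# because in GenAI the sensitive data is mixed into the text itself.
```

That gap is why self-hosting matters more for GenAI observability than it did for traditional logs.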
Here you're going to have spans that maybe take a long time to perform because the GLA API is not working or because OpenAI is kind of like overwhelmed. Do you do anything there, since, like, the provider is almost the same across customers? You know, like, are you trying to surface these things for people and say, hey, this was a very slow span, but actually all customers using OpenAI right now are seeing the same thing, so maybe don't worry about it?Samuel [00:39:20]: Not yet. We do a few things that people don't generally do in OTel. So we send information at the beginning of a span, as well as when it finishes. By default, OTel only sends you data when the span finishes. So if you think about a request which might take like 20 seconds, even if some of the intermediate spans finished earlier, you can't basically place them on the page until you get the top-level span. And so if you're using standard OTel, you can't show anything until those requests are finished. When those requests are taking a few hundred milliseconds, it doesn't really matter. But when you're doing Gen AI calls, or when you're, like, running a batch job that might take 30 minutes, that latency of not being able to see the span is, like, crippling to understanding your application. And so we do a bunch of slightly complex stuff to basically send data about a span as it starts, which is closely related. Yeah.Alessio [00:40:09]: Any thoughts on all the other people trying to build on top of OpenTelemetry in different languages, too? There's, like, the OpenLLMetry project, which doesn't really roll off the tongue. But how do you see the future of these kinds of tools? Is everybody going to have to build? Why does everybody want to build?
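The "send data at span start as well as at span end" idea can be sketched like this: with end-only events (the standard OTel default), nothing about a long-running span is visible until it finishes. The event format below is invented for illustration; it is not LogFire's wire format.

```python
import time

events = []  # stand-in for the telemetry export pipeline

class Span:
    """A toy span that exports an event at start AND at end."""

    def __init__(self, name):
        self.name, self.start = name, time.time()
        # Emit immediately, so a UI can render the pending span.
        events.append(("start", name, self.start))

    def end(self):
        events.append(("end", self.name, time.time()))

span = Span("batch job")
# ... long-running work would happen here ...
# Even before end() is called, the span is already visible:
print([e[0] for e in events])  # ['start']
span.end()
print([e[0] for e in events])  # ['start', 'end']
```

For sub-second requests the extra start event buys little; for a 30-minute batch job it's the difference between a live view and a blank screen.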
They want to build their own open source observability thing to then sell?Samuel [00:40:29]: I mean, we are not going off and trying to instrument the likes of the OpenAI SDK with the new semantic attributes, because at some point that's going to happen and it's going to live inside OTel, and we might help with it. But we're a tiny team. We don't have time to go and do all of that work. So OpenLLMetry, like, interesting project. But I suspect eventually most of that instrumentation of the big SDKs will live, like I say, inside the main OpenTelemetry repo. I suppose what happens to the agent frameworks, what data you basically need at the framework level to get the context, is kind of unclear. I don't think we know the answer yet. But I mean, I was on the, I guess this is kind of semi-public because I was on the OpenTelemetry call last week talking about GenAI. And there was someone from Arize talking about the challenges they have trying to get OpenTelemetry data out of Langchain, where it's not, like, natively implemented. And obviously they're having quite a tough time. And I was realizing, I hadn't really realized this before, but how lucky we are to primarily be talking about our own agent framework, where we have the control, rather than trying to go and instrument other people's.Swyx [00:41:36]: Sorry, I actually didn't know about this semantic conventions thing. It looks like, yeah, it's merged into main OTel. What should people know about this? I had never heard of it before.Samuel [00:41:45]: Yeah, I think it looks like a great start. I think there's some unknowns around how you send the messages that go back and forth, which is kind of the most important part. It's the most important thing of all. And that has moved out of attributes and into OTel events. OTel events in turn are moving from being on a span to being their own top-level API where you send data. So there's a bunch of churn still going on.
I'm impressed by how fast the OTel community is moving on this project. I guess they, like everyone else, get that this is important, and it's something that people are crying out to get instrumentation for. So I'm kind of pleasantly surprised at how fast they're moving, but it makes sense.Swyx [00:42:25]: I'm just kind of browsing through the specification. I can already see that this basically bakes in whatever the previous paradigm was. So now they have gen_ai.usage.prompt_tokens and gen_ai.usage.completion_tokens. And obviously now we have reasoning tokens as well. And then only one form of sampling, which is top-p. You're basically baking in, or sort of reifying, things that you think are important today, but it's not a super foolproof way of doing this for the future. Yeah.Samuel [00:42:54]: I mean, that's what's neat about OTel is you can always go and send another attribute, and that's fine. It's just there are a bunch that are agreed on. But I would say, you know, to come back to your previous point about whether or not we should be relying on one centralized abstraction layer, this stuff is moving so fast that if you start relying on someone else's standard, you risk basically falling behind because you're relying on someone else to keep things up to date.Swyx [00:43:14]: Or you fall behind because you've got other things going on.Samuel [00:43:17]: Yeah, yeah. That's fair. That's fair.Swyx [00:43:19]: Any other observations just about building LogFire, actually? Let's just talk about this. So you announced LogFire. I was kind of only familiar with LogFire because of your Series A announcement. I actually thought you were making a separate company. I remember some amount of confusion with you when that came out. So to be clear, it's Pydantic LogFire, and the company is one company that has kind of two products, an open source thing and an observability thing, correct? Yeah. I was just kind of curious, like, any learnings building LogFire?
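Samuel's point that "you can always go and send another attribute" can be sketched as a plain attribute dict. The `gen_ai.*` names follow the convention's published prompt/completion token names as discussed above; the reasoning-tokens key is NOT part of the agreed set here and is an assumption added purely to illustrate extending beyond it.

```python
# Span attributes: a mix of conventional names and one custom extension.
span_attributes = {
    "gen_ai.system": "openai",
    "gen_ai.request.model": "gpt-4o",
    "gen_ai.request.top_p": 0.9,
    "gen_ai.usage.prompt_tokens": 120,
    "gen_ai.usage.completion_tokens": 48,
    # Not in the agreed set; an illustrative assumption for the
    # "reasoning tokens" gap Swyx mentions:
    "gen_ai.usage.reasoning_tokens": 512,
}

conventional = {k for k in span_attributes if "reasoning" not in k}
custom = set(span_attributes) - conventional
print(f"{len(conventional)} conventional + {len(custom)} custom attributes")
```

The flexibility cuts both ways: extra attributes are cheap to send, but only the agreed-on names can be compared across libraries and vendors.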
So classic question is, do you use ClickHouse? Is this like the standard persistence layer? Any learnings doing that?Samuel [00:43:54]: We don't use ClickHouse. We started building our database with ClickHouse, moved off ClickHouse onto Timescale, which is a Postgres extension to do analytical databases. Wow. And then moved off Timescale onto DataFusion. And we're basically now building, it's DataFusion, but it's kind of our own database. Bogomil is not entirely happy that we went through three databases before we chose one. I'll say that. But like, we've got to the right one in the end. I think we could have realized sooner that Timescale wasn't right. But both ClickHouse and Timescale taught us a lot, and we're in a great place now. But like, yeah, it's been a real journey on the database in particular.Swyx [00:44:28]: Okay. So, you know, as a database nerd, I have to, like, double click on this, right? So ClickHouse is supposed to be the ideal backend for anything like this. And then moving from ClickHouse to Timescale is another counterintuitive move that I didn't expect because, you know, Timescale is, like, an extension on top of Postgres. Not super meant for, like, high-volume logging. But like, yeah, tell us those decisions.Samuel [00:44:50]: So at the time, ClickHouse did not have good support for JSON. I was speaking to someone yesterday and said ClickHouse doesn't have good support for JSON and got roundly stepped on, because apparently it does now. So they've obviously gone and built their proper JSON support. But back when we were trying to use it, I guess a year ago or a bit more than a year ago, everything had to be a map, and maps are a pain for doing, like, lookups on JSON-type data. And obviously all these attributes, everything you're talking about there in terms of the GenAI stuff, you can choose to make them top-level columns if you want. But the simplest thing is just to put them all into a big JSON pile. And that was a problem with ClickHouse.
Also, ClickHouse had some really ugly edge cases. Like, by default, or at least until I complained about it a lot, ClickHouse thought that two nanoseconds was longer than one second, because they compared intervals just by the number, not the unit. And I complained about that a lot. And then they changed it to raise an error and just say you have to have the same unit. Then I complained a bit more. And as I understand it, now they convert between units. But stuff like that, when a lot of what you're doing is comparing the duration of spans, was really painful. Also things like you can't subtract two datetimes to get an interval; you have to use the date sub function. But the fundamental thing is, because we want our end users to write SQL, the, like, quality of the SQL, how easy it is to write, matters way more to us than if you're building a platform on top where your developers are going to write the SQL, and once it's written and it's working, you don't mind too much. So I think that's one of the fundamental differences. The other problem that I have with ClickHouse, and in fact Timescale, is that, like, the ultimate architecture, the, like, Snowflake architecture of binary data in object store queried with some kind of cache from nearby... they both have it, but it's closed source, and you only get it if you go and use their hosted versions. And so even if we had got through all the problems with Timescale or ClickHouse, we would end up, like, you know, they would want to be taking their 80% margin, and that would basically leave us less space for margin. Whereas DataFusion is properly open source; all of that same tooling is open source. And for us as a team of people with a lot of Rust expertise, DataFusion, which is implemented in Rust, we can literally dive into it and go and change it.
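The interval bug Samuel describes can be shown in a few lines: comparing intervals by their number alone makes 2 nanoseconds "longer" than 1 second, and the fix is to normalize to a common unit first. This is a toy model of the behavior for illustration, not ClickHouse code.

```python
# Nanoseconds per unit, for normalizing before comparison.
NS_PER = {"ns": 1, "ms": 1_000_000, "s": 1_000_000_000}

def broken_compare(a, b):
    # (value, unit) compared by value only -- the old behavior.
    return a[0] > b[0]

def correct_compare(a, b):
    # Normalize both sides to nanoseconds, then compare.
    return a[0] * NS_PER[a[1]] > b[0] * NS_PER[b[1]]

two_ns, one_s = (2, "ns"), (1, "s")
print(broken_compare(two_ns, one_s))   # True: 2 ns "longer" than 1 s!
print(correct_compare(two_ns, one_s))  # False: units respected
```

When your whole product is comparing span durations, an edge case like this stops being an edge case.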
So, for example, I found that there were some slowdowns in DataFusion's string comparison kernel for doing, like, string contains. And it's just Rust code. And I could go and rewrite the string comparison kernel to be faster. Or, for example, DataFusion, when we started using it, didn't have JSON support. Obviously, as I've said, it's something we can do. It's something we needed. I was able to go and implement that in a weekend using our JSON parser that we built for Pydantic Core. So it's the fact that DataFusion is for us the perfect mixture: a toolbox to build a database with, not a database. And we can go and implement stuff on top of it in a way that, like, if you were trying to do that in Postgres or in ClickHouse... I mean, ClickHouse would be easier because it's C++, relatively modern C++. But as a team of people who are not C++ experts, that's much scarier than DataFusion for us.Swyx [00:47:47]: Yeah, that's a beautiful rant.Alessio [00:47:49]: That's funny. Most people don't think they have agency on these projects. They're kind of like, oh, I should use this or I should use that. They're not really like, what should I pick so that I contribute the most back to it? You know? But I think you obviously have an open-source-first mindset, so that makes a lot of sense.Samuel [00:48:05]: I think if we were a better startup, faster moving and just, like, headlong determined to get in front of customers as fast as possible, we should have just started with ClickHouse. I hope that long term we're in a better place for having worked with DataFusion. We're quite engaged now with the DataFusion community. Andrew Lamb, who maintains DataFusion, is an advisor to us. We're in a really good place now. But yeah, it's definitely slowed us down relative to just, like, building on ClickHouse and moving as fast as we can.Swyx [00:48:34]: OK, we're about to zoom out and do pydantic.run and all the other stuff.
But, you know, my last question on LogFire is really, you know, at some point you run out of community goodwill, just because, like, oh, I use Pydantic, I love Pydantic, I'm going to use LogFire. OK, then you start entering the territory of the Datadogs, the Sentrys and the Honeycombs. Yeah. So where are you really going to spike here? What's the differentiator here?Samuel [00:48:59]: I wasn't writing code in 2001, but I'm assuming that there were people talking about, like, web observability, and then web observability stopped being a thing, not because the web stopped being a thing, but because all observability had to do web. If you were talking to people in 2010 or 2012, they would have talked about cloud observability. Now that's not a term, because all observability is cloud first. The same is going to happen to gen AI. And so whether or not you're trying to compete with Datadog or with Arize and Langsmith, you've got to do general-purpose observability with first-class support for AI. And as far as I know, we're the only people really trying to do that. I mean, I think Datadog is starting in that direction. And to be honest, I think Datadog is a much, like, scarier company to compete with than the AI-specific observability platforms. Because in my opinion, and I've also heard this from lots of customers, AI-specific observability where you don't see everything else going on in your app is not actually that useful. Our hope is that we can build the first general-purpose observability platform with first-class support for AI, and that we have this open source heritage of putting developer experience first that other companies haven't done. For all I'm a fan of Datadog and what they've done, if you search "Datadog logging Python" and you just try, as a non-observability expert, to get something up and running with Datadog and Python, it's not trivial, right? That's something Sentry have done amazingly well.
But there's enormous space in most of observability to do DX better.

Alessio [00:50:27]: Since you mentioned Sentry, I'm curious how you thought about licensing and all of that. Obviously, you're MIT licensed; you don't have any rolling license like Sentry has, where you can only use, as open source, the one-year-old version of it. Was that a hard decision?

Samuel [00:50:41]: So to be clear, LogFire is closed source. Pydantic and Pydantic AI are MIT licensed and properly open source, and then LogFire for now is completely closed source. And in fact, the struggles that Sentry have had with licensing, and the weird pushback the community gives when they take something that's closed source and make it source available, just meant that we avoided that whole subject matter. I think the other way to look at it is that in terms of either headcount or revenue or dollars in the bank, the amount of open source we do as a company puts us up there with the most prolific open source companies, like I say, per head. And so we didn't feel like we were morally obligated to make LogFire open source. We have Pydantic. Pydantic is a foundational library in Python. That and now Pydantic AI are our contribution to open source. And then LogFire is openly for-profit, right? As in, we're not claiming otherwise. We're not trying to walk a line where it's open source but really hard to deploy, so you probably want to pay us. We're trying to be straight that it's something to pay for. We could change that at some point in the future, but it's not an immediate plan.

Alessio [00:51:48]: All right. So the first thing I saw is this new, I don't know if it's a product you're building, Pydantic.run, which is a Python browser sandbox. What was the inspiration behind that? We talk a lot about code interpreters for LLMs. I'm an investor in a company called E2B, which is a code sandbox as a service for remote execution. Yeah.
What's the Pydantic.run story?

Samuel [00:52:09]: So Pydantic.run is, again, completely open source. I have no interest in making it into a product. We just needed a sandbox to be able to demo LogFire in particular, but also Pydantic AI. So it doesn't have it yet, but I'm going to add basically a proxy to OpenAI and the other models so that you can run Pydantic AI in the browser, see how it works, tweak the prompt, et cetera, et cetera. And we'll have some kind of limit per day of what you can spend on it, or what the spend is. The other thing we wanted to b
Static pressure is the outward "ballooning" pressure a duct system experiences, and it's measured with a dual-port manometer. Air velocity (air speed) is what's actually measured in the duct; velocity is then converted to volumetric airflow using the duct dimensions. Check out Jobber and all they have to offer. getjobber.com/hvacknowitall
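That velocity-to-airflow conversion is simple arithmetic: multiply the average velocity (in feet per minute) by the duct's cross-sectional area (in square feet). A minimal sketch in Python; the duct size and velocity below are made-up example values, not measurements from the episode:

```python
# Convert a measured duct air velocity to volumetric airflow (CFM).
# Real readings come from an anemometer or pitot traverse averaged
# across the duct cross-section.

def duct_area_sqft(width_in: float, height_in: float) -> float:
    """Cross-sectional area of a rectangular duct, in square feet."""
    return (width_in * height_in) / 144.0  # 144 sq in per sq ft

def airflow_cfm(velocity_fpm: float, area_sqft: float) -> float:
    """Airflow (CFM) = velocity (ft/min) x cross-sectional area (sq ft)."""
    return velocity_fpm * area_sqft

area = duct_area_sqft(24, 12)   # 24" x 12" duct -> 2.0 sq ft
cfm = airflow_cfm(600, area)    # 600 ft/min average velocity
print(round(cfm))               # 1200
```

So a 24" x 12" duct (2.0 sq ft) carrying air at an average 600 ft/min moves about 1,200 CFM; in practice, you average several velocity readings across the duct rather than trusting a single point.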
Join Mark Rittman in this special end-of-year episode as he speaks with Noel Gomez, co-founder of Datacoves, about the challenges and opportunities of orchestrating dbt and other tools within the open-source Modern Data Stack, navigating the evolving semantic layer landscape and the future of modular, vendor-agnostic data solutions.
Datacoves Platform Overview
Build vs Buy Analytics Platform: Hosting Open-Source Tools
Scale the Benefits of Core with dbt Cloud
Dagster vs. Airflow
Cultivation Elevated - Indoor Farming, Cannabis Growers & Cultivators - Pipp Horticulture
In this episode, join us at the Roadshow as we explore the cutting-edge world of HVAC technology, focusing on Greenheck's control dampers. We delve into the specifics of various damper designs, including the robust Galvanized Steel Airfoil Blade, the versatile Extruded Aluminum Frame and Blade, and the specialized Thermally Broken Frame and Blade for enhanced thermal insulation. Discover how Greenheck is pushing the boundaries of airflow management with their innovative Variable Size Blade technology. Tune in to gain expert insights into optimizing air control systems for various applications.
Picture this: Your organization's data infrastructure resembles a busy kitchen with too many cooks. You're juggling Kafka for messaging, Flink for processing, Spark for analytics, Airflow for orchestration, and various Lambda functions scattered about. Each tool excellent at its job, but together they've created a complex feast of integration challenges. Your data teams are spending more time managing tools than extracting value from data. InfinyOn reimagines this chaos with a radically simple approach: a unified system for data streaming that runs everywhere. Unlike traditional solutions that struggle at the edge, InfinyOn gracefully handles data streams from IoT devices to cloud servers. And instead of cobbling together different tools, developers can build complete data pipelines using their preferred languages - be it Rust, Python, or SQL - with built-in state management. At the heart of InfinyOn is Fluvio, a Rust-based data streaming platform that's fast, reliable, and easy to use.
Last year, we talked with Alex LeBlanc about starting Calibration Coffee Lab in Greenville, South Carolina (CRAFTED ep #26). Today, Alex is back to update us on how it's going, what he's been learning, how his thinking about coffee has evolved, and how he's going about trying to build a brand that people will love. And if you'd like to check out Alex's coffee, CRAFTED listeners can get 15% off by using the code BLISTER at calibrationcoffeelab.com
RELATED LINKS:
Become a BLISTER+ Member
Check out the Blister Craft Collective
CRAFTED ep #26: Calibration Coffee Lab
TOPICS & TIMES:
Backstory of Calibration Coffee Lab (2:53)
3rd Wave Coffee (9:32)
1st & 2nd Crack (14:18)
Alex's Philosophy of Roasting (16:46)
Espresso (19:30)
Roasting Temperature & Airflow (25:47)
Catering to Customers' Tastes (27:06)
Coffee, Wine, & Beer Comparisons (33:08)
'Single Origin' Coffees vs Blends (35:10)
How Long to "Rest" after Roasting (36:53)
Natural vs Washed (45:48)
Brand Building (51:03)
SEE OUR OTHER PODCASTS:
Blister Cinematic
Bikes & Big Ideas
GEAR:30
Blister Podcast
Hosted on Acast. See acast.com/privacy for more information.
In this episode, microgreens grower Vincent Cuneo talks about improving their microgreens' quality by improving airflow in their growing system. Get time and labor-saving farm tools and microgreen seeds at shop.moderngrower.co
Listen to other podcasts on the Modern Grower Podcast Network:
Farm Small, Farm Smart
Farm Small, Farm Smart Daily
The Growing Microgreens Podcast
Carrot Cashflow Podcast
In Search of Soil
Check out Diego's book Sell Everything You Grow on Amazon. https://www.amazon.com/Sell-Everything-You-Grow-Homestead-ebook/dp/B0CJC9NTZF
The expression “Less is More” applies to many things but not attic airflow. In this episode of Airing it Out with Air Vent, we examine the benefits of using the 1/150 airflow ratio (more) instead of the 1/300 (less) backed by third-party research. Have a suggestion for a future Podcast topic? Send us your ideas and feedback to pscelsi@gibraltar1.com.
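The 1/150 and 1/300 ratios boil down to quick arithmetic: one square foot of net free vent area (NFA) per 150 (or 300) square feet of attic floor, typically split roughly half intake, half exhaust. A rough sketch with an illustrative 1,500 sq ft attic; actual requirements come from local code and the third-party research discussed in the episode:

```python
# Compare required attic net free ventilation area (NFA) under the
# 1/150 ratio (more ventilation) vs the 1/300 ratio (less).
# The attic size is an example value only.

def attic_nfa_sqft(attic_sqft: float, ratio: int) -> float:
    """NFA in sq ft = attic floor area / ratio (150 or 300)."""
    return attic_sqft / ratio

attic = 1500.0  # sq ft of attic floor, illustrative
for ratio in (150, 300):
    nfa = attic_nfa_sqft(attic, ratio)
    # Balanced systems split NFA roughly 50/50 between intake and exhaust
    print(f"1/{ratio}: {nfa:.1f} sq ft total, "
          f"{nfa / 2:.1f} intake / {nfa / 2:.1f} exhaust")
```

For the 1,500 sq ft example, 1/150 calls for 10 sq ft of NFA versus 5 sq ft under 1/300, which is the "more vs less" comparison at the heart of the episode.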
Get on the Supermarket Academy waitlist now! New program to supercharge your supermarket refrigeration expertise launching soon. BOOK A CALL with Trevor to learn more about refrigeration training programs. In this conversation, we're talking with Aidan Lucey, Refrigeration Mechanic at RAC Services/The Articom Group, about getting into supermarket refrigeration and taking on service calls as a new refrigeration technician. Aidan has taken a number of Refrigeration Mentor courses and here, shares some valuable tips for building confidence and experience, what to look for on service calls, how to diagnose root causes of issues, and how to build trust with store owners and managers. He also shares technical tips and examples of troubleshooting common issues in supermarkets. In this conversation, we cover: -Importance of understanding refrigeration basics -Tips for managing on-calls -Building relationships with your coworkers -How to better prioritize jobs when on-call -Building relationships with store owners and managers -Being assertive and decisive as a technician -How to build confidence taking service calls -Compressor and electrical checks -The role of superheat in diagnostics -Understanding system pressures and temperatures -Creating a checklist for service calls -Airflow and humidity considerations -Distinguishing causes vs symptoms of service call issues -Providing added value to your customers -Key measurements for technicians -The importance of detailed service tickets Helpful Links & Resources: Aidan on LinkedIn The Articom Group: https://thearcticomgroup.com/ Episode 217. 
Compressor Inspections and Identifying Common Failures with Dean Steliga of Bitzer Canada Episode 099: Controlling CO2 HPV & FGBV with Micro Thermo Upcoming Servicing Compressors, Supermarket and CO2 Trainings: Learn More Here Learn More About Refrigeration Mentor: https://refrigerationmentor.com/ Get your FREE Service & Compressor Troubleshooting Guide: Access Here Refrigeration Mentor on Instagram Refrigeration Mentor YouTube Channel
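Since the role of superheat in diagnostics comes up in the conversation: superheat is just the measured suction line temperature minus the refrigerant's saturation temperature at the measured suction pressure. A minimal sketch; the pressure-temperature pairs below are placeholder values, not a real PT chart for any refrigerant:

```python
# Superheat = measured suction line temp - saturation temp at the
# measured suction pressure. Always look up saturation temperature
# on the actual PT chart for the refrigerant in the system; this
# mini-table is illustrative only.

ILLUSTRATIVE_PT = {  # psig -> saturation temp (F), placeholder values
    110: 36.0,
    118: 40.0,
    126: 44.0,
}

def superheat(suction_line_temp_f: float, suction_psig: int) -> float:
    """Degrees of superheat at the measurement point."""
    sat_temp_f = ILLUSTRATIVE_PT[suction_psig]
    return suction_line_temp_f - sat_temp_f

sh = superheat(suction_line_temp_f=50.0, suction_psig=118)
print(sh)  # 10.0 degrees of superheat
```

A 50°F suction line with a 40°F saturation temperature gives 10° of superheat; unusually low or high numbers are what point a technician toward flooding or starving conditions during diagnosis.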
Have you ever been driving and suddenly come up with a brilliant solution to an HVACR industry challenge? What happens next? Many innovative tools and techniques in our field come from technicians in the field and industry experts who use their creativity to develop practical solutions. In this episode, Don Prather joins us to share how he tackled a common problem in zoning systems, especially in low-load scenarios, and turned it into both a product and a training initiative.
In this episode of the HVAC podcast, Bryan and Max Johnson from Kalos discuss the critical role of a startup and commissioning technician in the HVAC industry. Max, who has experience in both residential and commercial HVAC, shares his insights on the importance of understanding the scope of work, equipment specifications, and code requirements. One of the key responsibilities of a startup and commissioning technician is to prevent any costly issues that may arise during the installation process. This includes identifying and addressing potential problems with ductwork, refrigerant charge, electrical wiring, and airflow. A comprehensive checklist ensures that no crucial steps are overlooked, such as setting up communicating equipment properly, ensuring the correct accessories are installed, and verifying the drain system is functioning correctly. Proper electrical work is another critical aspect of the startup and commissioning process. Max highlights the importance of using the right connectors and wire sizes to prevent issues like loose connections or overloaded circuits, which can pose fire hazards. Additionally, he stresses the importance of verifying the voltage is within the acceptable range for the equipment, as over-voltage can lead to premature failures. Airflow is another crucial factor that the startup and commissioning technician must address. Setting the correct airflow before charging the system is essential, as it ensures the equipment operates efficiently and effectively removes the necessary amount of latent heat. He recommends using tools like the TrueFlow grid and DG8 manometer to accurately measure and validate the airflow. Follow the manufacturer's charging recommendations closely, as each piece of equipment may have unique requirements. Use a comprehensive calculator, such as the one available on the HVAC School website, to determine the proper charge based on factors like line set length and size. 
Key Topics Covered: · Understanding the role and importance of a startup and commissioning technician · Developing a comprehensive checklist to ensure no critical steps are missed · Addressing potential issues with ductwork, accessories, and drain systems · Proper electrical work, including connector selection and voltage verification · Importance of setting the correct airflow before charging the system · Following manufacturer guidelines for refrigerant charging Have a question that you want us to answer on the podcast? Submit your questions at https://www.speakpipe.com/hvacschool. Purchase your tickets or learn more about the 6th Annual HVACR Training Symposium at https://hvacrschool.com/symposium. Subscribe to our podcast on your iPhone or Android. Subscribe to our YouTube channel. Check out our handy calculators here or on the HVAC School Mobile App for Apple and Android.
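On setting airflow before charging: a common industry rule of thumb (not stated in this episode, and always secondary to the manufacturer's specifications) is roughly 350-400 CFM per nominal ton of cooling. A sketch under that assumption:

```python
# Rule-of-thumb target airflow before charging. Manufacturer data
# takes precedence; 400 CFM/ton is a common nominal starting point,
# with ~350 CFM/ton sometimes used where more latent removal is needed.

def target_airflow_cfm(tons: float, cfm_per_ton: float = 400.0) -> float:
    """Nominal design airflow for a system of the given tonnage."""
    return tons * cfm_per_ton

print(target_airflow_cfm(3))       # 1200.0 - 3-ton system at 400 CFM/ton
print(target_airflow_cfm(3, 350))  # 1050.0 - higher-latent-load target
```

The measured value (from a TrueFlow grid or similar) is then compared against this target before any refrigerant is weighed in.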
In this short podcast, Bryan talks about how to pay close attention to airflow issues and use your "spidey sense" when you're doing a visual inspection or commissioning a system. He also covers some causes of common airflow problems and some services and upgrades you can offer to your customers. The skill of being able to use your senses and notice when something isn't quite right is a valuable one, especially when you're getting ready to set the charge. Not every technician has access to the tools to do a comprehensive airflow assessment, but every tech can use their senses to determine when something is wrong with the system airflow. Keep an ear out for whistling or other strange noises, and watch out for cabinet shaking, which may indicate an airflow problem. Airflow restrictions are also significant issues. Filter cleanliness (or lack thereof) and improper filter selection are very common causes of airflow issues, including high static pressure drop. Most filters should also not be doubled up (in series). Watch out for furniture blocking vents and registers that are partially (or fully) closed; shutting off registers is NOT a good strategy. Air movement throughout the building is also important, including the presence or absence of returns, open doors, etc., and these things affect MAD-AIR. Watch out for things like leakage as well, which can be around the platform, in ducts around the equipment, and around vents or recessed lighting. Have a question that you want us to answer on the podcast? Submit your questions at https://www.speakpipe.com/hvacschool. Purchase your tickets or learn more about the 6th Annual HVACR Training Symposium at https://hvacrschool.com/symposium. Subscribe to our podcast on your iPhone or Android. Subscribe to our YouTube channel. Check out our handy calculators here or on the HVAC School Mobile App for Apple and Android.
Hey all you Gardeners, welcome to the latest episode of the Grow Hour podcast. Together Mr Weedman and Big Earl talk about all things home grow, from genetics to breeding, from soil to bud, and everything in between. They often have guests on the show, for a deep dive into specific gardening topics. In this episode, Big Earl is tokin' Drunk Rider #1 and some Stanky Glue #10, while Mr WM is puffing on Jack Frost, both from their personal gardens. Together the duo discusses building out your grow space - your home garden. Airflow, hygrometers, water, lights, temperature, and so much more. Learn to recognize when you're ready to get growing and know when & how to do a trial run, along with more tips and tricks from Big Earl, who's a licensed Michigan Medical Caregiver. Thanks for listening and as always, hit us up...---IG: @earl217 and @iamtheregalbeagleEmail: ThatRegalBegal@gmail.com---IG: @weedman420chronicles2.0Twitter: @weedman420podYouTube: Weedman420 ChroniclesEmail: weedman420chronicles@gmail.com---Swag/Shop: https://eightdecades.comIG: @eightdecadesEmail: eightdecadesinfo@gmail.com---#High #Cannabis #StomptheStigma #FreethePlant #CannabisEducation #CannabisResearch #Weed #Marijuana #LegalizeIt #CannabisNews #CBD #Terpenes #CannabisPodcast #Podcast #eightdecades #Homegrow #Cultivation #BigEarl #Weedman420Chronicles #GrowHour #seeds #genetics #nutrients #IPM #Burpinthebag #LED #Lights #Atmosphere #TheRegalBegalBeanCo #Autoflower #autos #regs #photos #feminized #terps #plantmedicine #holistichealing #holistic #seedbreeder #seedbank #beans #forage #chemisty #science #plants #hash #collabCOPYRIGHT 2021 Weedman420Chronicles©
Collaboration: Enhancing Efficiency Through Industry Partnerships Welcome to this week's edition of Scaling UP! H2O, where we explore the critical role of water treatment in optimizing industrial processes. Today, we are privileged to hear from two distinguished guests: Tony Mormino and Justin Lynch. Tony is Technical Sales and Marketing Director for Insight Partners and host of The Engineers HVAC Podcast, specializing in education, while Justin is a cooling tower reconstruction specialist. Together, they share invaluable insights into collaborative strategies that ensure the best and most cost-effective solutions for cooling towers and closed loop systems. Their discussion focused on the importance of collaboration, cost efficiency, and proactive maintenance in the field of water treatment. Tony and Justin's insights provide a roadmap for water treaters to enhance client outcomes and operational efficiency through strategic partnerships and informed decision-making. What Are the Cost and Efficiency Benefits of Proper Water Treatment? Cost efficiency emerges as a significant topic. Tony Mormino underscores the financial benefits of proper water treatment, citing examples where a modest investment in water treatment can yield substantial savings. "According to the Department of Energy, 40% of a commercial building's energy consumption goes to HVAC systems," explains Tony. "Simply improving water quality can lead to 5-10% savings in energy costs. It's a quick win for green building initiatives." How Do You Prevent Vibrations in Cooling Towers? Bad vibrations in cooling towers can be a significant issue if not addressed early. Justin Lynch highlights the importance of monitoring biological buildup and evaporative salts on the fans. "It's very difficult to do that if you don't catch it early. Let's say we go to a facility with an old tower and an old fan—there's going to be a little bit of biology on top, which is not a big deal.
You can brush that off, do a light pressure washing, and it's not going to hurt it," explains Justin. However, the real issue arises when scale develops unevenly on each blade. "At that point, the fan may look horrible, but the tower still operates without vibration. If you clean five out of six blades well but can't get the scale off one blade, you just created a vibration, leading to other issues in the tower." Justin advises that while chemical treatments are effective, they should be done with caution and under professional guidance to avoid exacerbating the problem. This proactive maintenance is less of a concern for newer towers that have had chemical treatment from the start. How Does Air in Closed Loop Chilled Water Systems Affect Performance? Tony Mormino highlights a critical yet often overlooked issue in water treatment: air in closed loop chilled water systems. This issue not only leads to rust and oxidation but also significantly impacts system efficiency and longevity. Studies and practical examples underscore the importance of air removal: removing air from the chilled water system can result in substantial benefits: Increase in Tonnage Output: Youngstown State University reported a 16% increase in tonnage output, equivalent to 400 additional tons. Improvement in Delta T: From 8.5° to 10°, enhancing heat transfer efficiency across chiller barrels. Enhanced Building Discharge Air Temperatures: Temperatures improved from 65° to 55°, optimizing HVAC system performance. Reduction in Pump Energy Consumption: A notable 37% reduction in annual kWh requirements due to cleaner water and improved system operation. Moreover, practical cases like at Waukesha Memorial Hospital in Wisconsin showed a 22% reduction in Variable Frequency Drive (VFD) speed, leading to an 85% decrease in corrosion preventative chemical usage.
These examples illustrate the direct correlation between air removal and energy savings, reinforcing the significant impact of proper water treatment practices on operational efficiency and cost savings in commercial HVAC systems. How Important is Passivation for Equipment Longevity and Performance? Justin Lynch highlights the critical role of passivation in maintaining equipment longevity, particularly in galvanized towers. "Passivation is essential to prevent corrosion and ensure optimal performance," Justin explains. "For instance, following Marley's guidelines for pH and calcium hardness during passivation can extend the life of galvanized towers significantly." Conclusion In the fast-evolving landscape of industrial water treatment and HVAC systems, collaboration and continuous learning are paramount. Justin Lynch's closing thoughts encapsulate this spirit perfectly: “Don't be afraid to call, don't be afraid to collaborate. You are the expert in your field; I'm supposed to be the expert in mine. There's too much going on in this industry. It's growing too fast for everyone to really understand everything. So, if you don't know, ask questions and learn together. When you can do that together, you build a good network, and customers trust you and respect you after that.” Embracing this collaborative approach not only enhances our expertise but also ensures that we provide the best possible solutions for our customers, fostering trust and respect in our professional relationships. Timestamps 01:00 - Free Legionella Awareness Month and Industrial Water Week resources can be found on our website 09:10 - Interview with Tony Mormino and Justin Lynch 50:00 - Closing thoughts about the power of collaboration with Trace 54:30 - Upcoming Events for Water Treatment Professionals 56:42 - Evaporative Salts, Scale, and using the correct language with clients 59:00 - Drop by Drop With James McDonald Quotes “Downtime is lost profit.” - Justin Lynch “Water quality is key. 
It's crucial for maintaining a tower's expected lifespan, and without it, customers could face significant costs." - Justin Lynch “I consider the water the lifeblood of the system because it touches every component.” - Tony Mormino “In our industry, collaboration is essential. As experts in our respective fields, we have a responsibility to work together, share knowledge, and tackle challenges as a unified front." - shares Justin Lynch “The best way to market is to give away good, free content.” - Tony Mormino Connect with Justin Lynch Phone: 919.602.1658 Email: jlynch@insightusa.com LinkedIn: linkedin.com/in/justin-lynch-0355458b Read or Download Tony and Justin's Press Release HERE Connect with Tony Mormino Phone: 828.712.4769 Email: tmormino@insightusa.com Website: www.insightusa.com LinkedIn: linkedin.com/in/tony-mormino linkedin.com/company/insightusa YouTube: @InsightPartnersHVACTV Podcast: The Engineers HVAC Podcast Resources Mentioned All Cooling Tower resources can be found on our Free Industrial Water Week Page HERE in the Cooling Wednesday Tab All Legionella Resources can be found on our Free Legionella Page HERE Check out our Scaling UP! H2O Events Calendar where we've listed every event Water Treaters should be aware of by clicking HERE. Water Cake Recipe
Pusher Intakes joins us today to talk intake manifolds! They tell us how they started on Cummins 5.9s, and how it progressed to the first CARB-tested manifold for the 6.7L Powerstroke. We were shocked at how much airflow was improved. Learn more about your ad choices. Visit megaphone.fm/adchoices
Standard 310 is a technical workflow created by ACCA, RESNET, and ANSI for grading the installation of HVAC systems, typically in new home construction. It plays a crucial role in obtaining Energy Star certification, which can qualify homeowners for tax credits under the Inflation Reduction Act. The five steps of Standard 310 are design review, duct leakage test, total system airflow, blower fan watt draw, and refrigerant charge verification. In this podcast episode, host Bryan Orr is joined by guests Chris Hughes and Eric Kaiser to discuss Standard 310 and its implications for HVAC contractors. The standard aims to ensure that HVAC systems are installed correctly and operate as designed. The process involves a third-party HERS rater conducting various tests and measurements, which contractors need to be prepared for. Proper duct sealing, airflow settings, and refrigerant charging are critical for passing the assessments. One of the more challenging aspects highlighted is the refrigerant charge verification step. The standard requires either non-invasive testing (which has temperature limitations) or weigh-in verification with geotagged photos. Chris Hughes suggests manufacturers could develop more consistent commissioning protocols to streamline this process. Topics covered in the podcast: Overview of Standard 310 and its five steps Importance for Energy Star certification and tax credits Role of HERS raters and HVAC contractors Duct leakage testing and proper sealing Airflow measurement methods Blower fan watt draw challenges Refrigerant charge verification options Need for consistent commissioning protocols Coordination and documentation required Future improvements to the standard Have a question that you want us to answer on the podcast? Submit your questions at https://www.speakpipe.com/hvacschool. Purchase your virtual tickets for the 5th Annual HVACR Training Symposium at https://hvacrschool.com/Symposium24. Subscribe to our podcast on your iPhone or Android. 
Subscribe to our YouTube channel. Check out our handy calculators here or on the HVAC School Mobile App for Apple and Android.
Discovering the science behind airflow and temperature for better sleep on the newest episode of the Deep Into Sleep Podcast. Eugene Alletto shares insights on optimizing sleep environments for quality rest with BEDGEAR! Watch out for the COUPON code that will be given away during the episode! Join Dr. Yishan and Eugene in exploring the impact of breathable materials and modular sleep systems on sleep quality.
Show Notes: deepintosleep.co/episode/bedgear
RESOURCES
Are you so sleepy that you cannot focus? Are you tired of getting through the day drinking coffee? Are you worried how your poor sleep may impact your health? Check out Dr. Yishan Xu's Insomnia Treatment Course!
Connect with Dr. Yishan
Instagram: @dr.yishan
Twitter: @dryishan
Facebook: @dr.yishan
Connect with Eugene Alletto
Website
FREE Coupon Code to purchase at https://www.bedgear.com/: DEEPINTOSLEEP15
Newsletter and Download Free Sleep Guidance E-Book: https://www.mindbodygarden.com/sleep
CBT-I Courses:
English: https://www.deepintosleep.co/insomnia
Chinese: https://www.mindbodygarden.com/shimian
Podcast Links:
Apple Podcast: https://podcasts.apple.com/us/podcast/deep-into-sleep/id1475295840
Google Podcast: https://podcasts.google.com/search/deepintosleep
Stitcher: https://www.stitcher.com/show/deep-into-sleep
Spotify: https://open.spotify.com/show/2Vxyyj9Cswuk91OYztzcMS
iHeartRadio: https://www.iheart.com/podcast/269-deep-into-sleep-47827108/
Support our Podcast: https://www.buymeacoffee.com/dryishan
Leave us a Rating: https://podcasts.apple.com/us/podcast/deep-into-sleep/id1475295840
If you're interested in learning more about psychological testing and the services offered at the MindBodyGarden, make sure to visit their website at mindbodygarden.com/AssessmentClinic.
Brian Johnson, the visionary CEO of Senergy360, is at the forefront of redefining healthy living by constructing homes that embody holistic principles for today's evolving world, incorporating modern, cutting-edge technologies and proven practices for multigenerational homes. With over two decades of experience as a licensed general contractor and a solid background in the lumber industry, Brian's expertise is unparalleled. His commitment to excellence is further demonstrated by his triple certifications from the Building Biology Institute, a testament to his dedication to health, performance, wellness, and longevity. Under Brian's leadership, SENERGY360 is on a mission to democratize access to advanced technologies and systems for creating holistic living spaces. The firm offers top-tier services, including healthy home building and land development, environmental assessments, healthy home specifications, and project management for all types of construction, serving clients across the United States and internationally. 
Senergy360 Website: www.senergy360.com Work With Me: Mineral Balancing HTMA Consultation: https://www.integrativethoughts.com/category/all-products My Instagram: @integrativematt My Website: Integrativethoughts.com Advertisements: Valence Nutraceuticals: Use code ITP20 for 20% off https://valencenutraceuticals.myshopify.com/ Zeolite Labs Zeocharge: Use Code ITP for 10% off https://www.zeolitelabs.com/product-page/zeocharge?ref=ITP Magnesium Breakthrough: Use Code integrativethoughts10 for 10% OFF https://bioptimizers.com/shop/products/magnesium-breakthrough Just Thrive: Use Code ITP15 for 15% off https://justthrivehealth.com/discount/ITP15 Therasage: Use Code Coffman10 for 10% off https://www.therasage.com/discount/COFFMAN10?rfsn=6763480.4aed7f&utm_source=refersion&utm_medium=affiliate&utm_campaign=6763480.4aed7f Chapters: 00:00 Introduction and Background 02:54 From Building to Health 06:01 The Importance of Building Materials 08:06 Protecting Against Toxic Exposure 11:35 The Lifelong Process of Detoxification 23:46 Building Harmonious and Healthy Homes 25:44 Being in Tune with Nature in Location Selection 28:14 Preventing Mold Growth in New Builds 35:41 Auto Shut-off Valves for Water Leaks 47:18 The Presence of VOCs in New Builds 52:04 Choosing Low or Non-VOC Materials 52:56 Creating a Healthy Home: Non-VOC Materials and Natural Building 55:09 The Importance of Ventilation and Airflow in Building Design 56:24 The Role of Negative and Positive Ions in a Healthy Living Environment 59:47 Air Filtration Systems: Energy Recovery Ventilators and HEPA Filters 01:08:06 The Cost of Building a Healthy Home and the Potential for Affordability Takeaways: Building homes that are resistant to mold and other toxins is crucial for maintaining a healthy living environment. Toxic exposure, such as mold and chemicals, can have a significant impact on health and should be addressed. 
Considering the geographic landscape and harmonizing the energy of the environment is important when building homes. Working with building biologists and architects can help create harmonious and healthy living spaces. Being in tune with nature is important when selecting a location for a home. Proper inspection and drying of lumber is crucial to prevent mold growth. New builds often contain volatile organic compounds (VOCs) that can be harmful. Choosing low or non-VOC materials is important for a healthier indoor environment. Building a healthy home involves using non-VOC materials, proper ventilation, and natural building materials. Negative ions, which are found in nature, help create a grounding and balanced atmosphere. Air filtration systems, such as energy recovery ventilators (ERVs) and HEPA filters, are essential for maintaining clean indoor air. The cost of building a healthy home can range from 10% to 30% more than a standard home, but as more people adopt these practices, the cost may decrease. Collaboration with architects, builders, and manufacturers is crucial in promoting and implementing healthy home practices. Summary: Brian discusses his background in building healthy homes and how he got into the field. He shares his experience with mold exposure and the importance of creating homes that are resistant to mold and other toxins. He also talks about the impact of toxic exposure on health and the need for detoxification. Brian emphasizes the importance of considering the geographic landscape and harmonizing the energy of the environment when building homes. He highlights the role of building biologists and architects in creating harmonious and healthy living spaces. In this part of the conversation, Brian and Matthew discuss the importance of being in tune with nature when selecting a location for a home. They also talk about the prevalence of mold in new builds and the use of moldy lumber. 
Brian emphasizes the need for proper inspection and drying of lumber to prevent mold growth. They also discuss the presence of volatile organic compounds (VOCs) in new builds and the importance of choosing low or non-VOC materials. In this conversation, Brian from SENERGY360 discusses the importance of building a healthy home and the various factors that contribute to a healthy living environment. He emphasizes the use of non-VOC materials, proper ventilation, and natural building materials. Brian also touches on the significance of negative and positive ions in creating a balanced and grounding atmosphere. He explains the role of air filtration systems, the cost of building a healthy home, and the potential for future affordability as more people adopt these practices. Keywords: building healthy homes, mold resistance, toxic exposure, detoxification, geographic landscape, harmonizing energy, building biologists, architects, nature, location selection, home building, mold, moldy lumber, inspection, drying, VOCs, new builds, low VOC materials, healthy home, non-VOC materials, ventilation, natural building materials, negative ions, positive ions, air filtration systems, cost of building, affordability
Ketan Umare is Co-Founder & CEO of Union AI, a scalable MLOps platform for AI orchestration built on the open-source Flyte project. Union AI has raised $29M from investors including NEA & Nava Ventures. In this episode, we dig into the differences between Union AI and Airflow, what's unique about orchestrating AI workloads, bringing software engineering practices to AI, and more!
Dean takes listener calls and addresses concerns about peeling granite countertop slabs, frozen coils, how airflow works in heat pumps, and restoring scuffed engineered wood floors.
Rachel Finn is a fearless free spirit whom everyone loves. After attending graduate school at Yale University, she followed her heart and began guiding in the Adirondacks some 30 years ago. Rachel is also a wonderful artist, a friend, and an inspiration to never grow up and never stop chasing your passions. Finn, a certified Federation of Fly Fishers Instructor, is a well-known presence on the Adirondack guide scene throughout the fishing season. Serving as the head guide at the Hungry Trout Fly Shop in Wilmington, New York, she accompanies clients on expeditions across the numerous rivers, streams, and ponds nestled within the breathtaking mountains. During July and August, Rachel also leads summer float trips in Alaska. She holds pro staff positions with Scott Fly Rods, Airflow, Nautilus Reels, and Lund Boats, and has been enlisted by Patagonia as one of their fly fishing ambassadors. Her expertise has been showcased on ESPN's Great Outdoor Games and the Outdoor Life Network's Fly Fishing Masters.
In today's episode, Mateus Oliveira interviews Franklin Ferreira (Data Architect) and Vinicius Gasparini (Data Engineer), both members of the Clicksign data team. Data Architecture and Data Engineering have been gaining a lot of traction in recent years; understanding how they work inside a data-driven company is not only one of the best ways to study the market, but also a way to choose which path to follow. In this conversation we talk about: Data Architectures and Data Engineering. The main goal of this podcast is to better understand how to create and evolve data architectures that serve the business, and how data engineering is used inside large companies, going beyond technologies to cover methodologies and processes. Clicksign team on LinkedIn: Franklin Ferreira (Data Architect): https://www.linkedin.com/in/franklinfs390/ Vinicius Gasparini (Data Engineer): https://www.linkedin.com/in/vngasp/ Luan Moreno: https://www.linkedin.com/in/luanmoreno/