Podcasts about Bundler

  • 75 PODCASTS
  • 137 EPISODES
  • 39m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Mar 18, 2025 LATEST

POPULARITY (trend chart covering 2017–2024)


Best podcasts about Bundler

Latest podcast episodes about Bundler

Programming By Stealth
PBS 178 of X — Getting Started with Jekyll Pages

Programming By Stealth

Play Episode Listen Later Mar 18, 2025 73:33


Last time we learned how to install Ruby, install Bundler, install Gems, and build a very simple website with Jekyll as our static site generator, published through GitHub. In this installment of our Jekyll miniseries, Bart explains Jekyll's build process, which is mostly driven by how you name things and by the content of the files you create (like adding YAML front matter). Then we spend some quality time bemoaning how the Jekyll developers reuse the word "assets" to mean two different things. Bart avoids some of the associated confusion by creating some naming conventions of our own. We get to do a worked example where we learn a little bit about Pages in Jekyll and do a few things the hard way that we'll redo the easy way in the coming installments. If you're following along in real time, note that we won't be recording for 6 weeks because of some birthdays and Allison's trip to Japan.
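
To make the setup the episode builds on more concrete, here is a minimal sketch of the kind of Gemfile Bundler manages for a Jekyll site; the version constraints and the jekyll-feed plugin are illustrative assumptions, not details taken from the show:

    # Gemfile -- illustrative sketch of a simple Jekyll site's dependencies
    source "https://rubygems.org"

    gem "jekyll", "~> 4.3"            # the static site generator itself

    group :jekyll_plugins do
      gem "jekyll-feed", "~> 0.17"    # example plugin; swap in whatever the site needs
    end

With a Gemfile like this in place, running "bundle install" fetches the gems and "bundle exec jekyll serve" builds the site locally and rebuilds it as files change.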

Remote Ruby
Inside Ruby 3.4

Remote Ruby

Play Episode Listen Later Jan 18, 2025 52:01


Welcome to the first episode of the new year where Chris and Andrew discuss their holiday activities and recent breaks from work, including travel experiences and Christmas celebrations. They delve into updates on Ruby and Bundler enhancements, and they emphasize the importance of Ruby Central's role in maintaining Ruby's security. The conversation also touches on various tech and entertainment topics including movie reviews, gaming experiences, and smart home projects with Raspberry Pi. The hosts share insights on JSON gem performance improvements and considerations for Ruby's frozen string literals. The episode concludes with discussions on practical applications for Home Assistant and reminiscing about their experiences with different programming languages. Hit download to hear more! Sponsor: Honeybadger is an application health monitoring tool built by developers for developers. Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you. Jason Charnes (X/Twitter) · Chris Oliver (X/Twitter) · Andrew Mason (X/Twitter)
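
The frozen string literal point is easier to picture with a short, generic Ruby snippet; this is an illustration of the language feature itself, not code discussed on the show:

    # frozen_string_literal: true
    # With this magic comment at the top of a file, every string literal in it
    # is frozen, so accidental in-place mutation raises instead of silently
    # changing shared state.

    greeting = "hello"
    puts greeting.frozen?     # => true

    begin
      greeting << " world"    # attempting to mutate a frozen string
    rescue FrozenError => e
      puts e.message          # => can't modify frozen String: "hello"
    end

    copy = +greeting          # unary + returns an unfrozen copy
    copy << " world"
    puts copy                 # => hello world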

PodRocket - A web development podcast from LogRocket
void(0) with Evan You [Repeat]

PodRocket - A web development podcast from LogRocket

Play Episode Listen Later Dec 26, 2024 46:48


In this holiday repeat episode, Evan You, creator of Vue and Vite, discusses his new venture, void(0). He covers the motivations behind founding void(0), the inefficiencies in JavaScript tooling, and the future of unified tooling stacks. Links: https://evanyou.me https://x.com/youyuxi https://github.com/yyx990803 https://sg.linkedin.com/in/evanyou https://voidzero.dev We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com, or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surface the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today (https://logrocket.com/signup/?pdr). Special Guest: Evan You.

NOCTURNAL TRANSMISSIONS : short horror story podcast
TRANSMISSMAS SPECIAL 2024 - featuring JERRY BUNDLER by W. W. Jacobs

NOCTURNAL TRANSMISSIONS : short horror story podcast

Play Episode Listen Later Dec 24, 2024 25:14


MERRY TRANSMISSMAS gentle listeners.  We proudly present our most annoying Transmissmas episode introduction yet AAAAND W. W. Jacobs' wonderful tale of Yuletide dread and mishap - 'JERRY BUNDLER'. Thank you for listening in 2024. We look forward to joining you again in 2025.    ————   NOCTURNAL TRANSMISSIONS is a fortnightly podcast featuring inspired performances of short horror stories, both old and new, by voice artist Kristin Holland.   https://www.nocturnaltransmissions.com.au   You can support us (and access lots of exclusive content) by becoming a patron at Patreon.com: https://www.patreon.com/nocturnaltransmissions

PodRocket - A web development podcast from LogRocket
TanStack and TanRouter with Tanner Linsley [Repeat]

PodRocket - A web development podcast from LogRocket

Play Episode Listen Later Nov 28, 2024 30:34


In this Thanksgiving repeat episode, Tanner Linsley, creator of TanStack and co-founder at Nozzle, dives into the evolution and philosophy behind TanStack, his work on TanRouter, and shares insights on the importance of type safety in routing within web development. Links: https://x.com/tannerlinsley https://tannerlinsley.com https://www.youtube.com/tannerlinsley https://github.com/tannerlinsley https://www.linkedin.com/in/tannerlinsley https://tanstack.com We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com, or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surface the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today (https://logrocket.com/signup/?pdr). Special Guest: Tanner Linsley.

Just Chills - Scary Stories To Hear In The Dark
Jerry Bundler by W W Jacobs

Just Chills - Scary Stories To Hear In The Dark

Play Episode Listen Later Nov 21, 2024 19:29


A scary story by the author of The Monkey's Paw. If you like this episode, please remember to follow on Spotify, Apple Podcasts, or your favourite podcast app.

PodRocket - A web development podcast from LogRocket

Evan You, creator of Vue and Vite, discusses his new venture, void(0). He covers the motivations behind founding void(0), the inefficiencies in JavaScript tooling, and the future of unified tooling stacks. Links: https://evanyou.me https://x.com/youyuxi https://github.com/yyx990803 https://sg.linkedin.com/in/evanyou https://voidzero.dev We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com, or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surface the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today (https://logrocket.com/signup/?pdr). Special Guest: Evan You.

Proof of Coverage
Building the First Programmable Datachain with Irys

Proof of Coverage

Play Episode Listen Later Oct 24, 2024 23:25


Connor welcomed back old co-host Sami Kassab (now a Partner at OSS Capital) and guest Josh Benaron, founder of Irys, a next-generation data provenance and storage L1. They delved into the evolution of Irys from its origins as Bundlr, an L2 solution for Arweave, to its current status as a programmable data chain. Josh shared his insights on the importance of storytelling and writing for founders, emphasizing how effective communication can significantly impact hiring, fundraising, and overall success. They explored Josh's unique journey, including his decision to drop out of university to pursue his entrepreneurial ambitions in the crypto space. He discussed the vision behind Irys, which aims to unify data services on-chain and support applications that rely heavily on data. Josh also addressed the differences between Irys and other projects like Filecoin, highlighting how Irys enables programmability at the data level, which he believes is crucial for mass adoption. As they wrapped up, they touched on the future of decentralized applications (dApps) and the potential of DePIN networks. Josh expressed his bullish outlook on the DePIN space, noting its significant market fit despite being less hyped compared to other sectors like AI. They also discussed the upcoming token for Irys, with Josh hinting at a thoughtful approach to tokenomics that prioritizes community strength and sustainability. 00:00 - Introduction 01:23 - The Importance of Writing and Storytelling for Founders 04:11 - Josh's Decision to Drop Out of University 06:54 - Transition from Bundlr to Irys 10:04 - Irys vs. Filecoin: Differentiating Approaches 14:05 - Incentivizing Builders on Irys 16:57 - The Future of DeFi and DePIN 19:32 - Unified Layer for Permanent and Non-Permanent Data 21:14 - Tokenomics and Future Plans for Irys Disclaimer: The hosts and the firms they represent may hold stakes in the companies mentioned in this podcast. None of this is financial advice.

PodRocket - A web development podcast from LogRocket
TanStack and TanRouter with Tanner Linsley

PodRocket - A web development podcast from LogRocket

Play Episode Listen Later Sep 19, 2024 30:36


Tanner Linsley, creator of TanStack and co-founder at Nozzle, dives into the evolution and philosophy behind TanStack, his work on TanRouter, and shares insights on the importance of type safety in routing within web development. Links: https://x.com/tannerlinsley https://tannerlinsley.com https://www.youtube.com/tannerlinsley https://github.com/tannerlinsley https://www.linkedin.com/in/tannerlinsley https://tanstack.com We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com, or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surface the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today (https://logrocket.com/signup/?pdr). Special Guest: Tanner Linsley.

Modern Web
Modern Web Podcast S12E16- Tim Neutkens, Co-Author of Next.js on the State of Next

Modern Web

Play Episode Listen Later Aug 7, 2024 45:21


Tim Neutkens, Co-author and Tech Lead for Next.js, discusses how open source maintainers are simplifying the web, and covers the challenges faced with the current Next.js setup. Tim talks about TurboPack, a solution that optimizes bundling, improves parallelism, caching, and module graph calculations. He also talks about TurboAC, which focuses on addressing performance and compatibility issues, providing seamless transitions for Next.js users. Tim highlights the importance of efficient bundling processes to avoid excessive recompilation and discusses the updates in Next.js versions to enhance caching, rendering behavior, and client-side caching. Tim also discusses some exciting upcoming features in Next.js 15. Socials Twitter: @timneutkens GitHub: timneutkens Bluesky: timneutkens.bsky.social Website: https://timn.tech/ Links Vercel on Twitter, LinkedIn, Facebook, Instagram, YouTube, GitHub and Vercel's website Next.js on Twitter, GitHub, LinkedIn, YouTube, Instagram, Facebook, official Next.js website Turbopack on Twitter, GitHub, YouTube, LinkedIn, Instagram, Official Turbopack Docs Webpack on GitHub, Twitter, YouTube, and Official Webpack Website Show Notes [00:00:02] Next.js and the upcoming release of TurboPack. [00:04:27] JavaScript bundlers evolving to handle growth. [00:07:58] TurboPack solves Webpack limitations efficiently. [00:12:12] Bundler compatibility for optimal app performance. [00:16:50] Client components separated in webpack instance. Turbo pack for better parallelism and stability. Industry moving towards server-side. Feed and rollup still relevant. Collaboration between tools for future. [00:20:57] Replacing part with roll down, similar to Webpack. Overlapping ecosystem with Avonetic Conference. Limits with unbundling and loading on demand. Cycle of building frameworks and hitting limits. History of using Webpack for client-side code. Two compiler architecture for server and client. Coordination between server and client with Webpack. [00:25:38] Server action imports, turbo pack improves performance. [00:30:04] Next.js is popular for websites. [00:34:18] Chipotle using Next in Vercel, exciting improvements. [00:38:51] Next.js 15 release candidate with changes. Sponsored by Wix Studio.

The Moscow Murders and More
Former Kinahan Cartel Money Bundler Johnny Morrissey Is Now Persona Non Grata (6/12/24)

The Moscow Murders and More

Play Episode Listen Later Jun 12, 2024 11:58


Johnny Morrissey, a key figure in the Kinahan Cartel, served as one of their primary money launderers. His criminal activities included laundering vast sums of money for the cartel and setting up an empire of extortion and other illegal enterprises in Spain. Morrissey was known for his involvement in moving finances not just for the Kinahan Cartel but also for other criminal groups, making him a significant asset within the cartel's operations. In September 2022, Morrissey was arrested in Spain, accused of laundering up to €200 million. His arrest was part of a broader crackdown on the Kinahan Cartel, which has been described as a significant blow to the organization. Morrissey's detention raised concerns within the cartel about the potential information he might divulge to authorities in exchange for leniency. Following his arrest, it was reported that Morrissey split from the Kinahan Cartel, although detailed reasons for the split have not been explicitly documented. The separation likely stems from the intense scrutiny and pressure from law enforcement agencies, which may have led to internal discord and distrust within the cartel. (commercial at 8:15) To contact me: bobbycapucci@protonmail.com Source: Kinahan cartel sever ties with money-launderer Johnny Morrissey following bail release in Spain - SundayWorld.com

Beyond The Horizon
Former Kinahan Cartel Money Bundler Johnny Morrissey Is Now Persona Non Grata (6/11/24)

Beyond The Horizon

Play Episode Listen Later Jun 11, 2024 11:58


Johnny Morrissey, a key figure in the Kinahan Cartel, served as one of their primary money launderers. His criminal activities included laundering vast sums of money for the cartel and setting up an empire of extortion and other illegal enterprises in Spain. Morrissey was known for his involvement in moving finances not just for the Kinahan Cartel but also for other criminal groups, making him a significant asset within the cartel's operations. In September 2022, Morrissey was arrested in Spain, accused of laundering up to €200 million. His arrest was part of a broader crackdown on the Kinahan Cartel, which has been described as a significant blow to the organization. Morrissey's detention raised concerns within the cartel about the potential information he might divulge to authorities in exchange for leniency. Following his arrest, it was reported that Morrissey split from the Kinahan Cartel, although detailed reasons for the split have not been explicitly documented. The separation likely stems from the intense scrutiny and pressure from law enforcement agencies, which may have led to internal discord and distrust within the cartel. (commercial at 8:15) To contact me: bobbycapucci@protonmail.com Source: Kinahan cartel sever ties with money-launderer Johnny Morrissey following bail release in Spain - SundayWorld.com

The Epstein Chronicles
Former Kinahan Cartel Money Bundler Johnny Morrissey Is Now Persona Non Grata (6/11/24)

The Epstein Chronicles

Play Episode Listen Later Jun 11, 2024 11:58


Johnny Morrissey, a key figure in the Kinahan Cartel, served as one of their primary money launderers. His criminal activities included laundering vast sums of money for the cartel and setting up an empire of extortion and other illegal enterprises in Spain. Morrissey was known for his involvement in moving finances not just for the Kinahan Cartel but also for other criminal groups, making him a significant asset within the cartel's operations. In September 2022, Morrissey was arrested in Spain, accused of laundering up to €200 million. His arrest was part of a broader crackdown on the Kinahan Cartel, which has been described as a significant blow to the organization. Morrissey's detention raised concerns within the cartel about the potential information he might divulge to authorities in exchange for leniency. Following his arrest, it was reported that Morrissey split from the Kinahan Cartel, although detailed reasons for the split have not been explicitly documented. The separation likely stems from the intense scrutiny and pressure from law enforcement agencies, which may have led to internal discord and distrust within the cartel. (commercial at 8:15) To contact me: bobbycapucci@protonmail.com Source: Kinahan cartel sever ties with money-launderer Johnny Morrissey following bail release in Spain - SundayWorld.com Become a supporter of this podcast: https://www.spreaker.com/podcast/the-epstein-chronicles--5003294/support.

Maintainable
Martin Emde - Ruby Central and the Art of Being Tolerant to Change

Maintainable

Play Episode Listen Later Apr 23, 2024 52:47


In this episode of Maintainable, our host Robby Russell sits down with Martin Emde, a sage in the Ruby community and the current Director of Open Source at Ruby Central. Together, they weave through the intricacies of maintainable software, legacy code, and the unwavering power of the Ruby ecosystem. Martin, with his wealth of experience, shares tales from the trenches of open-source software development, focusing on RubyGems and Bundler, and how they've evolved to face the challenges of modern software needs. Martin addresses the elephant in the room - complexity in software. He muses on the natural progression of software projects from simplicity to complexity, drawing parallels to the growth of living organisms. It's not about fighting complexity, but embracing it with open arms, ensuring the software remains adaptable and maintainable. This conversation sheds light on the importance of testing, documentation, and community support in navigating the seas of complex software development. Diving deeper, they discuss the essence of technical debt, not as a villain in our stories but as a necessary step in the rapid evolution of technology. Martin's perspective on technical debt as a tool for progress rather than an obstacle is refreshing, encouraging developers to approach their work with more kindness and understanding. The discussion also highlights Ruby Central's pivotal role in nurturing the Ruby community, emphasizing the importance of contributions, whether code, conversation, or financial support. Martin's call to action for developers to engage with open-source projects, to adopt gems in need, and to provide support where possible, is a heartwarming reminder of the collective effort required to sustain the vibrant Ruby ecosystem. For those curious minds eager to dive into the world of Ruby, contribute to its growth, or simply enjoy a captivating discussion on software development, this episode is a delightful journey through the challenges and joys of maintaining open-source software. Don't miss out on the gems of wisdom shared in this episode, and be sure to check out the useful links below for more information on how you can contribute to the Ruby community. Book Recommendation: Project Hail Mary by Andy Weir. Helpful Links: Bundler, Ruby Central, Adopt a Gem, Martin on GitHub, Martin's website. Thanks to Our Sponsor! Turn hours of debugging into just minutes! AppSignal is a performance monitoring and error tracking tool designed for Ruby, Elixir, Python, Node.js, JavaScript, and soon, other frameworks. It offers six powerful features with one simple interface, providing developers with real-time insights into the performance and health of web applications. Keep your coding cool and error-free, one line at a time! Check them out! Subscribe to Maintainable on Apple Podcasts, Overcast, or Spotify, or search "Maintainable" wherever you stream your podcasts. Keep up to date with the Maintainable Podcast by joining the newsletter.

React Native Radio
RNR 292 - RNR Explains: Metro Bundler

React Native Radio

Play Episode Listen Later Mar 22, 2024 36:15


Dive into the Metro bundler with Jamon, Robin, and Mazen on this new installment of RNR Explains! It's packed with insights for even the most advanced React Native developers. This episode is brought to you by Infinite Red! Infinite Red is a premier React Native design and development agency located in the USA. With five years of React Native experience and deep roots in the React Native community (hosts of Chain React and the React Native Newsletter), Infinite Red is the best choice for your next React Native app. Episode Links: Metro Bundler, RePack, RNR on YouTube Music. Connect With Us! React Native Radio: @ReactNativeRdio, Jamon: @jamonholmgren, Robin: @robin_heinze, Mazen: @mazenchami. Quotes you can share on X: "The bundler is what allows React Native to run a command... and then here's your app. It really is pretty magical." - @robin_heinze on @reactnativerdio. "As React Native developers, I think we need to pay more attention to it and understand it more because it is the secret sauce." - @mazenchami on @reactnativerdio

Baxter's Buzz
Creating HR Departments From Scratch - with Brad Voorhees

Baxter's Buzz

Play Episode Listen Later Mar 5, 2024 19:04


Brad Voorhees is the Owner and Founder of Scale TX, where he specializes in employee value propositions, talent experience strategies, and HR leadership. Brad "The Bundler" and I discuss his choice to leave corporate America, creating HR departments, and how being an entrepreneur includes sales in addition to his consulting work! #BaxtersBuzz #HumanResources #Entrepreneur "Angelic 8s: A Letter To Zara" is available: https://amzn.to/37BIX44 --- Support this podcast: https://podcasters.spotify.com/pod/show/baxter-hall/support

The Bike Shed
413: Developer Tales of Package Management

The Bike Shed

Play Episode Listen Later Jan 23, 2024 33:33


Stephanie shares her task of retiring a small, internally-used link-shortening app. She describes the process as both celebratory and a bit mournful. Meanwhile, Joël discusses his deep dive into ActiveRecord, particularly in the context of debugging. He explores the complexities of ActiveRecord querying schemas and the additional latency this introduces. Together, the hosts discuss the nuances of package management systems and their implications for developers. They touch upon the differences between system packages and language packages, sharing personal experiences with tools like Homebrew, RubyGems, and Docker. Transcript: JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way. JOËL: So, Stephanie, what's new in your world? STEPHANIE: So, this week, I got to have some fun working on some internal thoughtbot work. And what I focused on was retiring one of our just, like, small internal self-hosted on Heroku apps in favor of going with a third-party service for this functionality. We basically had a tiny, little app that we used as a link-shortening service. So, if you've ever seen a tbot.io short link out in the world, we were using our just, like, an in-house app to do that, you know, but for various reasons, we wanted to...just it wasn't worth maintaining anymore. So, we wanted to just use a purchased service. But today, I got to just, like, do the little bit of, like, tidying up, you know, in preparation to archive a repo and kind of delete the app from Heroku, and I hadn't done that before. So, it felt a little bit celebratory and a little bit mournful even [laughs] to, you know, retire something like that. And I was pairing with another thoughtbot developer, and we used a pairing app called Tuple. And you can just send, like, fun reactions to each other. Like, you could send, like, a fire emoji [laughs] or something if that's what you're feeling. And so, I sent some, like, confetti when we clicked the, "I understand what deleting this app means on GitHub." But I joked that "Actually, I feel like what I really needed was a, like, a salute kind of like thank you for your service [laughs] type of reaction." JOËL: I love those moments when you're kind of you're hitting those kind of milestone-y moments, and then you get to send a reaction. I should do that more often in Tuple. Those are fun. STEPHANIE: They are fun. There's also a, like, table flip reaction, too, is one that I really enjoy [laughs], you know, you just have to manifest that energy somehow. And then, after we kind of sent out an email to the company saying like, "Oh yeah, we're not using our app anymore for link shortening," someone had a great suggestion to make our archived repo public instead of private. I kind of liked it as a way of, like, memorializing this application and let community members see, you know, real code in a real...the application that we used here at thoughtbot. So, hopefully, if not me, then someone else will be able to do that and maybe publish a little blog post about that. JOËL: That's exciting. So, it's not currently public, the repo, but it might be at some point in the future. STEPHANIE: Yeah, that's right. JOËL: We'll definitely have to mention it on a future episode if that happens so that people following along with the story can go check out the code. 
STEPHANIE: So, Joël, what's new in your world? JOËL: I've been doing a deep dive into how ActiveRecord works. Particularly, I am debugging some pretty significant slowdowns in querying ActiveRecord models that are backed not by a regular Postgres database but instead a Snowflake data warehouse via an ODBC connection. So, there's a bunch of moving pieces going on here, and it would just take forever to make any queries. And sure, the actual reported query time is longer than for a local Postgres database, but then there's this sort of mystery extra waiting time, and I couldn't figure out why is it taking so much longer than the actual sort of recorded query time. And I started digging into all of this, and it turns out that in addition to executing queries to pull actual data in, ActiveRecord needs to, at various points, query the schema of your data store to pull things like names of tables and what are the indexes and primary keys and things like that. STEPHANIE: Wow. That sounds really cool and something that I have never needed to do before. I'm curious if you noticed...you said that it takes, I guess, longer to query Snowflake than it would a more common Postgres database. Were you noticing this performance slowness locally or on production? JOËL: Both places. So, the nice thing is I can reproduce it locally, and locally, I mean running the Rails app locally. I'm still talking to a remote Snowflake data warehouse, which is fine. I can reproduce that slowness locally, which has made it much easier to experiment and try things. And so, from there, it's really just been a bit of a detective case trying to, I guess, narrow the possibility space and try to understand what are the parts that trigger slowness. So, I'm printing timestamps in different places. I've got different things that get measured. I've not done, like, a profiling tool to generate a flame graph or anything like that. That might have been something cool to try. I just did old-school print statements in a couple of places where I, like, time before, time after, print the delta, and that's gotten me pretty far. STEPHANIE: That's pretty cool. What do you think will be an outcome of this? Because I remember you saying you're digging a little bit into ActiveRecord internals. So, based on, like, what you're exploring, what do you think you could do as a developer to increase some of the performance there? JOËL: I think probably what this ends up being is finding that the Snowflake adapter that I'm using for ActiveRecord maybe has some sort of small bug in it or some implementation that's a little bit too naive that needs to be fine-tuned. And so, probably what ends up happening here is that this finishes as, like, an open-source pull request to the Snowflake Adapter gem. STEPHANIE: Yeah, that's where I thought maybe that might go. And that's pretty cool, too, and to, you know, just be investigating something on your app and being able to make a contribution that it benefits the community. JOËL: And that's what's so great about open source because not only am I able to get the source to go source diving through all of this, because I absolutely need to do that, but also, then if I make a fix, I can push that fix back out to the community, and everybody gets to benefit. STEPHANIE: Cool. Well, that's another thing that I look forward to hearing more on the development of [laughs] later if it pans out that way. 
JOËL: One thing that has been interesting with this Snowflake work is that there are a lot of moving parts and multiple different packages that I need to install to get this all to work. So, I mentioned that I might be doing a pull request against the Snowflake Adapter for ActiveRecord, but all of this talks through a sort of lower-level technology protocol called ODBC, which is a sort of generic protocol for speaking to data stores, and that actually has two different pieces. I had to install two different packages. There is a sort of low-level executable that I had to install on my local dev machine and that I have to install on our servers. And on my Mac, I'm installing that via Homebrew, which is a system package. And then to get Ruby bindings for that, there is a Ruby gem that I install that allows Ruby code to talk to ODBC, and that's installed via RubyGems or Bundler. And that got me thinking about sort of these two separate ecosystems that I tend to work with every day. We've got sort of the system packages and the, I don't know what you want to call them, language packages maybe, things like RubyGems, but that could also be NPM or whatever your language of choice is, and realizing that we kind of have things split into two different zones, and sometimes we need both and wondering a little bit about why is that difference necessary. STEPHANIE: Yeah, I don't have an answer to that [laughs] question right now, but I can say that that was an area that really tripped me up, I think, when I was first a fledgling developer. And I was really confused about where all of these dependencies were coming from and going through, you know, setting up my first project and being, like, asked to install Postgres on my machine but then also Bundler, which then also installs more dependencies [laughs]. The lines between those ecosystems were not super clear to me. And, you know, even now, like, I find myself really just kind of, like, learning what I need to know to get by [laughs] with my day-to-day work. But I do like what you said about these are kind of the two main layers that you're working with in terms of package management. And it's really helpful to have that knowledge so you can troubleshoot when there is an issue at one or the other. JOËL: And you mentioned Postgres. That's another one that's interesting because there are components in both of those ecosystems. Postgres itself is typically installed via a system package manager, so something like Homebrew on a Mac or apt-get on a Linux machine. But then, if you're interacting with Postgres in a Ruby app, you're probably also installing the pg gem, which are Ruby's bindings for Postgres to allow Ruby to talk to Postgres, and that lives in the package ecosystem on RubyGems. STEPHANIE: Yeah, I've certainly been in the position of, you know, again, as consultants, we oftentimes are also setting up new laptops entirely [laughs] like client laptops and such and bundling and the pg gem is installed. And then at least I have, you know, I have to give thanks to the very clear error message that [laughs] tells me that I don't have Postgres installed on my machine. Because when I mentioned, you know, troubleshooting earlier, I've certainly been in positions where it was really unclear what was going on in terms of the interaction between what I guess we're calling the Ruby package ecosystem and our system level one. 
JOËL: Especially for things like the pg gem, which need to compile against some existing libraries, those always get interesting where sometimes they'll fail to compile because there's a path to some C compiler that's not set correctly or something like that. For me, typically, that means I need to update the macOS command line tools or the Xcode command line tools; I forget what the name of that package is. And, usually, that does the trick. That might happen if I've upgraded my OS version recently and haven't downloaded the latest version of the command line tools. STEPHANIE: Yeah. Speaking of OS versions, I have a bit of a story to share about using...I've never said this name out loud, but I am pretty sure that it would just be pronounced as wkhtmltopdf [laughs]. For some reason, whenever I see words like that in my brain, I want to, like, make it into a pronounceable thing [laughs]. JOËL: Right, just insert some vowels in there. STEPHANIE: Yeah, wkhtmltopdf [laughs]. Anyway, that was being used in an app to generate PDF invoices or something. It's a pretty old tool. It's a CLI tool, and it's, as far as I can tell, it's been around for a long time but was recently no longer maintained. And so, as I was working on this app, I was running into a bug where that library was causing some issues with the PDF that was generated. So, I had to go down this route of actually finding a Ruby gem that would figure out which package binary to use, you know, based off of my system. And that worked great locally, and I was like, okay, cool, I fixed the issue. And then, once I pushed my change, it turns out that it did not work on CI because CI was running on Ubuntu. And I guess the binary didn't work with the latest version of Ubuntu that was running on CI, so there was just so many incompatibilities there. And I was wanting to fix this bug. But the next step I took was looking into community-provided packages because there just simply weren't any, like, up-to-date binaries that would likely work with these new operating systems. And I kind of stopped at that point because I just wasn't really sure, like, how trustworthy were these community packages. That was an ecosystem I didn't know enough about. In particular, I was having to install some using apt from, you know, just, like, some Linux community. But yeah, I think I normally have a little bit more experience and confidence in terms of the Ruby package ecosystem and can tell, like, what gems are popular, which ones are trustworthy. There are different heuristics I have for evaluating what dependency to pull in. But here I ended up just kind of bailing out of that endeavor because I just didn't have enough time to go down that rabbit hole. JOËL: It is interesting that learning how to evaluate packages is a skill you have to learn that varies from package community to package community. I know that when I used to be very involved with Elm, we would often have people who would come to the Elm community from the JavaScript community who were used to evaluating NPM packages. And one of the metrics that was very popular in the JavaScript community is just stars on GitHub. That's a really important metric. And that wasn't really much of a thing in the Elm community. And so, people would come and be like, "Wait, how do I know which package is good? I don't see any stars on GitHub." And then, it turns out that there are other metrics that people would use. 
And similarly, you know, in Ruby, there are different ways that you might use to evaluate Ruby gems that may or may not involve stars on GitHub. It might be something entirely different. STEPHANIE: Yeah. Speaking of that, I wanted to plug a website that I have used before called the Ruby Toolbox, and that gives some suggestions for open-source Ruby libraries of various categories. So, if you're looking for, like, a JSON parser, it has some of the more popular ones. If you're looking for, you know, it stores them by category, and I think it is also based on things like stars and forks like that, so that's a good one to know. JOËL: You could probably also look at something like download numbers to see what's popular, although sometimes it's sort of, like, an emergent gem that's more popular. Some of that almost you just need to be a little bit in the community, like, hearing, you know, maybe listening to podcasts like this one, subscribing to Ruby newsletters, going to conferences, things like that, and to realize, okay, maybe, you know, we had sort of an old staple for JSON parsing, but there's a new thing that's twice as fast. And this is sort of becoming the new standard, and the community is shifting towards that. You might not know that just by looking at raw stats. So, there's a human component to it as well. STEPHANIE: Yeah, absolutely. I think an extension of knowing how to evaluate different package systems is this question of like, how much does an average developer need to know about package management? [laughs] JOËL: Yeah, a little bit to a medium amount, and then if you're writing your own packages, you probably need to know a little bit more. But there are some things that are really maybe best left to the maintainers of package managers. Package managers are actually pretty complex pieces of software in terms of all of the dependency management and making sure that when you say, "Oh, I've got Rails, and this other gem, and this other gem, and it's going to find the exact versions of all those gems that play nicely together," that's non-trivial. As a sort of working developer, you don't need to know all of the algorithms or the graph theory or any of that that underlies a package manager to be able to be productive in your career. And even as a package developer, you probably don't need to really know a whole lot of that. STEPHANIE: Yeah, that makes sense. I actually had referred to our internal at thoughtbot here, our kind of, like, expectations for skill levels for developers. And I would say for an average developer, we kind of just expect a basic understanding of these more complex parts of our toolchain, I think, specifically, like, command line tools and package management. And I think I'd mentioned earlier that, for me, it is a very need-to-know basis. And so, yeah, when I was going down that little bit of exploration around why wkhtmltopdf [chuckles] wasn't working [chuckles], it was a bit of a twisty and turning journey where I, you know, wasn't really sure where to go. I was getting very obtuse error messages, and, you know, I had to dive deep into all these forums [laughs] for all the various platforms [laughs] about why libraries weren't working. And I think what I did come away with was that like, oh, like, even though I'm mostly working on my local machine for development, there was some amount of knowledge I needed to have about the systems that my CI and, you know, production servers are running on. 
The project I was working on happened to have, like, a Docker file for those environments, and, you know, kind of knowing how to configure them to install the packages I needed to install and just knowing a little bit about the different ways of doing that on systems outside of my usual daily workflows. JOËL: And I think that gets back to some of the interesting distinctions between what we might call language packages versus system packages is that language packages more or less work the same across all operating systems. They might have a build step that's slightly different or something like that, but system packages might be pretty different between different operating systems. So, development, for me, is a Mac, and I'm probably installing system packages via something like Homebrew. If I then want that Rails app to run on CI or some Linux server somewhere, I can't use Homebrew to install things there. It's going to be a slightly different package ecosystem. And so, now I need to find something that will install Postgres for Linux, something that will install, I guess, wkhtmltopdf [laughs] for Linux. And so, when I'm building that Docker file, that might be a little bit different for Mac versus for...or I guess when you run a Docker file, you're running a containerized system. So, the goal there is to make this system the same everywhere for everyone. But when you're setting that up, typically, it's more of a Linux-like system. And so running inside the Docker container versus outside on the native Mac might involve a totally different set of packages and a different package tool. As opposed to something like Bundler, you've got your gem file; you bundle install. It doesn't matter if you're on Linux or macOS. STEPHANIE: Yes, I think you're right. I think we kind of answered our own question at the top of the show [laughs] about differences and what do you need to know about them. And I also like how you pointed out, oh yeah, like, Docker is supposed to [laughs], you know, make sure that we're all developing in the same system, essentially. But, you know, sometimes you have different use cases for it. And, yeah, when you were talking about installing an application on your native Mac and using Homebrew, but even, you know, not everyone even uses Homebrew, right? You can install manually [laughs] through whatever official installer that application might provide. So, there's just so many different ways of doing something. And I had the thought that it's too bad that we both [chuckles] develop on Mac because it could be really interesting to get a Linux user's perspective in here. JOËL: You mentioned not installing via Homebrew. A kind of glaring example of that in my personal setup is that I use Postgres.app to manage Postgres on my machine rather than using Homebrew. I've just...over the years, the Homebrew version every time I upgrade my operating system or something, it's just such a pain to update, and I've lost too many hours to it, and Postgres.app just works, and so I've switched to that. Most other things, I'll use the Homebrew version, but Postgres it's now Postgres.app. It's not even a command line install, and it works fine for me. STEPHANIE: Nice. Yeah. That's interesting. That's a good tip. I'll have to look into that next time because I have also certainly had to just install so many [laughs] various versions of Postgres and figure out what's going on with them every time I upgrade my OS. 
I'm with you, though, in terms of the packages world I'm looking for, it works [laughs]. JOËL: So, you'd mentioned earlier that packages is sort of an area that's a bit of a need-to-know basis for you. Are there, like, particular moments in your career that you remember like, oh, that's the moment where I needed to, like, take some time and learn a little bit of the next level of packages? STEPHANIE: That's a great question. I think the very beginnings of understanding how package versions work when you have multiple projects on your machine; I just remember that being really confusing for me. When I started out, like, you know, as soon as I cloned my second repo [laughs], and was very confused about, like, I'm sure I went through the process of not installing gems using Bundler, and then just having so much chaos [laughs] wrecked in my development environment and, you know, having to ask someone, "I don't understand how this works. Like, why is it saying I have multiple versions of this library or whatever?" JOËL: Have you ever sudo gem installed a gem? STEPHANIE: Oh yeah, I definitely have. I can't [laughs], like, even give a good reason for why I have done it, but I probably was just, like, pulling my hair out, and that's what Stack Overflow told me to do. I don't know if I can recommend that, but it is [chuckles] one thing to do when you just are kind of totally stuck. JOËL: There was a time where I think that that was in the READMEs for most projects. STEPHANIE: Yeah, that's a really good point. JOËL: So, that's probably why a lot of people end up doing that, but then it tends to install it for your system Ruby rather than for...because if you're using something like Rbenv or RVM or ASDF to manage multiple Ruby versions, those end up being what's using or even Homebrew to manage your Ruby. It wouldn't be installing it for those versions of Ruby. It would be installing it for the one that shipped with your Mac. I actually...you know what? I don't even know if Mac still ships with Ruby. It used to. It used to ship with a really old version of Ruby, and so the advice was like, "Hey, every repo tells you to install it with sudo; don't do that. It will mess you up." STEPHANIE: Huh. I think Mac still does ship with Ruby, but don't quote me on that [laughter]. And I think that's really funny that, like, yeah, people were just writing those instructions in READMEs. And I'm glad that we've collectively [laughs] figured out that difference and want to, hopefully, not let other developers fall into that trap [laughs]. Do you have a particular memory or experience when you had to kind of level up your knowledge about the package ecosystem? JOËL: I think one sort of moment where I really had to level up is when I started really needing to understand how install paths worked, especially when you have, let's say, multiple versions of a gem installed because you have different projects. And you want to know, like, how does it know which one it's using? And then you see, oh, there are different paths that point to different directories with the installs. Or when you might have an executable you've installed via Homebrew, and it's like, oh yeah, so I've got this, like, command that I run on my shell, but actually that points to a very particular path, you know, in my Homebrew directory. But maybe it could also point to some, like, pre-installed system binaries or some other custom things I've done. 
So, there was a time where I had to really learn about how the path shell variable worked on a machine in order to really understand how the packages I installed were sometimes showing up when I invoked a binary and sometimes not. STEPHANIE: Yeah, that is another really great example that I have memories of [laughs] being really frustrated by, especially if...because, you know, we had talked earlier about all the different ways that you can install applications on your system, and you don't always know where they end up [laughs]. JOËL: And this particular memory is tied to debugging Postgres because, you know, you're installing Postgres, and some paths aren't working. Or maybe you try to update Postgres and now it's like, oh, but, like, I'm still loading the wrong one. And why does PSQL not do the thing that I think it does? And so, that forced me to learn a little bit about, like, under the hood, what happens when I type brew install PostgreSQL? And how does that mesh with the way my shell interprets commands and things like that? So, it was maybe a little bit of a painful experience but eye-opening and definitely then led to me, I think, being able to debug my setup much more effectively in the future. STEPHANIE: Yeah. I like that you also pointed out how it was interacting with your shell because that's, like, another can of worms, right? [laughs] In terms of just the complexity of how these things are talking to each other. JOËL: And for those of our listeners who are not familiar with this, there is a shell command that you can use called which, W-H-I-C-H. And you can prefix that in front of another command, and it will tell you the path that it's using for that binary. So, in my case, if I'm looking like, why is this PSQL behaving weirdly or seems to be using the old version, I can type 'which space psql', and it'll say, "Oh, it's going to this path." And I can look at it and be like, oh, it's using my system install of Postgres. It's not using the Homebrew one. Or, oh, maybe it's using the Homebrew install, not my Postgres.app version. I need to, like, tinker with the paths a little bit. So, that has definitely helped me debug my package system more than once. STEPHANIE: Yeah, that's a really good tip. I can recall just totally uninstalling everything [laughs] and reinstalling and fingers crossed it would figure out a route to the right thing [laughs]. JOËL: You know what? That works. It's not the, like, most precise solution but resetting your environment when all else fails it's not a bad solution. So, we've been talking a lot about what it's like to interact with a package ecosystem as developers, as users of packages, but what if you're a package developer? Sometimes, there's a very clear-cut place where to publish, and sometimes it's a little bit grayer. So, I could see, you know, I'm developing a database, and I want that to be on operating systems, probably should be a system-level package rather than a Ruby gem. But what if I'm building some kind of command line tool, and I write it in Ruby because I like writing Ruby? Should I publish that as a gem, or should I publish that as some kind of system package that's installed via Homebrew? Any opinions or heuristics that you would use to choose where to publish on one side or the other? STEPHANIE: As not a package developer [laughs], I can only answer from that point of view. 
That is interesting because if you publish on a, you know, like, a system repository, then yeah, like, you might get a lot more people using your tool out there because you're not just targeting a specific language's community. But I don't know if I have always enjoyed downloading various things to my system's OS. I think that actually, like, is a bit complicated for me or, like, I try to avoid it if I can because if something can be categorized or, like, containerized in a way that, like, feels right for my mental model, you know, if it's written in Ruby or something really related to things I use Ruby in, it could be nice to have that installed in my, like, systems RubyGems. But I would be really interested to hear if other people have opinions about where they might want to publish a package and what kind of developers they're hoping to find to use their tool. JOËL: I like the heuristic that you mentioned here, the idea of who the audience is because, yeah, as a Ruby developer who already has a Ruby setup, it might be easier for me to install something via a gem. But if I'm not a Ruby developer who wants to use the packages maybe a little bit more generic, you know, let's say, I don't know, it's some sort of command line tool for interacting with GitHub or something like that. And, like, it happens to be written in Ruby, but you don't particularly care about that as a user of this. Maybe you don't have Ruby installed and now you've got to, like, juggle, like, oh, what is RubyGems, and Bundler, and all this stuff? And I've definitely felt that occasionally downloading packages sort of like, oh, this is a Python package. And you're going to need to, like, set up all this stuff. And it's maybe designed for a Python audience. And so, it's like, oh, you're going to set up a virtual environment and all these things. I'm like, I just want your command line tools. I don't want to install a whole language. And so, sometimes there can be some frustration there. STEPHANIE: Yeah, that is very true. Before you even said that, I was like, oh, I've definitely wanted to download a command line tool and be like, first install [laughs] Python. And I'm like, nope, I'm bailing out of this. JOËL: On the other hand, as a developer, it can be a lot harder to write something that's a bit more cross-platform and managing all that. And I've had to deal a little bit with this for thoughtbot's Parity tool, which is a command-line tool for working with Heroku. It allows you to basically run commands on either staging or production by giving you a staging command and a production command for common Heroku CLI tasks, which makes it really nice if you're working and you're having to do some local, some development, some staging, and some production things all from your command line. It initially started as a gem, and we thought, you know what? This is mostly command line, and it's not just Rubyists who use Heroku. Let's try to put this on Homebrew. But then it depends on Ruby because it's written in Ruby. And now we had to make sure that we marked Ruby as a dependency in Homebrew, which meant that Homebrew would then also pull in Ruby as a dependency. And that got a little bit messy. For a while, we even experimented with sort of briefly available technology called Traveling Ruby that allowed you to embed Ruby in your binary, and you could compile against that. That had some drawbacks. So, we ended up rolling that back as well. 
And eventually, just for maintenance ease, we went back to making this a Ruby gem and saying, "Look, you install it via RubyGems." It does mean that we're targeting more of the Ruby community. It's going to be a little bit harder for other people to install, but it is easier for us to maintain. STEPHANIE: That's really interesting. I didn't know that history about Parity. It's a tool that I have used recently and really enjoyed. But yeah, I think I remember someone having some issues between installing it as a gem and installing it via Homebrew and some conflicts there as well. So, I can also see how trying to decide or maybe going down one path and then realizing, oh, like, maybe we want to try something else is certainly not trivial. JOËL: I think, in me, I have a little bit of the idealist and the pragmatist that fight. The idealist says, "Hey, if it's not, like, aimed for Ruby developers as a, like, you can pull this into your codebase, if it's just command line tools and the fact that it's written in Ruby is an implementation detail, that should be a system package. Do not distribute binaries via RubyGems." That's the idealist in me. The pragmatist says, "Oh, that's a lot of work and not always worth it for both the maintainers and sometimes for the users, and so it's totally okay to ship binaries as RubyGems." STEPHANIE: I was totally thinking that I'm sure that you've been in that position of being a user and trying to download a system package and then seeing it start to download, like, another language. And you're like, wait, what? [laughter] That's not what I want. JOËL: So, you and I have shared some of our heuristics in the way we approach this problem. Now, I'm curious to hear from the audience. What are some heuristics that you use to decide whether your package is better shipped on RubyGems versus, let's say, Homebrew? Or maybe as a user, what do you prefer to consume? STEPHANIE: Yes. And speaking of getting listener feedback, we're also looking for some listener questions. We're hoping to do a bit of a grab-bag episode where we answer your questions. So, if you have anything that you're wanting to hear me and Joël's thoughts on, write us at hosts@bikeshed.fm. JOËL: On that note, shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at referrals@thoughtbot.com with any questions.
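
To make the trade-off Joël describes at the end a bit more concrete, here is a minimal, hypothetical gemspec for a Ruby command-line tool distributed through RubyGems; the name, version, and paths are invented for illustration and are not Parity's actual gemspec:

    # my_cli.gemspec -- hypothetical example of shipping a CLI as a gem
    Gem::Specification.new do |spec|
      spec.name        = "my_cli"
      spec.version     = "0.1.0"
      spec.summary     = "Example command-line tool distributed via RubyGems"
      spec.authors     = ["Example Author"]
      spec.files       = Dir["lib/**/*.rb", "exe/*"]
      spec.bindir      = "exe"              # directory holding the executable script
      spec.executables = ["my_cli"]         # installs a my_cli command onto the user's PATH
      spec.required_ruby_version = ">= 3.0" # rides on whatever Ruby the user already has
    end

Installing such a gem with "gem install my_cli" puts the executable on the user's PATH, which is exactly the trade-off discussed in the episode: users need a working Ruby environment, but maintainers avoid bundling or compiling Ruby the way a Homebrew formula that depends on Ruby would have to.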

Smart Software with SmartLogic
Package Management in Elixir vs. JavaScript with Wojtek Mach & Amal Hussein

Smart Software with SmartLogic

Play Episode Listen Later Jan 4, 2024 54:06


Today on Elixir Wizards, Wojtek Mach of HexPM and Amal Hussein, engineering leader and former NPM team member, join Owen Bickford to compare notes on package management in Elixir vs. JavaScript. This lively conversation covers everything from best practices for dependency management to API design, SemVer (semantic versioning), and the dark ages of web development before package managers existed. The guests debate philosophical differences between the JavaScript and Elixir communities. They highlight the JavaScript ecosystem's maturity and identify potential areas of improvement, contrasted against Elixir's emphasis on minimal dependencies. Both guests encourage engineers to publish packages, even small ones, as a learning opportunity. Topics discussed in this episode: Leveraging community packages rather than reinventing the wheel Vetting packages carefully before adopting them as dependencies Evaluating security, performance, and bundle size when assessing packages Managing transitive dependencies pulled in by packages Why semantic versioning is difficult to consistently enforce Designing APIs with extensibility and backward compatibility in mind Using tools like deprecations to avoid breaking changes in new releases JavaScript's preference for code reuse over minimization The Elixir community's minimal dependencies and avoidance of tech debt Challenges in early package management, such as global dependency Learning from tools like Ruby Gems and Bundler to improve experience How log files provide visibility into dependency management actions How lock files pin dependency versions for consistency Publishing packages democratizes access and provides learning opportunities Linting to enforce standards and prevent certain bugs Primitive-focused packages provide flexibility over highly opinionated ones Suggestions for improving documentation and guides Benefits of collaboration between programming language communities Links mentioned in this episode: Node.js https://github.com/nodejs npm JavaScript Package Manager  https://github.com/npm JS Party Podcast https://changelog.com/jsparty Dashbit https://dashbit.co/ HexPM Package Manager for Erlang https://hex.pm/ HTTP Client for Elixir https://github.com/wojtekmach/req Ecto Database-Wrapper for Elixir https://github.com/elixir-ecto (Not an ORM) XState Actor-Based State Management for JavaScript https://xstate.js.org/docs/ Supply Chain Protection for JavaScript, Python, and Go  https://socket.dev/ MixAudit https://github.com/mirego/mixaudit NimbleTOTP Library for 2FA https://hexdocs.pm/nimbletotp/NimbleTOTP.html Microsoft Azure https://github.com/Azure Patch Package https://www.npmjs.com/package/patch-package Ruby Bundler to manage Gem dependencies https://github.com/rubygems/bundler npm-shrinkwrap https://docs.npmjs.com/cli/v10/commands/npm-shrinkwrap SemVer Semantic Versioner for NPM https://www.npmjs.com/package/semver Spec-ulation Keynote - Rich Hickey https://www.youtube.com/watch?v=oyLBGkS5ICk Amal's favorite Linter https://eslint.org/ Elixir Mint Functional HTTP Client for Elixir https://github.com/elixir-mint Tailwind Open Source CSS Framework https://tailwindcss.com/ WebauthnComponents https://hex.pm/packages/webauthn_components Special Guests: Amal Hussein and Wojtek Mach.

Web3 Galaxy Brain
Ahmed Al-Balaghi, CEO of Biconomy

Web3 Galaxy Brain

Play Episode Listen Later Nov 29, 2023 81:04


My guest today is Biconomy co-founder and CEO Ahmed Al-Balaghi. Since 2019, Biconomy has executed over 40 million metatransactions to help devs make crypto UX easier for their users. Today, Biconomy is one of the top Account Abstraction service providers, boasting significant market share across popular EVM chains. In this episode, Ahmed and I sit down to discuss Biconomy's ERC-4337 smart accounts, Paymasters, and Bundlers as a service. We cover session keys, Passkey signers, EIP-7212, and multichain permissions. We also touch on ERC-6900 AA Module, which Biconomy and Rhinestone are collaborating on for their forthcoming Module Store, which is planned to launch in Q1 2024. It was a pleasure chatting with Ahmed about his journey building Biconomy into one of the most important players in the AA ecosystem. I hope you enjoy the show. As always, this show is provided as entertainment and does not constitute legal, financial, or tax advice or any form of endorsement or suggestion. Crypto has risks and you alone are responsible for doing your research and making your own decisions. Links Hosted by @nnnnicholas Support on Gitcoin Biconomy 4337 provider stats on BundleBear by 0xKofi Biconomy Blog Multichain Validation Module EIP-4337 Biconomy on Session Keys Biconomy x Rhinestone Module Store & blog Chapters (00:00:00) Intro (00:02:30) Interview start (00:20:00) Why AA matters (00:21:40) Ideal Biconomy Customers (00:22:00) Appchains (00:26:10) Rollups as a Service (00:29:00) How to choose between ERC-4337 Account implementations (00:42:08) Safe vs Diamond (00:44:20) AA Session keys (00:46:30) ERC-6900 AA Modules (00:50:45) Multichain Smart Accounts (00:55:45) Embedded Wallets (01:00:30) Paymasters: Which transactions should app devs subsidize? (01:06:29) Bundlers (01:07:50) Bundler aggregators (01:08:18) Builders, Bundlers, and MEV (01:09:30) Sequencer-sponsored gas (01:13:25) Signers: Passkeys, EOAs, and more (01:16:40) EIP-7212 (01:16:45) Biconomy's team & Dubai Ethereum scene (01:19:30) Outro

Rails with Jason
199 - Samuel Giddins

Rails with Jason

Play Episode Listen Later Oct 15, 2023 41:43


This week, Samuel Giddins and I discuss life on call as a developer, the upcoming RubyConf, the pitfalls of online communications, Sam's beginnings as a developer, software supply chain security, and the difference between "amicable" and "amiable." Sam will be at the Ruby Gems and Bundler open space at RubyConf in San Diego on Monday, November 13th, 2023. Samuel Giddins' Site, Samuel Giddins on Hachyderm.io, RubyGems Blog, RubyConf

Rustacean Station
rb-sys with Ian Ker-Seymer

Rustacean Station

Play Episode Listen Later Sep 28, 2023 56:10


Allen Wyma talks with Ian Ker-Seymer about his work on rb-sys, which easily allows you to integrate Ruby with Rust. Contributing to Rustacean Station Rustacean Station is a community project; get in touch with us if you'd like to suggest an idea for an episode or offer your services as a host or audio editor! Twitter: @rustaceanfm Discord: Rustacean Station Github: @rustacean-station Email: hello@rustacean-station.org Timestamps [@00:00] - Guest introduction: Ian Ker-Seymer - Staff Software Engineer at Shopify [@02:04] - The connection between Liquid and Shopify [@06:19] - The benefits of using WebAssembly [@11:14] - Exploring the languages in Shopify's stack, including Ruby [@14:24] - Rust's practical use cases [@16:44] - How Rust became part of Shopify's stack [@19:14] - Deep dive into rb-sys [@24:17] - RubyGems and Bundler: insights and considerations [@36:41] - Integrating Rust into the stack [@40:52] - Addressing challenges with Windows compilation [@47:46] - Spotlight on rb-sys: why it's worth exploring Credits Intro Theme: Aerocity Audio Editing: Plangora Hosting Infrastructure: Jon Gjengset Show Notes: Plangora Hosts: Allen Wyma

programmier.bar – der Podcast für App- und Webentwicklung
News 38/23: Bun 1.0 // Flutter 3.13 // PowerSync // Jetpack Compose Multiplatform // Astro 3.0 // Unity Fee // Node 20.6

programmier.bar – der Podcast für App- und Webentwicklung

Play Episode Listen Later Sep 20, 2023 38:44


The latest Flutter version, 3.13, brings preview versions of the Impeller runtime to new platforms and now supports two-dimensional scrolling. PowerSync is a new offline-first database solution that, while still in beta, is the first to offer an SDK for Flutter and can be integrated with Supabase. JetBrains is bringing its cross-platform solution Jetpack Compose Multiplatform to additional platforms as a preview, and it now supports popups and dialogs directly in the framework. The new version 3.0 of Astro ships an exciting update: View Transitions. It also includes image optimization, better render performance, and SSR improvements for serverless. Unity caused quite a stir with its "Runtime Fee," a new pricing model. The new model is based on a cost per installation, which could lead to financial problems for many development studios. The current version of Node, 20.6, now supports loading .env files, which makes the widely used dotenv package unnecessary. Bun is the high-performance JavaScript runtime that is also a bundler, dependency manager, and script runner, and it is now making a very public move to version 1.0. Write to us! Send us your topic requests and feedback: podcast@programmier.bar Follow us! Stay up to date on future episodes and virtual meetups, and join the community discussions. Twitter Instagram Facebook Meetup YouTube

Beyond The Horizon
Let's Meet The Epstein Connected Billionaire Bundler Reid Hoffman (9/01/23)

Beyond The Horizon

Play Episode Listen Later Sep 1, 2023 11:29


Jeffrey Epstein had many, many powerful friends and supporters, and the vast majority of those people stuck with him even after his conviction. One such man was Reid Hoffman. So, who is Reid Hoffman? In this episode, we are going to take a look at the man and how he has not only helped to refurbish Epstein's reputation, but how he has also poured hundreds of thousands of dollars into American politics in support of the Democrats. Reid Hoffman is an American entrepreneur, venture capitalist, and author known for his influential contributions to the tech industry. Here is a summary of his key attributes and achievements: Co-founder of LinkedIn: Reid Hoffman is best known as one of the co-founders of LinkedIn, a professional networking platform that has revolutionized the way people connect and find job opportunities online. LinkedIn has grown into a global platform with millions of users. Accomplished Investor: Hoffman is a prominent venture capitalist and has invested in numerous successful tech companies, including PayPal, Airbnb, and Facebook. He is associated with venture capital firms like Greylock Partners and was instrumental in their investments. Author and Thought Leader: Hoffman has written books and articles that offer insights into entrepreneurship, leadership, and the future of work. His book "The Start-up of You" encourages individuals to think of themselves as entrepreneurs of their own careers. Philanthropy: Hoffman is actively involved in philanthropic endeavors. He has donated to various causes, including education and social impact initiatives. He has also signed The Giving Pledge, committing to donate the majority of his wealth to charitable causes. Educational Background: Reid Hoffman holds degrees from prestigious institutions, including a bachelor's degree in Symbolic Systems from Stanford University and a master's degree in Philosophy from the University of Oxford, where he was a Marshall Scholar. Thoughtful Networker: Hoffman is known for his extensive network of influential contacts in the tech industry. He leverages these connections to provide mentorship and support to emerging entrepreneurs and startups. Entrepreneurship Advocate: He is a strong advocate for entrepreneurship and innovation, frequently speaking at conferences and events, and serving on boards and advisory panels for organizations focused on promoting entrepreneurship. AI and Ethics: Hoffman has also been involved in discussions around the ethical implications of artificial intelligence and has shared insights on the responsible development and use of AI technologies. (commercial at 7:41) To contact me: bobbycapucci@protonmail.com Source: Billionaire who visited Epstein island pours thousands into coffers of vulnerable Dem Senate races | Fox News This show is part of the Spreaker Prime Network; if you are interested in advertising on this podcast, contact us at https://www.spreaker.com/show/5080327/advertisement

The Epstein Chronicles
Let's Meet The Epstein Connected Billionaire Bundler Reid Hoffman (8/31/23)

The Epstein Chronicles

Play Episode Listen Later Aug 31, 2023 11:29


Jeffrey Epstein had many, many powerful friends and supporters, and the vast majority of those people stuck with him even after his conviction. One such man was Reid Hoffman. So, who is Reid Hoffman? In this episode, we are going to take a look at the man and how he has not only helped to refurbish Epstein's reputation, but how he has also poured hundreds of thousands of dollars into American politics in support of the Democrats. Reid Hoffman is an American entrepreneur, venture capitalist, and author known for his influential contributions to the tech industry. Here is a summary of his key attributes and achievements: Co-founder of LinkedIn: Reid Hoffman is best known as one of the co-founders of LinkedIn, a professional networking platform that has revolutionized the way people connect and find job opportunities online. LinkedIn has grown into a global platform with millions of users. Accomplished Investor: Hoffman is a prominent venture capitalist and has invested in numerous successful tech companies, including PayPal, Airbnb, and Facebook. He is associated with venture capital firms like Greylock Partners and was instrumental in their investments. Author and Thought Leader: Hoffman has written books and articles that offer insights into entrepreneurship, leadership, and the future of work. His book "The Start-up of You" encourages individuals to think of themselves as entrepreneurs of their own careers. Philanthropy: Hoffman is actively involved in philanthropic endeavors. He has donated to various causes, including education and social impact initiatives. He has also signed The Giving Pledge, committing to donate the majority of his wealth to charitable causes. Educational Background: Reid Hoffman holds degrees from prestigious institutions, including a bachelor's degree in Symbolic Systems from Stanford University and a master's degree in Philosophy from the University of Oxford, where he was a Marshall Scholar. Thoughtful Networker: Hoffman is known for his extensive network of influential contacts in the tech industry. He leverages these connections to provide mentorship and support to emerging entrepreneurs and startups. Entrepreneurship Advocate: He is a strong advocate for entrepreneurship and innovation, frequently speaking at conferences and events, and serving on boards and advisory panels for organizations focused on promoting entrepreneurship. AI and Ethics: Hoffman has also been involved in discussions around the ethical implications of artificial intelligence and has shared insights on the responsible development and use of AI technologies. (commercial at 7:41) To contact me: bobbycapucci@protonmail.com Source: Billionaire who visited Epstein island pours thousands into coffers of vulnerable Dem Senate races | Fox News This show is part of the Spreaker Prime Network; if you are interested in advertising on this podcast, contact us at https://www.spreaker.com/show/5003294/advertisement

Rooftop Ruby Podcast
23: Head of Open Source at Ruby Central André Arko

Rooftop Ruby Podcast

Play Episode Listen Later Aug 30, 2023 46:43 Transcription Available


Ruby Central head of open source André Arko talks Bundler, Ruby Gems, supporting the community, and more. André Arko will be speaking at RubyConf 2023 this year. Support Bundler/RubyGems open source work via Ruby Central. Follow us on Mastodon: Rooftop Ruby, Collin, Joel. Show art created by JD Davis.

The Bike Shed
397: Dependency Graphs

The Bike Shed

Play Episode Listen Later Aug 15, 2023 42:53


Stephanie is consciously trying to make meetings better for herself by limiting distractions. A few episodes ago, Joël talked about a frustrating bug he was chasing down and couldn't get closure on, so he had to move on. This week, that bug popped up again and he chased it down! AND he got to use binary search to find its source–which was pretty cool! Together, Stephanie and Joël discuss dependency graphs as a mental model, and while they apply to code, they also help when it comes to planning tasks and systems. They talk about coupling, cycles, re-structuring, and visualizations. Ruby Graph Library (https://github.com/monora/rgl) Graphviz (https://graphviz.org/) Using a Dependency Graph to Visualize RSpec let (https://thoughtbot.com/blog/using-a-dependency-graph-to-visualize-rspec-let) Mermaid.js (https://mermaid.js.org/) Strangler Fig pattern (https://martinfowler.com/bliki/StranglerFigApplication.html) Transcript: JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way. JOËL: So, Stephanie, what's new in your world? STEPHANIE: So, I'm always trying to make meetings better for me [chuckles], more tolerable or more enjoyable. And in meetings a lot, I find myself getting distracted when I don't necessarily want to be. You know, oftentimes, I really do want to try to pay attention to just what I'm doing in that meeting in the moment. In fact, just now, I was thinking about the little tidbit I had shared on a previous episode about priorities, where really, you know, you can only have one priority [laughs] at a time. And so, in that moment, hopefully, my priority is the meeting that I'm in. But, you know, I find myself, like, accidentally opening Slack or, like, oh, was I running the test suite just a few minutes before the meeting started? Let me just go check on that really quick. And, oh no, there's a failure, oh God, that red is really, you know, drawing my eye. And, like, could I just debug it really quick and get that satisfying green so then I can pay attention to the meeting? And so on and so forth. I'm sure I'm not alone in this [laughs]. And I end up not giving the meeting my full attention, even though I want to be, even though I should be. So, one thing that I started doing about a year ago is origami. [laughs] And that ended up being a thing that I would do with my hands during meetings so that I wasn't using my mouse, using my keyboard, and just, like, looking at other stuff in the remote meeting world that I live in. So, I started with paper stars, made many, many paper stars, [laughs] and then, I graduated to paper cranes. [laughs] And so, that's been my origami craft of choice lately. Then now, I have little cranes everywhere around the house. I've kind of created a little paper crane army. [laughs] And my partner has enjoyed putting them in random places around the house for me [laughs] to find. So, maybe I'll open a cabinet, and suddenly, [laughs] a paper crane is just there. And I think I realized that I've actually gotten quite good at doing these crafts. And it's been interesting to kind of be putting in the hours of doing this craft but also not be investing time, like, outside of meetings. And I'm finding that I'm getting better at this thing, so that seemed pretty cool. 
And it is mindless enough that I'm mentally just paying attention but, yeah, like, building that muscle memory to perfecting the craft of origami. JOËL: I'm curious, for your army of paper cranes, is there a standard size that you make, or do you have, like, a variety of sizes? STEPHANIE: I have this huge stack of, like, 500 sheets of origami paper that are all the same size. So, they're all about, let's say, two or three inches large. But I think the tiny ones I've seen, really small paper cranes, maybe that would be, like, the next level to tackle because working with smaller paper seems, you know, even more challenging. JOËL: I'd imagine the ratio of, like, paper thickness to the size of the thing that you're making is different. STEPHANIE: At this point, they say that if you make 1,000, then you bring good luck. I think I'm well on my way [laughs] to hopefully being blessed with good luck in this household of my little paper crane army. JOËL: It's interesting that you mentioned the power of having something tactile to do with your hands during a meeting, and I definitely relate to that. I feel like it's so easy, even, like, mindlessly, to just hit Command-Tab when I'm doing things on a screen. Like, my hands are on the keyboard. If I'm not doing something, I'm just going to mindlessly hit Command-Tab. It's kind of like on your phone sometimes. I don't know if you do this, like, just scrolling side to side. You're not actually doing anything. You just want motion with your fingers. STEPHANIE: Yes. I know exactly what you're talking about. And it's funny because it's a bit of a duality where, you know, when you are in your development workflow, you want things to be as quick and convenient as possible, so that Command-Tab, you know, is very easy. It's just built in, and that helps speed up your, you know, day-to-day work. But then it's also that little bit of mindlessness, I think, that can get you down the distraction path. When I was first looking for something to do with my hands, to have, like, a little tactile thing to keep me focused in meetings, I did explore getting one of those fidget cubes; I have to say. [laughs] It's just a little toy, you know, that comes with a bunch of different settings for you to fidget with. There's, like, a ball you can roll, you know, with your thumb, or maybe some buttons to click, and it gives you that really satisfying tactile experience. And I know they work really well for a lot of people, but I've really enjoyed the, I guess, the unexpected benefits [chuckles] of getting better at a hobby [laughs] while spending my time at my work. Joël, what is new with you? JOËL: So, a few episodes ago, I talked about a really kind of frustrating bug that I was chasing down that was due to some, like, non-determinism in the environment. And it kind of came, and then it went away. And I wasn't able to get sort of closure on that and had to move on. Well, this week, that bug popped up again, and this time, I was actually able to chase it down. So, that felt really exciting. And I got to use binary search to try to find the source of it, which made me feel really cool. STEPHANIE: Oooh, do tell. What ended up being the issue? JOËL: I'm connecting to an external Snowflake data warehouse, and ActiveRecord tries to fetch the schema and crashes as part of that with some cryptic error that originates from the C extension ODBC Ruby driver package. 
I figured out that it's probably something to do with, like, a particular table name or something in the table metadata when we're pulling this schema that we're not happy about. But I don't know which table is the one that it's not happy with. Well, this time, I was able to figure out, by reading through some of the documentation, that I can pull subsets of the schema. So, I can pull the first n values of that schema, and it won't crash. It only crashes if I try to fetch the entire set, which is what is happening under the hood. At that point, you know, I could fetch each row individually, but there's hundreds of these. So, you know, I try, okay, what happens if I try to fetch 1,000 of these? Is it going to crash? Because it's a massive system. So, yes, I get a crash. So, I know that a table less than a thousandth in the list of tables is what's causing the problems. So, okay, fetch 500 halfway in between there. It's still going to crash. Okay, 250, 125. I then kind of keep halving all the time until I find one that doesn't crash. And now I know that it is somewhere between the last crash and this one. So, I think it was between 125 and 250. And now I can say, okay, well, let's fetch the first, you know, maybe 200 tables, okay, that crashes. And I keep halving that space until you finally find it. And then, like, okay, so it's this one right here. Now, the problem is the bad table actually crashes. So, I think it ended up being, like, number 175 or something like that. So, I never get to see the actual table itself. But because the list of tables is in alphabetical order, and I can see because I can fetch the first 174 and it succeeds, so I can tell what the previous 5, 6, you know, previous 174 are. I can pretty easily go and look at the actual database and the list of tables and say, okay, well, it's in the same order. And the next one is this one, and hey, look, there is some metadata there that has some very long fields that are longer than one might expect, specifically going over a potentially implied 256-character limit. That seems somewhat suspicious. And, oh, if we remove this table, all of a sudden, everything works. STEPHANIE: Wow, binary search, an excellent debugging tool [laughs] when you have no idea, you know, what could possibly be causing your issue. JOËL: It's such a cool tool. Like, I'm always so happy when I get a chance to use it. The problem is, you need a way to be able to answer the question, like, have I found it? Yes or no? Or, generally, is it greater or less than this current position? STEPHANIE: Well, that's really exciting that you ended up figuring out how to solve the bug. I know last time we talked about it, you kind of had left off in a space of, hopefully, we won't run into this issue again because it's no longer happening. But it seems like you were also set up this time around to be able to debug once it cropped up again. JOËL: Yes. So, binary search is really cool. It's got this, like, very, like, fancy computer science name. But in reality, it's a fairly simple, straightforward technique that I use fairly frequently in my development. And there's another kind of computer sciency fancy-sounding concept that I use all the time. You've all heard me reference this multiple times on the show. You're right; we're finally doing it. This is the dependency graph episode. STEPHANIE: Woo. [laughter] It's time. 
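A rough Ruby sketch of the search Joël describes, for anyone following along at home. Nothing here is from the episode's actual code: `fetch_first` is a hypothetical stand-in for the schema call that loads metadata for the first `count` tables and raises when the bad table is included.

```ruby
# Hypothetical stand-in: try to load schema metadata for the first `count`
# tables, rescuing the crash that the bad table triggers.
def crashes?(count)
  fetch_first(count) # assumed helper wrapping the real schema call
  false
rescue StandardError
  true
end

# Binary search for the zero-based index of the first table that crashes.
# Invariant: fetching `low` tables is known to be safe; fetching `high` crashes.
def first_bad_table_index(table_count)
  low  = 0
  high = table_count

  while high - low > 1
    mid = (low + high) / 2
    if crashes?(mid)
      high = mid
    else
      low = mid
    end
  end

  high - 1
end
```

Because the list of tables comes back in alphabetical order, knowing that index is enough to identify the offending table, even though fetching it directly still crashes.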
I'm excited to really dig into it because, you know, as someone who has heard you talk about it a lot, you know, and is maybe a little less familiar with graph theory and how, you know, it can be applied to my day to day work, I'm really excited to dig into a little bit about, you know, what a regular developer needs to know about dependency graphs to add to their toolbox of skills. JOËL: So, I think at its core, the idea of a dependency graph is that you have a group of entities, some of which depend on each other. They can't do a task, or they can't be created unless some other subtasks or dependent actions take place. And so, we have a sort of formal structural way of describing these things. Visually, we often draw these things out where each of the pieces is like a little bubble or a circle, and then we draw arrows towards the things that it depends on. So, if A cannot be done without B being done first, we draw an arrow from A to B. That's kind of how it is in the abstract. More concretely, this kind of thing shows up constantly throughout the work that we do because a lot of what we do as developers is managing things that are connected to each other or that depend on each other. We build complex systems out of smaller components that all rely on each other. STEPHANIE: Yeah, I think it's interesting because I use the word dependency, you know, very frequently when talking about normal work that I'm doing, you know, dependencies as in libraries, right? That we've pulled into our application, or dependencies, like, talking about other classes that are referenced in this class that I'm working in. And I never really thought about what could be explored further or, like, what could be learned from really digging into those connections. JOËL: It's a really powerful mental model. And, like you said, dependencies exist all over our work, and we often use that word. So, you mentioned something like packages, where your application depends on Rails, which in turn depends on ActiveRecord, which in turn depends on a bunch of other things. And so, you've got this whole chain of maybe immediate dependencies, and then those dependencies have dependencies, and those dependencies have dependencies, and it kind of, like, grows outward from there. And in a very kind of simplistic model, you might think, oh, well, it's more, like, a kind of a tree structure. But oftentimes, you'll have things like branches on one side that connect back to branches on the other. And now you've got something that's no longer really tree-like. It's more of a sort of interconnected web, and that is a graph. STEPHANIE: I think understanding the dependencies of your system has also become more important to me as I learn about things that can go wrong when I don't know enough about what my system is, you know, relying on that I had kind of taken for granted previously. I'm especially thinking about packages like we were mentioning, and, you know, not realizing that your application is dependent on this other library, right? That's brought in by a gem that you're using. And there's maybe, like, a security issue, right? With that. And suddenly, you have this problem on your hands that you didn't realize before. And I know that that has been more of a common discussion now in terms of security practices, just being more aware of all the things that you are depending on as really our work becomes more and more interconnected with the things available to us with open source. 
JOËL: I think where understanding the graph-like nature of this becomes really important is when you're doing something like an upgrade. So, let's say you do have a gem that has a security problem, and you want to upgrade it to fix that security issue. But the upgrade that includes the security patch is also a breaking upgrade. And so, now everything else in your system that depends on that gem or on that package is going to break unless you have them in a version that is compatible with the new version of that gem. And so, you might have to then go downstream and upgrade those packages in a way that's compatible with your app before you can bring in the security patch. And a lot of that can be done automatically by Bundler. Bundler is software that is built around navigating dependency graphs like that and finding versions that are compatible with each other. But sometimes, your code will need to change in order to upgrade one of these downstream gems so that you can then pull in the upgrade from the gem that needs a security patch. And so, understanding a little bit of that graph is going to be important to safely upgrading that gem. STEPHANIE: So, I know another application of dependency graphs that you have thought about and written a blog post for is RSpec let declarations and how a lot of the time when we are using let, you know, we are likely calling other variables defined by let. And so, when you are encountering a test file, it can be really hard to grok what data is being set up in your test. JOËL: Yeah, so that is really interesting because you can define something that will get executed in a lazy fashion if it gets referenced. But then not only is the let lazy and will not trigger unless it's referenced, but a let can reference other lets, which are also lazy, and only get triggered if they get referenced. So, you might have a bunch of lets defined in any order you want throughout a file, and they're all kind of interconnected with these references to each other. But they only get triggered if something calls it directly or it's in this, like, chain of dependencies. And getting a grasp on what actually gets created, which lets will actually execute, which ones don't in a file can quickly get out of hand. And so, thinking of this in terms of a dependency graph has been a really helpful mental model for me to understand what's going on in a complex test file. STEPHANIE: Yeah, absolutely. Especially when sometimes the lets are coming from all over the place, you know, maybe a describe block hundreds of lines away, or even a completely different file if you are using a shared context that's being pulled in. So, I can see why this was a complex problem that could be made a little simpler with plotting out a dependency graph. And in preparation for this episode, I was doing a little bit of my own exploration on this because I certainly know, you know, the pain of trying to figure out what is being executed in my tests when there are a lot of lets that reference each other. And in the blog post, you kind of gave a little step-by-step of how you could start with creating a dependency graph for the test that you're working with. And I was really curious if this process could be automated because, you know, I do enjoy, you know, pulling out the pen and paper [chuckles] every now and then. But I'm not, like, a particularly visual person. God forbid I, like, draw a circle, but then, like, don't have enough space for the rest of the circles. 
[laughs] So, I was really hoping for a tool that could do this for me, especially if, you know, you do, you have a lot of tests that you have to try to understand in a relatively short amount of time. And so, I ended up doing something kind of hacky with RSpec and overriding let definitions to automate this process. JOËL: That's really cool. So, is the tool that you're trying to build something where you feed it in a spec file, and it gives you some kind of graphical representation like an SVG or something as output? STEPHANIE: Yeah. I did consider that approach first, where you feed in the file, but then I ended up going with something more dynamic where you are running the test, and then as it gets executed, tracing the let definitions and then registering them to build your dependency graph. JOËL: So, you've got some sort of internal modeling that describes a dependency graph. And then, somehow, you're going to turn that, you know, a series of Ruby objects into some kind of visual. STEPHANIE: Yeah, exactly. And the bulk of that work was actually done with a library called RGL, which stands for just Ruby Graph Library. [laughs] And what's nice is that it has a really easy interface for plugging in the vertices and edges of the dependency graph that you want to build. And then, it is already hooked up with Graphviz to, you know, write the SVG to a file. And so, I ended up really just having to build up an array of my dependencies and the connections to each other and then feed it into the constructor of the graph. JOËL: And for all of our listeners, you mentioned Graphviz. That is a third-party tool that can be installed on your machine that can generate these SVG diagrams from...I believe it has its own sort of syntax. So, you create, I believe it's dot, D-O-T, so dot dot file. And based off of that, it generates all sorts of things, but SVG being potentially one of them. STEPHANIE: Yeah. The nice thing was that I actually didn't end up having to use the DSL of Graphviz because the RGL gem was doing them for me. JOËL: Nice. So, it plugs in directly. STEPHANIE: Yeah, exactly. And I was really curious about using this gem because I, you know, just wanted to write Ruby, especially to plug into other things that are already in Ruby. And I found that surprisingly easy, thanks to all of the RSpec config options that they make available to you, including an option to extend the example group class, which is actually where let and let bang is defined. And so, I ended up overriding those classes and using, you know, the name of the let that you're defining and then the block to basically register the dependencies. And I also ended up exploring a little bit with using Ruby's built-in parser to figure out in the block that's being passed to the let, what parts of that block could potentially be a reference to another let. JOËL: That's really cool. Did you get any fun results from that? STEPHANIE: I did. It worked pretty well in being able to capture all of the let declarations, and other lets that it references. And so, I was able to successfully, you know, like, generate a visual dependency graph of all of the lets, so that was really neat. The part that I was really kind of excited about trying next, though I didn't end up having time to yet, was figuring out which of those let values are executed by way of the let bang, right? Which is eager or what is referenced in the test that then gets executed as well. 
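A bare-bones sketch of what the drawing half of that tool might look like with RGL. The let names and their references below are invented; in the real tool, the tracking step Stephanie describes would have collected them first by hooking into the example group where let is defined.

```ruby
require "rgl/adjacency"
require "rgl/dot"

# Pretend output of the tracking step: each let name mapped to the other
# lets its block references. These names are purely illustrative.
let_dependencies = {
  post:    [:user],
  user:    [:account],
  account: [:plan],
  plan:    []
}

graph = RGL::DirectedAdjacencyGraph.new

let_dependencies.each do |name, references|
  graph.add_vertex(name)
  references.each { |reference| graph.add_edge(name, reference) }
end

# Requires Graphviz to be installed locally; writes let_graph.dot and
# let_graph.svg showing the spec file's let dependency graph.
graph.write_to_graphic_file("svg", "let_graph")
```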
And so, the RGL library is pretty neat and has some formatting options, too, with the Graphviz output. So, you can change the font color or styling options for different, you know, nodes and edges. And so, I was really curious to pursue this further, maybe, and use it to show exactly what gets evaluated now that I have successfully mapped my let graph. JOËL: Right. Because the whole point of this exercise is that not the entire graph is going to get evaluated. The underlying question is, what data actually gets created when my test runs? And so, you build out this whole dependency graph, and then you can follow a few simple rules to say, okay, this branch gets called, this branch gets called, this series of things gets called. And okay, this subset of let blocks trigger, and therefore this data has been created for my given test. STEPHANIE: Yeah. Though I will say that even where I got so far to, just seeing all of the let definitions in a spec file was really helpful to have a better understanding, you know, if I do have to add a test in here, and I'm thinking about reaching for a pre-existing let declaration, to be like, oh, like, it actually, you know, goes on to reference all of these other things that may be factories [chuckles] that are created might make me, you know, think twice, or just have a little better understanding of what I'm really dealing with. JOËL: Right. The idea that when you're calling out to a let, or a factory, or something else that's just a node in a large graph, you're not necessarily referencing just one thing. You might actually be referencing the head of a very long chain of things that maybe you don't intend to trigger the whole thing. STEPHANIE: Yeah, exactly. JOËL: So, in that sense, having a sort of visual or at least an idea of the graph can give you a much better sense of the cost of certain operations that you might have to do. STEPHANIE: The cost of the operations certainly, especially when, you know, you are working in a legacy codebase, and you, you know, like, maybe don't know how everything plays together or is connected. And it's very tempting to just reach for [chuckles] the things that have been, you know, created or built for you. And I'm certainly guilty of that sometimes on this client project, where the domain is so complex, and there are so many associated models. And I'm like, well, like, let me just, you know, use this let that already, you know, has a factory set up for what I think I need for this test. But then realizing, oh, actually, like, it is creating all these things, and do I really need them? I think it can be really challenging to unravel all of that in your head. And so, with this very scrappy tool that I [chuckles] built for my own purposes, you know, maybe it makes it, like, one step easier to try to fully understand what I'm working with and maybe do something different. JOËL: One aspect that I think is really powerful about dependency graphs is that it takes this kind of, like, abstract concept that we oftentimes have an intuitive sense around, the idea that we have different components that depend on each other, and it shows it to us visually on, like, a 2D plane. And that can be really helpful to get an understanding or an overview of a system. You mentioned that RGL uses Graphviz to generate some SVGs. A visual tool that I've been using to draw some of my dependency graphs has been mermaid.js. It has a syntax that's, like, a text-based syntax, but it's almost visual in that you have a piece of text and name of a node. 
And then, you'll draw a little ASCII arrow, you know, two dashes and a greater than sign to say this thing depends on, and then write another name, and just have a row, like, a bunch of entries to say; A depends on B. A also depends on C. C depends on D, and so on, and, like, build up that list. And then Mermaid will just generate that diagram for you. STEPHANIE: Yeah. I've used Mermaid a few times. One really helpful use that I had for it was diagramming out a bunch of React components that I had and wanting to understand the connections between them. And I think you can even paste the Mermaid syntax into your GitHub pull request description, and it'll render as the graph image. JOËL: Yeah, that's what's really cool is that Mermaid syntax has become embedded in a lot of other places in the past few years. So, it's really easy to embed graphs now into all sorts of things. You mentioned GitHub. It works in pull request descriptions, comments, I think pretty much anywhere that Markdown is accepted. So, you could put one in your README if you wanted. Another place that I use a lot, Obsidian, my note-taking tool, allows me to embed graphs directly in there, which is really much nicer than previously; sometimes, when I wanted to express something as a visual, I would use some sort of drawing tool to do something and export an image, and then embed that in my note. But now I can just put in this text, and it will automatically render that as a diagram. And part of what's really nice about that is that then it's really easy for me to go and change that if I'm like, oh, but actually, I want to add one more connection in here. I don't have to re go back to, hopefully, a file that I've saved somewhere and, like, change an image file and re-export it. I just, you know, I add one line of text to my note, and it just works. STEPHANIE: That's awesome. Yeah, the ability to change it seems really useful. So, we've talked a little bit about tools for creating a visual aid for understanding our dependencies. And now that we have our graph, maybe we might have some concerning observations about what we see, especially when perhaps some of our dependencies are pointing back to each other. JOËL: Yes. So, I think you're referencing cycles, in particular. That would be the formal term for it. And those are really interesting. They happen in dependency graphs. And I would say, in many cases, they can be a bit of a smell. There's definitely situations where they're fine. But there are things that you look at, and you're like, okay, this is going to be a more complex kind of tricky bit of the graph to work with. Some cases, you just straight up can't have them. So, I want to say that the way RSpec lets are set up, you cannot write code that produces cycles. But you might have...I think Ruby allows classes to reference each other in such a way that it creates a cycle, and not all languages do that. So, Elm and F#, I believe, require that modules cannot reference each other. The fancy term for this is a directed acyclic graph, or DAG, which basically just means that there are no cycles in that graph. STEPHANIE: Yeah. What you said about classes referencing each other is very interesting because I've definitely seen that. And then, if I have to go about changing something, maybe even it's just the class name, right? Now there's no way in which I can really make just one change. I have to kind of do it all in one go.
JOËL: I think that's a common property of a cycle, and a graph is that changes that happen somewhere in that cycle often need to be all shipped together as one piece. You can't break it up into smaller chunks because everything depends on everything else. So, it has to be kind of boxed together and shipped as one thing. STEPHANIE: And you'd mentioned that cycles, you know, can be a bit of a code smell. And if the goal is to be able to break it up so that it is a little bit more manageable to work with, how would you go about breaking a cycle? JOËL: So, I think breaking a cycle is going to vary a little bit based on your problem domain. So, are you modeling a series of classes that are referencing each other? Is this a function call graph? Is this even, like, a series of tasks that you're trying to do? But typically, what you want to do is make sure that eventually, at some point, like, something doesn't loop back to referencing something higher up in your hierarchy. And so, oftentimes, it ends up being about what is allowed to know about what? Do you have higher-level concepts that can know and depend on lower-level concepts but not vice versa? And again, we are talking about this a little bit at the abstract level. But in terms of, let's say, different code modules, or classes, or something like that, commonly, you might say, well, we want some sort of layering where we have almost, like, more primitive types of classes at the bottom. And they don't get to know about anything above them. But the ones above that might be more complex that are composed of smaller pieces know about the ones below them. And you might have multiple layers kind of like that that all kind of point down, but nothing points up. STEPHANIE: That is a very common heuristic. [chuckles] I think you were basically just describing how I also understand creating React components, where you want to separate your presentational ones from your functional ones. And, yeah, it makes a lot of sense that as soon as you start adding that complexity of, you know, those primitive classes at the bottom, starting to, you know, point to things higher up or to know about things higher up, that is where a cycle may be accidentally introduced. JOËL: It's interesting just how many design principles that we have in software. If you dig into them a little bit, you find out that they're about decoupling things, and oftentimes, it's specifically breaking up cycles. So, one way that you might have something like this that actually has dependency in the name, the dependency inversion principle, where what you're effectively doing is you're taking one of those dependency arrows, and you're flipping it the other way. So, instead of A depending on B, you're flipping it. Now B depends on A, and that can be enough to break a cycle. STEPHANIE: So, one thing I've picked up from our conversations about dependency graphs is that oftentimes, you know, when you're trying to figure out where to start, you want to look for those areas or those nodes where there's nothing else that depends on it. JOËL: Yeah. I think you have those nodes that, if this were a tree, you would call them the leaf nodes. In the case of a graph, I'm not sure if that's technically correct, but they don't depend on anything. They're kind of your base case. And so, you can, you know, if it's a function, you can run it. If it's a file, you can load it; if it's a class, also you can load it up and not have to do anything else because it has no dependencies. 
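To make that arrow-flipping concrete, here is one possible Ruby sketch of the dependency inversion principle. The Invoice and Mailer classes are invented for illustration and are not from the episode.

```ruby
# Before inversion (sketched in a comment): Invoice reaches directly into
# Mailer, so the arrow points Invoice -> Mailer. If Mailer elsewhere reaches
# back into Invoice, the two form a cycle and must always ship together.
#
#   class Invoice
#     def finalize
#       Mailer.deliver(self)
#     end
#   end
#
# After inversion: Invoice only knows about a notifier it is handed, so the
# concrete Mailer now depends on Invoice's small interface instead of the
# other way around. The arrow has been flipped.
class Invoice
  def initialize(notifier:)
    @notifier = notifier
  end

  def finalize
    # ...do the real work, then announce completion...
    @notifier.call(self)
  end
end

class Mailer
  def self.deliver(invoice)
    puts "emailing #{invoice}"
  end
end

Invoice.new(notifier: Mailer.method(:deliver)).finalize
```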
And knowing that those are there, I think, can be really useful in terms of knowing an order you might want to execute something in. And this is really interesting for one of my favorite uses of a graph, which is breaking down a series of tasks that you need to do. So, commonly, you might say, okay, I have a large task I need to do. I break it down into a series of subtasks. And, you know, maybe I draw out, like, a bulleted list and, you know, task 1, 2, 3, 4, 5. The problem is that they're not necessarily just a flat list. They all have, like, orders, like dependencies between each other. So, maybe one has to happen before 2, but it also has to happen before 3, which needs to happen before two, and, like, there's all these interconnections. And then, you find out that you can't ship them independently the way you thought initially. So, by building up a graph, you end up with something that shows you exactly what depends on what. And then, like you said, the parts that are really interesting where you can start doing work are the ones that have no dependencies themselves. Other things might depend on them, but they have no dependencies. Therefore, they can be safely built, shipped, deployed to production, and they can be done independently of the other subtasks. STEPHANIE: Yeah. I was also thinking about things that could be done in parallel as well. So, if you do have multiple of those items with no dependencies, like, that is a really good way to be able to break up that work and, yeah, identify things that are not blocked. JOËL: For a complex set of tasks, it's great to see, okay, these two pieces have no dependencies. We can have them be done in parallel, shipped independently. And then you can just kind of keep repeating that process. Because once all of the tasks that have no dependencies have been done, well, you can almost, like, remove them from the graph and see, okay, what's the new set of things that have no dependencies? And then, keep doing that until you've eventually done the whole graph. And that may sound like, oh okay, we're just kind of using a little bit of intuition and working through the graph. It turns out that this is a, like, actual, like, formal thing. When it comes to graphs, it's a traversal algorithm called topological sort is the fancy name for it, and it basically, yeah, it goes through that. It gives you a list of nodes in order where each node that you're given has no dependencies that have not been evaluated yet. So, it works from effectively to use our tree terminology, from the leaf nodes to the root, potentially roots plural, of the graph, and each step is independent. So that's a lot of, like, fancy terminology, and getting a little bit of, like, computer science graph theory into here. So, my, like, general heuristic is that graphs should be evaluated from the bottom up when you're trying to evaluate each piece independently. So, when you do that, you get to do each piece independently, as opposed to if you're evaluating from the top down. So, starting from the one thing that depends on everything else, well, it can't be shipped until all of its dependencies have been shipped. And all the transitional dependencies can't be shipped until their dependencies have been shipped. And so, you end up being not able to ship anything until you've built the entire graph. And that's when you end up with, you know, a 2,000-line PR that took you multiple weeks and might be buggy. And it's going to take a long time to review. And it's just not what anybody wants. 
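Here is a small sketch of that bottom-up ordering using RGL's topological sort. The subtasks are made-up stand-ins for a real upgrade plan, and the edges follow the episode's convention of pointing from a task to the things it depends on.

```ruby
require "rgl/adjacency"
require "rgl/topsort"

# Invented subtasks for an upgrade, each mapped to its dependencies.
tasks = {
  "bump rails version in Gemfile" => ["upgrade rspec", "replace deprecated finders"],
  "replace deprecated finders"    => ["upgrade rspec"],
  "upgrade rspec"                 => []
}

graph = RGL::DirectedAdjacencyGraph.new
tasks.each do |task, dependencies|
  graph.add_vertex(task)
  dependencies.each { |dependency| graph.add_edge(task, dependency) }
end

# topsort_iterator yields each task before the tasks it points at, so
# reversing the order gives the bottom-up schedule described above:
# every task appears only after its dependencies.
puts graph.topsort_iterator.to_a.reverse
```

Each prefix of that reversed list can ship on its own, which is exactly the property that keeps the work in small, independently reviewable pieces.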
STEPHANIE: I'm glad you brought this up because I think this is where I am really curious to get better at because oftentimes, when I am breaking down a complex task, it's quite hard for me to see all of the steps that need to happen. And so, you know, you maybe start out with that, like, top-level node, like, the task that needs to be done as you understand it immediately. And it's really hard to actually identify the dependencies and, like, the smaller pieces along the way. And because you're not able to identify that, you think that you do have to just do it all in one go. JOËL: Yeah, that sort of root node is typically the overarching task, the goal of what you want to do. And a common, I think, scenario for something like this would be, let's say, you're doing a Rails upgrade. And so, that root node is upgrade Rails. And a common thing that you might want to do is say, okay, let's go to the gem file, upgrade Rails, see what breaks, and then just keep fixing those things. That's working from the top down. And you're going to be in a long-running branch, and you're going to keep fixing things, fixing things, fixing things until you have found all the things but done all the things. And then you do a big bang upgrade that may have taken you weeks. As opposed to if you're working from the bottom up, you try to figure out, okay, what are all the subtasks? And that might take some exploration. You might not know upfront. But then you might say, okay, here, I can upgrade RSpec versus a dependency, or I need to change the interface of this class and ship all these pieces one at a time. And then, the final step is flipping that upgrade in the gem file, saying, okay, now I've upgraded Rails from 4 to 5, or whatever the version is that you're trying to do. STEPHANIE: I think you've really hit the nail on the head when it comes to trying to do something but not knowing what subtasks may compose of it and getting into that problem of, you know, having not broken it down, like, enough to really see all the dependencies. And, you know, maybe this is a conversation [chuckles] for another episode, but the skill of breaking up those tasks and exploring what those dependencies are, and being able to figure them out upfront before you start to just do that upgrade and then see what happens, that's definitely an area that I want to keep investing in. And I'm sure other people would be really curious about, too, to help them make their jobs easier. JOËL: I think one tip that I've learned that's really fun and that connects into all of this is sometimes you do end up with a cycle in your dependencies of tasks. A technique for breaking that up is a pattern that I have pitched multiple times on the show: the strangler fig pattern. And part of why it's so powerful is that it allows you to work incrementally by breaking up some of these cycles in your dependency graph. And one of the lessons that I've learned from that is that just because you have sort of an initial set of subtasks and you have a graph of them doesn't mean that you can't change them. If you're following strangler fig, what you're actually doing is introducing one or more new subtasks to that graph. But the way you introduce them breaks up that cycle. So, you can always add new tasks or split up existing ones as you get a better understanding of the work you need to do. It's not something that is fixed or set in stone upfront. STEPHANIE: Yeah, that's a really great tip. 
I think next time, what I really want to explore, you know, your heuristic of going from bottom up, yeah, sure, it sounds all fine and dandy. But how to get to a point where you're able to see everything at the bottom, right? And, like, when you are tasked, or you do start with the thing at the top, like, the end goal. Yeah, I'm sure that's something we'll explore [chuckles] another day. JOËL: On that note, shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!! ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.

UMAI Social Circle
#45: How to decide on your promo offer (ft. Carley)

UMAI Social Circle

Play Episode Listen Later Jul 23, 2023 30:33


Hi there, CPG friends! Welcome to Episode 45, where we're diving deeper into how to choose a successful promo offer for your brand. With Q4 just around the corner, it's time to gear up and make some serious cash! And guess what? We're here to equip you with the best tips and tricks to make this sales quarter your most successful one yet. Today's episode is extra special because we have our amazing team member Carley Jones joining us to share her promo expertise and insights. Together, we're going to unleash six revenue-driving tips that will help you pick killer promo offers, and set the stage for a profitable Q4. So buckle up; next stop: choosing your Q4 promo offer. 

Remote Ruby
The Case For NOT Taking A Management Path

Remote Ruby

Play Episode Listen Later Jun 30, 2023 36:58


In today's episode, Jason, Chris, and Andrew kick it off with a discussion about their work environments, seating options, and Andrew's hilarious story about going to IKEA, pencil behind his ear, tape measure, and his Mustang, to buy a new couch. We shift gears (see what we just did there) to the recent buzz surrounding the Rails World event and some speculations about Rails 7.1 features, and Chris tells us about Rails Hackathon that's coming up in July. From there, we move into a more personal space as Jason shares his experience of shifting from coding to manager and the associated challenges, the productivity debate, and how we handle our time allocation between coding and managerial tasks. We wrap up with reflections on career progression, with Jason's return to coding from management acting as an inspiration for others. Hit download now for an episode filled with humor, technical talk, and personal journeys in the world of coding. [00:00:58] Chris reveals he has acquired a new chair that belonged to his wife, leading to a discussion about comfortable seating options available on Amazon. Then the conversation turns towards their cars, as Andrew shares a funny story about his Mustang, which turns into a debate about the Mustang Mach-E. [00:04:42] There's a conversation about the recent excitement surrounding the Rails World event which sold out very quickly. If you missed out getting tickets, you can sign up for RubyConf in San Diego. [00:07:15] Andrew wonders why it sold out so fast, and Chris and Jason believe it's the first official Ruby on Rails event, the size of the event, and the involvement of the creator of Rails as contributing factors to the excitement. They also speculate about the release of Rails 7.1 and other upcoming features in the Rails ecosystem. [00:11:00] Andrew shares a trick he stole from Ben that invalidates the bundle cache and re-downloads every gem on the system from scratch whenever Bundler is run. Chris brings up a Tweet that humorously tells Linux users to remove the French language pack, which is a trick to delete all files on the system. [00:11:56] Chris brings up another Tweet at GoRails about Homebrew issues related to using backups from an Intel Mac on an Apple silicon Mac. [00:12:54] Chris tells us they launched their new updated version of the Rails Hackathon site which will be going on July 28-30, 2023. [00:16:56] Jason shares that he's been more focused on project management than coding recently. Chris expresses that he still measures his productivity by how much code he wrote even though he does more management tasks now, and Andrew confesses to having backfilled his GitHub commit history. [00:21:01] Jason shares his experience of shifting from being a coder to a manager, and Chris questions Jason about the division of his time between coding and managing. [00:22:52] Chris shares how his productivity is also affected by various distractions and struggles of getting back into the zone after being interrupted. [00:24:04] Jason explains that Podia was very supportive of his transition to management and understood that his output would be different. He found it challenging to adjust and decided that he wasn't interested in management at that point in his career and prefers problem-solving with code.
Andrew shares his greatest output comes from working with other people.[00:27:04] Jason shares how he thought the only way to advance in his career was to move to management, but after reading the book, Build: An Unorthodox Guide to Making Things Worth Making by Tony Fadell, he realized this was not necessarily true. [00:31:32] Andrew expresses how Jason's transition back to coding from management inspired him. [00:32:20] Jason appreciates the ability to work on complex problems and help others get unstuck, emphasizing the pleasure he finds in thinking through technical problems.[00:33:00] Chris highlights the recent trend of companies figuring out ways to give to senior engineer's progression opportunities without pushing them into managerial roles.Panelists:Jason CharnesChris OliverAndrew MasonSponsor:HoneybadgerLinks:Jason Charnes TwitterChris Oliver TwitterAndrew Mason TwitterRails World 2023RubyConf 2023Rails Hackathon July-28-30, 2023Build: An Unorthodox Guide to Making Things Worth Making by Tony Fadell

PaperPlayer biorxiv cell biology
Mitotic spindle positioning protein (MISP) is an actin bundler that senses ADP-actin and binds near the pointed ends of filaments

PaperPlayer biorxiv cell biology

Play Episode Listen Later May 6, 2023


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.05.05.539649v1?rss=1
Authors: Morales, E. A., Tyska, M. J.
Abstract: Actin bundling proteins crosslink filaments into polarized structures that shape and support membrane protrusions including filopodia, microvilli, and stereocilia. In the case of epithelial microvilli, mitotic spindle positioning protein (MISP) is an actin bundler that localizes specifically to the basal rootlets, where the pointed ends of core bundle filaments converge. Previous studies established that MISP is prevented from binding more distal segments of the core bundle by competition with other actin binding proteins. Yet whether MISP holds a preference for binding directly to rootlet actin remains an open question. Using in vitro TIRF microscopy assays, we found that MISP exhibits a clear binding preference for filaments enriched in ADP-actin monomers. Consistent with this, assays with actively growing actin filaments revealed that MISP binds at or near their pointed ends. Moreover, although substrate-attached MISP assembles filament bundles in parallel and antiparallel configurations, in solution MISP assembles parallel bundles consisting of multiple filaments exhibiting uniform polarity. These discoveries highlight nucleotide state sensing as a mechanism for sorting actin bundlers along filaments and driving their accumulation near filament ends. Such localized binding might drive parallel bundle formation and/or locally modulate bundle mechanical properties in microvilli and related protrusions. Copyright belongs to the original authors. Visit the link for more info. Podcast created by Paper Player, LLC

Remote Ruby
We're the gem exec(utives)

Remote Ruby

Play Episode Listen Later Apr 7, 2023 45:31


On today's episode of Remote Ruby, the conversation begins with Jason, Chris, and Andrew discussing their experiences with podcasting and how they started. Then, the conversation shifts to using the latest version of RubyGems in Bundler, the addition of a new feature called gem exec that allows for easy running of executables from gems that may or may not be installed, and more about GemX. Twitter's new algorithm is mentioned, along with someone who leaked Twitter's source code on GitHub. Chris talks about some frustrating experiences with his Rails for Beginners Course that he's releasing very soon, which will be free, and some plans to expand the curriculum. There's a discussion on the challenges of teaching and learning programming, the process of recording tutorials, and Chris shares some tips and tricks for Ruby programming. Ruby is magic, so go make some magic and press download to hear much more!

[00:03:18] The guys catch up on what's been happening with work, and Andrew tells us he tried the new gem exec stuff in RubyGems. He explains the new feature, and there's a discussion about the advantages of the new feature and how it works, which ends with a bit of confusion.
[00:10:03] Andrew brings up an example and mentions a gem called GemX that people are using.
[00:12:09] We hear about a gem Andrew wrote that printed out something like a business card with cool text in the terminal, and how he was inspired by someone in the Node community.
[00:14:04] Jason brings up Twitter releasing "the algorithm," and how someone leaked Twitter's source code on GitHub.
[00:17:52] In Chris's world, he tells us how he's been re-recording his Rails for Beginners Course and his frustrating experience with trying to use DigitalOcean Spaces for image uploading, as well as frustrations with CORS configuration and policy instructions.
[00:28:41] Chris and Andrew discuss the challenges of teaching and learning programming, specifically Ruby on Rails.
[00:32:15] Chris mentions the upcoming release of a new Rails for Beginners Course, which will include six hours of Ruby content, and plans to expand the curriculum to include more topics like HTML, CSS, and JavaScript.
[00:33:35] Andrew and Chris discuss the process of recording tutorials, which can be time consuming and difficult to balance between explaining concepts and providing practical examples.
[00:37:06] Listen here for some tips and tricks from Chris for Ruby programming, including using SimpleDelegator and modules on individual instances of a class. He also talks about a blog post on Thoughtbot and about The Gilded Rose code kata.
[00:42:28] Jason chimes in saying he's just been writing maintenance tasks and talks about his struggles with abstractions.

Panelists:
Jason Charnes
Chris Oliver
Andrew Mason

Sponsor:
Honeybadger

Links:
Jason Charnes Twitter
Chris Oliver Twitter
Andrew Mason Twitter
GemX
GoRails
[Experimental] Add gem exec command to run executables from gems that may or may not be installed #6309
Evaluating Alternative Decorator Implementations in Ruby (Dan Croak, Thoughtbot)
Refactoring: The Gilded Rose - Rubies in the Rough
Ruby Radar Twitter
Ruby for All Podcast

programmier.bar – der Podcast für App- und Webentwicklung
News 11/23: GPT-4 // Android 14 Dev Preview 2 // View Transition API // Rspack

programmier.bar – der Podcast für App- und Webentwicklung

Play Episode Listen Later Mar 16, 2023 35:57


Besides our private hobbies, this time we cover Chrome 111, which ships the View Transition API as a nice new feature. Android 14 is out in its second Developer Preview; passkeys are among the features now supported. GPT-4 is expected to be released later this week, and we explain what the keyword "multimodal" means in this context. Microsoft has already released something similar, albeit in a roundabout way, with Visual ChatGPT. A new bundler enters the ring: Rspack is built on Rust and aims to be a drop-in replacement for webpack.

Write to us! Send us your topic requests and feedback: podcast@programmier.bar

Follow us! Stay up to date on future episodes and virtual meetups, and join in on community discussions.
Twitter
Instagram
Facebook
Meetup
YouTube

Software Sessions
Luca Casonato on Deno

Software Sessions

Play Episode Listen Later Mar 2, 2023 80:27


Luca Casonato is the tech lead for Deno Deploy and a TC39 delegate. Deno is a JavaScript runtime from the original creator of NodeJS, Ryan Dahl.

Topics covered:
What's a JavaScript runtime
How V8 is used
Why Deno was created
The W3C WinterCG for server-side JavaScript
Why it's difficult to ship new features in Node
The benefits of web standards
Creating an all-inclusive toolset like Rust and Go
Deno's node compatibility layer
Use cases for WebAssembly
Benefits and implementation of Deno Deploy
Reasons to deploy on the edge
What's coming next

Luca:
Luca Casonato @lcasdev

Deno:
Homepage
Deploy
Showcase
Subhosting
Fresh web framework
The anatomy of an Isolate Cloud

Deno Users:
Netlify Edge Functions
Deno at Slack
GitHub Flat Data
Shopify Oxygen

Other related links:
Cache Web API
V8 (JavaScript and WebAssembly engine)
TC39 (JavaScript specification group)
Web-interoperable Runtimes Community Group (WinterCG)
Cloudflare Workers (Deno Deploy competitor)
How Cloudflare KV works
CockroachDB (Distributed database)
XKCD Standards Comic

Transcript:
You can help edit this transcript on GitHub.

[00:00:07] Jeremy: Today I'm talking to Luca Casonato. He's a member of the Deno Core team and a TC 39 Delegate. [00:00:06] Luca: Hey, thanks for having me. What's a runtime? [00:00:07] Jeremy: So today we're gonna talk about Deno, and on the website it says, Deno is a runtime for JavaScript and TypeScript. So I thought we could start with defining what a runtime is. [00:00:21] Luca: Yeah, that's a great question. I think this question actually comes up a lot. It's, it's like sometimes we also define Deno as a headless browser, or I don't know, a, a JavaScript script execution tool. what actually defines runtime? I, I think what makes a runtime a runtime is that it is a, it's implemented in native code. It cannot be self-hosted. Like you cannot self-host a JavaScript runtime. and it executes JavaScript or TypeScript or some other scripting language, without relying on, well, yeah, I guess it's the self-hosting thing. Like it's, it's essentially a, a JavaScript execution engine, which is not self-hosted. So yeah, it, it maybe has IO bindings, but it doesn't necessarily need to like, it. Maybe it allows you to read the, from the file system or, or make network calls. Um, but it doesn't necessarily have to. It's, I think the, the primary definition is something which can execute JavaScript without already being written in JavaScript. How V8 and JavaScript runtimes are related [00:01:20] Jeremy: And when we hear about JavaScript run times, whether it's Deno or Node or Bun, or anything else, we also hear about it in the context of v8. Could you explain the relationship between V8 and a JavaScript run time? [00:01:36] Luca: Yeah. So V8 and, and JavaScript core and Spider Monkey, these are all JavaScript engines. So these are the low level virtual machines that can execute or that can parse your JavaScript code. turn it into byte code, maybe turn it into, compiled machine code, and then execute that code. But these engines, Do not implement any IO functions. They do not. They implement the JavaScript spec as is written. and then they provide extension hooks for, they call these host environments, um, like environments that embed these engines to provide custom functionalities to essentially poke out of the sandbox, out of the, out of the virtual machine. Um, and this is used in browsers. Like browsers have, have these engines built in. This is where they originated from.
Um, and then they poke holes into this, um, sandbox virtual machine to do things like, I don't know, writing to the dom or, or console logging or making fetch calls and all these kinds of things. And what a runtime essentially does, a JavaScript runtime is it takes one of these engines and. It then provides its own set of host APIs, like essentially its own set of holes. It pokes into the sandbox. and depending on what the runtime is trying to do, um, the weight will do. This is gonna be different and, and the sort of API that is ultimately exposed to the end user is going to be different. For example, if you compare Deno and node, like node is very loosey goosey, about how it pokes holds into the sandbox, it sort of just pokes them everywhere. And this makes it difficult to enforce things like, runtime permissions for example. Whereas Deno is much more strict about how it, um, pokes holds into its sandbox. Like everything is either a web API or it's behind in this Deno name space, which means that it's, it's really easy to find, um, places where, where you're poking out of the sandbox. and really you can also compare these to browsers. Like browsers are also JavaScript run times. Um, they're just not headless. JavaScript run times, but JavaScript run times that also have a ui. and. . Yeah. Like there, there's, there's a whole Bunch of different kinds of JavaScript run times, and I think we're also seeing a lot more like embedded JavaScript run times. Like for example, if you've used React Native before, you, you may be using Hermes as a, um, JavaScript engine in your Android app, which is like a custom JavaScript engine written just for, for, for React native. Um, and this also is embedded within a, like react native run time, which is specific to React native. so it's also possible to have run times, for example, that are, that can be where the, where the back backing engine can be exchanged, which is kind of cool. [00:04:08] Jeremy: So it sounds like V8's role, one way to look at it is it can execute JavaScript code, but only pure functions. I suppose you [00:04:19] Luca: Pretty much. Yep. [00:04:21] Jeremy: Do anything that doesn't interact with IO so you think about browsers, you were mentioning you need to interact with a DOM or if you're writing a server side application, you probably need to receive or make HTTP requests, that sort of thing. And all of that is not handled by v8. That has to be handled by an external runtime. [00:04:43] Luca: Exactly Like, like one, one. There's, there's like some exceptions to this. For example, JavaScript technically has some IO built in with, within its standard library, like math, random. It's like random number. Generation is technically an IO operation, so, Technically V8 has some IO built in, right? And like getting the current date from the user, that's also technically IO So, like there, there's some very limited edge cases. It's, it's not that it's purely pure, but V8 for example, has a flag to turn it completely deterministic. which means that it really is completely pure. And this is not something which run times usually have. This is something like the feature of an engine because the engine is like so low level that it can essentially, there's so little IO that it's very easy to make deterministic where a runtime higher level, um, has, has io, um, much more difficult to make deterministic. [00:05:39] Jeremy: And, and for things like when you're working with JavaScript, there's, uh, asynchronous programming [00:05:46] Luca: mm-hmm. 
Concurrent JavaScript execution [00:05:47] Jeremy: So you have concurrency and things like that. Is that a part of V8 or is that the responsibility of the run time? [00:05:54] Luca: That's a great question. So there's multiple parts to this. There's the part, um, there, there's JavaScript promises, um, and sort of concurrent Java or well, yes, concurrent JavaScript execution, which is sort of handled by v8, like v8. You can in, in pure v8, you can create a promise, and you can execute some code within that promise. But without IO there's actually no way to defer time, uh, which means that in with pure v8, you can either, you can create a promise. Which executes right now. Or you can create a promise that never executes, but you can't create a promise that executes in 10 seconds because there's no way to measure 10 seconds asynchronously. What run times do is they add something called an event loop on top of this, um, on top of the base engine and that event loop, for example, like a very simple event loop, for example, might have a timer in it, which every second looks at if there's a timer schedule to run within that second. And if it does, if, if that timer exists, it'll go call out to V8 and say, you can now execute that promise. but V8 is still the one that's keeping track of, of like which promises exist, and the code that is meant to be invoked when they resolve all that kind of thing. Um, but the underlying infrastructure that actually invokes which promises get resolved at what point in time, like the asynchronous, asynchronous IO is what this is called. This is driven by the event loop, um, which is implemented by around time. So Deno, for example, it uses, Tokio for its event loop. This is a, um, an event loop written in Rust. it's very popular in the Rust ecosystem. Um, node uses libuv. This is a relatively popular runtime or, or event loop, um, implementation for c uh, plus plus. And, uh, libuv was written for Node. Tokio was not written for Deno. But um, yeah, Chrome has its own event loop implementation. Bun has its own event loop implementation. [00:07:50] Jeremy: So we, we might go a little bit more into that later, but I think what we should probably go into now is why make Deno, because you have Node that's, uh, currently very popular. The co-creator of Deno, to my understanding, actually created Node. So maybe you could explain to our audience what was missing or what was wrong with Node, where they decided I need to create, a new runtime. Why create a new runtime? (standards compliance) [00:08:20] Luca: Yeah. So the, the primary point of concern here was that node was slowly diverging from browser standards with no real path to, to, to, re converging. Um, like there was nothing that was pushing node in the direction of standards compliance and there was nothing, that was like sort of forcing node to innovate. and we really saw this because in the time between, I don't know, 2015, 2018, like Node was slowly working on esm while browsers had already shipped ESM for like three years. , um, node did not have fetch. Node hasn't had, or node only at, got fetch last year. Right? six, seven years after browsers got fetch. Node's stream implementation is still very divergent from, from standard web streams. Node was very reliant on callbacks. It still is, um, like promises in many places of the Node API are, are an afterthought, which makes sense because Node was created in a time before promises existed. Um, but there was really nothing that was pushing Node forward, right? 
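To make the engine-versus-event-loop distinction above concrete, here is a minimal TypeScript sketch. It runs as-is in Deno (or Node, or a browser console); the point is that the immediately-resolving promise needs only the engine, while the delayed one needs the runtime's event loop (Tokio in Deno, libuv in Node) to track time and call back into V8.

```typescript
// A promise the engine alone can settle: no time passes, no IO happens.
const immediate = Promise.resolve("done");
immediate.then((v) => console.log("engine-only promise:", v));

// A promise that settles later: the runtime's event loop has to track the
// timer and tell V8 when to run the callback, because the engine itself has
// no way to "wait 10 seconds".
const deferred = new Promise<string>((resolve) => {
  setTimeout(() => resolve("done after 10 seconds"), 10_000);
});
deferred.then((v) => console.log("event-loop promise:", v));
```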
Like nobody was actively investing in, in, in improving the API of Node to be more standards compliant. And so what we really needed was a new like Greenfield project, which could demonstrate that actually writing a new server side run. Is A viable, and b is totally doable with an API that is more standards combined. Like essentially you can write a browser, like a headless browser and have that be an excellent to use JavaScript runtime, right? And then there was some things that were I on top of that, like a TypeScript support because TypeScript was incredibly, or is still incredibly popular. even more so than it was four years ago when, when Deno was created or envisioned, um, this permission system like Node really poked holes into the V8 sandbox very early on with, with like, it's gonna be very difficult for Node to ever, ever, uh, reconcile this, this. Especially cuz the, some, some of the APIs that it, that it exposes are just so incredibly low level that like, I don't know, you can mutate random memory within your process. Um, which like if you want to have a, a secure sandbox like that just doesn't work. Um, it's not compatible. So there was really needed to be a place where you could explore this, um, direction and, and see if it worked. And Deno was that. Deno still is that, and I think Deno has outgrown that now into something which is much more usable as, as like a production ready runtime. And many people do use it, in production. And now Deno is on the path of slowly converging back with Node, um, in from both directions. Like Node is slowly becoming more standards compliant. and depending on who you ask this was, this was done because of Deno and some people said it would had already been going on and Deno just accelerated it. but that's not really relevant because the point is that like Node is becoming more standard compliant and, and the other direction is Deno is becoming more node compliant. Like Deno is implementing node compatibility layers that allow you to run code that was originally written for the node ecosystem in the standards compliant run time. so through those two directions, the, the run times are sort of, um, going back towards each other. I don't think they'll ever merge. but we're, we're, we're getting to a point here pretty soon, I think, where it doesn't really matter what runtime you write for, um, because you'll be able to write code written for one runtime in the other runtime relatively easily. [00:12:03] Jeremy: If you're saying the two are becoming closer to one another, becoming closer to the web standard that runs in the browser, if you're talking to someone who's currently developing in node, what's the incentive for them to switch to Deno versus using Node and then hope that eventually they'll kind of meet in the middle. [00:12:26] Luca: Yeah, so I think, like Deno is a lot more than just a runtime, right? Like a runtime executes JavaScript, Deno executes JavaScript, it executes type script. But Deno is so much more than that. Like Deno has a built-in format, or it has a built-in linter. It has a built-in testing framework, a built-in benching framework. It has a built-in Bundler, it, it like can create self-hosted, um, executables. yeah, like Bundle your code and the Deno executable into a single executable that you can trip off to someone. Um, it has a dependency analyzer. It has editor integrations. it has, Yeah. 
Like I could go on for hours, (laughs) about all of the auxiliary tooling that's inside of Deno, that's not a JavaScript runtime. And also Deno as a JavaScript runtime is just more standards compliant than any of the other servers at Runtimes right now. So if, if you're really looking for something which is standards complaint, which is gonna like live on forever, then it's, you know, like you cannot kill off the Fetch API ever. The Fetch API is going to live forever because Chrome supports it. Um, and the same goes for local storage and, and like, I don't know, the Blob API and all these other web APIs like they, they have shipped and browsers, which means that they will be supported until the end of time. and yeah, maybe Node has also reached that with its api probably to some extent. but yeah, don't underestimate the power of like 3 billion Chrome users. that would scream immediately if the Fetch API stopped working Right? [00:13:50] Jeremy: Yeah, I, I think maybe what it sounds like also is that because you're using the API that's used in the browser places where you deploy JavaScript applications in the future, you would hope that those would all settle on using that same API so that if you were using Deno, you could host it at different places and not worry about, do I need to use a special API maybe that you would in node? WinterCG (W3C group for server side JavaScript) [00:14:21] Luca: Yeah, exactly. And this is actually something which we're specifically working towards. So, I don't know if you've, you've heard of WinterCG? It's a, it's a community group at the W3C that, um, CloudFlare and, and Deno and some others including Shopify, have started last year. Um, we're essentially, we're trying to standardize the concept of what a server side JavaScript runtime is and what APIs it needs to have available to be standards compliant. Um, and essentially making this portability sort of written down somewhere and like write down exactly what code you can write and expect to be portable. And we can see like that all of the big, all of the big players that are involved in, in, um, building JavaScript run times right now are, are actively, engaged with us at WinterCG and are actively building towards this future. So I would expect that any code that you write today, which runs. in Deno, runs in CloudFlare, workers runs on Netlify Edge functions, runs on Vercel's Edge, runtime, runs on Shopify Oxygen, is going to run on the other four. Um, of, of those within the next couple years here, like I think the APIs of these is gonna converge to be essentially the same. there's obviously gonna always be some, some nuances. Um, like, I don't know, Chrome and Firefox and Safari don't perfectly have the same API everywhere, right? Like Chrome has some web Bluetooth capabilities that Safari doesn't, or Firefox has some, I don't know, non-standard extensions to the error object, which none of the other runtimes do. But overall you can expect these front times to mostly be aligned. yeah, and I, I think that's, that's really, really, really excellent and that, that's I think really one of the reasons why one should really consider, like building for, for this standard runtime because it, it just guarantees that you'll be able to host this somewhere in five years time and 10 years time, with, with very little effort. Like even if Deno goes under or CloudFlare goes under, or, I don't know, nobody decides to maintain node anymore. It'll be easy to, to run somewhere else. 
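As an illustration of the kind of standards-only code being described, here is a small sketch that sticks to web APIs (fetch, WHATWG streams) and therefore runs unchanged in Deno, in a browser module, and in recent Node versions. The URL is just a placeholder.

```typescript
// Only web-standard APIs: fetch, Response, ReadableStream.
const res = await fetch("https://example.com/");
console.log("status:", res.status);

// Read the body through the standard streams API instead of a
// runtime-specific stream implementation.
const reader = res.body!.getReader();
const { value, done } = await reader.read();
console.log("first chunk bytes:", done ? 0 : value.byteLength);
```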
And also I expect that the big cloud vendors will ultimately, um, provide, manage offerings for, for the standards compliant JavaScript on time as well. Is Node part of WinterCG? [00:16:36] Jeremy: And this WinterCG group is Node a part of that as well? [00:16:41] Luca: Um, yes, we've invited Node, um, to join, um, due to the complexities of how node's, internal decision making system works. Node is not officially a member of WinterCG. Um, there is some individual members of the node, um, technical steering committee, which are participating. for example, um, James m Snell is, is the co-chair, is my co-chair on, on WinterCG. He also works at CloudFlare. He's also a node, um, TSC member, Mateo Colina, who has been, um, instrumental to getting fetch landed in Node, um, is also actively involved. So Node is involved, but because Node is node and and node's decision making process works the way it does, node is not officially listed anywhere as as a member. but yeah, they're involved and maybe they'll be a member at some point. But, yeah, let's. , see (laughs) [00:17:34] Jeremy: Yeah. And, and it, so it, it sounds like you're thinking that's more of a, a governance or a organizational aspect of note than it is a, a technical limitation. Is that right? [00:17:47] Luca: Yeah. I obviously can't speak for the node technical steering committee, but I know that there's a significant chunk of the node technical steering committee that is, very favorable towards, uh, standards compliance. but parts of the Node technical steering committee are also not, they are either indifferent or are actively, I dunno if they're still actively working against this, but have actively worked against standards compliance in the past. And because the node governance structure is very, yeah, is, is so, so open and let's, um, and let's, let's all these voices be heard, um, that just means that decision making processes within Node can take so long, like. . This is also why the fetch API took eight years to ship. Like this was not a technical problem. and it is also not a technical problem. That Node does not have URL pattern support or, the file global or, um, that the web crypto API was not on this, on the global object until like late last year, right? Like, these are not technical problems, these are decision making problems. Um, and yeah, that was also part of the reason why we started Deno as, as like a separate thing, because like you can try to innovate node, from the inside, but innovating node from the inside is very slow, very tedious, and requires a lot of fighting. And sometimes just showing somebody, from the outside like, look, this is the bright future you could have, makes them more inclined to do something. Why it takes so long to ship new features in Node [00:19:17] Jeremy: Do, do you have a sense for, you gave the example of fetch taking eight years to, to get into node. Do you, do you have a sense of what the typical objection is to, to something like that? Like I, I understand there's a lot of people involved, but why would somebody say, I, I don't want this [00:19:35] Luca: Yeah. So for, for fetch specifically, there was a, there was many different kinds of concerns. Um, one of the, I, I can maybe list two of them. One of them was for example, that the fetch API is not a good API and as such, node should not have it. which is sort of. 
missing the point of, because it's a standard API, how good or bad the API is is much less relevant because if you can share the API, you can also share a wrapper that's written around the api. Right? and then the other concern was, node does need fetch because Node already has an HTTP API. Um, so, so these are both kind of examples of, of concerns that people had for a long time, which it took a long time to either convince these people or, or to, push the change through anyway. and this is also the case for, for other things like, for example, web, crypto, um, like why do we need web crypto? We already have node crypto, or why do we need yet another streams? Implementation node already has four different streams implementations. Like, why do we need web streams? and the, the. Like, I don't know if you know this XKCD of, there's 14 competing standards. so let's write a 15th standard, to unify them all. And then at the end we just have 15 competing standards. Um, so I think this is also the kind of concern that people were concerned about, but I, I think what we've seen here is that this is really not a concern that one needs to have because it ends up that, or it turns out in the end that if you implement web APIs, people will use web APIs and will use web APIs only for their new code. it takes a while, but we're seeing this with ESM versus require like new code written with require much less common than it was two years ago. And, new code now using like Xhr, whatever it's called, form request or. You know, the one, I mean, compared to using Fetch, like nobody uses that name. Everybody uses Fetch. Um, and like in Node, if you write a little script, like you're gonna use Fetch, you're not gonna use like Nodes, htp, dot get API or whatever. and we're gonna see the same thing with Readable Stream. We're gonna see the same thing with Web Crypto. We're gonna see, see the same thing with Blob. I think one of the big ones where, where Node is still, I, I, I don't think this is one that's ever gonna get solved, is the, the Buffer global and Node. like we have the Uint8, this Uint8 global, um, and like all the run times including browsers, um, and Buffer is like a super set of that, but it's in global scope. So it, it's sort of this non-standard extension of unit eight array that people in node like to use and it's not compatible with anything else. Um, but because it's so easy to get at, people use it anyway. So those are, those are also kind of problems that, that we'll have to deal with eventually. And maybe that means that at some point the buffer global gets deprecated and I don't know, probably can never get removed. But, um, yeah, these are kinds of conversations that the no TSE is going have to have internally in, I don't know, maybe five years. Write once, have it run on any hosting platform [00:22:37] Jeremy: Yeah, so at a high level, What's shipped in the browser, it went through the ECMAScript approval process. People got it into the browser. Once it's in the browser, probably never going away. And because of that, it's safe to build on top of that for these, these server run times because it's never going away from the browser. And so everybody can kind of use it into the future and not worry about it. Yeah. [00:23:05] Luca: Exactly. Yeah. And that's, and that's excluding the benefit that also if you have code that you can write once and use in both the browser and the server side around time, like that's really nice. Um, like that, that's the other benefit. [00:23:18] Jeremy: Yeah. 
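A quick sketch of the Buffer point made above: Uint8Array plus TextEncoder/TextDecoder are web standards and portable everywhere, while Buffer is a Node-specific superset that only exists as a global in Node (and behind compatibility layers elsewhere).

```typescript
// Portable: works in browsers, Deno, and Node.
const bytes: Uint8Array = new TextEncoder().encode("hello");
console.log(new TextDecoder().decode(bytes)); // "hello"

// Not portable: Buffer is a Node global. It is a Uint8Array subclass, so it
// interoperates, but code that reaches for it won't run unmodified in a
// browser or in standards-only runtimes.
// const b = Buffer.from("hello"); // ReferenceError outside Node
```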
I think that's really powerful. And that right now, when someone's looking at running something in CloudFlare workers versus running something in the browser versus running something in. it's, I think a lot of people make the assumption it's just JavaScript, so I can use it as is. But it, it, there are at least currently, differences in what APIs are available to you. [00:23:43] Luca: Yep. Yep. Why bundle so many things into Deno? [00:23:46] Jeremy: Earlier you were talking about how Deno is more than just the runtime. It has a linter, formatter, file watcher there, there's all sorts of stuff in there. And I wonder if you could talk a little bit to the, the reasoning behind that [00:24:00] Luca: Mm-hmm. [00:24:01] Jeremy: Having them all be separate things. [00:24:04] Luca: Yeah, so the, the reasoning here is essentially if you look at other modern run time or mo other modern languages, like Rust is a great example. Go is a great example. Even though Go was designed around the same time as Node, it has a lot of these same tools built in. And what it really shows is that if the ecosystem converges, like is essentially forced to converge on a single set of built-in tooling, a that built-in tooling becomes really, really excellent because everybody's using it. And also, it means that if you open any project written by any go developer, any, any rest developer, and you look at the tests, you immediately understand how the test framework works and you immediately understand how the assertions work. Um, and you immediately understand how the build system works and you immediately understand how the dependency imports work. And you immediately understand like, I wanna run this project and I wanna restart it when my file changes. Like, you immediately know how to do that because it's the same everywhere. Um, and this kind of feeling of having to learn one tool and then being able to use all of the projects, like being able to con contribute to open source when you're moving jobs, whatever, like between personal projects that you haven't touched in two years, you know, like being able to learn this once and then use it everywhere is such an incredibly powerful tool. Like, people don't appreciate this until they've used a runtime or, or, or language which provides this to them. Like, you can go to any go developer and ask them if they would like. There, there's this, there's this saying in the Go ecosystem, um, that Go FMT is nobody's favorite, but, or, uh, wait, no, I don't remember what the, how the saying goes, but the saying essentially implies that the way that go FMT formats code, maybe not everybody likes, but everybody loves go F M T anyway, because it just makes everything look the same. And like, you can read your friend's code, your, your colleagues code, your new jobs code, the same way that you did your code from two years ago. And that's such an incredibly powerful feeling. especially if it's like well integrated into your IDE you clone a repository, open that repository, and like your testing panel on the left hand side just populates with all the tests, and you can click on them and run them. And if an assertion fails, it's like the standard output format that you're already familiar with. And it's, it's, it's a really great feeling. and if you don't believe me, just go try it out and, and then you will believe me, (laughs) [00:26:25] Jeremy: Yeah. No, I, I'm totally with you. 
I, I think it's interesting because with JavaScript in particular, it feels like the default in the community is the opposite, right? There's so many different ways. Uh, there are so many different build tools and testing frameworks and, formatters, and it's very different than, like you were mentioning, a go or a Rust that are more recent languages where they just include that, all Bundled in. Yeah. [00:26:57] Luca: Yeah, and I, I think you can see this as well in, in the time that average JavaScript developer spends configuring their tooling compared to a rest developer. Like if I write Rust, I write Rust, like all day, every day. and I spend maybe two, 3% of my time configuring Rust tooling like. Doing dependency imports, opening a new project, creating a format or config file, I don't know, deleting the build directory, stuff like that. Like that's, that's essentially what it means for me to configure my rest tooling. Whereas if you compare this to like a front-end JavaScript project, like you have to deal with making sure that your React version is compatible with your React on version, it's compatible with your next version is compatible with your ve version is compatible with your whatever version, right? this, this is all not automatic. Making sure that you use the right, like as, as a front end developer, you developer. You don't have just NPM installed, no. You have NPM installed, you have yarn installed, you have PNPM installed. You probably have like, Bun installed. And, and, and I don't know to use any of these, you need to have corepack enabled in Node and like you need to have all of their global bin directories symlinked into your or, or, or, uh, included in your path. And then if you install something and you wanna update it, you don't know, did I install it with yarn? Did I install it with N pNPM? Like this is, uh, significant complexity and you, you tend to spend a lot of time dealing with dependencies and dealing with package management and dealing with like tooling configuration, setting up esent, setting up prettier. and I, I think that like, especially Prettier, for example, really showed, was, was one of the first things in the JavaScript ecosystem, which was like, no, we're not gonna give you a config where you, that you can spend like six hours configuring, it's gonna be like seven options and here you go. And everybody used it because, Nobody likes configuring things. It turns out, um, and even though there's always the people that say, oh, well, I won't use your tool unless, like, we, we get this all the time. Like, I'm not gonna use Deno FMT because I can't, I don't know, remove the semicolons or, or use single quotes or change my tab width to 16. Right? Like, wait until all of your coworkers are gonna scream at you because you set the tab width to 16 and then see what they change it to. And then you'll see that it's actually the exact default that, everybody uses. So it'll, it'll take a couple more years. But I think we're also gonna get there, uh, like Node is starting to implement a, a test runner. and I, I think over time we're also gonna converge on, on, on, on like some standard build tools. Like I think ve, for example, is a great example of this, like, Doing a front end project nowadays. Um, like building new front end tooling that's not built on Vite Yeah. Don't like, Vite's it's become the standard and I think we're gonna see that in a lot more places. 
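As a concrete example of the zero-configuration tooling being contrasted here, this is roughly what a test looks like with Deno's built-in runner: no test framework is installed or configured, and `deno test` discovers and runs it. The std version pinned below is just an example.

```typescript
// math_test.ts — run with `deno test`
import { assertEquals } from "https://deno.land/std@0.177.0/testing/asserts.ts";

function add(a: number, b: number): number {
  return a + b;
}

Deno.test("add sums two numbers", () => {
  assertEquals(add(1, 2), 3);
});
```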
We should settle on what tools to use [00:29:52] Jeremy: Yeah, though I, I think it's, it's tricky, right? Because you have so many people with their existing projects. You have people who are starting new projects and they're just searching the internet for what they should use. So you're, you're gonna have people on web pack, you're gonna have people on Vite, I guess now there's gonna be Turbo pack, I think is another one that's [00:30:15] Luca: Mm-hmm. [00:30:16] Jeremy: There's, there's, there's all these different choices, right? And I, I think it's, it's hard to, to really settle on one, I guess, [00:30:26] Luca: Yeah, [00:30:27] Jeremy: uh, yeah. [00:30:27] Luca: like I, I, I think this is, this is in my personal opinion also failure of the Node Technical Steering committee, for the longest time to not decide that yes, we're going to bless this as the standard format for Node, and this is the standard package manager for Node. And they did, they sort of did, like, they, for example, node Blessed NPM as the standard, package manager for N for for node. But it didn't innovate on npm. Like no, the tech nodes, tech technical steering committee did not force NPM to innovate NPMs, a private company ultimately bought by GitHub and they had full control over how the NPM cli, um, evolved and nobody forced NPM to, to make sure that package install times are six times faster than they were. Three years ago, like nobody did that. so it didn't happen. And I think this is, this is really a failure of, of the, the, the, yeah, the no technical steering committee and also the wider JavaScript ecosystem of not being persistent enough with, with like focus on performance, focus on user experience, and, and focus on simplicity. Like things got so out of hand and I'm happy we're going in the right direction now, but, yeah, it was terrible for some time. (laughs) Node compatibility layer [00:31:41] Jeremy: I wanna talk a little bit about how we've been talking about Deno in the context of you just using Deno using its own standard library, but just recently last year you added a compatibility shim where people are able to use node libraries in Deno. [00:32:01] Luca: Mm-hmm. [00:32:01] Jeremy: And I wonder if you could talk to, like earlier you had mentioned that Deno has, a different permissions model. on the website it mentions that Deno's HTTP server is two times faster than node in a Hello World example. And I'm wondering what kind of benefits people will still get from Deno if they choose to use packages from Node. [00:32:27] Luca: Yeah, it's a great question. Um, so I think a, again, this is sort of a like, so just to clarify what we actually implemented, like what we have is we have support for you to import NPM packages. Um, so you can import any NPM package from NPM, from your type script or JavaScript ECMAScript module, um, that you have, you already have for your Deno code. Um, and we will under the hood, make sure that is installed somewhere in some directory globally. Like PNPM does. There's no local node modules folder you have to deal with. There's no package of Jason you have to deal with. Um, and there's no, uh, package. Jason, like versioning things you need to deal with. Like what you do is you do import cowsay from NPM colon cowsay at one, and that will import cowsay with like the semver tag one. Um, and it'll like do the sim resolution the same way node does, or the same way NPM does rather. 
And what you get from that is that essentially it gives you like this backdoor to a callout to all of the existing node code that Isri been written, right? Like you cannot expect that Deno developers, write like, I don't know. There was this time when Deno did not really have that many, third party modules yet. It was very early on, and I don't know the, you either, if you wanted to connect to Postgres and there was no Postgres driver available, then the solution was to write your own Postgres driver. And that is obviously not great. Um, (laughs) . So the better solution here is to let users for these packages where there's no Deno native or, or, or web native or standard native, um, package for this yet that is importable with url. Um, specifiers, you can import this from npm. Uh, so it's sort of this like backdoor into the existing NPM ecosystem. And we explicitly, for example, don't allow you to, create a package.json file or, import bare node specifiers because we don't, we, we want to stay standards compliant here. Um, but to make this work effectively, we need to give you this little back door. Um, and inside of this back door. All hell is like, or like everything is terrible inside there, right? Like inside there you can do bare specifiers and inside there you can like, uh, there's package.json and there's crazy node resolution and underscore underscore DIRNAME and common js. And like all of that stuff is supported inside of this backdoor to make all the NPM packages work. But on the outside it's exposed as this nice, ESM only, NPM specifiers. and the, the reason you would want to use this over, like just using node directly is because again, like you wanna use TypeScript, no config, like necessary. You want to use, you wanna have a formatter you wanna have a linter, you wanna have tooling that like does testing and benchmarking and compiling or whatever. All of that's built in. You wanna run this on the edge, like close to your users and like 30 different, 35 different, uh, points of presence. Um, it's like, Okay, push it to your git repository. Go to this website, click a button two times, and it's running in 35 data centers. like this is, this is the kind of ex like developer experience that you can, you do not get. You, I will argue that you cannot get with Node right now. Like even if you're using something like ts-node, it is not possible to get the same level of developer experience that you do with Deno. And the, the, the same like speed at which you can iterate, iterate on your projects, like create new projects, iterate on them is like incredibly fast in Deno. Like, I can open a, a, a folder on my computer, create a single file, may not ts, put some code in there and then call Deno Run may not. And that's it. Like I don't, I did not need to do NPM install I did not need to do NPM init -y and remove the license and version fields and from, from the generated package.json and like set private to true and whatever else, right? It just all works out of the box. And I think that's, that's what a lot of people come to deno for and, and then ultimately stay for. And also, yeah, standards compliance. So, um, things you build in Deno now are gonna work in five, 10 years, with no hassle. Node shims and testing [00:36:39] Jeremy: And so with this compatibility layer or this, this shim, is it where the node code is calling out to node APIs and you're replacing those with Deno compatible equivalents? [00:36:54] Luca: Yeah, exactly. 
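The cowsay example mentioned above looks roughly like this (the exact `say` API belongs to the cowsay package, not Deno, so treat it as an assumption); there is no package.json, no local node_modules folder, and no install step beyond running the file.

```typescript
// main.ts — run with `deno run main.ts`
// "npm:cowsay@1" pins the npm package to semver major version 1; Deno
// resolves and caches it globally, PNPM-style, on first run.
import cowsay from "npm:cowsay@1";

console.log(cowsay.say({ text: "hello from Deno" }));
```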
Like for example, we have a shim in place that shims out the node crypto API on top of the web crypto api. Like sort of, some, some people may be familiar with this in the form of, um, Browserify shims. if anybody still remembers those, it's essentially. , your front end tooling, you were able to import from like node crypto in your front end projects and then behind the scenes your web packs or your browser replies or whatever would take that import from node crypto and would replace it with like the shim that was essentially exposed the same APIs node crypto, but under the hood, wasn't implemented with native calls, but was implemented on top of web crypto, or implemented in user land even. And Deno does something similar. there's a couple edge cases of APIs that there's, where, where we do not expose the underlying thing that we shim to, to end users, outside of the node shim. So like there's some, some APIs that I don't know if I have a good example, like node nextTick for example. Um, like to properly be able to shim node nextTick, you need to like implement this within the event loop in the runtime. and. , you don't need this in Deno, because Deno, you use the web standard queueMicrotask to, to do this kind of thing. but to be able to shim it correctly and run node applications correctly, we need to have this sort of like backdoor into some ugly APIs, um, which, which natively integrate in the runtime, but, yeah, like allow, allow this node code to run. [00:38:21] Jeremy: A, anytime you're replacing a component with a, a shim, I think there's concerns about additional bugs or changes in behavior that can be introduced. Is that something that you're seeing and, and how are you accounting for that? [00:38:38] Luca: Yeah, that's, that's an excellent question. So this is actually a, a great concern that we have all the time. And it's not just even introducing bugs, sometimes it's removing bugs. Like sometimes there's bugs in the node standard library which are there, and people are relying on these bugs to be there for the applications to function correctly. And we've seen this a lot, and then we implement this and we implement from scratch and we don't make that same bug. And then the test fails or then the application fails. So what we do is, um, we actually run node's test suite against Deno's Shim layer. So Node has a very extensive test suite for its own standard library, and we can run this suite against, against our shims to find things like this. And there's still edge cases, obviously, which node, like there was, maybe there's a bug which node was not even aware of existing. Um, where maybe this, like it's is, it's now standard, it's now like intended behavior because somebody relies on it, right? Like the second somebody relies on, on some non-standard or some buggy behavior, it becomes intended. Um, but maybe there was no test that explicitly tests for this behavior. Um, so in that case we'll add our own tests to, to ensure that. But overall we can already catch a lot of these by just testing, against, against node's tests. And then the other thing is we run a lot of real code, like we'll try run Prisma and we'll try run Vite and we'll try run NextJS and we'll try run like, I don't know, a bunch of other things that people throw at us and, check that they work and they work and there's no bugs. Then we did our job well and our shims are implemented correctly. 
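A small sketch of what the shim layer means in practice: the node:crypto import below is answered by Deno's compatibility layer (largely built on Web Crypto), and queueMicrotask is the web-standard counterpart to Node's process.nextTick. Exact shim coverage varies by Deno version, so treat this as illustrative.

```typescript
// Served by Deno's Node compatibility layer rather than by Node itself.
import { randomBytes } from "node:crypto";
console.log(randomBytes(16).toString("hex"));

// Web-standard microtask scheduling; no runtime-specific nextTick needed.
queueMicrotask(() => console.log("runs after the current task completes"));
```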
Um, and then there's obviously always the edge cases where somebody did something absolutely crazy that nobody thought possible. and then they'll open an issue on the Deno repo and we scratch our heads for three days and then we'll fix it. And then in the next release there'll be a new bug that we added to make the compatibility with node better. so yeah, but I, yeah. Running tests is the, is the main thing running nodes test. Performance should be equal or better [00:40:32] Jeremy: Are there performance implications? If someone is running an Express App or an NextJS app in Deno, will they get any benefits from the Deno runtime and performance? [00:40:45] Luca: Yeah. It's actually, there is performance implications and they're usually. The opposite of what people think they are. Like, usually when you think of performance implications, it's always a negative thing, right? It's always okay. Like you, it's like a compromise. like the shim layer must be slower than the real node, right? It's not like we can run express faster than node can run, express. and obviously not everything is faster in Deno than it is in node, and not everything is faster in node than it is in Deno. It's dependent on the api, dependent on, on what each team decided to optimize. Um, and this also extends to other run times. Like you can always cherry pick results, like, I don't know, um, to, to make your runtime look faster in certain benchmarks. but overall, what really matters is that you do not like, the first important step for for good node compatibility is to make sure that if somebody runs your code or runs their node code in Deno or your other run type or whatever, It performs at least the same. and then anything on top of that great cherry on top. Perfect. but make sure the baselines is at least the same. And I think, yeah, we have very few APIs where we behave, where we, where, where like there's a significant performance degradation in Deno compared to Node. Um, and like we're actively working on these things. like Deno is not a, a, a project that's done, right? Like we have, I think at this point, like 15 or 16 or 17 engineers working on Deno, spanning across all of our different projects. And like, we have a whole team that's dedicated to performance, um, and a whole team that's dedicated node compatibility. so like these things get addressed and, and we make patch releases every week and a minor release every four weeks. so yeah, it's, it's not a standstill. It's, uh, constantly improving. What should go into the standard library? [00:42:27] Jeremy: Uh, something that kind of makes Deno stand out as it's standard library. There's a lot more in there than there is in in the node one. [00:42:38] Luca: Mm-hmm. [00:42:39] Jeremy: Uh, I wonder if you could speak to how you make decisions on what should go into it. [00:42:46] Luca: Yeah, so early on it was easier. Early on, the, the decision making process was essentially, is this something that a top 100 or top 1000 NPM library implements? And if it is, let's include it. and the decision making is still short of based on that. But right now we've already implemented most of the low hanging fruit. So things that we implement now are, have, have discussion around them whether we should implement them. And we have a process where, well we have a whole team of engineers on our side and we also have community members that, that will review prs and, and, and make comments. 
Open issues and, and review those issues, to sort of discuss the pros and cons of adding any certain new api. And sometimes it's also that somebody opens an issue that's like, I want, for example, I want an API to, to concatenate two unit data arrays together, which is something you can really easily do node with buffer dot con cat, like the scary buffer thing. and there's no standards way of doing that right now. So we have to have a little utility function that does that. But in parallel, we're thinking about, okay, how do we propose, an addition to the web standards now that makes it easy to concatenate iterates in the web standards, right? yeah, there's a lot to it. Um, but it's, it's really, um, it's all open, like all of our, all of our discussions for, for, additions to the standard library and things like that. It's all, all, uh, public on GitHub and the GitHub issues and GitHub discussions and GitHub prs. Um, so yeah, that's, that's where we do that. [00:44:18] Jeremy: Yeah, cuz to give an example, I was a little surprised to see that there is support for markdown front matter built into the standard library. But when you describe it as we look at the top a hundred thousand packages, are people looking at markdown? Are they looking at front matter? I, I'm sure there's a fair amount that are so that that makes sense. [00:44:41] Luca: Yeah, like it sometimes, like that one specifically was driven by, like, our team was just building a lot of like little blog pages and things like that. And every time it was either you roll your own front matter part or you look for one, which has like a subtle bug here and the other one has a subtle bug there and really not satisfactory with any of them. So, we, we roll that into the standard library. We add good test coverage for it good, add good documentation for it, and then it's like just a resource that people can rely on. Um, and you don't, you then don't have to make the choice of like, do I use this library to do my front meta parsing or the other library? No, you just use the one that's in the standard library. It's, it's also part of this like user experience thing, right? Like it's just a much nicer user experience, not having to make a choice, about stuff like that. Like completely inconsequential stuff. Like which library do we use to do front matter parsing? (laughs) [00:45:32] Jeremy: yeah. I mean, I think when, when that stuff is not there, then I think the temptation is to go, okay, let me see what node modules there are that will let me parse the front matter. Right. And then it, it sounds like probably ideally you want people to lean more on what's either in the standard library or what's native to the Deno ecosystem. Yeah. [00:46:00] Luca: Yeah. Like the, the, one of the big benefits is that the Deno Standard Library is implemented on top of web standards, right? Like it's, it's implemented on top of these standard APIs. so for example, there's node front matter libraries which do not run in the browser because the browser does not have the buffer global. maybe it's a nice library to do front matter pricing with, but. , you choose it and then three days later you decide that actually this code also needs to run in the browser, and then you need to go switch your front matter library. Um, so, so those are also kind of reasons why we may include something in Strand Library, like maybe there's even really good module already to do something. 
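The Uint8Array concatenation example mentioned above ends up looking like this with the standard library utility; the module path and signature shown are from std around 0.177 and may differ in other versions.

```typescript
import { concat } from "https://deno.land/std@0.177.0/bytes/concat.ts";

const a = new TextEncoder().encode("foo");
const b = new TextEncoder().encode("bar");

// The web platform has no one-liner for this yet, so std provides one.
const joined: Uint8Array = concat(a, b);
console.log(new TextDecoder().decode(joined)); // "foobar"
```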
Um, but if there's certain reliance on specific node features that, um, we would like that library to also be compatible with, with, with web standards, we'll, uh, we might include in the standard library, like for example, YAML Parser, um, or the YAML Parser in the standard library is, is a fork of, uh, of the node YAML module. and it's, it's essentially that, but cleaned up and, and made to use more standard APIs rather than, um, node built-ins. [00:47:00] Jeremy: Yeah, it kind of reminds me a little bit of when you're writing a front end application, sometimes you'll use node packages to do certain things and they won't work unless you have a compatibility shim where the browser can make use of certain node APIs. And if you use the APIs that are built into the browser already, then you won't, you won't need to deal with that sort of thing. [00:47:26] Luca: Yeah. Also like less Bundled size, right? Like if you don't have to shim that, that's less, less code you have to ship to the client. WebAssembly use cases [00:47:33] Jeremy: Another thing I've seen with Deno is it supports running web assembly. [00:47:40] Luca: Mm-hmm. [00:47:40] Jeremy: So you can export functions and call them from type script. I was curious if you've seen practical uses of this in production within the context of Deno. [00:47:53] Luca: Yeah. there's actually a Bunch of, of really practical use cases, so probably the most executed bit of web assembly inside of Deno right now is actually yes, build like, yes, build has a web assembly, build like yeses. Build is something that's written and go. You have the choice of either running. Um, natively in machine code as, as like an ELF process on, on Linux or on on Windows or whatever. Or you can use the web assembly build and then it runs in web assembly. And the web assembly build is maybe 50% slower than the, uh, native build, but that is still significantly faster than roll up or, or, or, or I don't know, whatever else people use nowadays to do JavaScript Bun, I don't know. I, I just use es build always, um, So, um, for example, the Deno website, is running on Deno Deploy. And Deno Deploy does not allow you to run Subprocesses because it's, it's like this edge run time, which, uh, has certain security permissions that it's, that are not granted, one of them being sub-processes. So it needs to execute ES build. And the way it executes es build is by running them inside a web assembly. Um, because web assembly is secure, web assembly is, is something which is part of the JavaScript sandbox. It's inside the JavaScript sandbox. It doesn't poke any holes out. Um, so it's, it's able to run within, within like very strict security context. . Um, and then other examples are, I don't know, you want to have a HTML sanitizer, which is actually built on the real HTML par in a browser. we, we have an hdml sanitizer called com or, uh, ammonia, I don't remember. There's, there's an HTML sanitizer library on denoland slash x, which is built on the html parser from Firefox. Uh, which like ensures essentially that your html, like if you do HTML sanitization, you need to make sure your HTML par is correct, because if it's not, you might like, your browser might parse some HTML one way and your sanitizer pauses it another way and then it doesn't sanitize everything correctly. Um, so there's this like the Firefox HTML parser compiled to web assembly. Um, you can use that to. HTML sanitization, or the Deno documentation generation tool, for example. 
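For the front matter case discussed earlier, usage of the standard library parser looks roughly like this; the import path and return shape are taken from std around 0.177, so check the version you actually pin.

```typescript
import { extract } from "https://deno.land/std@0.177.0/encoding/front_matter/yaml.ts";

const page = `---
title: Hello
draft: true
---
Body text goes here.`;

// extract() splits the YAML front matter from the markdown body.
const { attrs, body } = extract<{ title: string; draft: boolean }>(page);
console.log(attrs.title); // "Hello"
console.log(body.trim()); // "Body text goes here."
```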
Uh, Deno Doc, there's a web assembly build for it that allows you to programmatically, like generate documentation for, for your TypeScript modules. Um, yeah, and, and also like, you know, deno fmt is available as a WebAssembly module for programmatic access, and a bunch of other internal Deno programs as well. Like, or, uh, like components, not programs.
[00:50:20] Jeremy: What are some of the current limitations of web assembly in Deno? For, for example, from web assembly, can I make HTTP requests? Can I read files? That sort of thing.
[00:50:34] Luca: Mm-hmm. Yeah. So web assembly, like when you spawn web assembly, um, they're called instances, WebAssembly instances. It runs inside of the same VM, like the same V8 isolate is what they're called, but it does not have IO; it's like a completely fresh sandbox, sort of, in the sense that I told you that, between a runtime and like an engine, an engine essentially implements no IO calls, right? And a runtime does, like a runtime pokes holes into the, the, the engine. web assembly by default works the same way, in that there are no holes poked into its sandbox. So you have to explicitly poke some holes. Uh, if you want to do HTTP calls, for example, when, when you create a web assembly instance, it gives you, or you can give it something called imports, uh, which are essentially JavaScript function bindings, which you can call from within the web assembly. And you can use those function bindings to do anything you can from JavaScript. You just have to pass them through explicitly. Yeah. Depending on how you write your web assembly, like if you write it in Rust, for example, the tooling is very nice and you can just call some JavaScript code from your Rust, and then the build system will automatically make sure that the right function bindings are passed through with the right names. And like, you don't have to deal with anything. and if you're writing Go, it's slightly more complicated. And if you're writing like raw web assembly, like, like the web assembly text format and compiling that to a binary, then like you have to do everything yourself. Right? It's, it's sort of the difference between writing C and writing JavaScript. Like, yeah. What level of abstraction do you want? It's definitely possible though, and as for limitations, the same limitations as, as existing browsers apply. like the web assembly support in Deno is equivalent to the web assembly support in Chrome. so you can do, uh, many things like multi-threading and, and stuff like that already. but especially around shared mutable memory, um, and having access to that memory from JavaScript, that's something which is a real difficulty with web assembly right now. yeah, growing web assembly memory is also rather difficult right now. There's, there's a, there's a couple inherent limitations right now with web assembly itself. Um, but those, those will be worked out over time. And, and Deno is like very up to date with the version of, of the standard it, it implements, um, through V8. Like we're, we're, we're up to date with Chrome Beta essentially all the time. So, um, yeah. Any, anything you see in, in, in Chrome Beta is gonna be in Deno already.
Deno Deploy
[00:52:58] Jeremy: So you talked a little bit about this before, the Deno team, they have their own hosting platform called Deno Deploy. So I wonder if you could explain what that is.
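Before the conversation moves on to Deno Deploy, here is a rough sketch of the explicit import bindings Luca just described. The module path, the env.log_number binding, and the exported run function are hypothetical names for the example; nothing is reachable from inside the instance unless you pass it in.

  // A JavaScript function binding the wasm code can call; it could just as well wrap fetch or file reads.
  const imports = {
    env: {
      log_number: (n: number) => console.log("wasm says:", n),
    },
  };

  // Compile and instantiate the module, explicitly poking just this one hole into its sandbox.
  const bytes = await Deno.readFile("./example.wasm"); // assumes --allow-read
  const { instance } = await WebAssembly.instantiate(bytes, imports);

  // Call an exported function; it can only reach the outside world through the imports above.
  (instance.exports.run as () => void)();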
[00:53:12] Luca: Yeah, so Deno has this really nice, this really nice concept of permissions which allow you to, sorry, I'm gonna start somewhere slightly, slightly unrelated. Maybe it sounds like it's unrelated, but you'll see in a second. It's not unrelated. Um, Deno has this really nice permission system which allows you to sandbox Deno programs to only allow them to do certain operations. For example, in Deno, by default, if you try to open a file, it'll error out and say you don't have read permissions to read this file. And then what you do is you specify --allow-read. Um, you can either specify allow read on its own, and then it'll grant read access to the entire file system, or you can explicitly specify files or folders or any number of things. Same goes for write permissions, same goes for network permissions. Um, same goes for running subprocesses, all these kinds of things. And by limiting your permissions just a little bit. Like, for example, by just disabling sub-processes and the foreign function interface, but allowing everything else, allowing reads and allowing network access and all that kind of stuff. we can run Deno programs in a way that is significantly more cost effective to you as the end user than, and, and like we can cold start them much faster than, like you may be able to with a, with a more conventional container based, uh, system. So what, what do you, what Deno Deploy is, is a way to run JavaScript or Deno code, on our data centers all across the world with very little latency. like you can write some JavaScript code which serves HTTP requests, deploy that to our platform, and then we'll make sure to spin that code up all across the world and have your users be able to access it through some URL or, or, or some, um, custom domain or something like that. and this is some, this is very similar to Cloudflare Workers, for example. Um, and it's like Netlify Edge Functions is built on top of Deno Deploy. Like Netlify Edge Functions is implemented on top of Deno Deploy, um, through our sub hosting product. yeah, essentially Deno Deploy is, is, um, yeah, a cloud hosting service for JavaScript, um, which allows you to execute arbitrary JavaScript. and there there's a couple, like different directions we're going there. One is like more end user focused, where like you link your GitHub repository and, like, we'll, we'll have a nice experience like you do with Netlify and Vercel, where like your commits automatically get deployed and you get preview deployments and all that kind of thing. for your backend code though, rather than for your front end websites. Although you could also write front-end websites and you know, obviously, and the other direction is more like business focused. Like you're writing a SaaS application and you want to allow the user to customize the checkout. Like, you're writing a SaaS application that provides users with the ability to write their own online store. Um, and you want to give them some ability to customize the checkout experience in some way. So you give them a little like text editor that they can type some JavaScript into. And then when, when your SaaS application needs to hit this code path, it sends a request to us with the code, we'll execute that code for you in a secure way. In a secure sandbox. You can like tell us, this code only has access to like my API server and no other networks, to like prevent data exfiltration, for example.
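As a quick illustration of those flags (the hostnames, paths, and handler below are made up, and Deno.serve assumes a reasonably recent Deno version), a locked-down invocation might look like this:

  // server.ts: a handler that may only reach one allow-listed API host.
  // Run with, for example:
  //   deno run --allow-net=api.internal.example --allow-read=./templates server.ts
  Deno.serve(async (_req: Request) => {
    // Allowed: the explicitly allow-listed host.
    const orders = await fetch("https://api.internal.example/orders").then((r) => r.json());
    // Anything else (other hosts, writes, subprocesses, FFI) fails with a permission error.
    return new Response(JSON.stringify(orders), {
      headers: { "content-type": "application/json" },
    });
  });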
and then you do, you can have all this like super customizable code inside of your, your SaaS application without having to deal with any of the operational complexities of scaling arbitrary code execution, or even just doing arbitrary code execution, right? Like it's, this is a very difficult problem, so give it to someone else, we deal with it, and you just get the benefits. yeah, that's Deno Deploy, and it's built by the same team that builds the Deno CLI. So, um, all the, all of your favorite, like Deno CLI, or, or Deno APIs are available in there. It's just as web standard as Deno, like you have fetch available, you have Blob available, you have Web Crypto available, that kind of thing. yeah.
Running code in V8 isolates
[00:56:58] Jeremy: So when someone ships you their, their code and you run it, you mentioned that the, the cold start time is very low. Um, how, how is the code being run? Are people getting their own process? It sounds like it's not, uh, using containers. I wonder if you could explain a little bit about how that works.
[00:57:20] Luca: Yeah, yeah, I can, I can give a high level overview of how it works. So, the way it works is that we essentially have a pool of, of Deno processes ready. Well, it's not quite Deno processes, it's not the same Deno CLI that you download. It's like a modified version of the Deno CLI based on the same infrastructure, that we have spun up across all of our different regions across the world, uh, across all of our different data centers. And then when we get a request, we'll route that request, um, the first time we get a request for that deployment (we call them deployments, that like code, right?), we'll take one of these idle Deno processes and we'll assign that code to run in that process, and then that process can go serve the requests. and these processes, they're, they're, they're isolated; it's essentially a V8 isolate. Um, and it's a very, very slim, it's like, it's a much, much, much slimmer version of the Deno CLI essentially. Uh, where the only thing it can do is JavaScript execution and like, it can't even execute TypeScript, for example; TypeScript we pre-process up front to make the cold start faster. and then what we do is if you don't get a request for some amount of time, we'll, uh, spin down that, um, that isolate and, uh, we'll spin up a new idle one in its place. And then, um, if you get another request, I don't know, an hour later for that same deployment, we'll assign it to a new isolate. And yeah, that's a cold start, right? Uh, if you have an isolate which receives, or a, a deployment rather, which receives a bunch of traffic, like let's say you receive a hundred requests per second, we can send a bunch of that traffic to the same isolate. Um, and we'll make sure that if, that one isolate isn't able to handle that load, we'll spin it out over multiple isolates and we'll, we'll sort of load balance for you. Um, and we'll make sure to always send to the, to the point of presence that's closest to, to the user making the request. So they get very minimal latency. and we've got these like layers of load balancing in place. I'm glossing over a bunch of like security related things here about how these, these processes are actually isolated and how we monitor to ensure that you don't break out of these processes. And for example, Deno Deploy does, it looks like you have a file system cuz you can read files from the file system.
But in reality, Deno Deploy does not have a file system. Like the file system is a global virtual file system. which is, is, uh, yeah, implemented completely differently than it is in the Deno CLI. But as an end user you don't have to care about that, because the only thing you care about is that it has the exact same API as the Deno CLI, and you can run your code locally and if it works there, it's also gonna work in Deploy. yeah, so that's, that's, that's kind of the high level of Deno Deploy. If, if any of this sounds interesting to anyone, by the way, uh, we're like very actively hiring on, on Deno Deploy. I happen to be the, the tech lead for, for the Deno Deploy product. So I'm, I'm always looking for engineers, to, to join our ranks and, and build cool distributed systems. Deno.com/jobs.
[01:00:15] Jeremy: for people who aren't familiar with the isolates, are these each run in their own processes, or do you have a single process and that has a whole bunch of isolates inside it?
[01:00:28] Luca: in, in the general case, you can say that we run, uh, one isolate per process. but there's many asterisks on that. Um, because, it's, it's very complicated. I'll just say it's very complicated. Uh, in, in the general case though, it's, it's one isolate per process. Yeah.
Configuring permissions
[01:00:45] Jeremy: And then you touched a little bit on the permissions system. Like you gave the example of somebody could have a website where they let their users give them code to execute. how does it look in terms of specifying what permissions people have? Like, is that a configuration file? Are those flags you pass in? What, what does that look like?
[01:01:08] Luca: Yeah. So, so that product is called sub hosting. It's, um, slightly different from our end user platform. Um, it's essentially a service that allows you to, like, you email us, we'll, um, onboard you, and then what you can do is you can send HTTP requests to a certain endpoint with an authentication token and a reference to some code to execute. And then what we'll do is, we'll, um, when we receive that HTTP request, we'll fetch the code, spin up an isolate, execute the code, serve the request, return you the response, um, and then we'll pipe logs to you and, and stuff like that. and the, and, and part of that is also when we, when we pull the, um, the, the code to spin up the isolate, that code doesn't just include the code that we're executing, but also includes things like permissions, and, and various other, we call this isolate configuration. Um, you can inspect, this is all public. we have public docs for this at Deno.com/subhosting. I think. Yes, Deno.com/subhosting.
[01:02:08] Jeremy: And is that built on top of something that's a part of the public Deno project, the open source part? Or is this specific to this sub hosting

devtools.fm
Tobias Koppers - TurboPack, Webpack

devtools.fm

Play Episode Listen Later Feb 3, 2023 48:09 Transcription Available


This week we're joined by Tobias Koppers, the creator of Webpack, and now TurboPack. We talk about the origin of Webpack, maintaining Webpack, and what's next for JavaScript bundling. TurboPack is a new bundler from Vercel, and it's built on a completely new architecture with a familiar API. Join us as we dive deep into the future of bundling.
https://turbo.build/pack
https://webpack.js.org
https://twitter.com/wsokra
https://github.com/sokra
Join our patreon for the full episode.
Tooltips
Want to hear us talk about our tooltips? Join our patreon!
Andrew
https://maath.pmnd.rs
https://github.com/vercel/satori
Justin
https://github.com/julusian/node-elgato-stream-deck
https://progrium.com/blog/
Tobias
https://github.com/salsa-rs/salsa
https://v8.dev/blog

Ethereum Daily - Crypto News Briefing
ERC-4337 Bundler Compatibility Test Suite

Ethereum Daily - Crypto News Briefing

Play Episode Listen Later Dec 30, 2022 4:30


The ERC-4337 team announces two new projects, DeBank adds sybil address tags, zkSync completes its second security audit, and Nethermind releases v1.15.0. Newsletter: https://ethdaily.substack.com

Classic Ghost Stories
The Ghost of Jerry Bundler by W W Jacobs

Classic Ghost Stories

Play Episode Listen Later Dec 16, 2022 35:36


The Ghost of Jerry Bundler by W. W. Jacobs is Jacobs's second most famous supernatural fiction short story after The Monkey's Paw. It's a Christmas ghost story set in the bar of an old coaching inn in an English country town just a few days short of Christmas. A group of travellers find themselves having to stay over Christmas at the haunted inn and begin to entertain and ultimately terrify themselves. A spooky little story of Christmas ghosts, with a twist at the end that deserves its place on any podcast that reads out classic horror audiobooks.
Check out my Bandcamp site: https://theclassicghoststoriespodcast.bandcamp.com/
Remember I have members only stories too!
New Patreon Request
Buzzsprout - Let's get your podcast launched! Start for FREE
Support the show
Visit us here: www.ghostpod.org
Buy me a coffee if you're glad I do this: https://ko-fi.com/tonywalker
If you really want to help me, become a Patreon: https://www.patreon.com/barcud
Music by The Heartwood Institute: https://bit.ly/somecomeback

Women in Venture Capital
A Conversation with Jazmin Medina, Principal @ NewView Capital | CapTable Coalition | Bundler TV | Goldman Sachs | MBA @ Harvard Business School

Women in Venture Capital

Play Episode Listen Later Oct 3, 2022 26:13


In this episode, we talk to Jazmin Medina about her interest in VC and startups during business school and why she chose an operator role beforehand. We also touch on some of the aspects of the industry that she finds attractive, including the ability to have a high level of ownership, lifelong learning, and the privilege of working alongside CEOs. Jazmin further talks to us about trends she's excited about and The Cap Table Coalition, elaborating on the organization's broader mission to increase the number of startups, founders, board managers, and other positions in power from underrepresented groups. 

Codefol.io
With Ross Kaffenberger: Teaching, WebPacker and Paradigms

Codefol.io

Play Episode Listen Later Jul 25, 2022 68:12


Oh man, my audio quality is AWFUL here. Luckily Ross's is better and he's great at carrying the conversation! We talk about how Ross "cheats" both to get into teaching and to get into tech, and about some overlap between the two -- we talk about Seymour Papert, of course. Later we get into different paradigms of programming and what you learn from them, as well as the balance between being a generalist and a specialist. Ross has done a lot with WebPacker -- WebPacker and the asset pipeline are a lot like Bundler as a way to control the Wild West of dependency management. For show notes and links, see: http://justtheusefulbits.com/jtub/ross-kaffenberger-teaching-webpacker-and-paradigms/

RWpod - подкаст про мир Ruby и Web технологии
Episode 04 of Season 10. Bundler v2.3, Rpush, Que, Spree Commerce 4.4, Nokogiri-ext, Chroma.js, Ngraph.path and more

RWpod - подкаст про мир Ruby и Web технологии

Play Episode Listen Later Jan 30, 2022 52:22


Good day, dear listeners. We present a new episode of the RWpod podcast. In this episode:
Ruby
Bundler v2.3: Locking the version of Bundler itself
Implementing cursor-based pagination
Build concurrency control in Sidekiq
Using entropy for user-friendly strong passwords
Rpush - the push notification service for Ruby
Que - a Ruby job queue that uses PostgreSQL's advisory locks for speed and reliability
Spree Commerce 4.4 is even more composable and customizable
Nokogiri-ext - useful extensions to nokogiri
Ruby Is For Fun (book)
Web
Node.js will include support for fetch in their next release
A pipe operator for JavaScript: introduction and use cases
I'm porting tsc to Go
How React server components work: an in-depth guide
Show a browser picker for date, time, color, and files
Chroma.js - a small-ish zero-dependency JavaScript library for all kinds of color conversions and color scales
Ngraph.path - fast path finding for arbitrary graphs
Lightence - React-powered 100% FREE Admin Dashboard Template for building rich user interfaces significantly faster
Semi-UI - a modern, comprehensive, flexible design system and UI library
RWpod Cafe 28 (05.02.2022) - collecting and voting on news topics

EnCrypted: The Classic Horror Podcast
An EnCrypted Christmas: "Jerry Bundler" by W.W. Jacobs

EnCrypted: The Classic Horror Podcast

Play Episode Listen Later Dec 4, 2021 22:18


A winter's night at the Old Boar's Head in Torchester, and a party of men delight in telling spooky stories to each other. But when one old gentleman shares the story of the hanged highwayman who supposedly haunts the place they all become more than a little spooked... This is an audio presentation (with music and sound effects) of "Jerry Bundler" by W.W. Jacobs (1897) - part of this month's selection of Christmas ghost stories.

Rish Outcast
Rish Outcast 207: Jerry Bundler

Rish Outcast

Play Episode Listen Later Oct 12, 2021


Rish presents W.W. Jacobs's 1908 ghost story "Jerry Bundler."
Note: due to circumstances beyond your control, I've had to bump this episode up in place of the one I keep having to delay.
To download the episode, Right-Click HERE.
To support me on Patreon, click HERE.
Logo by Gino "Hairy Chundler" Moretto.

RWpod - подкаст про мир Ruby и Web технологии
Episode 01 of Season 07. Ruby 2.6.0 Released, Bundler 2, TensorStream, How To Learn CSS, FBT, Readlint, Bandersnatch Life and more

RWpod - подкаст про мир Ruby и Web технологии

Play Episode Listen Later Jan 6, 2019 36:31


Good day, dear listeners. We present a new episode of the RWpod podcast. In this episode:
Ruby
Ruby 2.6.0 Released, Comprehensive Ruby 2.6 changelog, An update on the Bundler 2 release and Active Admin Tips and Performance Optimizations for Rails Apps
TensorStream: Bringing Machine Learning to Ruby, Parallelising ETL workflows with the Jongleur gem and Programming Crypto Collectibles Step-by-Step Book / Guide (video about CryptoKitties)
Web
Top JavaScript Frameworks and Topics to Learn in 2019, Simple CSS Animation Tutorial and How To Learn CSS
Static Site Boilerplate, FBT - an internationalization framework, Readlint - lint all of the code examples in your README documentation using shared configs, Omniclone - an isomorphic and configurable javascript function for object deep cloning and Bandersnatch Life is an interactive website for the movie Black Mirror: Bandersnatch by Netflix

The Frontside Podcast
115: Testing Issues and BigTest Solutions

The Frontside Podcast

Play Episode Listen Later Nov 29, 2018 50:07


In this internal episode, Charles and Wil talk about testing issues and BigTest solutions. Pieces of the testing story are discussed, such as the start and launch application, component setup and teardown, interacting with the application and component, convergent assertions, and network. Then they talk about testing issues: the fact that cross browser and device-simulated browsers are not good enough, maintainability and when and when not to DRY (RYE), slowness and why (acceptance) testing is slow, portability and why tests are coupled to the framework, and reliability. Finally, they talk about BigTest solutions: @bigtest/cli to start / launch (Karma recommended for now) @bigtest/react, @bigtest/vue, etc for setup & teardown @bigtest/interactor for interactions @bigtest/convergence for assertions @bigtest/network in the future (Mirage recommended for now) Resources: Justin Searls – Please don't mock me This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC. Transcript: CHARLES: Hello, everybody and welcome to The Frontside Podcast, Episode 115. My name is Charles Lowell, this episode's host and a developer here at the Frontside. With me today to talk some shop is Mr Wil Wilsman. WIL: Hello. CHARLES: Hello, Wil. WIL: How's it going? CHARLES: It's going good. I'm actually pretty excited to get to jump into this topic because we're going to be talking about some of the big things that are happening at Frontside and some of the things that we've been developing in almost for the last year. WIL: Yeah. It's been about a year now. CHARLES: It's been about a year and we've talked about it in various podcast but we're going to be talking about it again because there's just been so much progress that we've made, I think in a lot of clarity in kind of what we're going for here when we talk about BigTest and testing big and how we want to roll out the BigTest framework. We just have a lot more experience using it on a number of different projects, so we get to talk about that today. Before we get started, I just wanted to talk a little bit about what BigTest is, both in terms of the framework and also the philosophy. Wil, you're the one who works the most on BigTest. When you think about philosophically, what does BigTest mean to you? WIL: It's the size of your test, not a physical size like size and storage but how much your task actually does. The test itself can be very small as our test are but it tests the whole application from the user interacting with it down to the network requests. That's the definition of the philosophy of a BigTest to me. It's to tests your application from the biggest point of view. CHARLES: Actually, achieving that can be surprisingly difficult, especially in a frontend JavaScript application and there are a lot of solutions out there for testing and we've talked about them. One of the questions that arises is when we talk about BigTest, what exactly are we talking about? Are we talking about a product that you can download and install? Are we talking about the philosophy that you just outlined? Or are we talking about the individual pieces of software that make that philosophy real? I think the answer is we're kind of talking about all three but we want to take this episode to talk about where we're going with the product. What we've identified is the subcomponent pieces of that product. In other words, in order to get started testing big, what are the things that you need to think about? What are the things that you need to do? 
And then what are the component pieces? Because one of the things that I think is very important to us is that you be able to arrive at wherever you are in your project, whatever framework you are using, whatever current testing solution and be able to begin using BigTest. That means, you might be using some of it or you might be using a lot of it but we want to meet you exactly where you are, so that you can then, get onboarded and start testing big. WIL: Yeah. Definitely an important distinction that we get confusion about is what is BigTests and people just assume like this whole test suite is BigTest but we used the parts of it ourselves like we use Mocha, which is not part of BigTest. We use Chai, which is not part of BigTest. We use Mirage which is kind of part of BigTest but definitely it originate in BigTest and Karma and things like that. BigTest isn't your testing suite. It's not one thing to go-to to grab, to start writing tests. It is a small pieces that you can use in conjunction with other small pieces, just to make it really easy and flexible to test your application. CHARLES: Exactly. Because it turns out that there's a lot going on in the application. Maybe we should talk about what some of those pieces are that you might want to start using BigTest with or that you might need to test big, I guess I should say. What's a good place to start? Let's start with talking about some of the issues that you want to do when your testing big. Then we can talk about what pieces of the testing story that fit in to solve those issues. One of them is you need to test that your application works, like actually works. That means you need to be able to test on a multiplicity of browsers, for example. We're limiting to the domain of web applications. There are actually a shockingly large number of browsers. It's not just Chrome. It's not just Safari. There's Mobile Chrome, Mobile Safari, which are subtly different. There's Edge and I'm sure the Mobile Edge is slightly different too, so you want to be able to test cross browser, right? WIL: Yeah, absolutely and things like Nightmare and JS DOM and things that simulated browsers, we don't necessarily think those are the best tools for writing BigTest because we want to ensure that those browser quirks are caught and tested as well. CHARLES: This is not theoretical like sometimes you'll have a syntax, like the parser is slightly different and you have something that throws a syntax error in Safari or in the Internet Explorer and your whole app is completely busted. If you just take in the time, just even trying to load the app in that browser, you would have caught that. That's what I've been on many times. WIL: Yeah and what I just saw came up yesterday, which comes up frequently is not closing your CSS Selector and Chrome doesn't really care like web to browsers don't care too much but that will fail in Edge and depending on what you're missing, the failing is part of that too but mostly, Firefox and Chrome don't care about that kind of thing. CHARLES: Right. It seems like the majority of testing solutions are kind of focused around Headless Chrome or some variation of Electron. That entire class of really dumb errors has already been caught. Like I said, to actually catch it, it takes less than a millisecond of CPU time just to load it onto the browser and see that thing doesn't work. Unfortunately, they can be catastrophic errors but the problem is how do you actually do. We want to test like cross browser. 
This is something that we want to do. For me, I just can't imagine shipping an application without having some form of cross browser testing, some capability of being able to say, "I want to test it," like, "We want to work on these eight browsers and so we're going to test it on these eight browsers," but how do you actually go about doing that? WIL: Right now, we are working on the BigTest CLI which will help us launch browsers but that's not complete yet. It has some bugs on. For the meantime we've been using Karma, which is great. Basically, you just have this service that's able to find the browser binary on the system and just launch them pointing to local hosts with your app loaded up and your normal development server take care of loading the test up and running the test. Karma and the BigTest CLI is just there to capture output and launch those separate browsers. CHARLES: Yeah. I remember when I was first using working with Karma and I think Testim is another tool that's in this space. There's Testim, Karma and BigTest actually is we're developing a launcher because launching is something that you're going to need but it's such a weird problem. I feel like with the browser launchers, there's three levels of inversion of control because you're starting a server that then starts another process, which then calls back to your server, which then loads the app resources, which then loads the tests and then runs the test. There's a lot of sleight of hand that has to happen and – WIL: Including injecting the adapter that you use, like the Mocha adapter, the Jasmine adapter that ends up reporting back to the CLI. That's something that Karma and Testim and BigTest will handle for you. CHARLES: Right, so you're fanning out the test suite to a suite of browsers then collecting the results but basically, you need some sort of agent living inside the browser that's going to act on behalf of the test suite, to collect the results. I remember when I first came into contact with Karma and Testim, I was like, "This is so unnecessarily complex," but then, having used it for a while and I think there are some complexity that can be removed but if you want to do cross browser testing, that kind of level of ping-ponging is there's a certain amount of it that just necessary. It's something that's actually quite complex that you need to have in your stack, in your toolbox, if you want to truly test big. WIL: Yeah and all the solutions is mechanisms for detecting when the browser has launched and restarting the browser based on its health check, etcetera and things like that that you wouldn't think of actually loading up a browser but you need to think of when you're doing automated testing. CHARLES: What is it that sets apart, for example the launcher solution? We kind of call this class of solutions launchers, so Testim, Karma, the BigTest CLI. What is it that sets BigTest CLI apart from say, Karma and Testim? WIL: We're trying to be as minimal config as possible and just really easy to get started and going. Karma has a lot of plugins that you need to make sure you have installed and loaded in the options set for those plugins. Testim has some stuff bundled but it still requires this big config bulk at the beginning that you need to passing or that's all what you were doing. We're trying to avoid that with BigTest CLI and one of the ways that we're able to avoid that is by just letting your Bundler handle bunding the test. In Karma, you need Karma webpack or something. 
Testim has some stuff that it needs and really, we just want like an in-testing mode. When you're in the testing environment, just change your index to point at your tests, instead of your application, and your Bundler will do all the work and we just serve that file and collect the results. CHARLES: Right, so it doesn't matter if you're using Parcel or you're using webpack or you're using Ember CLI. WIL: Yeah, Rollup even. CHARLES: Or even just like low level Broccoli or Gulp or whatever. There's a preponderance of bundling solutions and that was always something that was just a huge pain in the butt with Karma. I know it's like just getting to the point where my tests are loaded, and with Testim, most of my experience with Testim comes through how it's used in Ember CLI, like the histrionics undertaken just to bundle all your test assets and your application assets and your vendor assets and just kind of bootstrap that thing. It's a lot of work. WIL: Another thing that BigTest CLI doesn't include, but Karma and Testim do, is a concept of a watcher, because all these Bundlers, you have HMR -- hot module reloading, Rollup and things like that come with plenty of plugins, Parcel has it set up out of the box, so if you're using your Bundler, your existing Bundler to bundle your tests, you get that watch feature for free, so it's another complexity that the BigTest CLI kind of eliminates. CHARLES: What it means is we've hidden most of that complexity. Just let the Bundler handle it, right? The Bundler is the part of your project that bundles. WIL: Yeah. CHARLES: You should have your launcher actually doing that for you but we still do need to have some way to do that set up and tear down. When we have that testing endpoint, we have some way to say, "We're starting a test, not the application. We're ending the test, tear it down," so how do you abstract that away? WIL: That's kind of something that we can't really avoid. It is just like some sort of dependency on the framework itself, your application framework. It's like you need to mount a React app. You need to mount an Ember app, etcetera and there's different ways to mount those things. This is one of the things that can't really be decoupled as much as everything else can but BigTest has BigTest React and BigTest Vue and we want to eventually get BigTest Ember but really, the main export of all these packages is just a simple mount helper that will mount and clean up your application for you in your testing hooks, whether you're using beforeEach from Mocha or before from something else like Jasmine. You know, no matter what you're doing, you just have a hook that mounts your application and then, cleans it up on the next mount. CHARLES: It's worth pointing out here that this is kind of a core concern of testing, and testing big is being able to mount your application and tear it down with regularity and having hooks into that process. Whether you're using BigTest or whether you're not, can you still use BigTest React and BigTest Vue, even if you weren't using anything else? WIL: Yeah, absolutely. Like I said, they just export simple mount helpers. I don't even think they have any other inner BigTest dependencies. They just have pure dependencies on their frameworks.
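To make the mount-helper idea concrete, here is a rough sketch of the teardown-before-setup pattern in React. It only illustrates the shape of what a package like BigTest React exports; the names and exact API here are illustrative, not the library's real surface.

  import type { ReactElement } from "react";
  import { createRoot, type Root } from "react-dom/client";

  let currentRoot: Root | null = null;
  let container: HTMLElement | null = null;

  // Clean up whatever the previous test mounted, then mount fresh for this one.
  // Because teardown happens at the start, the last test's app stays on screen for debugging.
  export function mount(element: ReactElement): void {
    if (currentRoot) currentRoot.unmount();
    if (container) container.remove();

    container = document.createElement("div");
    document.body.appendChild(container);
    currentRoot = createRoot(container);
    currentRoot.render(element);
  }

  // Usage in a Mocha hook:
  //   beforeEach(() => mount(<App />));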
CHARLES: Right and so, you could use it, even if you wanted to roll everything else by hand or you wanted to get started somehow and you needed to do set up and tear down, again this is something that's key to being able to test big, so you should be able to use it independently, whether you use the CLI or not, whether you're using any of the other tools or not. All of the tools can be used independently. WIL: Then another feature of the BigTest React and BigTest Vue is the tear down that happens before set up, rather than happening after your test runs, having a separate tear down. This allows it. Whether your test passes or fails, you can look at it and play with it and inspect it and debug it much easier than if you had tear down. You have to disable at tear down or throw a pause in there to keep other or something. CHARLES: Yeah, I love that. When something goes wrong, you can just let the test case run and the last test that it runs, it just leaves at set up. It does the tear down right before the set up. WIL: Exactly, yeah. At the very end of the whole test run, there's an app there waiting for you to play with. CHARLES: If you focus in on a single test, we most commonly use Mocha, so you say a '.only' to run that single focus test, then you have the state of the application at that test case set up and ready to go. You can just play with it, you can inspect it, you can actually just use it as a starting off point and interact with the app normally as you would. WIL: I want to say, Cypress does this too. They do their tear down before they're set up as well. That's how you're able to play with Cypress test. CHARLES: Yeah, I like that trick. Now, we talked about launching, setup and tear down but we haven't actually talked about much of what actually happens in the test cases themselves. We talked about how to start and launch your test suite, how to do that across a bunch of different browsers, how inside of that, you have a separate concern as applications set up and tear down and how you want to lean on how you're actual app is actually bundled because that fits in with the philosophy of testing big. You don't want to use an external Bundler for your test suite. You want to use your real Bundler, how the asset is actually going to look. But when it comes down to actually writing the tests, you need to be able to interact with at the highest level as you possibly can. When I say highest level, we want to verify that the users, when they take certain actions, we'll see certain outcomes and so, we want those outcomes and we already talked about this to be reflected in a real DOM, in a real browser. But at the same time, the real interactions, we want those to be as high fidelity as possible, so you want to be sending events to the browser. You want real mount events, real key events, real interactions. WIL: Yeah, interacting with application. That's another core philosophy that we kind of talked about earlier that defines a BigTest. It's the user interacting with your application. We're not calling methods and expecting other callbacks or arguments to be passed or clicking on a button and expecting a message to pop up that says, "Form submitted successfully." These are user-facing things were starting on and acting on. CHARLES: Yeah and then, it can be really tricky because these things don't happen synchronously. They're happening inside of your browser's event loop. 
I click that button and then it goes off and there's some loading state and then, I might get an error message that pops up this thing that animates out and then, goes away. The state of the browser is in constant flux. It's constantly changing and so, it can be very difficult to put your finger and say, I want to be in this state if you are limiting yourself to only reading from the DOM. Some frameworks, Ember for example, you have kind of a white box where you can actually inspect the state of the Ember run loop and use that to do some synchronization but it can be very, very hard to coordinate these interactions. WIL: Yeah. You know, to talk about getting to the solution as a BigTest interactor, which is basically modern page components or page objects. If you ever heard of page objects, it's just a way to encapsulate interacting with big pieces of your pages. It's not a new concept. It's been around for a while but BigTest interactor has kind of a new twist on it where they're immutable, composable interactions that are also convergent, which we'll get into later, which basically means if your buttons not there, it won't click the button until it is there. They're really powerful and they're making really easy and fun to write these tests. CHARLES: Yeah, they're super powerful. I remember we talked about convergences last time when we talked about BigTest but interactors, I think are definitely a new development. I think we should spend a little bit of time there talking about, not just the power but also the ergonomics of interactors because they are like page components or page objects, except they're scope to the component. Not only do they have all this wonderful stuff where it'll make sure that the component exists before it starts to interact with it and things like that but their composable. If I have a button, then there are certain operations that are valid for that button. I can click it. I can hover over it. I can do all these things. They're the operations that make it unique to the button. Now, those might actually map to real events. WIL: Similarly, their assertions about that button as well, like as primary is secondary. If this button is repeated throughout your application, you might want to make sure that your form has a primary and secondary button. CHARLES: Exactly. It really encapsulates all the knowledge of how you can interact with both in terms of taking action and reading state from that button. It almost feels like an accessibility API. It would be easy to write a screen reader if you had these interactors for every single component on the page. WIL: That's kind of what it is. It's just like you're defining an API around how your user would interact with your application and what your user would expect in the application. That's the point of page objects and interactors as you're defining this user API, essentially. CHARLES: Yeah and so, really the step that interactor take is that they take the classic page object and it make them composable, so I can have, you kind of touched on this before, a modal dialogue interactor, which is composed out of two button interactors. One for the primary action, one for the secondary action and maybe, it's aware of its own title text, so you can assert on the title text but I didn't actually have to write the individual button interactors for that modal dialog interactor. 
Then I might have a second modal dialog interactor or a form that's on a modal dialog just composed of the modal dialog interactors and the individual form components, which appear on that particular modal dialog. WIL: It's essentially how we've been building applications lately with components but this is for page objects in your test if you want to mirror that. You don't have to have one-to-one mappings of an interactor to a component but if you do, it's really powerful. CHARLES: Yeah. I found that when we have one-to-one interactors, that's when it just feels the best. WIL: Yeah and on top of this, if you have a component library and your component library exports the interactor that it uses for the component test, like we said, this BigTest technology, they're sprinkled also. We don't have to use interactors in big acceptance tests. We can use them for smaller component tests too, so if we ship these component interactors with the component library, your application that's consuming this component library now can test those components for free, without having to write their own interactors. It can just compose the interactors exported by the library. CHARLES: Man, I almost want you to repeat that word for word again, just so it can sink in. It's so awesome. Because when you actually go to write your tests, you're not starting from ground zero like, "How do I do this?" They're like, "I'm writing some tests for this thing and I'm using these components and so, I've already got the prepackaged interactions for those components." It's like you start writing your tests. If your tests are a 10-story building, it's like you're starting on Floor 7 and you only have to walk up to Floor 10, instead of slogging up all 10 stories. WIL: One really helpful interactor that we work within the open source stuff we've been working on is a date-picker interactor because date-pickers can be really complex. Just having that common interactor and have a date-picker on multiple forms where we can just use that one interactor, we don't have to tell every single test how to interact with that date-picker. We just say pick date and pass the date. CHARLES: Yeah, it's so awesome. That is actually a great example. It doesn't feel scary to write a test for a page that has a date-picker on it or two. If you're doing like a date range or something like that, you're like, "Oh, my God. I don't write the selectors to test this." You just import your date-picker interactor, you set the date, it actually worries about all the low level events and there you go. It feels like you're operating at a much higher level. WIL: Yeah. The interactor API essentially, you're telling me the test what the user would be doing and what the user would be seeing. CHARLES: Yeah. It's worth pointing out again. We've identified starting and launching. We've identified set up and tear down but interaction is a core concern of BigTesting, no matter what tool you're using. One of the things that we found as interactors are something that you can sprinkle on literally any test suite if you're testing an interface and it makes it better. We've used it inside big acceptance tests. We use it inside Jest, doing just little component tests. There are people in the BigTest community who have used it to basically, write component tests against a JS DOM and while theoretically, philosophically, you want to make those tests as big as you possibly can, you can use that piece in your test suite. 
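Here is a rough TypeScript sketch of the kind of scoped, composable interactor being described. It captures the idea (a button interactor reused inside a modal interactor) rather than the real @bigtest/interactor API, and the data-test selectors are made up; the real library also converges on elements before interacting, which the convergence sketch later in this conversation illustrates.

  // A button interactor scoped to an element: it knows how to click itself and report its state.
  class ButtonInteractor {
    constructor(private el: () => HTMLElement) {}
    click(): void {
      this.el().dispatchEvent(new MouseEvent("click", { bubbles: true, cancelable: true }));
    }
    get text(): string {
      return this.el().textContent ?? "";
    }
  }

  // A modal interactor composed out of two button interactors, mirroring the component tree.
  class ModalInteractor {
    constructor(private selector: string = "[data-test-modal]") {}
    private root(): HTMLElement {
      const el = document.querySelector<HTMLElement>(this.selector);
      if (!el) throw new Error(`modal not present: ${this.selector}`);
      return el;
    }
    get title(): string {
      return this.root().querySelector("[data-test-title]")?.textContent ?? "";
    }
    primary = new ButtonInteractor(() => this.root().querySelector<HTMLElement>("[data-test-primary]")!);
    secondary = new ButtonInteractor(() => this.root().querySelector<HTMLElement>("[data-test-secondary]")!);
  }

  // In a test: new ModalInteractor().primary.click();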
If you are using a simulated DOM, or if you're running in Node instead of a browser, these interactors will still work and you're going to get high fidelity test cases that are resilient to this asynchrony and are composable, and if you do have a full-fledged test suite, you can reuse these interactors. They are a really awesome power up that you can bring into your test suite. WIL: And they are not tied to the framework at all. We use them in React for our stuff but we've also written some in Ember. Robert's written some in Vue and ported some tests, and one of the beautiful things we've seen from this is that one interactor goes everywhere. You just write the interactor once and you can use it in Ember, in React, in Vue, in those test suites. If the rest of your test suite is framework agnostic, you have a test suite where you can jump frameworks and it still works and can test your application with high fidelity. CHARLES: Yeah, it's fantastic. I remember when we first tried using interactors inside an Ember test suite because Ember comes with like a big kitchen sink in testing set up but interactors just slotted right in and there's absolutely no issue. WIL: Yeah and there is actually a speed boost even, because most of the Ember test setup hooks into the Ember run loop and interactors are not. There is actually a good speed boost just using interactors. CHARLES: Yeah. This is a good point. It's a good segue because typically, we think of acceptance tests as being really slow and one of the reasons, even the people [inaudible] acceptance tests or testing big is they think like it's going to take a long time. We found that actually we've been able to maintain a happy medium of testing big but also, having those tests be really, really fast. When you say you get a speed boost from using interactors with Ember, where does that speed boost actually come from? WIL: I mentioned the Ember test setup hooks into the Ember run loop and interactors aren't, and the reason for this is because interactors are converging and they wait for things in the DOM to exist before interacting with them. Instead of waiting for the framework to settle, it just waits for the thing to appear and then interacts with it immediately. If you're asserting something about a button toward the top of the page, you don't really care that another button at the bottom of the page has rendered yet, unless of course you have an assertion about that, but if they're converging, you don't need to hook into the run loop to wait for the entire page to load, to interact with just one piece of it. CHARLES: Right. You're just waiting and you say, "I'm expecting something to happen and the moment I detect it, no matter what else is going on, the page could be taking 30 seconds to load but if that button appears and I can interact with it, I can take my action then or I can make my assertion then." It's about kind of removing gates -- artificial gates. WIL: Yeah. Another common thing that's helped with is animations: with most tests that are hooked into the run loop, you kind of have to wait for some of these animations to finish before you can even interact with the element, and that means if a modal has a half second animation where it flies in and you have 30 tests around this modal, those tests are extremely slow now because you have to wait for that modal to come in, whereas -- CHARLES: -- Straight up flaky. WIL: Yeah, straight up flaky.
Whereas in the actual DOM, that modal is inserted pretty immediately and can be interacted with pretty immediately. With interactors, they don't need to wait for the animation to finish. They can just immediately interact with that modal but of course, if you need to wait for the animation to finish, there are options for that as well. CHARLES: Yeah. If there's some fade in that needs to happen, you can kind of assert on any state and as long as it's achieved at some point, the interactor will recognize it and recognize it at the soonest possible time that it possibly could. I remember getting bitten on one project where the modal animations in particular were so brutal. Not only were they flaky, they just were slow because there was all these manual time outs. It wasn't even a paper cut. It was kind of like a knife cut, like there's someone sitting there and kind of slashing you with a pocket knife. It just was a constant source of pain in your side. WIL: Yeah and that's how you end up with things like waits and sleeps in your test suite. When you need to wait for the animation to happen or something, you just see a sleep for four seconds with a comment because we have to wait for the components to load in. That's kind of a code now. CHARLES: Yeah, that's just asking for trouble both in terms of slowness and in terms of it's going to get flaky again. That has been kind of one the most freeing things about working with interactors and working with convergent assertions on which they're based is you just don't ever have to worry about asynchrony. Really, really truly, most of the time, you're writing your tests, like it's all synchronous and that kind of makes sense because from the user's perspective, their consciousness is synchronous and they don't care about the internal run loop. It's just they were making observations in serial and at some point, they're going to observe something, so the interactor sits at that point and really observes the application the way that your user would. WIL: Yeah. We've mentioned a few times now the convergent assertions, which interactors are based on. A little caveat there if you're using interactors and you're making non-convergent assertions, they might fail or be flaky. That's because interactors wait for the thing to be there to interact with, so as soon as the buttons there, it clicks it but it doesn't wait for after that event has fired and your application has reacted to that event, that's your application is concerned. We need something there like our convergent assertions that can converge on that state and wait for that state to be true before it considers itself passing or in times out. CHARLES: Maybe we should dig a little bit into convergent assertions. I think the last time we had a public conversation on the podcast about this, this is kind of where we were, like we hadn't built the interactors, we hadn't built these other component pieces of the testing story. We were really focused on the convergent assertion. We've talked a little bit about this but I think it's worth rehashing a little bit because it's a unique way of approaching the system but it's also kind of horrifying when you see how it works under the covers. I think when we tell people about the fact that it's basically polling underneath the covers. The timeout is configurable but it's basically polling every 10 milliseconds to observe a state. 
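That polling loop is simple enough to sketch. The helper below illustrates the idea rather than the actual @bigtest/convergence API: it re-runs an assertion every 10 milliseconds until the assertion stops throwing, or rejects once a timeout elapses.

  // Re-run `assertion` until it passes; reject with its last failure if `timeout` ms go by.
  function converge(assertion: () => void, timeout = 2000, interval = 10): Promise<void> {
    const start = Date.now();
    return new Promise((resolve, reject) => {
      const tick = () => {
        try {
          assertion(); // passed: we have converged on the desired state
          resolve();
        } catch (error) {
          if (Date.now() - start >= timeout) {
            reject(error); // give up and surface the most recent failure
          } else {
            setTimeout(tick, interval); // try again shortly
          }
        }
      };
      tick();
    });
  }

  // Usage:
  //   await converge(() => {
  //     if (!document.querySelector(".flash-success")) throw new Error("not visible yet");
  //   });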
I remember the first time being confronted with this idea and I was horrified and like my programmer hackles on the back of my neck, like raised up and I was like, "Wait a minute. This is going to be slow. It's going to be computationally intensive." WIL: Yeah. That was my exact thought too because this is going to be slow. If acceptance tests are slow and we're doing an acceptance test every 10 milliseconds, it's going to be really slow and that's actually not the case completely. It's actually the opposite. They're extremely fast. CHARLES: It is shockingly fast. You've got to try it to believe how fast it is, how fast you can run acceptance tests. WIL: Yeah, talking like 100 tests in just tens of seconds. CHARLES: Right. You basically gated by how fast your framework can render. Your tests are not part of the slowness. Your test -- WIL: And also, memory leaks can be costly too. We experience that recently where we had memory leaks that were slowing down our test but we fixed those up in test and put our backup. CHARLES: Yeah, because basically, running the assertion or running the convergence is very fast. It's just a very light ping. I kind of think of it is as it is light as the brush of a photon or something that was bouncing off of a surface, so that you can observe it. It's extremely light and most of the time, it's just waiting so the test and the convergence really just gets out of the way. Just because they can run a thousand times or a hundred times in a second, it's doesn't gun it up. But the thing is it means that your tests run as fast as your application will run. You get back to the point... Was it in React where the kind of the key insight is that JavaScript is not the bottleneck? Well, your tests are not the bottleneck. WIL: Yeah. CHARLES: I guess this is what it is. I don't know if there's anything else that you want to say about convergences. WIL: No. We pretty much summed it up there and that's what interactors are based on. That's how they're able to wait for things in a DOM. It basically polls the DOM until it exists and then it moves on and actually does the interaction. CHARLES: Once again, this is actually a very low level thing on which BigTest is based but this is once again, something that you can use independently. You can write your own convergent assertions. You can write your own convergences that honestly have nothing to do with testing your assertions. It's a free standing library that you can use in your test suite or elsewhere should you choose. WIL: That doesn't need to be a DOM for BigTest convergence there. I use BigTest convergence in BigTest CLI to converge on the browser being launched. Instead of waiting for the browser to report that, I can just kind of poll and see how that process is doing and the convergence waits for that process to start before moving on. CHARLES: Right. I guess the best way I've thought about it it's a way to synchronize on observations and not on callbacks. It's a synchronization mechanism and 99% of the synchronization mechanisms that we're used to, they've involved some sort of callback, a promise, an event-listener, things like that or even a generator where control is handed back explicitly to a piece of code when something happens. Whereas, this is a fundamentally different synchronization primitive, where you are writing synchronous code that's based on observations, so what I observe this, do this. When I observe this, do this. It's extremely robust. WIL: Yeah, very. CHARLES: It is a core piece. 
A fundamental thing that on which interactors are based on, which the CLI is based, I don't know if it's core to writing tests but -- WIL: It definitely helps. CHARLES: It doesn't helps. We couldn't have BigTest interactor without that. WIL: No, definitely not. CHARLES: Because that's what makes it fast, that's what makes it not flaky at all and having those things, I think it makes it easy to maintain because you can work at the interactor level or the level of user interaction and you don't have to worry about synchronization, so the flow of your tests are very natural. WIL: Yeah. We don't have to explicitly wait for request to be done for making an assertion about your app. That'll just come with convergences, just waiting for test date in application to true. CHARLES: Let's talk about one more piece of the testing issue because when you're testing big, when you're testing in the browser, there's always the issue of what are you going to do about your API. You got to have your API running. It's just always an issue and this is kind of interesting because this sits at the crossroads of testing big and also, getting the most utility out of your test because in an ideal world, if you're testing really big, you're going to be using a real API. You're not going to poke holes in reality. WIL: Yeah. One of the things that we avoid in BigTest is poking holes. We're not shallow mounting the components and testing the methods and the results. We're fully mounting these things and fully interacting with them through the full DOM API. CHARLES: Yeah, exactly, using real browsers. It just occurred to me the irony of us talking about reality being things that are still running inside of a computer processor. I think we've inherited this term from that talk that Justin Searls at AssertJS in 2017. It's a really, really excellent talk. I think he gave it at RubyConf. It's the 'Don't mock me.' WIL: Yeah, it's one of my favorite talks. CHARLES: Yeah, it's a great talk. In it, he talks about the value of a test is a balance of how many holes you poke in reality and sometimes, you encounter a test where all it is like holes in reality. Whether you're mocking this, you're mocking that, you're mocking the DOM, you're mocking the browser, you're mocking your network layer, you're mocking this external API and the more holes you poke, the less useful it's going to be. Network is one of those where it can be very difficult to not poke holes in that reality because it's a huge part of your application. Your frontend application is how it's going to interact with the server but at the same time, servers are gigantic pieces of software themselves, each with their own dependencies, each with their own set up and tear down -- WIL: Have their own concerns. CHARLES: Yeah, exactly. They might be in a different language. They've got runtime, things like they might need external C libraries and crazy stuff like that. They're their own beast. To get a true big end-to-end test, you going to have to stand up your server but the problem that presents is you want your tests to be also isolatable. If you're a developer, I can go to a repo, I can do an install of my dependencies and I can run the tests without having to do any external dependencies other than the repository and the language in which I'm working. This is one where we kind of have tried to walk the line of not wanting to poke holes in reality but also, have the test be containable to the actual application. 
In order to do that, you need something that presents a high fidelity version of the network. You can kind of try and have your cake and eat it too. You want to have something that acts like a server and really acts like a server but it's actually not a server. WIL: And still poke as few holes as possible in the application and how that's all set up. We don't want to be intercepting methods and responding with fake data. That's not a good way to mock that network. CHARLES: Right. We want to be calling actual fetches, making actual XMLHttpRequests. Ideally, if you've got service workers, making actual service worker requests. WIL: Basically, as far as the application is concerned, it's talking to a real server. CHARLES: Yeah and that's kind of the litmus test for: is it a hole in reality or is it just a really great illusion? WIL: Yeah and that's a good name for Mirage, right? It's a really great illusion. CHARLES: Yeah. It is a simulation of reality, so we use Mirage, which is something from the Ember testing world but something that we have extracted and made available as BigTest Mirage. WIL: Yeah. The main difference just being that we've taken away the Ember dependencies and the run loop stuff. It's just plain JavaScript Mirage. It works exactly the same as you use it in Ember minus the auto imports and the file... Oh, man. I can't think of that word. Aside from automatically importing your files for your server config, you have to do that manually because Ember is what provides that, but other than that, it's the same Mirage. You define models and serializers and factories and all the good stuff. CHARLES: Right and then you can use those factories and you can use those models to really give a high fidelity server. If you are building something in whatever framework, you can use BigTest Mirage to simulate that network layer. Again, we've used it in a number of different scenarios but having that in place means that you're going to be able to have those high fidelity tests where your application is actually making XMLHttpRequests but it's all isolatable, so that it can be run in the repo. This isn't really related to testing but it has a fantastic capability where you can prepopulate, you can use the factories to prepopulate your server with data, so that you can use the application without the actual server being implemented. WIL: Yeah. That's extremely powerful. That's what we were talking about earlier and getting at with the scenarios, which are setting up specific, essentially fixtures, but you're generating these fixtures. Factories are essentially high-level fixtures, network fixtures. CHARLES: Yeah, higher-order fixtures. WIL: Yeah, so the scenarios are just setting up these fixtures for a scenario of your application, like the backend is down or the list only responds with two items as opposed to 5000 items, something like that. You want to be able to not only test these things but be able to develop against them, and Mirage makes that really easy because you can just start your app with Mirage enabled, point it to that scenario and you're there. You have that exact scenario to develop in. CHARLES: If you've never used Mirage, it is really hard to understand just how incredibly powerful it can be. We've used it now on at least four projects, where we did develop the entire first version of the product without any backend whatsoever. It's an incredible product development tool, even apart from testing, that then informs the shape of what the API was going to be. 
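As a rough sketch of the idea rather than the real Mirage or BigTest Mirage API: a factory stamps out realistic records, a scenario prepopulates an in-memory database, and the browser's own fetch is intercepted so the application still issues genuine requests against simulated data. Every name here (buildPost, db, routes) is made up for illustration.

```javascript
// Hypothetical factory: builds realistic-looking records on demand.
let nextId = 1;
function buildPost(overrides = {}) {
  const id = nextId++;
  return { id, title: `Post ${id}`, body: "Lorem ipsum", ...overrides };
}

// A "scenario": prepopulate the simulated database before the app boots.
const db = { posts: Array.from({ length: 5 }, () => buildPost()) };

// Route table consulted by the intercepted fetch; unmatched requests fall
// through to the real network untouched.
const routes = {
  "GET /api/posts": () => ({ status: 200, body: { posts: db.posts } }),
};

const realFetch = window.fetch.bind(window);
window.fetch = async (url, options = {}) => {
  const path = new URL(url, window.location.origin).pathname;
  const handler = routes[`${options.method || "GET"} ${path}`];
  if (!handler) return realFetch(url, options);
  const { status, body } = handler();
  return new Response(JSON.stringify(body), {
    status,
    headers: { "Content-Type": "application/json" },
  });
};
```

The real Mirage intercepts at the XMLHttpRequest level and layers models, serializers, and factories on top of an in-memory ORM; the point of this sketch is only that the application keeps making ordinary requests while the data underneath is simulated.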
I know we've talked about this on the podcast before but it's really an incredible technology and it is available to you no matter what framework you're using. I think it's one of the best kept secrets in JavaScript development. WIL: Yeah. That's definitely great. That said, it does have some shortcomings. It's great but it can be a little slow sometimes, so we are eventually working on a BigTest network, like another piece of the BigTest pie that you'll be able to sprinkle into your application, but in the meantime, praise Mirage. CHARLES: Yeah. We are going to be offering an alternative or maybe collaborating on another version of Mirage, but hopefully we can make Mirage faster, we will be able to make this thing faster, so that it can use service workers and be used in a bunch of different scenarios. Just to recap, we've talked about a lot of different components, but over the past year, a couple of years, these are the things that we've identified as being really key components, a big part of your acceptance testing and really your testing stack. How are you going to start and launch these things? How are you going to set them up and tear them down? How are you going to interact with the application as a user, both in terms of making assertions and taking action on behalf of the user, and still have it be maintainable, have it be resistant to flakiness, have it be performant? BigTest is the answer to that for those particular areas of the testing story and so, in some places we're using existing components: we use Karma, we use Mirage to date. Those, we did not develop, but where we see key pieces of that puzzle missing is where we started writing the BigTest solutions, so things like the interactor. Eventually, we are going to make BigTest into a product that you're going to be able to use kind of out of the box, just like you might install Cypress, where it's a very quick setup and we make all of the decisions about the components for you. But in the meantime, we're really trying to take our time, identify those pieces of the puzzle and build the software component that fits each piece of the puzzle the absolute best, so that when they're polished, we can use them in a more comprehensive product. Things like convergence, things like interactor, things like BigTest React, BigTest Vue and very soon, BigTest Ember. These are things that you can use today, to make your tests just that much bigger and that much better, especially interactor. It's been an incredible journey this past year as we've developed these individual pieces and there's just going to be more goodness to come. WIL: Absolutely. Right now, I'm working on some validation-type API for interactor that I'm hoping to land soon. That'll open up the possibility of maybe hiding away those convergent assertions a bit more in your tests and just handling them automatically. It'll be pretty good. CHARLES: It's really exciting. Writing tests has gotten easier and easier, and more and more fun, over the last year for us. I think we're already starting in a pretty good place. If you have any questions about BigTest, how would folks get in touch with us? WIL: We have a BigTest Gitter channel. You can find a link to that on the BigTest website: BigTestJS.io. Just ask us questions on Gitter and we'll try to answer them. CHARLES: And as always, you can ask us directly. 
You can send email to Contact@Frontside.io or reach out to us on Twitter at @TheFrontside or you can actually reach out to the BigTestJS Twitter account directly and just call us on Twitter at @BigTestJS. Thank you very much, Wil. WIL: Thank you, Charles.

RWpod - a podcast about the world of Ruby and Web technologies
Episode 44 of Season 06. Bundler 2.0, reCAPTCHA v3, the Evolution of Async JavaScript, Plotly.js, Ervy, and more

RWpod - a podcast about the world of Ruby and Web technologies

Play Episode Listen Later Nov 4, 2018 24:20


Good day, dear listeners. We present a new episode of the RWpod podcast. In this episode: Ruby: An Update on Bundler 2.0, Skip devise trackable module for API calls to avoid users table getting locked, and Implementing Google Authenticator in active admin. JavaScript: Introducing reCAPTCHA v3: the new way to stop bots, Why React's new Hooks API is a game changer, Why JWT shouldn't be stored in Local Storage, The Evolution of Async JavaScript: From Callbacks, to Promises, to Async/Await, An introduction to plotly.js - an open source graphing library, and Ervy - bring charts to terminal

The Frontside Podcast
093: Monoids, Monoids Everywhere! with Julie Moronuki

The Frontside Podcast

Play Episode Listen Later Jan 11, 2018 47:09


Julie Moronuki: @argumatronic | argumatronic.com Show Notes: This episode is a follow-up episode to the one we did with Julie in September: Learn Haskell, Think Less. We talk a whole lot about monoids, and learning programming languages untraditionally. Transcript: CHARLES: Hello everybody and welcome to The Frontside Podcast, Episode 93. My name is Charles Lowell, a developer here at The Frontside and I am your podcast host-in-training. With me today from The Frontside is Elrick also. Hello, Elrick. ELRICK: Hey. CHARLES: How are you doing? ELRICK: I'm doing great. CHARLES: Alright. Are you ready? ELRICK: Oh yeah, I'm excited. CHARLES: You ready to do some podcasting? Alright. Because we actually have a repeat guest on today. It was a very popular episode from last year. We have with us the author of ‘Learning Haskell: From First Principles' and a book that is coming out but is not out yet but one that we're eagerly looking forward to, Julie Moronuki. Welcome. JULIE: Hi. It's great to be back. CHARLES: What was it about, was it last October? JULIE: I think it was right before I went to London to Haskell [inaudible]. CHARLES: Yeah. JULIE: Which was in early October. So yeah… CHARLES: Okay. JULIE: Late or early October, somewhere in there. CHARLES: Okay. You went to Haskell eXchange. You gave a talk on Monoids. What have you been up to since then? JULIE: Oh wow. It's been a really busy time. I moved to Atlanta and so I've had all this stuff going on. And so, I was telling a friend last night “I'm going to be on this podcast tomorrow and I don't think I have anything to talk about.” [Laughter] JULIE: Because I feel like everything has just been like, all my energy has been sucked up with the move and stuff. But I guess… CHARLES: Is it true that everybody calls it ‘Fatlanta' there? JULIE: Yeah. [Laughs] CHARLES: I've heard the term. But do people actually be like “Yes, I'm from Fatlanta.” JULIE: I've heard it a couple of times. CHARLES: Okay. JULIE: Maybe it's mostly outsiders. I'm not sure. CHARLES: [Chuckles] JULIE: But yeah, it's a real cool city and I'm real happy to be here. But yeah, I did go in October. I went to London and I spoke at Haskell eXchange which was really amazing. It was a great experience and I hope to be able to go back. I got to meet Simon Payton Jones which was incredible. Yeah, and I gave a talk on monoids, monoids and semirings. And… CHARLES: Ooh, a semiring. JULIE: Semiring. So, a semiring is a structure where there's two monoids. So, both of them have an identity element. And the identity element of one of them is an annihilator. Isn't that a great word? It's an annihilator… CHARLES: Whoa. JULIE: Of the other. So, if you think of addition and multiplication, the identity element for addition is zero, right? But if you multiply times zero, you're always going to get to zero, so it's the annihilator of multiplication. CHARLES: Whoa. I think my mind is like annihilated. [Laughter] JULIE: So, it's a structure where you're got two monoids and one of them distributes over the other, the distributive property of addition and multiplication. And the identity of one of them is the annihilator of the other. Anyway, but yeah, I gave a history of where monoids come from and that was really fun. CHARLES: Yeah. I would actually like to get a summary of that, because I think since we last talked, I've been getting a little bit deeper and deeper into these formal type classes. 
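A minimal sketch of that structure in JavaScript, using ordinary numbers under addition and multiplication; the numberSemiring name and shape are made up for illustration only.

```javascript
// The (add, mul) semiring over numbers: two monoids that interact.
const numberSemiring = {
  zero: 0, // identity of add, annihilator of mul
  one: 1,  // identity of mul
  add: (a, b) => a + b,
  mul: (a, b) => a * b,
};

const { zero, one, add, mul } = numberSemiring;

console.log(add(zero, 7)); // 7: zero is the additive identity
console.log(mul(one, 7));  // 7: one is the multiplicative identity
console.log(mul(zero, 7)); // 0: zero annihilates multiplication
console.log(mul(2, add(3, 4)) === add(mul(2, 3), mul(2, 4))); // true: distributivity
```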
I'm still not doing Haskell day-to-day but I've been importing these ideas into just plain vanilla JavaScript. And it turns out, it's actually a pretty straightforward thing to do. There's definitely nothing stopping these things from existing in JavaScript. It's just, I think people find type class programming can be a tough hill to climb or something like that, or find it intimidating. JULIE: Yeah. CHARLES: But I think it's actually quite powerful. And I think one of the things that I'm coming to realize is that these are well-worn pathways for composing things. JULIE: Right. CHARLES: So, what you encounter in the wild is people generating these one-off ways of composing things. And so, for a shop like ours, we did a lot of Ruby on Rails, a lot of Ember, and both of those frameworks have very strong philosophical underpinnings that's like “You shouldn't be reinventing the wheel if you don't have to.” I think that all of these patterns even though they have crazy quixotic esoteric names, they are the wheels, the gold standard of wheel. [Laughs] They're like… JULIE: Right. CHARLES: We should not be reinventing. And so, that's what I'm coming to realize, is I'm into this. And last time you were talking, you were saying “I find monoids so fascinating.” I think it took a little bit while to seep in. But now, I feel like it's like when you look at one of those stereo vision things, like I'm seeing monoids everywhere. It's like sometimes they won't leave me alone. JULIE: In ‘Real World Haskell' there's a line I've always liked. And I'm going to misquote it slightly but paraphrasing at least. “Monoids are ubiquitous in programming. It's just in Haskell we have the ability to just talk about them as monoids.” CHARLES: Yeah, yeah. JULIE: Because we have a name and we have a framework for gathering all these similar things together. CHARLES: Right. And it helps you. I feel like it helps you because if you understand the mechanics of a monoid, you can then when you encounter a new one, you're 90% there. JULIE: Right. CHARLES: Instead of having to learn the whole thing from scratch. JULIE: Right. And as you see them over and over again, you develop a kind of intuition for when something is monoidal or something looks like a semiring. And so, you get a certain intuition where you think, “Oh, this thing is like a… this is a monad.” And so, what do I know about monads? All of a sudden, this new situation like all these things that I know about monads, I can apply to this new situation. And so, you gain some intuition for novel situations just by being able to relate them to things you already do know. CHARLES: Exactly. I want to pause here for people. The other thing that I think I've come in the last three months to embrace is just embrace the terminology. JULIE: Yeah. CHARLES: You got to just get over it. JULIE: [Chuckles] CHARLES: Think about it like learning a foreign language. The example I give is like tasku is the Finnish word for pocket. JULIE: Right. CHARLES: It sounds weird, right? Tasku. But if you say it 10 times and you think “Pocket, pocket, pocket, pocket, pocket.” JULIE: Yes, yeah. [Laughs] CHARLES: Then it's like, this is a very simple, very useful concept. JULIE: Right. CHARLES: And it's two-sided. There on the one hand, the terminology is obtuse. But at the same time, it's not. It's just, it is what it is. And it's just a symbol that's referencing a concept. JULIE: Right, right. CHARLES: It's a simple concept. 
So, I just want to be… I know for our listeners, I know that there's a general admonition. Don't worry about the terminology. It's… JULIE: Right, right. Like what I just said, I said the word ‘monad'. I just threw that out there at everybody, but [chuckles] it doesn't matter which one of these words we'd be talking about or whatever I call them. We could give monads a different name and it's still this concept that once you understand the concept itself, and then you can apply it in new situations, it doesn't matter then what it's called. But it does take getting used to. The words are… well, I think functor is a pretty good word for what it is. If you know the history of functor and how it came to mean what it means, I think it's a pretty good word. CHARLES: Really? So, I would love to know the history. Because functor is mystifying to me. It sounds like, I think the analogy I use is like if George Clinton and a funk parliament had an empire, the provinces, the governors of the provinces would be functors. ELRICK: [Laughs] JULIE: Yes. CHARLES: But [Laughs] that's the closest thing to an explanation I can come up with. JULIE: I might use that. I'm about to give a talk on functors. I might use that. [Laughter] ELRICK: Isn't that the name of the library? Funkadelic? CHARLES: Well, that's the name of the library that I've been… JULIE: [Could be], yeah. ELRICK: That you'd been… CHARLES: That I'd been [writing] for JavaScript. ELRICK: Yeah. CHARLES: That imports all these concepts. JULIE: [Laughs] ELRICK: Yeah. JULIE: Yeah. ELRICK: So awesome. JULIE: Yeah. Yeah, I have… CHARLES: So, what is the etymology of functor? JULIE: Well, as far as I can tell, Rudolf Carnap, the logician, invented the word. I don't know if he got it from somewhere else. But the first time I can find a reference to it is in, he wrote a book about… he was a logician but this is sort of a linguistics book. It's called ‘The Logical Syntax of Language'. And that's the first reference I know of to the word functor. And he was trying to really make language very logically systematic, which natural language is and isn't, right? [Chuckles] CHARLES: Right. JULIE: But he was only concerned with really logically systematizing everything. And so, he used the word functor to describe some kinds of function words in language that relate one part of a sentence to another part of a sentence. CHARLES: Huh. So, what's an example? JULIE: So, the example that I've used in the past is, as far as I know this is not one that Carnap himself actually uses but it's the clearest one outside of that book… well the ones inside the book I don't really think are very good examples because they're not really how people talk. So, the one that I've used to try to explain it is the word ‘not' in English where ‘not' gets applied to the whole sentence. It doesn't really change the logical structure of the sentence. It doesn't change the meaning of the sentence except for now it negates the whole thing. CHARLES: I see. JULIE: And so, it relates this sentence with this structure to a different context, which is now the whole thing has been negated. CHARLES: I see. So, the meaning changes, but the structure really doesn't. JULIE: Right. And it changes the whole meaning. CHARLES: Right. JULIE: Not just part of the sentence. 
So, if you imagine ‘not' applying to an entire sentence because of course we can apply it just to a single word or just to a single phrase and change the meaning just of that word or that phrase, but if you imagine a context where you've applied ‘not' to a whole sentence, to an entire proposition, because of course he's a logician. So, if you've applied ‘not' to an entire proposition, then it doesn't change the structure or the meaning of that proposition per se except for it just relates it to the category of negated propositions. CHARLES: Mmhmm. JULIE: So, that's where it comes from. And… CHARLES: But I still don't understand why he called it functor. JULIE: He's sort of making up… well, actually I think the German might be the same word. CHARLES: Ah, okay. JULIE: Because he was writing in German. Because he's looking for something that evokes the idea of ‘function word'. CHARLES: Oh. JULIE: So, if you were to take the ‘func' of ‘function' [Laughs] and the, I don't know, maybe in German there's some better explanation for making this into a particular word. But that's how I think of it. So, it's ‘function word'. And then category theorists took it from Carnap to mean a way to map a function in this category or when we're talking about Haskell, a function of this type, to a function of another type. CHARLES: Okay. JULIE: And so, it takes the entire function, preserves the structure of the function just like negation preserves the structure of the sentence, and maps the whole thing to just a different context. So, if you had a function from A to B, functor can give you a function from maybe A to maybe B. CHARLES: Right. JULIE: So, it takes the function and just maps it into a different context. CHARLES: Right. So, a JavaScript example is if I've got an array of ints and a function of ints to strings, I can take any array of ints and get an array of strings. JULIE: Right. CHARLES: Or if I have a promise that has an int in it, I can take that same function to get a promise of a string. JULIE: Yeah. CHARLES: Yeah. I had no idea that it actually came from linguistics. JULIE: Yeah. [Laughs] CHARLES: So actually, the category theorists even… it digs deeper than category theory. They were actually borrowing concepts. JULIE: They were, yes. CHARLES: We just always are borrowing concepts. ELRICK: I like the borrowing of concepts. JULIE: Yeah. ELRICK: I think where people struggle with certain things, it's tying it back to something that they're familiar with. So, that's where I get… my mind is like [makes exploding sound] “I now get it,” is when someone ties it back to something that I am… CHARLES: Right. ELRICK: Familiar with. Like Charles' work with the JavaScript, tying it with JavaScript. I'm like, “Oh, now I see what they're talking about.” JULIE: Right. CHARLES: because you realize, you're using these concepts. People are using them, just they're using them anonymously. JULIE: Right. ELRICK: True. CHARLES: They don't have names for them. JULIE: Right. ELRICK: True. CHARLES: It's literally like an anonymous function and you're just taking that lambda and assigning it to a symbol. JULIE: Yeah. CHARLES: You're like “Oh wait. I've been using this anonymous function all over the place for years. I didn't realize. Boom. This is actually a formal concept.” ELRICK: True. And I think when people say like “Don't reinvent the wheel” it's a great statement for someone that has seen a wheel already. [Laughter] ELRICK: You know what I'm saying? 
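In JavaScript terms, that "map a function into a context" idea looks roughly like the sketch below. The tiny Just/Nothing pair is a toy stand-in for a Maybe type, not a real library.

```javascript
const intToString = (n) => String(n);

// Arrays are functors: map lifts intToString into the array context.
console.log([1, 2, 3].map(intToString)); // ["1", "2", "3"]

// A toy Maybe: either Just(value) or Nothing, each with its own map.
const Just = (value) => ({
  map: (fn) => Just(fn(value)),
  toString: () => `Just(${JSON.stringify(value)})`,
});
const Nothing = {
  map: () => Nothing,
  toString: () => "Nothing",
};

console.log(String(Just(42).map(intToString))); // Just("42")
console.log(String(Nothing.map(intToString)));  // Nothing stays Nothing
```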
If you never saw a wheel, then your'e going to reinvent the wheel because you're like “Aw man. This doesn't exist.” [Chuckles] JULIE: Yeah. ELRICK: But if people are exposed to these concepts, then they wouldn't reinvent the wheel. CHARLES: Right. JULIE: Right. Yeah. CHARLES: Instead of calling in some context, calling it a roller. [Chuckles] It's a round thingy. [Laughter] JULIE: Right. Yeah, so that's a little bit what I tried to do in my monoid talk in London. I tried to give some history of monoid, where this idea comes from and why it's worth talking about these things. CHARLES: Yeah. JULIE: Why it's worth talking about the structure. CHARLES: So, why is it worth the… where did it come from and why is it worth talking about? JULIE: Oh, so back when Boole, George Boole, when he decided to start formalizing logic… CHARLES: George Boole also, he was a career-switcher too, right? He was a primary school teacher. JULIE: Right, yeah. CHARLES: If I recall. He actually, he was basically teaching. Primary school is like elementary school in England, right? JULIE: I believe so, yes. CHARLES: Yeah. I think he was like, he was basically the US equivalent of an elementary school teacher who then went on to a second and probably, thankfully a big career that left a big legacy. JULIE: Right. Although no one knew exactly how big the legacy was really, until Claude Shannon picked it up and then just changed the whole world.[Laughs] Anyway, so Boole, when he was trying to come up with a formal algebra of logic so that we could not care so much about the semantic content of arguments (we could just symbolize them and just by manipulating symbols we could determine if an argument was logically valid or not), he was… well, for disjunction and conjunction which is AND and OR – well, disjunction would be the OR and conjunction the AND – he had prior art. He had addition and multiplication to look at. So, addition is like disjunction in some important ways. And multiplication is like conjunction in some important ways. And I think it took me a while to see how addition and disjunction were like each other, but there are some important ways that they're like each other. One of them is that they share their identity values. If you think of, it's sort of like binary addition and binary multiplication because in boolean logic there's only two values: true or false. So, you have a zero and a one. So, if you think of them as being like binary addition and binary multiplication then it's easier to see the connection. Because when we think of addition of just integers in a normal base 10 or whatever, it doesn't seem that much like an OR. [Laughs] CHARLES: Mmhmm. No, it doesn't. JULIE: [Inaudible] like a logical OR. So, it took me a while to see that. But they're also related then to set intersection and union where intersect-… CHARLES: So can… Let's just stop on that for a little bit, because let me parse that. So, for OR I've got two values, like in an ‘if' statement. This OR that. If I've got a true value then I can OR that with anything and I'll get the same anything. JULIE: Right. CHARLES: So, true is the identity value of OR, right? Is that what you're saying? So, one… JULIE: Well, it's false that's the identity of OR. CHARLES: Oh, it is? JULIE: Zero is the identity of addition. CHARLES: Wait, but if I take ‘false OR one' I get… oh, I get one. JULIE: Right. CHARLES: Okay. So, if I get ‘false OR true', I get true. Okay, so false is the identity. JULIE: Yeah. CHARLES: Oh right. You're right. You're right. 
Because… okay, sorry. JULIE: So, just like in addition, zero is the identity. So, whatever you add to zero, that's the result, right? You're going to get [the same] CHARLES: Right. JULIE: Value back. So, with OR false is the identity and false is equivalent to zero. CHARLES: [Inaudible] ‘False OR anything' and you're getting the anything. JULIE: Right. So, the only time you'll get a false back is if it's ‘false OR false', right? CHARLES: Right. Mmhmm. JULIE: Yeah. So, false is the identity there. And then it's sort of the same for conjunction where one is the identity of multiplication and one is also the… I mean, true is then the identity of logical conjunction. CHARLES: Right. Because one AND… JULIE: ‘True AND false' will get the false back. [Inaudible] CHARLES: Right. ‘True And true' you can get the true back. JULIE: Yeah. CHARLES: Okay. JULIE: And it's also then true, getting back to what we were talking about, semirings, it's also true that false is a kind of annihilator for conjunction. That's sort of trivial, because… CHARLES: Oh, because you annihilate the value. JULIE: Right. When there's only two values it's a little bit trivial. But it is [inaudible]. So… CHARLES: But it's [inaudible]. Yeah. It demonstrates the point. JULIE: Right. CHARLES: So, if I have yeah, ‘false AND anything' is just going to be false. So, I annihilate whatever is in that position. JULIE: Right. CHARLES: And the same thing as zero is the annihilator for multiplication, right? JULIE: Right. CHARLES: Because zero times anything and you annihilate the value. JULIE: Yeah. CHARLES: And now I've got… okay, I'm seeing it. I don't know where you're going with this. [Laughter] ELRICK: Yeah. CHARLES: But I'm there with you. ELRICK: Yup. JULIE: And then it turns out there are some operations from set theory that work really similarly. So, intersection and union are similar but the ones that are closer to conjunction/disjunction are disjoint unions and cartesian products. So we don't need to talk about those a whole lot if you're not into set theory. But anyway… CHARLES: I like set theory although it's so hard to describe without pictures, without Venn diagrams. JULIE: It is. It really is, yeah. So anyway, all of these things are monoids. And they're all binary associative operations with identity elements. So, they're all monoids. And so, we've taken operations on sets, operations on logical propositions, operations on many kinds of numbers (because not all kinds of addition and multiplication I guess are associative), and we can kind of unify all of those into the same framework. And then once we have done that, then we can see that there's all these other ‘sets'. Because most of the kinds of numbers are sets and there are operations on generic sets with set theory. So, now we can say “Oh. We can do these same kinds of operations on many other kinds of sets, many other varieties of sets.” And we can see that same pattern. And then we can get a kind of intuition for “Well, if I have a disjunctive monoid where I'm adding two things or I'm OR-ing two things…” Because even though those are logically very similar, intuitively and in terms of what it means to concatenate lists versus choosing one or the other, those obviously have different practical effects. CHARLES: So, I'm going to try and come up with some concrete examples to maybe… JULIE: Okay, yeah. CHARLES: A part of them will probably be like in JavaScript, right? So, to capture the idea of a disjunctive monoid versus a conjunctive monoid. 
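A small sketch of the two boolean monoids just walked through, mirroring Haskell's Any and All: false is OR's identity, and it is also the value that annihilates AND.

```javascript
// Two monoids over booleans.
const Any = { empty: false, concat: (a, b) => a || b }; // OR, identity false
const All = { empty: true,  concat: (a, b) => a && b }; // AND, identity true

console.log(Any.concat(Any.empty, true));  // true: false is OR's identity
console.log(All.concat(All.empty, false)); // false: true is AND's identity

// Any's identity annihilates All: false AND anything is false.
console.log(All.concat(Any.empty, true)); // false
```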
So, a disjunctive monoid is like, so in JavaScript we're got two objects. You concat them together and it's like two maps or two hashes. So, you mash them together and you get… so, for the disjunctive one you'd have all the keys from both of the hashes inside the resulting object. You take two objects. Basically we call it object assign in JavaScript where you have basically the empty object. You can take the empty object and then take any number of objects. And so, we talked about… JULIE: That would become a disjunctive monoid, right? CHARLES: That would be a disjunctive monoid because you're like basically, you're OR-ing. Yeah. JULIE: You're kind of, [inaudible] CHARLES: Hard to find the terminology. JULIE: Yeah. CHARLES: But like object assign would be a disjunctive monoid because you're like mashing these two objects. And the resulting object has all of the things from both of them. JULIE: Right. So, it's like a sum of the two, right? CHARLES: Right, right. Okay, so then another one would be like min or max where you've got this list of integers and you can basically take any two integers and you can mash them together and if you're using min, you get the one that's smaller. Basically, you're collapsing them into one value but you're actually just choosing one of them. Is that like… JULIE: Yeah. CHARLES: Would that be like a conjunctive monoid? JULIE: No, that's also disjunctive but that's more like an OR than like a sum. CHARLES: Okay. JULIE: Right. So, that's what I said. It's hard to think of disjunctive monoids I think because there's really two varieties. There's some underlying logical similarity, like the similarity in the identity values. But they're also different. Summing two things versus choosing one or the other are also very different things in a lot of ways. CHARLES: Right. Okay. JULIE: And so, I think the conjunctive monoids are all a little bit more similar, I think. [Chuckles] But the disjunctive monoids are two broad categories. And we don't really have a monoid in Haskell of lists where you're choosing one or the other. The basic list monoid is you're concatenating them. So, you're adding two lists or taking the union of them. But for maybe, the maybe type, we do have monoids in Haskell where you're just choosing either the first just value that comes up or the last just value that comes up. So, we do have a monoid of choice over the maybe type. And then we have a type class called alternative which is monoids of choice for… so, they're disjunctive monoids but instead of adding the two things together, they're choosing one or the other. CHARLES: Okay. JULIE: Though we have a type class for that. [Laughs] CHARLES: [Sighs] Oh wow. Yeah. JULIE: Mmhmm, yeah. CHARLES: I'l have to go read up on that one. JULIE: That type class comes up the most when you're parsing, because you can then parse… like if you found this thing, then parse this thing. But if you haven't found this thing, then you can keep going. And if you find this other thing later, then you can take that thing. So, you allow the possibility of choice. The first thing that you come to that matches, take that thing or parse that thing. So, that type class gets mostly used for parsing but it's not only useful for parsing. CHARLES: Okay. JULIE: So yeah. That's the most of the time when I've used it. CHARLES: Is this when you're like parsing JSON? Or is this when you're just searching some stream for some value? Like you just want to run through it until you encounter this value? Or how does that…? JULIE: Right. 
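Written out, those two JavaScript examples look something like this; Merge, Min, and Max are just illustrative wrappers around Object.assign and Math.min/Math.max.

```javascript
// Merging objects: binary, associative, with the empty object as identity.
// This is the "sum-like" disjunctive monoid: keys from both sides survive.
const Merge = { empty: {}, concat: (a, b) => Object.assign({}, a, b) };
console.log(Merge.concat({ a: 1 }, { b: 2 }));    // { a: 1, b: 2 }
console.log(Merge.concat(Merge.empty, { a: 1 })); // { a: 1 }

// Min and max: "choose one of the two" monoids, with the infinities as identities.
const Min = { empty: Infinity,  concat: (a, b) => Math.min(a, b) };
const Max = { empty: -Infinity, concat: (a, b) => Math.max(a, b) };
console.log(Min.concat(Min.empty, 42)); // 42
console.log(Max.concat(Max.empty, 42)); // 42
```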
Say you want to run through it until you find either this value or this value. I've used it when I've been parsing command line arguments. So, let's say I have some flags that can be passed in on my command line command. There are some flags that could be passed in. So, we'll parse until we find this thing or this thing. This flag or this flag. So, if you find this flag, then we're going to go ahead and parse that and do whatever that flag says to do. If you don't find that first flag then we can keep parsing and see if you find this other flag, in which case we'll do something different. CHARLES: Okay. JULIE: It'll take the first match that it finds. Does that make sense? CHARLES: Yeah, yeah, yeah. It does. But I'm not connecting how it's a monoid. [Laughs] JULIE: How is that a monoid? Well, because it's a monoid of OR-ing CHARLES: What's the identity value or the empty value in that case? JULIE: Well, the empty value would be… let's say you have maybes. Let's say you have some kind of maybe thing, so you're parser is going to return maybe this thing, maybe whatever you're parsing. Like maybe string. CHARLES: Yeah, yeah. JULIE: So, it's going to return a maybe string. So well, nothing would be the empty. CHARLES: Okay. JULIE: But nothing is like the zero because it's a disjunction, logical OR. So, only when you have two nothings will you get back a nothing. Otherwise, it will take the first thing that it finds. CHARLES: Okay. I see. JULIE: Yeah. So, the identity then is the nothing, like false is the identity for disjunction. CHARLES: Mmhmm. Okay. JULIE: Yeah. CHARLES: [Inaudible] JULIE: Yeah. If you have nothing or this other thing, then you return this other thing. Then you return the maybe string. If you have two nothings, then you get in fact nothing. Your parsing has failed. CHARLES: Right, because you've got nothing. JULIE: Because you've got nothing. There was nothing to give you back. CHARLES: So, you concatenated all of the things together and you ended up with nothing. JULIE: Right, because there was nothing there. CHARLES: Right. [Laughs] JULIE: You found nothing. So, it's useful when you've got some possibilities that could be present and you just want to keep parsing until you find the first one that matches. And then it'll just return whatever. It'll just parse the first thing that it matches on. CHARLES: Okay, okay. JULIE: Does that make sense? CHARLES: Yeah. No, I think it makes sense. JULIE: I'm not sure. Because I feel like I kind of went down a rabbit hole there. [Laughs] CHARLES: Yeah. [Laughs] No, no. I think it makes sense. And as a quick aside, I think… so, I was, when we were talking about min and max, are min and max also like a semiring? Because negative infinity is the annihilator of min and it's the identity of max. and positive infinity is the annihilator of max but it's the identity of min. JULIE: I guess. I don't really think of min and max as having identities. Is that how [inaudible]? CHARLES: I'm just, I don't know. Well, I think if you have negative infinity and you max it with anything, you're going to get the anything, right? Negative infinity max one is one. Negative infinity/minus a billion is minus a billion. JULIE: Yeah, okay. CHARLES: I don't know. Just off the cuff. I'm just trying to… annihilators sound cool. And so… [Laughter] CHARLES: And so I'm like, I'm trying to find annihilators. JULIE: Yeah, they are cool. 
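A rough JavaScript analogue of that "first match wins" monoid, using null as a stand-in for Nothing; the First name is made up for illustration.

```javascript
// First-match monoid: keep the left value unless it is "nothing".
const First = {
  empty: null,
  concat: (a, b) => (a !== null ? a : b),
};

// Fold a list of possible parse results: the first non-null one is kept,
// and only an all-null list collapses back to the identity, null.
const results = [null, null, "--verbose", "--help"];
console.log(results.reduce(First.concat, First.empty));      // "--verbose"
console.log([null, null].reduce(First.concat, First.empty)); // null
```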
CHARLES: [Laughs] JULIE: One of my friends on Twitter was just talking about how he used the intuition at least of a semiring at work because he had this sort of monoid to concatenate schedules. So, he's got all these different schedules and he's got this kind of monoid to concatenate them, to merge the schedules together. But then he's got this one schedule that is special. And whenever something is in this schedule, it needs to hard override every other schedule. CHARLES: Right. JULIE: And so, that was like the annihilator. So, he was thinking of it as a semiring, because that hard override schedule is like the annihilator of all the other schedules. CHARLES: Yeah. JULIE: If anything else exists on this day or whatever, then it'd just get a hard override. So, there's a real world use. [Laughs] CHARLES: Yeah, a real world example. That's the thing that I'm finding, is that all these really very crystalline abstractions, they still play out very well I think in the real world. And they're useful as a took in terms of casting a net over a problem. Because you're like… when I'm faced with something new, I'm like “Well, let's see. Can I make it a functor?” And if I can, then I've unlocked all these goodies. I've unlocked every single composition pattern that works with functor. JULIE: Right, right. CHARLES: And it's like sometimes it fits. It almost feels like when you're working on something at home and you've got some bolt and you're trying on different diameters. So you're like, “Oh, is it 15 millimeter? Is it 8 millimeter?” JULIE: Right. [Laughs] CHARLES: “Like no, okay. Maybe it'll work with this.” But then when it clicks, then you can really ratchet with some serious torque. JULIE: Right, right. Yeah. CHARLES: So, yeah. Definitely trying to look for semirings [Laughs] is definitely beyond my [can] at this point. But I hope to get there where it can be like, if it's a fit, it's a fit. That's awesome. JULIE: Right. Yeah, it's kind of beyond my can too. Semirings are still a little bit new for me and I can't say that I find them in the wild as it were, as often as monoids or something. But I think it just takes seeing some concrete examples. So, now you know this idea exists. If you just have some concrete examples of it, then over time you develop that intuition, right? CHARLES: Right. JULIE: Like “Okay, I've seen this pattern before.” [Chuckles] CHARLES: yeah. Basically, every time now I want to fold a list, or like in JavaScript, any time you want to reduce something I'm like “There's a monoid here that I'm not seeing. Let me look for it.” JULIE: Yeah. Oh, that's cool, yeah. CHARLES: Because like, that's basically, most of the time you're doing a reduce, then like I said that's the terminology for fold in JavaScript, is you start with some reducible thing. Then you have an initial value and a function to actually concatenate two things together. JULIE: Right. CHARLES: And so, usually that initial state, that's your identity. And then that function is just your concat function from your monoid. And so, usually anytime I do a reduce, there's the three pieces. Boom. Identity value, concatenation function, it's usually right there. And so, that's the way I've found of extracting these things, is I'm very suspicious every time I'm tempted to… JULIE: [Laughs] CHARLES: A fold. I'm like “Hmm. Where's the monoid I'm missing? Is it [under the] couch?” Like, where is it? [Laughs] Because it just, it cleans it up and it makes it so much more concise. JULIE: Oh yeah, that's awesome. 
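That reduce-to-monoid correspondence can be written down directly: the identity becomes the initial value and concat becomes the reducer. The Sum, Concat, and fold names below are illustrative only.

```javascript
// A monoid is just an identity plus an associative combining function...
const Sum    = { empty: 0,  concat: (a, b) => a + b };
const Concat = { empty: "", concat: (a, b) => a + b };

// ...and reduce is the generic fold that consumes one.
const fold = (monoid, xs) => xs.reduce(monoid.concat, monoid.empty);

console.log(fold(Sum, [1, 2, 3, 4]));       // 10
console.log(fold(Concat, ["a", "b", "c"])); // "abc"
console.log(fold(Sum, []));                 // 0: the identity covers the empty case
```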
CHARLES: So anyhow. JULIE: Have we totally lost Elrick? ELRICK: Nope, I'm still here. JULIE: Okay. [Laughs] ELRICK: I'm sitting in and listening to you two break down these complex topics is really good. Because you guys break them down to a level where it's consumable by people that barely understand it. So, I'm just sitting here just soaking everything in like “Oh, that's awesome.” Taking notes. Yes, okay, okay. [Laughter] JULIE: Cool. ELRICK: So, I'm like riding the train in the back just hanging out, feeling the cool breeze while you guys just pull the train ahead in… [Laughter] ELRICK: In the engine department, you know? It's awesome. CHARLES: Yeah. ELRICK: I don't know if they're related. But you were talking about semirings and I heard of semigroups or semigroups. I have no idea if those two things are related. Are they related or [inaudible]? JULIE: They're kind of related. So, a semigroup is like a monoid but doesn't have an identity value. CHARLES: What is an example of a semigroup out there in the wild? Because every time I find a semigroup, I feel like it's actually a monoid. JULIE: Well, you know I feel like that a lot, too. We do have a data type in Haskell that is a non-empty list. So, there is no empty list CHARLES: Ah, right. Okay. JULIE: So then you can concatenate those lists, but there's never an identity value for it. CHARLES: I see. JULIE: Yeah. So, that's a case. There's actually a lot of comparison functions, greater than and less than. I think those are semigroups because they're binary, they're associative, but they don't have an identity value. Like if you're comparing two numbers, there's not really an identity value there. CHARLES: Right. Well, would the negative infinity work there? Let's see. Like, negative infinity greater than anything would be the anything. Well, okay wait. But greater than, that takes numbers and yields a boolean, right? JULIE: Yeah, CHARLES: Right. So, it couldn't be… could it be a semigroup? Don't semigroups have to… Doesn't the [inaudible] function have to yield the same type as the operands? JULIE: Yes. CHARLES: But a non-empty list, that's a good one. Sometimes it's basically not valid for you to have a list that doesn't have any elements, right? Because it's like the null value or the empty value and it could be like a shopping cart on Amazon. You can't have a shopping cart without at least something in it. JULIE: Right. CHARLES: Or, you can't check out without something. So, you might want to say like the shopping cart that I'm going to check out is a non-empty list. And so, you can put two non-empty lists together. But yeah, there's no value you can mash together, you can concat with anything, that isn't empty. JULIE: Right. CHARLES: So, I guess going back to your question Elrick, I don't know if it's related to semiring. But semigroup is just, it's like one-half of monoid. It's the part that concats two values together. JULIE: Right. Well, yeah. And so, it's supposed to be half a group, right? But I don't remember… CHARLES: [Laughs] JULIE: [Inaudible] all of the group stuff is, all the stuff that these types have to have to be a group. And similarly, I forget what the difference between semiring and ring is. [Chuckles] Because a ring and a group I know are not the same thing. But I forget what the difference is, too. 
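A minimal sketch of that semigroup in JavaScript: a non-empty list supports an associative concat, but there is no possible empty value to act as an identity, so it never qualifies as a full monoid. The nonEmpty shape is a toy illustration.

```javascript
// A non-empty list: construction fails rather than allow zero elements.
function nonEmpty(head, ...tail) {
  if (head === undefined) {
    throw new Error("a non-empty list needs at least one element");
  }
  return { head, tail };
}

// Associative concat makes this a semigroup; there is no empty value to pair it with.
function concat(a, b) {
  return nonEmpty(a.head, ...a.tail, b.head, ...b.tail);
}

// Like the shopping cart: you can combine carts, but an empty one never exists.
const cart = concat(nonEmpty("book"), nonEmpty("pen", "ink"));
console.log(cart); // { head: "book", tail: ["pen", "ink"] }
```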
So, I kind of got a handle on what semigroups are, and I know all my Haskell friends are going to, when they hear this podcast they're going to tweet all these examples of semigroups at me, especially my coauthor for ‘Joy of Haskell', Chris Martin. He's really into semigroups. And so, I know he's going to be very disappointed in my inability to think… [Laughter] JULIE: To think of any good examples. But it's not something that I find myself using a lot, whereas semirings are something that I have started noticing a little bit more often. So, how a monoid relates to a group is something that I can't remember off the top of my head. And I know how semirings relate to monoids, but how monoids then relate to rings and groups, I can't really remember. And so, these things are sort of all related. But the relation is not something I can spill out off the top of my head. Sorry. [Laughs] CHARLES: No, It's no worries. You know, I feel like… ELRICK: It's all good. CHARLES: What's funny is I feel like having these discussions is exactly like the discussions people have with any framework of using one that we use a lot, which is EmberJS. But if you could do with React or something, it's like, how does the model relate to the controller, relate to the router, relate to the middleware, relate to the services? You just have these things, these moving parts that fit together. And part of… I feel like exploring this space is really, absolutely no different than exploring any other software framework where you just have these things, these cooperating concepts, and they do click together. But you just have to map out the space in your head. JULIE: Yeah. This is going to sound stupid because everybody thinks that because I know Haskell I must know all these other things. But I just had to ask people to recommend me a book that could explain the relationship of HTML and CSS, because that was completely opaque to me. CHARLES: [Laughs] Yeah. JULIE: I've been involved in the making now of several websites because of the books and stuff like that. And I have a blog. It's not WordPress or anything. I did that sort of myself. So, I've done a little bit with that. But CSS is really terrifying. And… CHARLES: Right. Like query selectors, rules, properties. JULIE: Yeah. ELRICK: [Laughs] CHARLES: Again, might as well be groups and semigroups and monoids, right? JULIE: Right, right. ELRICK: Yeah. CHARLES: [Laughs] ELRICK: That is really interesting. [Chuckles] I've never heard anyone make that comparison before. But it's totally true, now that I'm thinking about it. JULIE: Yeah, yeah. CHARLES: Yeah. In the tech world we are so steeped in our own jargon that we could be… we can reject one set of jargon and be totally fine with another set. Or be like, suspicious of one set of concepts working together and be totally fine with these other designations which are somewhat arbitrary but they work. JULIE: Right. CHARLES: So, people use them. JULIE: So, it's like what you've gotten used to and what you're familiar with and that seems normal and natural to you. [Chuckles] So, the Haskell stuff, most of it seems normal and natural to me. And then I don't understand HTML and CSS. So, I bought a book. [Laughter] CHARLES: Learning HTML and CSS from first principles. JULIE: Yes, yeah. I just wanted to understand. I could tell that they do relate to each other, that there is some way that they click together. I can tell that by banging my head against them repeatedly. But I didn't really understand how, and so yeah. 
So, i've been reading this book to [Laughs] [learn] HTML and CSS and how they relate together. That's so important, just figuring out how things relate to each other, you know? CHARLES: Yeah. ELRICK: Yeah. That is very true. JULIE: Yeah. ELRICK: We can trade. I can teach you HTML and CSS and you can teach me Haskell. JULIE: Absolutely. ELRICK: [Laughs] CHARLES: There you go JULIE: [Laughs] ELRICK: Because I'm like, “Ooh.” I'm like, “Oh, CSS. Great. No problem.” [Laughter] ELRICK: Haskell, I'm like “Oh, I don't know.” JULIE: Yeah. CHARLES: Yeah. ELRICK: [Laughs] CHARLES: No, it's amazing [inaudible] CSS. ELRICK: Yeah. CHARLES: It is, it's a complicated system. And it's actually, it's in many ways, it's actually a pretty… it's a pretty functional system, CSS is at least. The DOM APIs are very much imperative and about mutable state. But CSS is basically yeah, completely declarative. JULIE: Right. CHARLES: Completely immutable. And yeah, the workings of the interpreter are a mystery. [Laughs] ELRICK: Yup. JULIE: YEs. And you know, for the Joy of Haskell website we use Bootstrap. And so, there was just like… there's all this magic, you know? [Laughs] ELRICK: Oh, yeah. CHARLES: Yeah. JULIE: Oh look, if I just change this little thing, suddenly it's perfectly responsive and mobile. Cool. [Laughter] JULIE: I don't know how it's doing this, but this is great. [Laughs] CHARLES: Yeah. Oh, yeah. It's an infinite space. And yeah, people forget what is so easy and intuitive is not and that there's actually a lot of learning that happened there that they're just taking for granted. JULIE: I think so many people start from HTML and CSS. That's one of their first introductions to programming, or JavaScript or some combination of all three of those. And so, to them the idea that you would be learning Haskell first and then coming around and being like “Oaky, I have to figure out HTML,” that [seems very] strange, right? [Laughter] CHARLES: Yeah. Well, definitely probably stepping into bizarro world. JULIE: And I went backwards. But [Laughs] CHARLES: Yeah. JULIE: Not that it's backwards in terms of… just backwards in terms of the normal way, progression of [inaudible] CHARLES: Yeah. It's definitely the back door. Like coming in through the catering kitchen or something. JULIE: Yes. CHARLES: Instead of the front door. Because you know the browser, you can just open up the Dev Tools and there you are. JULIE: Exactly, yeah. CHARLES: The level of accessibility is pretty astounding. And so, I think t's why it's one of the most popular avenues. JULIE: Oh, definitely. Yeah. ELRICK: It's the back door probably for web development but not the back door for programming in general. JULIE: Mm, yeah. Yeah. CHARLES: Yeah. It seems like Haskell programming has really started taking off and that the ecosystem is starting to get some of the trappings of a really less fricative developer experience in terms of the package management and a command line experience and being able to not make all of the tiny little decisions that need to be made before you're actually writing ‘hello world'. JULIE: Right. ELRICK: Interesting. Haskell has a package manager now? CHARLES: Oh, it has for a while. ELRICK: Oh, really? What is it called? I have no idea? Do you know the name off the top of your head? CHARLES: So, I actually, I'm not that familiar with the ecosystem other than every time I try it out. So I definitely will defer this question to you, Julie. JULIE: This is going to be a dumb question, I guess. 
What do we mean by package manager? CHARLES: So, in JavaScript, we have npm. The concept of these packages. It's code that you can download, a module that you can import, basically import symbols from. And Ruby has RubyGems. And Python has pip. JULIE: Okay, okay. CHARLES: Emacs has Emacs Packages. And usually, there's some repository and people could publish to them and you can specify dependencies. JULIE: Right, yeah. Okay, so we have a few things. Hackage is sort of the main package repository. And then we have another one called Stackage and the packages that are in Stackage are all guaranteed to work with each other. CHARLES: Mm, okay. JULIE: So, on Hackage, some of the packages that are on Hackage are not really maintained or they only work with some old versions of dependencies and stuff like that, so the people who made Stackage were like “well, if we had this set of packages that were all guaranteed to work together, the dependencies were all kept updated and they all can be made to work together, then that would be really convenient.” And then we have Cabal and we have Stack are the main… and a lot of people use Nix for the same purpose that you would use Cabal or Stack for building projects and importing dependencies and all of that. CHARLES: Right. So, Cabal and Stack would be roughly equivalent then to the way we use Yarn or JavaScript and Bundler in Ruby. You're solving the equation for, here's my root set of dependencies. Go out and solve for the set of packages that satisfy. Give me at least one solution and then download those packages and [you can] run them. JULIE: Yeah, yeah. Right, so managing your dependencies and building your project. Because Haskell's compiled, so you've got to build things. And so yeah, we have both of those. CHARLES: And now there's like web frameworks and REST frameworks. JULIE: Oh there are, yeah. We have… CHARLES: All kinds of stuff now. JULIE: We had this big proliferation of web frameworks lately. And I guess some of them are very good. I don't really do web development. But the people I know who do web development in Haskell say that some of these are very good. Yesod is supposed to be very good. Servant is sort of the new hotness. And I haven't used Servant at all though, so don't ask me questions about it. [Laughter] JULIE: But yeah, we have several big web frameworks now. There are still some probably big holes in the Haskell ecosystem in terms of what people want to see. So, that's one thing that people complain about Haskell for, is that we don't have some of the libraries they'd like to see. I'd like to see something… I would really like to see in Haskell something along the lines of like NLTK from Python. CHARLES: What is that? JULIE: Natural language toolkit. CHARLES: Oh, okay. JULIE: So yeah, Python has this… CHARLES: Yeah, Python's got all the nice science things. JULIE: They really do. And Haskell has some natural language processing libraries available but nothing along the lines of, nothing as big or easy to use and stuff as NLTK yet. So, I'd really like to see that hole get filled a little bit better. And you know… CHARLES: Well, there you go. If anyone out there is seeking fame and fortune in the Haskell community. JULIE: That's actually why I started learning Python, was just so that I could figure out NLTK well enough to start writing it in Haskell. [Laughter] JULIE: So, that's sort of my ambitious long-term project. We'll see how that goes. [Laughs] CHARLES: Nice. 
Before we wrap up, is there anything going on, coming up, that you want to give a shoutout to or mention or just anything exciting in general? JULIE: Yeah, so on March 30th I'm going to be giving a talk at lambda-squared which is going to be in Knoxville and is a new conference. I think it's just a single-day conference and I'm going to be giving a talk about functors. So, I'm going to try to get through all the exciting varieties of functors in a 50-minute talk. CHARLES: Ooh. JULIE: So, we'll see how that goes. Yeah. And I am still working with Chris Martin on ‘The Joy of Haskell' which should be finished this year, sometime. I'm not going to… [Laughter] JULIE: Give any more specific deadline than that. And in the process of writing Joy of Haskell, I was telling him about some things that, some things that I think are really difficult. Like in my experience, teaching Haskell some places where I find people have the biggest stumbling blocks. And I said, “What if we could do a beginner video course where instead of throwing all of these things at people at once, we separated them out?” And so, you can just worry about this set of stumbling blocks at one time and then later we can talk about this set of stumbling blocks. And so, we're doing… we're going to start a video course, a beginner Haskell video course. I think we'll be starting later this month. So, I'm pretty excited… CHARLES: Nice. JULIE: About that. Yeah. CHARLES: Yeah, I know a lot of people learn really, really well from videos. There's just some… JULIE: Yeah. [Inaudible] for me, so I'm a little nervous. But [Laughs] CHARLES: Yeah, especially if you can do… are you going to be doing live coding examples? Building out things with folks? JULIE: Yeah. CHARLES: Yeah. Well, you just needn't look no further than the popular things like RailsCasts and some of the… yeah, there's just so many good video content out there. Yeah, we'll definitely be looking for the. JULIE: Cool. CHARLIE: Alright. Well, thank you so much, Julie, for coming on. JULIE: Well, thank you for having me on. Sorry I went down some… I went kind of down some rabbit holes. Sorry about that. [Laughs] CHARLES: You know what? You go down the rabbit holes, we spend time walking around the rabbit holes. JULIE: [Laughs] CHARLES: There's something for everybody. So… [Laughter] CHARLES: And ultimately we're strolling through the meadow. So, it's all good. JULIE: [Laughs] Yeah. CHARLES: Thank you too, Elrick. JULIE: It was nice talking to you guys again. CHARLES: Yeah. ELRICK: Yeah, thank you. CHARLES: If folks want to follow up with you or reach out to you, what's the best way to get in contact with you? JULIE: I'm @argumatronic on Twitter and my blog is argumatronic.com which has an email address and some other contact information for me. So, I'd love to hear questions, comments. [Laughs] Yeah. I always [inaudible]. CHARLES: Alright, fantastic. JULIE: To talk to new people. CHARLES: Alright. And if you want to get in touch with us, we are @TheFrontside on Twitter. Or you can just drop us an email at contact@frontside.io. Thanks everybody for listening. And we will see you all later.

RWpod - a podcast about the world of Ruby and Web technologies
Episode 40 of Season 05. On Bundler 2.0 compatibility, Ecto vs ActiveRecord, the road to Ember 3.0, PostgreSQL 10, CKEditor 5, and more

RWpod - a podcast about the world of Ruby and Web technologies

Play Episode Listen Later Oct 8, 2017 30:02


Good day, dear listeners. We present a new episode of the RWpod podcast. In this episode: Ruby: On Bundler 2.0 compatibility, Ecto vs ActiveRecord, PostgreSQL 10 Released, PgParty - ActiveRecord migrations and model helpers for creating and managing PostgreSQL 10 partitions, and Graphql-errors - provides a simple error handling for graphql-ruby. JavaScript: The Road to Ember 3.0, Improving Performance with the Paint Timing API, The future of accessibility for custom elements, The many faces of this in javascript, Asynchronous stack traces: why await beats .then(), and CKEditor 5 Builds

The Manifest
Episode 3: Rubygems with André Arko

The Manifest

Play Episode Listen Later Sep 18, 2017 54:56


Wherein we discuss Rubygems and Bundler with André Arko: how he became the lead maintainer of Rubygems and Bundler, and what led him to set up Ruby Together. Special Guest: André Arko.

RWpod - a podcast about the world of Ruby and Web technologies
Episode 23 of Season 05. Node v8.1.0, V8 Release 6.0, ESLint v4.0.0, Rendering Rails Pages with render_async, N + 1 Control, and more

RWpod - a podcast about the world of Ruby and Web technologies

Play Episode Listen Later Jun 11, 2017 68:23


Good day, dear listeners. We present a new episode of the RWpod podcast. In this episode: Ruby: Rubygems Monthly: Sinatra 2, Bundler 1.15, Rubocop, CanCanCan 2, Devise, Puma and ActsAsTaggableOn 5, Speeding Up Rendering Rails Pages with render_async, Debugging Rails views in production, Introducing moby project: a new open-source project to advance the software containerization movement, Open Service Broker API and Open API Initiative, Why you should probably avoid mixins, N + 1 Control, and Corneal allows you to quickly generate a Sinatra template with Rails-like simplicity. JavaScript: Node v8.1.0, V8 Release 6.0, ESLint v4.0.0, Announcing WebRTC and Media Capture, WebAssembly 101: a developer's first steps, Adopting Flow & TypeScript, Node.js Child Processes: Everything you need to know, Factor-network - simple factor-network implementation written in JavaScript, SVGI - the SVG inspection tool, Flubber - tools for smoother shape animations, Billboard.js - re-usable easy interface JavaScript chart library, and Awesome CSS in JS. Guest: Alexandr Lomov (Github, Facebook)