Podcasts about NoRedInk

  • 41 PODCASTS
  • 72 EPISODES
  • 49m AVG DURATION
  • INFREQUENT EPISODES
  • LATEST: Apr 3, 2025

POPULARITY (chart, 2017–2024)


Best podcasts about NoRedInk

Latest podcast episodes about NoRedInk

Women in Technology
Why Diversity in STEM Matters for Companies & Innovation with Dr. Bushra Anjum

Women in Technology

Play Episode Listen Later Apr 3, 2025 25:20


I'm thrilled to have Dr. Bushra Anjum join us for today's episode! She is the Head of Data Science and AI/LLM subject matter expert at the EdTech startup NoRedInk. With a passion for innovation in adaptive online curriculum tools, Dr. Anjum leads a talented team dedicated to enhancing students' writing and critical thinking skills. Join us as we dive into her impressive journey and explore her pivotal role as Co-chair of the Association for Computing Machinery's Women in Computing Council (ACM-W). Dr. Anjum will share insights on ACM-W's mission to increase representation and inclusion in STEM, the importance of mentorship for women in tech, and her key priorities in advancing initiatives for women in computing. We'll also discuss practical steps for students and early-career professionals looking to break into the tech industry, and how aspiring members can get involved with ACM-W. Plus, we'll learn about the practices Dr. Anjum has implemented to promote equity in the hiring process for her Data Department. Don't miss this inspiring conversation filled with valuable advice and actionable insights for anyone interested in the tech field!
✉️ Connect with Dr. Anjum on LinkedIn: https://www.linkedin.com/in/bushraanjum/

Modern Web
Modern Web Podcast S11E32- Why Every Developer Should Try Elm + Are We Abandoning JavaScript? with Lindsay Wardell

Modern Web

Play Episode Listen Later Apr 3, 2024 44:03


Lindsay Wardell, senior software engineer at NoRedInk, shares her opinions on Elm and explains why every software engineer should give it a try. She and Rob Ocel also discuss trends in full-stack development away from JavaScript, and why developers should broaden their experience with multiple languages to stay adaptable.
Sponsored by This Dot. Watch this episode on our YouTube channel. Read more on our blog.

The Cult of Pedagogy Podcast
218: How to Help Students Without Being a Savior

The Cult of Pedagogy Podcast

Play Episode Listen Later Dec 10, 2023 42:45 Very Popular


As a teacher, you probably find yourself in situations pretty often where you're made aware of a student having needs or challenges that exceed what your school typically offers them. The list of student needs in so many schools is never-ending, and your desire to help meet them is probably pretty strong, too. But attempting to meet these needs on your own — to become a kind of "savior" to your students — can not only lead to burnout for you, it's also ultimately not that helpful to the student long-term. In this episode, Alex Shevrin Venet, author of the book Equity-Centered Trauma-Informed Education, returns to talk about the danger of getting into a savior mentality when helping our students, how to tell if you're slipping into that kind of thinking, and how to shift toward healthier and more helpful ways of thinking about and approaching student needs. Thanks to NoRedInk and The Modern Classrooms Project for sponsoring this episode. You can find links to Alex's book and a full transcript of our conversation at cultofpedagogy.com/savior-mentality/.

The Cult of Pedagogy Podcast
217: How to Talk about Race in Your Classroom

The Cult of Pedagogy Podcast

Play Episode Listen Later Nov 12, 2023 48:43


Our classrooms have the potential to be spaces where we learn how to have conversations about challenging topics with respect, curiosity, and kindness. Contrary to the voices that say race is not an appropriate topic for school, in this episode we're saying just the opposite. My guests are Matthew Kay, author of the book, Not Light, But Fire: How to Lead Meaningful Race Conversations in the Classroom, and Jennifer Orr, Kay's co-author of the follow-up book, We're Gonna Keep On Talking: How to Lead Meaningful Race Conversations in the Elementary Classroom. I talked with Matt and Jen about the value of discussion as a teaching tool, the elements that are necessary for creating a healthy ecosystem for race conversations, some strategies for having these conversations in organic and authentic ways, and a message for teachers working in states that are hostile to conversations about race. Thanks to NoRedInk and The Modern Classrooms Project for sponsoring this episode. You can find links to both books and a full transcript of our conversation at cultofpedagogy.com/pod/.

The Cult of Pedagogy Podcast
216: Your Teachers Need a Win

The Cult of Pedagogy Podcast

Play Episode Listen Later Oct 23, 2023 12:03


I have no new strategies or tools or books to share with you this week. Nothing new to implement. Just a simple call to action for administrators to start giving your teachers more specific, genuine positive feedback. They need it. Thanks to NoRedInk and The Modern Classrooms Project for sponsoring this episode. You can read a full transcript of this podcast at cultofpedagogy.com/pod.

Elm Town
Elm Town 58 – Unblocking users with quality software

Elm Town

Play Episode Listen Later Jun 13, 2023 58:05


Tessa Kelly shares her experience unblocking users while building quality software, explains how to avoid the "accessibility dongle" using the Elm philosophy, and considers some tesk9/accessible-html design changes.
Thanks to our sponsor, Logistically. Email: elmtown@logisticallyinc.com.
Music by Jesse Moore.
Recording date: 2023.04.04
Guest: Tessa Kelly (https://github.com/tesk9)
Show notes:
[00:00:13] Sponsored by Logistically
[00:00:47] Introducing Tessa Kelly (she needs no introduction)
  • Elm Town 9 - Getting Started
  • Elm Town 30 - Accessibility with Tessa Kelly
  • Elm Radio - (2020) Holiday Special!
  • Elm Radio - Accessibility in Elm
  • tesk9/accessible-html
  • tesk9/palette
  • "Functional Data Structures" at elm-conf 2016
  • "Accessibility with Elm" at elm-conf 2017
  • "Writing Testable Elm" at elm-conf 2019
  • Software Unscripted - Accessibility in Practice with the Accessibilibats!

Elm Town
Elm Town 57 – Brilliant ways to use Elm

Elm Town

Play Episode Listen Later May 30, 2023 67:40


Aaron Strick shares what it was like learning Elm at NoRedInk, and explains some of the "zany" (delightful) ways Elm is used at Brilliant.
Thanks to our sponsor, Logistically. Email: elmtown@logisticallyinc.com.
Intro music by Jesse Moore.
Outro music (The Elm Song) by Matt Farley. (Commissioned by Michael Glass for elm-conf 2019.)
Recording date: 2023.03.10
Guest: Aaron Strick (https://aaronstrick.com/)
Show notes:
[00:00:56] Introducing Aaron Strick
[00:01:47] An eclectic background
[00:05:12] The impetus for Aaron's journey into computers
[00:07:10] Learning Elm at NoRedInk
  • "A Farewell to FRP" by Evan Czaplicki on the move away from signals to The Elm Architecture
[00:10:32] What Aaron likes about Elm
  • iselmdead.info
[00:13:27] Challenges when learning Elm as a first functional language
[00:19:33] Mentors at NoRedInk
  • Elm Town 15 - Spotlight on Hardy Jones
  • Elm in Action by Richard Feldman
  • "Haskell, in Elm terms: Type Classes" by Tereza Sokol
[00:23:26] Richard gives us a memorable moment from NoRedInk
[00:27:27] Benefits of the holistic approach
  • Elm Town 55 – From algorithms & animation to building a decentralized finance app with Dwayne Crooks
  • Discourse post with Cal Newport quote & how Evan works
[00:30:18] Brilliant ways to use Elm
  • "Diagrammar: Simply Make Interactive Diagrams" by Pontus Granström (Strange Loop 2022)
  • Year End Review 2022 post on Aaron's website about working on a mathematical input box
  • Brilliant.org math courses
[00:52:56] Using elm-pages to build aaronstrick.com
  • aaronstrick.com
  • elm-pages.com
  • Aaron's music (including the "Turtlehead Poo" cover)
[00:59:02] Picks
  • Aaron's picks: CSS for JavaScript Developers by Josh W. Comeau; Everything Everywhere All at Once
  • Jared's picks: Courtney Barnett; Parable of the Sower by Octavia E. Butler
Thanks, everyone, for coming to Elm Town! If you're enjoying the show, please share it with friends and like/rate it on your podcast platform.

PodRocket - A web development podcast from LogRocket
Functional programming with Elm with Lindsay Wardell

PodRocket - A web development podcast from LogRocket

Play Episode Listen Later Jan 10, 2023 23:37


Lindsay Wardell, engineer at NoRedInk, comes on to talk about her recent Vite Conf talk, "Functional programming in Vite with Elm," to tell us what functional programming is and why it's beneficial.
Links:
  • https://elmprogramming.com
  • https://twitter.com/elm_programming
  • https://www.manning.com/books/elm-in-action
  • https://elm-lang.org/community/slack
  • https://www.youtube.com/watch?v=QyJZzq0v7Z4
  • https://twitter.com/lindsaykwardell
Tell us what you think of PodRocket: We want to hear from you! We want to know what you love and hate about the podcast. What do you want to hear more about? Who do you want to see on the show? Our producers want to know, and if you talk with us, we'll send you a $25 gift card! If you're interested, schedule a call with us (https://podrocket.logrocket.com/contact-us) or you can email producer Kate Trahan at kate@logrocket.com (mailto:kate@logrocket.com).
Follow us. Get free stickers: Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers!
What does LogRocket do? LogRocket combines frontend monitoring, product analytics, and session replay to help software teams deliver the ideal product experience. Try LogRocket for free today. (https://logrocket.com/signup/?pdr)
Special Guest: Lindsay Wardell.

Software Unscripted
Accessibility in Practice with the Accessibilibats

Software Unscripted

Play Episode Listen Later Nov 9, 2022 58:32


Richard talks with the Accessibilibats, a team of three people working at NoRedInk to improve the accessibility of a product that's used by millions of people. The discussion focuses on their actual experiences in practice - what was surprising, what was challenging, and advice for the future.

Professional Technical Interviewee with Taylor Dorsett
Episode #30 - Blake Thomas & Richard Feldman - Professional Technical Interviewee w/ Taylor Dorsett

Professional Technical Interviewee with Taylor Dorsett

Play Episode Listen Later May 19, 2022 59:46


My guests today are Blake Thomas and Richard Feldman. They both work at NoRedInk. Blake is the VP of Engineering and Richard is the Head of Open Source. They have hired a large number of engineers over the last year and are continuing to scale NoRedInk.
Video: https://youtu.be/9vdFSCWHV1M
Part Two - Technical: To be posted
Part Two Audio only: To be posted
Spotify: https://open.spotify.com/show/7zvt9QZWMUGsQ27NM8XuMd?si=272649053fbf4c03
Apple Podcasts: https://podcasts.apple.com/us/podcast/professional-technical-interviewee-with-taylor-dorsett/id1557937961
Guests: Blake Thomas and Richard Feldman of NoRedInk
LinkedIn (Blake): https://www.linkedin.com/in/bwthomas/
LinkedIn (Richard): https://www.linkedin.com/in/rtfeldman/
Website: https://www.noredink.com/
If you enjoyed the show, please subscribe, thumbs up, and share the show. Episodes are released on the first four Thursdays of each month.
Host: Taylor Owen Dorsett | Email: dorsetttaylordev@gmail.com | Twitter: @yodorsett | LinkedIn: https://www.linkedin.com/in/taylordorsett/ | GitHub: https://github.com/TaylorOD | YouTube: https://www.youtube.com/c/TaylorDorsett
Editor: Dustin Bays | Email: dustin.bays@baysbrass.com

FSJam Podcast
Episode 72 - Elm with Lindsay Wardell

FSJam Podcast

Play Episode Listen Later May 6, 2022 40:12


In this episode we discuss NoRedInk's experience using Elm in production, the combined power of functional programming and static type systems, building a language for the long term, and the difficulty of explaining the benefits of purely functional languages to developers who have never experienced them.
Lindsay Wardell: Home Page | Twitter | LinkedIn
Elm: Home Page | Twitter | Discourse | Slack | News
NoRedInk: Home Page | Twitter
Links: From Rails to Elm and Haskell

Software Unscripted
Change Management

Software Unscripted

Play Episode Listen Later May 3, 2022 50:54


Richard talks with Blake Thomas, Director of Engineering at NoRedInk, about some of the human aspects of software development - like change management, delegation, and team organization.

JavaScript Jabber
What's New with Elm? ft. Lindsay Wardell - JSJ 527

JavaScript Jabber

Play Episode Listen Later Apr 12, 2022 76:52 Very Popular


Elm is a functional language that compiles to JavaScript and runs in the browser. Lindsay Wardell from NoRedInk joins the JavaScript Jabber panel this week to discuss her background with Vue and Elm. The discussion ranges into how Lindsay got into Elm and how it differs and solves some of the issues that crop up when people build apps with JavaScript.
Sponsors
  • Top End Devs (https://topenddevs.com/)
  • Coaching | Top End Devs (https://topenddevs.com/coaching)
Links
  • elm-vue-bridge (https://elm-vue-bridge.lindsaykwardell.com/)
  • GitHub - lindsaykwardell/vite-elm-template (https://github.com/lindsaykwardell/vite-elm-template)
  • Utilizing Elm in a Web Worker (https://www.lindsaykwardell.com/blog/utilizing-elm-in-a-web-worker)
  • Setting up an Elm project in 2022 (https://www.lindsaykwardell.com/blog/setting-up-elm-in-2022)
  • Lindsay Wardell (https://www.lindsaykwardell.com/)
Picks
  • AJ: GitHub: coolaj86/AJScript (https://github.com/coolaj86/AJScript)
  • AJ: Slonik (https://www.npmjs.com/package/slonik)
  • Follow CoolAJ86 Live Streams: YouTube: https://youtube.com/coolaj86 | Twitch: https://twitch.tv/coolaj86
  • Follow Beyond Code: YouTube: https://www.youtube.com/channel/UC2KJHARTj6KRpKzLU1sVxBA | Twitter: https://twitter.com/@_beyondcode
  • Charles: Taco Cat Goat Cheese Pizza (https://amzn.to/3jtcuQ3)
  • Dan: Uprooted (https://amzn.to/3E4U0hY)
  • Dan: Support Ukraine
  • Lindsay: Elm Radio Podcast (https://elm-radio.com/)
  • Lindsay: Why Isn't Functional Programming the Norm? – Richard Feldman (https://www.youtube.com/watch?v=QyJZzq0v7Z4)
  • Lindsay: A Taste of Roc — Richard Feldman (https://www.youtube.com/watch?v=6qzWm_eoUXM)
  • Steve: Twitter: Dad Jokes (@Dadsaysjokes) (https://twitter.com/Dadsaysjokes)
Special Guest: Lindsay Wardell.

Elm Radio
050: Large Elm Codebases with Ju Liu

Elm Radio

Play Episode Listen Later Feb 14, 2022 68:01


Ju Liu (twitter) (github)
  • Elm at NoRedInk
  • Ju's blog
  • noredink-ui is NoRedInk's internal UI kit (live demo page)
  • avh4/elm-program-test
  • elm-sortable-table API
  • Implementing Elm Podcast season 1
  • cultureamp/react-elm-components (React library for embedding Elm as a Web Component)
  • elm-community/js-integration-examples
  • NoRedInk is hiring Elm and Haskell devs
  • Ju Liu's blog

Hi Ed. This is Tech.
How to Get a Job in EdTech Marketing

Hi Ed. This is Tech.

Play Episode Listen Later Feb 7, 2022 36:11


This week, Rob and Anna sit down with Hallie Smith and Justin McElwee to discuss the journey from teacher to marketer. Hallie is a former teacher who now leads marketing at edtech companies like NoRedInk, and Justin is a current teacher and podcast producer.

The Bike Shed
315: Emotions Are A Pendulum

The Bike Shed

Play Episode Listen Later Nov 9, 2021 41:23


Steph talks about starting a new project and identifying "focused" tests while Chris shares his latest strategy for managing flaky tests. They also ponder the squishy "it depends" side of software and respond to a listener question about testing all commits in a pull request. This episode is brought to you by ScoutAPM (https://scoutapm.com/bikeshed). Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy. rspec-retry (https://github.com/NoRedInk/rspec-retry) Cassidy Williams - It Depends - GitHub Universe 2021 (https://www.youtube.com/watch?v=aMWh2uLO9OM) Say No To More Process (https://thoughtbot.com/blog/say-no-to-more-process-say-yes-to-trust) StandardRB (https://github.com/testdouble/standard) Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: CHRIS: My new computer is due on the fourth. I'm so close. STEPH: On the fourth? CHRIS: On the fourth. STEPH: That's so exciting. CHRIS: And I'm very excited. But no, I don't want to upgrade any software on this computer anymore. Never again shall I update a piece of software on this computer. STEPH: [laughs] CHRIS: This is its final state. And then I will take its soul and move it into the new computer, and we'll go from there. [chuckles] STEPH: Take its soul. [laughs] CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we learn along the way. So, Steph, what's new in your world? STEPH: Hey, Chris. Let's see. It's been kind of a busy week. It's been a busy family week. Utah, my dog, hasn't been feeling well as you know because you and I have chatted off-mic about that a bit. So he is still recovering from something, I don't know what. He's still on most days his normal captain chaos self, but then other days, he's not feeling well. So I'm just keeping a close eye on him. And then I also got some other family illnesses going on. So it has been a busy family week for sure. On the more technical project side, I am wrapping up my current project. So I have one more week, and then I will shift into a new project, which I'm very excited about. And you and I have chatted about this several times. So there's always just that interesting phase where you're trying to wrap up and hand things off and then accomplish last-minute wishlist items for a project before then you start with a new one. So I am currently in that phase. CHRIS: How long were you on this project for? STEPH: It'll be a total of I think eight months. CHRIS: Eight months, that's healthy. That's a bunch. It's always interesting to be on a project for that long but then not longer. There were plenty of three and four-month projects that I did. And you can definitely get a large body of work done. You can look back at it and proudly stare at the code that you have written. But that length of time is always interesting to me because you end up really...for me, when I've had projects that went that long but then not longer, I always found that to be an interesting breaking point. How are you feeling moving on from it? Are you ready for something new? Are you sad to be moving on? Do you feel attached to things? STEPH: It's always a mix. I'm definitely attached to the team, and then there are always lots of things that I'd still love to work on with that team. But then, I am also excited to start something new. 
That's why I love this role of consulting because then I get to hop around and see new projects and challenges and work with new people. I'm thinking seven to eight months might be a sweet spot for me in terms of the length of a project. Because I find that first month with a project, I'm really still ramping up, I'm getting comfortable, I'm getting in the groove, and I'm contributing within a short amount of time. But I still feel like that first month, I'm getting really comfortable with this new environment that I'm in. And so then I have that first month. And then, at six months, I have more of heads-down time. And I get to really focus and work with a team. And then there's that transition period, and it's nice to know when that's coming up for several weeks, so then I have a couple of weeks to then start working on that transition phase. So eight months might be perfect because then it's like a month for onboarding, ramping up, getting comfortable. And then six months of focus, and then another month of just focusing on what needs to be transitioned so then I can transition off the team. CHRIS: All right. Well, now we've defined it: eight months is the perfect length of a project. STEPH: That's one of the things I like about the Boost team: we typically have longer engagements. So that was one of the reasons when we were splitting up the teams in thoughtbot that I chose the Boost team because I was like, yeah, I like the six-month-plus project. Speaking of that wishlist, there are little things that I've wanted to make improvements on but haven't really had time to do. There's one that's currently on my mind that I figured I'd share with you in case you have thoughts on it. I am a big proponent of using the RSpec focus filter when running tests. That way, I can just prefix a context, it, or describe block with F, and then when I run RSpec, it will only run the tests that I've prefixed with that F focus command, and I love it. But we are running into some challenges with it because right now, there's nothing that catches that in a pull request. So if you commit that focus filter on some of your tests, and then that gets pushed up, if someone doesn't notice it while reviewing your pull request, then that gets merged into main. And all of the tests are still green, but it's only a subset of the tests that are actually running. And so it's been on my mind that I'd love something that's going to notice that, that's going to catch it, something that is not just us humans doing our best but something that's automated that's going to notice it for us. And I have some thoughts. But I'm curious, have you run into something like this? Do you have a way that you avoid things like that from sneaking into the main branch? CHRIS: Interestingly, I have not run into this particular problem with RSpec, and that's because of the way that I run RSpec tests. I almost never use the focus functionality where you actually change the code file to say fit instead of it to focus that spec. I tend to lean into the functionality where you can pass RSpec the file and then a line number, and RSpec will automatically figure out which spec, context block, or entire file that points at. And also, I have Vim stuff that allows me to do that very easily from the file. It's very rare that I would want to run more than one file. So basically, with that, I have all of the flexibility I need. And it doesn't require any changes to the file.
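(For reference, a minimal sketch of the two workflows being described here; the User model and file path are invented for illustration:)

    # spec/models/user_spec.rb (hypothetical example)
    RSpec.describe User do
      # The `f` prefix marks this example as focused. With RSpec's generated
      # default of `config.filter_run_when_matching :focus`, a plain `rspec`
      # run now executes only focused examples -- exactly the thing that can
      # slip through review and merge into main.
      fit "requires an email" do
        expect(User.new(email: nil)).not_to be_valid
      end
    end

The no-file-change alternative Chris describes is passing a location on the command line, for example bundle exec rspec spec/models/user_spec.rb:7, which runs only the example or block containing that line.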
So that's almost always how I'm working in that mode. I really love that. And it makes me so sad when I go to JavaScript test runners because they don't have that. That said, I've definitely felt a very similar thing with ESLint and ESLint yelling at me for having a console.log. And I'm like, ESLint, I'm working here. I got to debug some stuff, so if you could just calm down for a minute. And what I would like is a way to mark these as checks that should only run in CI but definitely need to run in CI. And so I think an equivalent would be there's probably a RuboCop rule that says disallow fit or disallow any of the focus versions for RSpec. But I only want those to run in CI. And this has been a pain point that I felt a bunch of times. And it's never been painful enough that I put in the effort to fix it. But I really dislike particularly that version of I'm in my editor, and I almost always want there to be no warnings within the editor. I love that TypeScript or ESLint, or other things can run within the editor and tell me what's going on. But I want them to be contextually aware. And that's the dream; I've yet to get there. STEPH: I like the idea of ESLint having a work mode where you're like, back off, I am in work mode right now. [chuckles] I understand that I won't commit this. CHRIS: I'm working here. [laughter] STEPH: And I like the idea of a RuboCop cop. So that's where my mind went initially is like, well, maybe there's a custom cop, or maybe there's an existing one, and I just haven't noticed it yet. So I'm adding a rule that says, hey, if you do see an fcontext, fdescribe, fit, something like that, please fail. Please let us know, so we don't merge this in. So that's on my wishlist, not my to-don't list. That one is on my to-do list. CHRIS: I'm also intrigued, though, because the particular failure mode that you're describing is you take what is an entire spec suite, and instead, you focus down to one context block within a given file. So previously, there were 700 specs that ran, and now there are 12. And that's actually something that I would love for Circle or whatever platform you're running your tests on to be like, hey, just as a note, you had been slowly creeping up and had hit a high watermark of roughly 700 specs. And then today, we're down to 12. So either you did some aggressive grooming, or something's wrong. But a heuristic analysis of like, I know sometimes people delete specs, and that's a thing that's okay but probably not this many. So maybe something went wrong there. STEPH: I feel like we're turning CI into this friend at the bar that's like, "Hey, you've had a couple of drinks. I just wanted to check in with you to make sure that you're good." [laughs] CHRIS: Yes. STEPH: "You've had 100 tests that were running and now only 50. Hey, friend, how are you? What's going on?" CHRIS: "This doesn't sound like you. You're normally a little more level-headed." [laughs] And that's the CI that is my friend that keeps me honest. It's like, "Wait, you promised never to overspend anymore, and yet you're overspending." I'm like, "Thank you, CI. You're right; I did say I want the tests to pass." STEPH: [laughs] I love it. I'll keep you posted if I figure something out; if I either turn CI into that friend that lets me know when my behavior has changed in a concerning way and an intervention is needed. Or, more likely, I will see if there's a RuboCop or some other process that I can apply that will check for this, which I imagine will be fast.
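(As it happens, the rubocop-rspec extension ships a cop for exactly this; a minimal sketch of enabling it, assuming that gem is already in the project:)

    # .rubocop.yml -- assumes the rubocop-rspec plugin is installed
    require:
      - rubocop-rspec

    # Fails the lint run on fit, fdescribe, fcontext, and focus: true metadata
    RSpec/Focus:
      Enabled: true

Run in CI (for example, bundle exec rubocop), this fails the build whenever a focused spec is committed, which is the automated catch being wished for above.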
I mean, we're very mindful about ensuring our test suite doesn't slow down as we're running it. But I'm just thinking about this out loud. If we add that additional cop, I imagine that will be fast. So I don't think that's too much of an overhead to add to our CI process. CHRIS: If you've already got RuboCop in there, I'm guessing the incremental cost of one additional cop is very small. But yeah, it is interesting. That general thing of I want CI to go fast; I definitely feel that feel. And we're slowly creeping up on the project I'm working on. I think we're at about somewhere between five to six minutes, but we've gotten there pretty quickly where not that long ago, it was only three minutes. We're adding a lot of feature specs, and so they are definitely accruing slowdowns in our CI. And they're worth it, I think, because they're so valuable. And they test the whole integration of everything, but it's a thing that I'm very closely watching. And I have a long list of things that I might pursue when I decide it's time for CI to get a haircut, as it were. STEPH: I have a very hot tip for a way to speed up your tests, and that is to check if any of your tests have a very long sleep in them. That came up recently [chuckles] this week where someone was working in a test and found some relic that had been added a while back that then wasn't caught. And I think it was a sleep 30. And they were like, "Hey, I just sped up our tests by 30 seconds." I was like, ooh, we should grep now to see if there's anything else like that. [laughs] CHRIS: Oh, I love the sentence we should grep now. [laughter] The correct response to this is to grep immediately. I thought you were going to go with the pro tip of you can just focus down to one context block. And then the specs will run so much faster because you're ignoring most of them, but we don't want to do that. The sleep, though, that's a pro tip. And that does feel like a thing that there could be a cop for, like, never sleep more than...frankly, let's try not to sleep at all, or at least not add a sleep in our specs. We can sleep in life; it's important, but anyway. [chuckles] STEPH: [laughs] That was the second hot tip, and you got it.
This does not run locally. So if you run a feature spec and it fails locally, that's a good chance for us to intervene and look at whether or not there's some flakiness there. But on CI, I particularly don't want the case where we have a pull request, everything's great, and we merge that pull request, and then the subsequent rebuild, which again, as a note, I would rather that Circle not rebuild it because we've already built that one. But that is another topic that I have talked about in the past, and we'll probably talk about it again in the future. But setting that aside, Circle will rebuild on the main branch when we merge in, and sometimes we'll see failures there. And that's where it's most painful. Like, this is now the deploy queue. This is trying to get this out into whatever environment we're deploying to. And it is very sad when that fails. And I have to go in and manually say, hey, rebuild. I know that this works because it just worked in the pull request, and it's the same commit hash. So I know deterministically for reasons that this should work. And then it does work on a rebuild. So we introduced RSpec::Retry. We have wrapped it around our feature specs. And so now I believe we have three possible retries. So if it fails once, it'll try it again, and then it'll try it a third time. So far, each time that it has had to step in, the spec has passed on the subsequent run. But I don't know; there was some very gentle pushback, or concerns let's call them, when I introduced this pull request, from another developer on the team, saying, "I don't know, though, I feel like this is something that we should solve at the root layer. The failures are a symptom of flaky tests, or inconsistency, et cetera, and so I'd rather not do this." And I said, "Yeah, I know. But I'm going to merge it," and then I merged it. We had a better conversation about that. I didn't just broadly overrule. But I said, "I get it, but I don't see the obvious place to shore this up. I don't see where we're doing weird inconsistent things in our code. This is just, I think, inherent complexity of feature specs." So I did it, but yeah, good idea, terrible idea. What do you think, Steph? Maybe terrible is too strong of a word. Good idea, mediocre idea. STEPH: I like the original branding. I like the good idea, terrible idea. Although you're right, that terrible is a very strong branding. So I am biased right now, so I'm going to lead in answering your question by stating that, because our current project has that problem as well where we have these flaky tests. And it's one of those that, yes, we need to look at them. And we have fixed a large number of them, but there are still more of them. And it becomes a question of are we actually doing something wrong here that then we need to fix? Or, like you said, is it just the nature of these feature specs? Some of them are going to occasionally fail. What reasonable improvements can we make to address this at the root cause? I'm interested enough, since I hadn't heard of RSpec::Retry, that I want to check it out because when you add that, you annotate a test. When a test fails, does it run the entire build, or will it rerun just that test? Do you happen to know? CHRIS: Just the test. So it's configured as an around block on the feature specs. And so you tell it like, for any feature spec, it's config.include RSpec::Retry for feature specs, or whatever. So it's just going to rerun the one feature spec that failed when and if that happens.
So it's very, very precise as well in that sense where when we have a failure merging into the main branch, I have to rebuild the whole thing. So that's five or six minutes plus whatever latency for me to notice it, et cetera, whereas this is two more seconds in our CI runtime. So that's great. But again, the question is, am I hiding? Am I dealing with the symptoms and not the root cause, et cetera? STEPH: Is there a report that's provided at the end that does show these are the tests that failed and we had to rerun them? CHRIS: I believe no-ish. You can configure it to output, but it's just going to be outputting to standard out, I believe. So along with the sea of green dots, you'll see had to retry this one. So it is visible, but it's not aggregated. And the particular thing is there's the JUnit reporter that we're using. So the XML common format for this is how long our tests took to run, and these ones passed and failed. So Circle, as a particular example, has platform-level insights for that kind of stuff. And they can tell you these are your tests that fail most commonly. These are the tests that take the longest to run, et cetera. I would love to get retries integrated into that format and then surface them to Circle. Circle could then surface them to us. But right now, I don't believe that's happening. So it is truly I will not see it unless I actively go search for it. To be truly honest, I'm probably not doing that. STEPH: Yeah, that's a good, fair, honest answer. You mentioned earlier that if you want a test to retry, you have to annotate the test. Does that mean that you get to highlight specific tests that you're marking those to say, "Hey, I know that these are flaky. I'm okay with that. Please retry them." Or does it apply to all of them? CHRIS: I think there are different ways that you can configure it. You could go the granular route of we know this is a flaky spec, so we're going to only put the retry logic around it. And that would be a normal RSpec annotation sort of tagging the spec, I think, is the terminology there. But we've configured it globally for all feature specs. So in a spec support file, we just say config.include RSpec::Retry where type is a feature. And so every feature spec now has the possibility to retry. If they pass on the first pass, which is the hope most of the time, then they will not be retried. But if they fail, then they'll be retried up to three times total, or up to two additional times, I think. STEPH: Okay, cool. That's helpful. So then I think I have my answer. I really think it's a good idea to automate retrying tests that we have identified that are flaky. We've tried to address the root, and our resolution was this is fine. This happens sometimes. We don't have a great way to improve this, and we want to keep the test. So we're going to highlight that this test we want to retry. And then I'm going to say it's not a great idea to turn it on for all of them just because then I have that same fear about you're now hiding any flaky tests that get introduced into the system. And nobody reasonably is going to go and read through to see which tests are going to get retried, so that part makes me nervous. CHRIS: I like it. I think it's a balanced and reasonable set of good and terrible idea. Ooh, it's perfect. I don't think we've had a balanced answer on that yet. STEPH: I don't think so. CHRIS: This is a new outcome for this segment. I agree.
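(A sketch of the global setup Chris is describing, based on rspec-retry's documented options; the retry count and CI-only gating follow the conversation, while the exact file path is an assumption:)

    # spec/support/rspec_retry.rb -- a sketch, not the show's actual config
    require "rspec/retry"

    RSpec.configure do |config|
      # Print a note to standard out each time an example is retried
      config.verbose_retry = true

      # Wrap only feature specs, and only retry on CI, so local flakes stay loud
      config.around(:each, type: :feature) do |example|
        example.run_with_retry retry: ENV["CI"] ? 3 : 1
      end
    end

The granular alternative Steph prefers is per-example metadata, for example it "submits the form", retry: 3 do ... end, which keeps the list of known-flaky specs visible in the code itself.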
Ideally, in my mind, it would be getting into that XML format, the output from the tests, so that we now have this artifact, we can see which ones are flaky and eventually apply effort there. What you're saying feels totally right of we should be more particular and granular. But at the same time, the failure mode and the thing that I'm trying, I want to keep deploys going. And I only want to stop deploys if something's really broken. And if a spec retries, then I'm fine with it is where I've landed, particularly because we haven't had any real solutions where there was anything weird in our code. Like, there's just flakiness sometimes. As I say it, I feel like I'm just giving up. [laughs] And I can hear this tone of stuff's just hard sometimes, and so I've taken the easy way out. And I guess that's where I'm at right now. But I think what you're saying is a good, balanced answer here. I like it. I don't know if I'm going to do anything about it, but...[laughter] STEPH: Well, going back to when I was saying that I'm biased, our team is feeling this pain because we have flaky tests. And we're creating tickets, and we're trying to do all the right things. We create a ticket. We have that. So it's public. So people know it's been acknowledged. If someone's working on it, we let the team know; hey, I'm working on this. So we're not duplicating efforts. And so, we are trying to address all of them. But then some of them don't feel like a great investment of our time trying to improve. So that's what I really do like about the RSpec::Retry is then you can still have a resolution. Because it's either right now your resolution is to fix it or to change the code, so then maybe you can test it in a different way. There's not really a good medium step there. And so the retry feels like an additional good outcome to add to your tool bag to say, hey, I've triaged this, and this feels reasonable that we want to retry this. But then there's also that concern of we don't want to hide all of these flaky tests from ourselves in case we have done it and there is an opportunity for us to improve it. So I think that's what I do really like about it because right now, for us, when a test fails, we have to rerun the entire build, and that's painful. So if tests are taking about 20 minutes right now, then one spec fails, and then you have to wait another 20 minutes. CHRIS: I would have turned this on years ago with a 20-minute build time. [chuckles] STEPH: [laughs] Yeah, you're not wrong. But also, I didn't actually know about RSpec::Retry until today. So that may be something that we introduce into our application or something that I bring up to the team to see if it's something that we want to add. But it is interesting that initial sort of ooh kind of feeling that the team will give you introducing because it feels bad. It feels wrong to be like, hey, we're just going to let these flaky tests live on, and we're going to automate retrying them to at least speed us up. And it's just a very interesting conversation around where we want to invest our time and between the risk and pay off. And I had a similar experience this week where I had that conversation, but this one was more with myself where I was working through a particular issue where we have a state in the application where something weird was done in the past that led us to a weird state. 
And so someone raised a very good question where it's like, well, if what you're saying is technically an impossible state, we should make it impossible, like at the database layer. And I love that phrase. And yet, there was a part of me that was like, yes, but also doing that is not a trivial investment. And we're here because of a very weird thing that happened before. It felt one of those interesting, like, do we want to pursue the more aggressive, like, let's make this impossible for the future? Or do we want to address it for now and see if it comes back up, and then we can invest more time in it? And I had a hard time walking myself through that because my initial response was, well, yeah, totally, we should make it impossible. But then I walked through all the steps that it would take to make that happen, and it was not very trivial. And so it was one of those; it felt like the change that we ended up with was still an improvement. It was going to prevent users from seeing an error. It was still going to communicate that this state is an odd state for the application to be in. But it didn't go as far as to then add in all of the safety measures. And I felt good about it. But I had to convince myself to feel good about it. CHRIS: What you're describing there, the whole thought sequence, really feels like the encapsulation of it depends. And that being part of the journey of learning how to do software development and what it means. And you actually shared a wonderful video with me yesterday, and it was Cassidy Williams at GitHub Universe. And it was her talking to her younger self, and just it depends, and it was so true. So we will include a link to that in the show note because that was a wonderful thing for you to share. And it really does encapsulate this thing. And from the outside, before I started doing software development, I'm like, it's cool. I'm going to learn how to sling code and fix the stuff and hack, and it'll be great, and obvious, and correct, and knowable. And now I'm like, oh man, squishy nonsense. That's all it is. STEPH: [laughs] CHRIS: Fun squishy, and I like it. It's so good. But it depends. Exactly that one where you're like, I know that there's a way to get to correctness here but is it worth the effort? And looping back to...I'm surprised at the stance that I've taken where I'm just like, yeah, I'm putting in RSpec::Retry. This feels like the right thing. I feel good about this decision. And so I've tried to poke at it a tiny bit. And I think what matters to me deeply in a list of priorities is number one correctness. I care deeply that our system behaves correctly as intended and that we are able to verify that. I want to know if the system is not behaving correctly. And that's what we've talked about, like, if the test suite is green, I want to be able to deploy. I want to feel confident in that. Flaky specs exist in this interesting space where if there is a real underlying issue, if we've architected our system in a way that causes this flakiness and that a user may ever experience that, then that is a broken system. That is an incorrect system, and I want to resolve that. But that's not the case with what we're experiencing. We're happy with the architecture of our system. And when we're resolving it, we're not even really resolving them. We're just rerunning manually at this point. We're just like, oh, that spec flaked. And there's nothing to do here because sometimes that just happens. So we're re-running manually. 
And so my belief is if I see all green, if the specs all pass, I know that I can deploy to production. And so if occasionally a spec is going to flake and retrying it will make it pass (and I know that pass doesn't mean oh, this time it happened to pass; it's that that is the correct outcome and we had a false negative before), then I'm happy to instrument the system in a way that hides that from me because, at this point, it does feel like noise. I'm not doing anything else with the failures when we were looking at them more pointedly. I'm not resolving those flaky specs. There are no changes that we've made to the underlying system. And they don't represent a failure mode or an incorrectness that an end-user might see. So I honestly want to paper over and hide it from myself. And that's why I've chosen this. But you can see I need to defend my actions here because I feel weird. I feel a little off about this. But as I talk through it, that is the hierarchy. I care about correctness. And then, the next thing I care about is maintaining the deployment pipeline. I want that to be as quick and as efficient as possible. And I've talked a bunch about explorations into the world of observability and trying to figure out how to do continuous deployment because I think that really encourages overall better engineering outcomes. And so first is correctness. Second is velocity. And flaky specs impact velocity heavily, but they don't actually impact correctness in the particular mode that we're experiencing them here. They definitely can. But in this case, as I look at the code, I'm like, nah, that was just noise in the system. That was just too much complexity stacked up in trying to run a feature spec that simulates a browser and a user clicking in JavaScript and all this stuff and the things. But again, [laughs] here I am. I am very defensive about this apparently. STEPH: Well, I can certainly relate because I was defending my answer to myself earlier. And it is really interesting what you're pointing out. I like how you appreciate correctness and then velocity, that those are the two things that you're going after. And flaky tests often don't highlight an incorrect system. They're highlighting that maybe our code or our tests are not as performant as we would like them to be, but the behavior is correct. So I think that's a really important thing to recognize. The part where I get squishy is where we have encountered on this project some flaky tests that did highlight that we had incorrect behavior, and there's only been maybe one or two. It was rare that it happened, but it at least has happened once or twice where it highlighted something to us that when tests were run...I think there's a whole lot of context. I won't get into it. But essentially, when tests were being run in a particular way that made them look like a flaky test, it was actually telling us something truthful about the system, that something was behaving in a way that we didn't want it to behave. So that's why I still like that triage that you have to go through. But I also agree that if you're trying to get out a deploy, you don't want to have to deal with flaky tests. There's a time to eat your vegetables, and I don't know if it's when you've got a deploy that needs to go out. That might not be the right time to be like, oh, we've got a flaky test. We should really address this. It's like, yes; you should note to yourself, hey, have a couple of vegetables tomorrow, make a ticket, and address that flaky test but not right now.
That's not the time. So I think you've struck a good balance. But I also do like the idea of annotating specific tests instead of just retrying all of them, so you don't hide anything from yourself. CHRIS: Yeah. And now that I'm saying it and now that I'm circling back around, what I'm saying is true of everything we've done so far. But it is possible that now this new mode that the system behaves in where it will essentially hide flaky specs on CI means that any new flaky regressions, as it were, will be hidden from us. And thus far, almost all or I think all of the flakiness that we've seen has basically been related to timeouts. So a different way to solve this would potentially be to up the Capybara wait time. So there are occasionally times where the system's churning through, and the various layers of the feature specs just take a little bit longer. And so they miss...I forget what it is, but it's like two seconds right now or something like that. And I can just bump that up and say it's 10 seconds. And that's a mode that if eventually, the system ends in the state that we want, I'm happy to wait a little longer to see that, and that's fine. But there are...to name some of the ways that flaky tests can actually highlight truly incorrect things; race conditions are a pretty common one where this behaves fine most of the time. But if the background job happens to succeed before the subsequent request happens, then you'll go to the page. That's a thing that a real user may experience, and in fact, it might even be more likely in production because production has differential performance characteristics on your background jobs versus your actual application. And so that's the sort of thing that would definitely be worth keeping in mind. Additionally, if there are order issues within your spec suite if the randomize...I think actually RSpec::Retry wouldn't fix this, though, because it's going to retry within the same order. So that's a case that I think would be still highlighted. It would fail three times and then move on. But those we should definitely deal with. That's a test-related thing. But the first one, race conditions, that's totally a thing. They come up all the time. And I think I've potentially hidden that from myself now. And so, I might need to walk back what I said earlier because I feel like it's been true thus far that that has not been the failure mode, but it could be moving forward. And so I really want to find out if we get flaky specs. I don't know; I feel like I've said enough about this. So I'm going to stop saying anything new. [laughs] Do you have any other thoughts on this topic? STEPH: Our emotions are a pendulum. We swing hard one way, and then we have to wait till we come back and settle in the middle. But there's that initial passion play where you're really frustrated by something, and then you swing, and you settle back towards something that's a little more neutral. CHRIS: I don't trust anyone who pretends like their opinions never change. It doesn't feel like a good way to be. STEPH: Oh, I hope that...Do people say that? I hope that's not true. I hope we are all changing our opinions as we get more information. CHRIS: Me too. Mid-roll Ad: And now a quick break to hear from today's sponsor, Scout APM. Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat.
With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more. Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform. See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. CHRIS: Well, shifting only ever so slightly because it turns out it's a very related question, but we have a listener question. As always, thank you so much to everyone who sends in listener questions. We really appreciate them. And today's question comes from Mikhail, and he writes in, "Regarding the discussion in Episode 311 on requiring commits merged to be tested, I have a question on how you view multi commit PRs. Do you think all the commits in a PR should be tested or only the last one? If you test all commits in a PR, do you have any good tips on setups for that? Would you want all commits to pass all tests? For one, it helps a lot when using Git bisect. It is also a question of keeping the history clean and understandable. As a background on the project I currently work on, we have the opinion that all commits should be tested and working. We have now decided on single commit PRs only since this is the only way that we can currently get the setup reasonably on our CI. I would like to sometimes make PRs with more than one commit since I want to make commits as small as possible. In order to do that, we would have to find a way to make sure all commits in the PR are tested. There seems to be some hacky ways to accomplish this, but there is not much talk about it. Also, we are strict in requiring a linear history in all our projects. Kind regards, Mikhail." So, Steph, what do you think? STEPH: I remember reading this question when it came in. And I have an experience this week that is relevant to this mainly because I had seen this question, and I was thinking about it. And off the cuff, I haven't really thought about this. I haven't been very concerned about ensuring every single commit passes because I want to ensure that, ultimately, the final commit that I have is going in. But I also rarely have more than one commit in a PR. So that's often my default mode. There are a couple of times that I'll have two, maybe three commits, but I think that's pretty rare for me. I'll typically have just one commit. So I haven't thought about this heavily. And it's not something that frankly I've been concerned about or that I've run into issues with. From their perspective about using Git bisect, I could see how that could be troublesome, like if you're looking at a commit and you realize there's a particular commit that's already merged and that fails. The other area that I could think of where this could be problematic is if you're trying to roll back to a specific commit. 
And if you accidentally roll back to a commit that is technically broken, but you didn't know that because it was not the final commit as was getting tested on CI, that could happen. I haven't seen that happen. I haven't experienced it. So while that does seem like a legitimate concern, it's also one that I frankly just haven't had. But because I read this question from this person earlier this week, I actually thought about it when I was crafting a PR that had several commits in it, which is kind of unusual for me since I'm usually one or two commits in a PR. But for this one, I had several because we use standard RB in our project to handle all the formatting. And right now, we have one of those standard to-do files because we added it to the project. But there are still a number of manual fixes that need to be applied. So we just have this list of files that still need to be formatted. And as someone touches that file, we will format it, and then we'll take it out of that to-do list. So then standard RB will include it as it's linting all of our files. And I decided to do that for all of our spec files. Because I was like, well, this was the safest chunk of files to format that will require the least amount of review from folks. So I just want to address all of them in one go. But I separated the more interesting changes into different commits just to make others aware of, like, hey, this is something standard RB wants. And it was interesting enough that I thought I would point it out. So my first commit removed all the files from that to-do list, but then my other commits are the ones that made actual changes to some of those files that needed to be corrected. So technically, one or two of my middle commits didn't pass the standard RB linting. But because CI was only running that final commit, it didn't notice that. And I thought about this question, and so I intentionally went back and made sure each of those commits were correct at that point in time. And I feel good about that. But I still don't feel the need to add more process around ensuring each commit is going to be green. I think I would lean more in favor of let's keep our PR small to one or two commits. But I don't know; it's something I haven't really run into. It's an interesting question. How about you? What are your experiences, or what are your thoughts on this, Chris? CHRIS: When this question came through, I thought it was such an interesting example of considering the cost of process changes. And to once again reference one of our favorite blog posts by German Velasco, the Say No to More Process post, which we will, of course, link in the show notes. This is such a great example of there was likely a small amount of pain that was felt at one point where someone tried to run git bisect. They ran into a troublesome commit, and they were like, oh no, this happened. We need to add processes, add automation, add control to make sure this never happens again. Personally, I run git bisect very rarely. When I do, it's always a heroic moment just to get it started and to even know which is the good and which is the bad. It's always a thing anyway. So it would be sad if I ran into one of these commits. But I think this is a pretty rare outcome. I think in the particular case that you're talking about, there's probably a way to actually tease that apart. I think it sounds like you fixed those commits knowing this, maybe because you just put it in your head. 
But the idea that the process that this team is working on has been changed such that they only now allow single commit PRs feels like too much process in my mind. I think I'm probably 80%, maybe 90% of the time; it's only a single commit in a PR for me. But occasionally, I really value having the ability to break it out into discrete steps, like these are all logically grouped in one changeset that I want to send through. But they're discrete steps that I want to break apart so that the team can more easily review it so that we have granular separation, and I can highlight this as a reference. That's often something that I'll do is I want this commit to standalone because I want it to be referenced later on. I don't want to just fold it into the broader context in which it happened, but it's pretty rare. And so to say that we can't do that feels like we're adding process where it may not be worth it, where the cost of that process change is too high relative to the value that we're getting, which is speculatively being able to run git bisect and not hit something problematic in the future. There's also the more purist, dogmatic view of well, all commits should be passing, of course. Yeah, I totally agree with that. But what's it worth to you? How much are you willing to spend to achieve that goal? I care deeply about the correctness of my system but only the current correctness. I don't care about historical correctness as much, some. I think I'm diminishing this more than I mean to. But really back to that core question of yes, this thing has value, but is it worth the cost that we have to pay in terms of process, in terms of automation and maintenance of that automation over time, et cetera or whatever the outcome is? Is it worth that cost? And in this case, for me, this would not be worth the cost. And I would not want to adopt a workflow that says we can only ever have single commit PRs, or all commits must be run on CI or any of those variants. STEPH: This is an interesting situation where I very much agree with everything you're saying. But I actually feel like what Mikhail wants in this world; I want it too. I think it's correct in the way that I do want all the commits to pass, and I do want to know that. And I think since I do fall into the default, like you mentioned, 80%, 90% of my PRs are one commit. I just already have that. And the fact that they're enforcing that with their team is interesting. And I'm trying to think through why that feels cumbersome to enforce that. And I'm with you where I'll maybe have a refactor commit or something that goes before. And it's like, well, what's wrong with splitting that out into a separate PR? What's the pain point of that? And I think the pain point is the fact that one, you have two PRs that are stacked on each other. So you have the first one that you need to get reviewed, and then the second one; there's that bit of having to hop between the two if there's some shared context that someone can't just easily review in one pull request. But then there's also, as we just mentioned, there's CI that has to run. And so now it's running on both of them, even though maybe that's a good thing because it's running on both commits. I like the idea that every commit is tested, and every commit is green. But I actually feel like it's some of our other processes that make it cumbersome and hard to get there. And if CI did run on every commit, I think it would be ideal, but then we are increasing our CI time by running it on every commit. 
And then it comes down to essentially what you said, what's the risk? So if we do merge in a commit that doesn't work or has something that's failing about it but then the next commit after that fixes it, what's the risk that we're going to roll back to that one specific commit that was broken? If that's a high risk for you and your team, then adding this process is probably the really wise thing to do because you want to make sure the app doesn't go down for users. That's incredibly important. If that's not a high risk for your team, then I wouldn't add the process. CHRIS: Yeah, I totally agree. And to clarify my stances, for me, this change, this process change would not be worth the trade-off. I love the idea. I love the goal of it. But it is not worth the process change, and that's partly because I haven't particularly felt the pain. CI is not an inexhaustible resource I have learned. I'm actually somewhat proud our very small team that is working on the project that we're working on; we just recently ran out of our CI budget, and Circle was like, "Hey, we got to charge you more." And I was like, "Cool, do that." But it was like, there is cost both in terms of the time, clock time, and each PR running and all of those. We have to consider all of these different things. And hopefully, we did a useful job of framing the conversation, because as always, it depends, but it depends on what. And in this case, there's a good outcome that we want to get to, but there's an associated cost. And for any individual team, how you weigh the positive of the outcome versus how you weigh the cost will alter the decision that you make. But that's I think, critically, the thing that we have to consider. I've also noticed I've seen this conversation play out within teams where one individual may acutely feel the pain, and therefore they're anchored in that side. And the cost is irrelevant to them because they're like, I feel this pain so acutely, but other people on the team aren't working in that part of the codebase or aren't dealing with bug triage in the same way that that other developer is. And so, even within a team, there may be different levels of how you measure that. And being able to have meaningful conversations around that and productively come to a group decision and own that and move forward with that is the hard work but the important work that we have to do. STEPH: Yeah. I think that's a great summary; it depends. On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. All: Byeeeeeeeeee! Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
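To make Mikhail's setup question concrete: Git itself can replay a test suite against every commit on a branch, with no special CI support required. A minimal sketch, assuming a Ruby project whose suite runs with bundle exec rspec (the upstream branch and test command are placeholders; substitute your own):

    # Rebase the current branch onto main, running the test suite after
    # replaying each commit; the rebase stops at the first commit whose
    # tests fail, so it can be fixed in place.
    git rebase --exec "bundle exec rspec" origin/main

    # After amending the failing commit, resume the rebase:
    git add -A && git rebase --continue

Because the rebase halts on the first red commit, you can fix it and continue, which keeps a linear, every-commit-green history without having to ban multi-commit PRs outright.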

The Bike Shed
311: Marketing Matters

The Bike Shed

Play Episode Listen Later Oct 5, 2021 37:37


Longtime listener and friend of the show, Gio Lodi, released a book y'all should check out and Chris and Steph ruminate on a listener question about tension around marketing in open-source. Say No To More Process, Say Yes To Trust by German Velasco (https://thoughtbot.com/blog/say-no-to-more-process-say-yes-to-trust) Test-Driven Development in Swift with SwiftUI and Combine by Gio Lodi (https://tddinswift.com/) Transcript: CHRIS: Our golden roads. STEPH: All right. I am also golden. CHRIS: [vocalization] STEPH: Oh, I haven't listened to that episode where I just broke out in song in the middle. Oh, you're about to add the [vocalization] [chuckles]. CHRIS: I don't know why, though. Oh, golden roads, Golden Arches. STEPH: Golden Arches, yeah. CHRIS: Man, I did not know that my brain was doing that, but my brain definitely connected those without telling me about it. STEPH: [laughs] CHRIS: It's weird. People talk often about the theory that phones are listening, and then you get targeted ads based on what you said. But I'm almost certain it's actually the algorithms have figured out how to do the same intuitive leaps that your brain does. And so you'll smell something and not make the nine steps in between, but your brain will start singing a song from your childhood. And you're like, what is going on? Oh, right, because when I was watching Jurassic Park that one time, we were eating this type of chicken, and therefore when I smell paprika, Jurassic Park theme song. I got it, of course. STEPH: [laughs] CHRIS: And I think that's actually what's happening with the phones. That's my guess is that you went to a site, and the phones are like, cool, I got it, adjacent to that is this other thing, totally. Because I don't think the phones are listening. Occasionally, I think the phones are listening, but mostly, I don't think the phones are listening. STEPH: I definitely think the phones are listening. CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: Hey. So we have a bit of exciting news where we received an email from Gio Lodi, who is a listener of The Bike Shed. And Gio sent an email sharing with us some really exciting news that they have published a book on Test-Driven Development in Swift. And they acknowledge us in the acknowledgments of the book. Specifically, the acknowledgment says, "I also want to thank Chris Toomey and Steph Viccari, who keep sharing ideas on testing week after week on The Bike Shed Podcast." And that's just incredible. I'm so blown away, and I feel officially very famous. CHRIS: This is how you know you're famous when you're in the acknowledgments of a book. But yeah, Gio is a longtime listener and friend of the show. He's written in many times and given us great tips, and pointers, and questions, and things. And I've so appreciated Gio's voice in the community. And it's so wonderful, frankly, to hear that he has gotten value out of the show and us talking about testing. Because I always feel like I'm just regurgitating things that I've heard other people saying about testing and maybe one or two hard-learned truths that I've found. But it's really wonderful. And thank you so much, Gio. 
And best of luck for anyone out there who is doing Swift development and cares about testing or test-driven development, which I really think everybody should. Go check out that book. STEPH: I must admit my Swift skills are incredibly rusty, really non-existent at this point. It's been so long since I've been in that world. But I went ahead and purchased a copy just because I think it's really cool. And I suspect there are a lot of testing conversations that, regardless of the specific code examples, still translate. At least, that's the goal that you and I have when we're having these testing conversations. Even if they're not specific to a language, we can still talk about testing paradigms and strategies. So I purchased a copy. I'm really looking forward to reading it. And just to change things up a bit, we're going to start off with a listener question today. So this listener question comes from someone very close to the show. It comes from Thom Obarski. Hi, Thom. And Thom wrote in, "So I heard on a recent podcast I was editing some tension around marketing and open source. Specifically, a little perturbed at ReactJS that not only were people still dependent on a handful of big companies for their frameworks, but they also seem to be implying that the cachet of Facebook and having developer mindshare was not allowing smaller but potentially better solutions to shine through. In your opinion, how much does marketing play in the success of an open-source project framework rather than actually being the best tool for the job?" So a really thoughtful question. Thanks, Thom. Chris, I'm going to kick it over to you. What are your thoughts about this question? CHRIS: Yeah, this is a super interesting one. And thank you so much, Thom, although I'm not sure that you're listening at this point. But we'll send you a note that we are replying to your question. And when I saw this one come through, it was interesting. I really love the kernel of the discussion here, but it is, again, very difficult to tease apart the bits. I think that the way the question was framed is like, oh, there's this bad thing that it's this big company that has this big name, and they're getting by on that. But really, there are these other great frameworks that exist, and they should get more of the mindshare. And honestly, I'm not sure. I think marketing is a critically important aspect of the work that we do both in open source and, frankly, everywhere. And I'm going to clarify what I mean by that because I think it can take different shapes. But in terms of open-source, Facebook has poured a ton of energy and effort and, frankly, work into React as a framework. And they're also battle testing it on facebook.com, a giant website that gets tons of traffic, that sees various use cases, that has all permissions in there. They're really putting it through the wringer in that way. And so there is a ton of value just in terms of this large organization working on and using this framework in the same way that GitHub and using Rails is a thing that is deeply valuable to us as a community. So I think having a large organization associated with something can actually be deeply valuable in terms of what it produces as an outcome for us as consumers of that open-source framework. I think the other idea of sort of the meritocracy of the better framework should win out is, I don't know, it's like a Field of Dreams. Like, if you build it, they will come. It turns out I don't believe that that's actually true. 
And I think selling is a critical part of everything. And so if I think back to DHH's original video from so many years ago of like, I'm going to make a blog in 15 minutes; look at how much I'm not doing. That was a fantastic sales pitch for this new framework. And he was able to gain a ton of attention by virtue of making this really great sales pitch that sold on the merits of it. But that was marketing. He did the work of marketing there. And I actually think about it in terms of a pull request. So I'm in a small organization. We're in a private repo. There's still marketing. There's still sales to be done there. I have to communicate to someone else the changes that I'm making, why it's valuable to the system, why they should support this change, this code coming into the codebase. And so I think that sort of communication is as critical to the whole conversation. And so the same thing happens at the level of open source. I would love for the best framework to always win, but we also need large communities with Stack Overflow answers and community-supported plugins and things like that. And so it's a really difficult thing to treat marketing as just other, this different, separate thing when, in fact, I think they're all intertwined. And marketing is critically important, and having a giant organization behind something can actually have negative aspects. But I think overall; it really is useful in a lot of cases. Those are some initial thoughts. What do you think, Steph? STEPH: Yeah, those are some great initial thoughts. I really agree with what you said. And I also like how you brought in the comparison of pull requests and how sales is still part of our job as developers, maybe not in the more traditional sense but in the way that we are marketing and communicating with the team. And circling back to what you were saying earlier about a bit how this is phrased, I think I typically agree that there's nothing nefarious that's afoot in regards to just because a larger company is sponsoring an open-source project or they are the ones responsible for it, I don't think there's anything necessarily bad about that. And I agree with the other points that you made where it is helpful that these teams have essentially cultivated a framework or a project that is working for their team, that is helping their company, and then they have decided to open source it. And then, they have the time and energy that they can continue to invest in that project. And it is battle-tested because they are using it for their own projects as well. So it seems pretty natural that a lot of us then would gravitate towards these larger, more heavily supported projects and frameworks. Because then that's going to make our job easier and also give us more trust that we can turn to them when we do need help or have issues. Or, like you mentioned, when we need to look up documentation, we know that that's going to be there versus some of the other smaller projects. They may also be wonderful projects. But if they are someone that's doing this in their spare time just on the weekends and yet I'm looking for something that I need to be incredibly reliable, then it probably makes sense for me to go with something that is supported by a team that's getting essentially paid to work on that project, at least that they're backed by a larger company. Versus if I'm going with a smaller project where someone is doing some wonderful work, but realistically, they're also doing it more on the weekends or in their spare time. 
So boiling it down, it's similar to what you just said where marketing plays a very big part in open source, and the projects and frameworks that we adopt, and the things that we use. And I don't think that's necessarily a bad thing. CHRIS: Yeah. I think, if anything, it's possibly a double-edged sword. Part of the question was around does React get to benefit just by the cachet of Facebook? But Facebook, as a larger organization sometimes that's a positive thing. Sometimes there's ire that is directed at Facebook as an organization. And as a similar example, my experience with Google and Microsoft as large organizations, particularly backing open-source efforts, has almost sort of swapped over time, where originally, Microsoft there was almost nothing of Microsoft's open-source efforts that I was using. And I saw them as this very different shape of a company that I probably wouldn't be that interested in. And then they have deeply invested in things like GitHub, and VS Code, and TypeScript, and tons of projects that suddenly I'm like, oh, actually, a lot of what I use in the world is coming from Microsoft. That's really interesting. And at the same time, Google has kind of gone in the opposite direction for me. And I've seen some of their movements switch from like, oh Google the underdog to now they're such a large company. And so the idea that the cachet, as the question phrase, of a company is just this uniformly positive thing and that it's perhaps an unfair benefit I don't see that as actually true. But actually, as a more pointed example of this, I recently chose Svelte over React, and that was a conscious choice. And I went back and forth on it a few times, if we're being honest, because Svelte is a much smaller community. It does not have the large organizational backing that React or other frameworks do. And there was a certain marketing effort that was necessary to raise it into my visibility and then for me to be convinced that there is enough there, that there is a team that will maintain it, and that there are reasons to choose that and continue with it. And I've been very happy with it as a choice. But I was very conscious in that choice that I'm choosing something that doesn't have that large organizational backing. Because there's a nicety there of like, I trust that Facebook will probably keep investing in React because it is the fundamental technology of the front end of their platform. So yeah, it's not going to go anywhere. But I made the choice of going with Svelte. So it's an example of where the large organization didn't win out in my particular case. So I think marketing is a part of the work, a part of the conversation. It's part of communication. And so I am less negative on it, I think, than the question perhaps was framed, but as always, it depends. STEPH: Yeah, I'm trying to think of a scenario where I would be concerned about the fact that I'm using open source that's backed by a specific large company or corporation. And the main scenario I can think of is what happens when you conflict or if you have values that conflict with a company that is sponsoring that project? So if you are using an open-source project, but then the main community or the company that then works on that project does something that you really disagree with, then what do you do? How do you feel about that situation? Do you continue to use that open-source project? Do you try to use a different open-source project? 
And I had that conversation frankly with myself recently, thinking through what to do in that situation and how to view it. And I realize this may not be how everybody views it, and it's not appropriate for all situations. But I do typically look at open-source projects as more than who they are backed by, but the community that's actively working on that project and who it benefits. So even if there is one particular group that is doing something that I don't agree with, that doesn't necessarily mean that wholesale I no longer want to be a part of this community. It just means that I still want to be a part, but I still want to share my concerns that I think a part of our community is going in a direction that I don't agree with or I don't think is a good direction. That's, I guess, how I reason with myself; even if an open-source project is backed by someone that I don't agree with, either one, you can walk away. That seems very complicated, depending on your dependencies. Or two, you find ways to then push back on those values if you feel that the community is headed in a direction that you don't agree with. And that all depends on how comfortable you are and how much power you feel like you have in that situation to express your opinion. So it's a complicated space. CHRIS: Yeah, that is a super subtle edge case of all of this. And I think I aligned with what you said of trying to view an open-source project as more generally the community that's behind it as opposed to even if there's a strong, singular organization behind it. But that said, that's definitely a part of it. And again, it's a double-edged sword. It's not just, oh, giant company; this is great. That giant company now has to consider this. And I think in the case of Facebook and React, that is a wonderful hiring channel for them. Now all the people that use React anywhere are like, "Oh man, I could go work at Facebook on React? That's exciting." That's a thing that's a marketing tool from a hiring perspective for them. But it cuts both ways because suddenly, if the mindshare moves in a different direction, or if Facebook as an organization does something complicated, then React as a community can start to shift away. Maybe you don't move the current project off of it, but perhaps you don't start the next one with it. And so, there are trade-offs and considerations in all directions. And again, it depends. STEPH: Yeah. I think overall, the thing that doesn't depend is marketing matters. It is a real part of the ecosystem, and it will influence our decisions. And so, just circling back to Thom's question, I think it does play a vital role in the choices that we make. CHRIS: Way to stick the landing. STEPH: Thanks. Mid-roll Ad And now a quick break to hear from today's sponsor, Scout APM. Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more. Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. 
Now you can connect your error reporting and application monitoring data on one platform. See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. STEPH: Changing topics just a bit, what's new in your world? CHRIS: Well, we had what I would call a mini perfect storm this week. We broke the build but in a pretty solid way. And it was a little bit difficult to get it back under control. And it has pushed me ever so slightly forward in my desire to have a fully optimized CI and deploy pipeline. Mostly, I mean that in terms of ratcheting. I'm not actually going to do anything beyond a very small set of configurations. But to describe the context, we use pull requests because that's the way that we communicate. We do code reviews, all that fun stuff. And so there was a particular branch that had a good amount of changes, and then something got merged. And this other pull request was approved. And that person then clicked the rebase and merge button, which I have configured the repository, so that merge commits are not allowed because I'm not interested in that malarkey in our history. But merge commits or rebase and merge. I like that that makes sense. In this particular case, we ran into the very small, subtle edge case of if you click the rebase and merge button, GitHub is now producing a new commit that did not exist before, a new version of the code. So they're taking your changes, and they are rebasing them onto the current main branch. And then they're attempting to merge that in. And A, that was allowed. B, the CI configuration did not require that to be in a passing state. And so basically, in doing that rebase and merge, it produced an artifact in the build that made it fail. And then attempting to unwind that was very complicated. So basically, the rebase produced...there were duplicate changes within a given file. So Git didn't see it as a conflict because the change was made in two different parts of the file, but those were conflicting changes. So Git was like, this seems like it's fine. I can merge this, no problem. But it turns out from a functional perspective; it did not work. The build failed. And so now our main branch was failing and then trying to unwind that it just was surprisingly difficult to unwind that. And it really highlighted the importance of keeping the main branch green, keeping the build always passing. And so, I configured a few things in response to this. There is a branch protection rule that you can enable. And let me actually pull up the specific configuration that I set up. So I now have enabled require status checks to pass before merging, which, if we're being honest, I thought that was the default. It turns out it was not the default. So we are now requiring status checks to pass before merging. I'm fully aware of the awkward, painful like, oh no, the build is failing but also, we have a bug. We need to deploy this. We must get something merged in. So hopefully, if and when that situation presents itself, I will turn this off or somehow otherwise work around it. But for now, I would prefer to have this as a yeah; this is definitely a configuration we want. So require status checks to pass before merging and then require branches to be up to date before merging. 
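For reference, the two settings described there correspond to GitHub's branch protection REST API, where "strict": true is the require-branches-to-be-up-to-date toggle. A hedged sketch, in which OWNER, REPO, and the status-check context name are placeholders rather than this project's actual values:

    # Require named status checks to pass, and require branches to be
    # up to date (strict), before anything merges into main.
    curl -X PUT \
      -H "Authorization: token $GITHUB_TOKEN" \
      -H "Accept: application/vnd.github+json" \
      https://api.github.com/repos/OWNER/REPO/branches/main/protection \
      -d '{
            "required_status_checks": { "strict": true, "contexts": ["ci/circleci: build"] },
            "enforce_admins": false,
            "required_pull_request_reviews": null,
            "restrictions": null
          }'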
So the button that does the rebase and merge, I don't want that to actually do a rebase on GitHub. I want the branch to already be up to date. Basically, I only ever want fast-forward merges on our main branch. So all code should be ahead of main, and we are simply updating what main points at. We are not creating new code. That code has run on CI, that version of the code specifically. We are fully rebased and up to date on top of main, and that's how we're going. STEPH: All of that is super interesting. I have a question about the workflow. I want to make sure I'm understanding it correctly. So let's say that I have issued a PR, and then someone else has merged into the main branch. So now my PR is behind me, and I don't have that latest commit. With the new configuration, can I still use the rebase and merge, or will I need to rebase locally and then push up my branch before I can merge into main but at least using the GitHub UI? CHRIS: I believe that you would be forced to rebase locally, force push, and then CI would rebuild, and that's what it is. So I think that's what require branches to be up to date before merging means. So that's my hope. That is the intention here. I do realize that's complicated. So this requirement, which I like, because again, I really want the idea that no, no, no, we, the developers, are in charge of that final state. That final state should always run as part of a build of CI on our pull request/branch before going into main. So no code should be new. There should be no new commits that have never been tested before going into main. That's my strong belief. I want that world. I realize that's...I don't know. Maybe I'm getting pedantic, or I'm a micromanager of the Git history or whatever. I'm fine with any of those insults that people want to lob at me. That's fine. But that's what I feel. That said, this is a nuisance. I'm fully aware of that. And so imagine the situation where we got a couple of different things that have been in flight. People have been working on different...say there are three pull requests that are all coming to completion at the same time. Then you start to go to merge something, and you realize, oh no, somebody else just merged. So you rebase, and then you wait for CI to build. And just as the CI is completing, somebody else merges something, and you're like, ah, come on. And so then you have to one more time rebase, push, wait for the build to be green. So I get that that is not an ideal situation. Right now, our team is three developers. So there are a few enough of us that I feel like this is okay. We can manage this via human intervention and just deal with the occasional weight. But in the back of my mind, of course, I want to find a better solution to this. So what I've been exploring…there's a handful of different utilities that I'm looking at, but they are basically merged queues as an idea. So there are three that I'm looking at, or maybe just two, but there's mergify.io, which is a hosted solution that does this sort of thing. And then Shopify has a merge queue implementation that they're running. So the idea with this is when you as a developer are ready to merge something, you add a label to it. And when you add that label, there's some GitHub Action or otherwise some workflow in the background that sees that this has happened and now adds it to a merge queue. So it knows all of the different things that might want to be merged. And this is especially important as the team grows so that you don't get that contention. 
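As a sketch of what that label-triggered flow looks like in practice, a hosted tool like Mergify is driven by a YAML file along these lines. Treat this as an illustration of the shape rather than an exact schema; the rule names and keys here are assumptions to check against Mergify's current docs:

    # .mergify.yml (illustrative)
    pull_request_rules:
      - name: queue when labeled ready-to-merge
        conditions:
          - label=ready-to-merge
        actions:
          queue:
            name: default

    queue_rules:
      - name: default
        conditions:
          - check-success=build   # only merge if CI stays green after rebasing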
You can just say, "Yes, I would like my changes to go out into production." And so, when you label it, it then goes into this merge queue. And the background system is now going to take care of any necessary rebases. It's going to sequence them, so it's not just constantly churning all of the branches. It's waiting because it knows the order that they're ideally going to go out in. If CI fails for any of them because rebasing suddenly, you're in an inconsistent state; if your build fails, then it will kick you out of the merge queue. It will let you know. So it will send you a notification in some manner and say, "Hey, hey, hey, you got to come look at this again. You've been kicked out of the merge queue. You're not going to production." But ideally, it adds that layer of automation to, frankly, this nuisance of having to keep things up to date and always wanting code to be run on CI and on a pull request before it gets into main. Then the ideal version is when it does actually merge your code, it pings you in Slack or something like that to say, "Hey, your changes just went out to production." Because the other thing I'm hoping for is a continuous deployment. STEPH: The idea of a merge queue sounds really interesting. I've never worked with a process like that. And one of the benefits I can see is if I know I'm ready for something to go like if I'm waiting on a green build and I'm like, hey, as soon as this is green, I'd really like for it to get merged. Then currently, I'm checking in on it, so I will restart the build. And then, every so often, I'm going back to say, "Okay, are you green? Are you green? Can I emerge?" But if I have a merge queue, I can say, "Hey, merge queue, when this is green, please go and merge it for me." If I'm understanding the behavior correctly, that sounds really nifty. CHRIS: I think that's a distinct but useful aspect of this is the idea that when you as a developer decide this PR is ready to go, you don't need to wait for either the current build or any subsequent builds. If there are rebases that need to happen, you basically say, "I think this code's good to go. We've gotten the necessary approvals. We've got the buy-in and the teams into this code." So cool, I now market as good. And you can walk away from it, and you will be notified either if it fails to get merged or if it successfully gets merged and deployed. So yes, that dream of like, you don't have to sit there watching the pot boil anymore. STEPH: Yeah, that sounds nice. I do have to ask you a question. And this is related to one of the blog posts that you and I love deeply and reference fairly frequently. And it's the one that's written by German Velasco about Say No to More Process, and Say Yes to Trust. And I'm wondering, based on the pain that you felt from this new commit, going into main and breaking the main build, how do you feel about that balance of we spent time investigating this issue, and it may or may not happen again, and we're also looking into these new processes to avoid this from happening? I'm curious what your thought process is there because it seems like it's a fair amount of work to invest in the new process, but maybe that's justified based on the pain that you felt from having to fix the build previously. CHRIS: Oh, I love the question. I love the subtle pushback here. I love this frame of mind. I really love that blog post. German writes incredible blog posts. And this is one that I just keep coming back to. 
In this particular case, when this situation occurred, we had a very brief...well, it wasn't even that brief because actually unwinding the situation was surprisingly painful, and we had some changes that we really wanted to get out, but now the build was broken. And so that churn and slowdown of our build pipeline and of our ability to actually get changes out to production was enough pain that we're like, okay, cool. And then the other thing is we actually all were in agreement that this is the way we want things to work anyway, that idea that things should be rebased and tested on CI as part of a pull request. And then we're essentially only doing fast-forward merges on the main branch, or we're fast forward merging main into this new change. That's the workflow that we wanted. So this configuration was really just adding a little bit of software control to the thing that we wanted. So it was an existing process in our minds. This is the thing we were trying to do. It's just kind of hard to keep up with, frankly. But it turns out GitHub can manage it for us and enforce the process that we wanted. So it wasn't a new process per se. It was new automation to help us hold ourselves to the process that we had chosen. And again, it's minimally painful for the team given the size that we're at now, but I am looking out to the future. And to be clear, this is one of the many things that fall on the list of; man, I would love to have some time to do this, but this is obviously not a priority right now. So I'm not allowed to do this. This is explicitly on the not allowed to touch list, but someday. I'm very excited about this because this does fundamentally introduce some additional work in the pipeline, and I don't want that. Like you said, is this process worth it for the very small set of times that it's going to have a bad outcome? But in my mind, the better version, that down the road version where we have a merge queue, is actually a better version overall, even with just a tiny team of three developers that are maybe never even conflicting in our merges, except for this one standout time that happens once every three months or whatever. This is still nicer. I want to just be able to label a pull request and walk away and have it do the thing that we have decided as a team that we want. So that's the dream. STEPH: Oh, I love that phrasing, to label a pull request and be able to walk away. Going back to our marketing, that really sells that merge queue to me. [laughs] Mid-roll Ad And now we're going to take a quick break to tell you about today's sponsor, Orbit. Orbit is mission control for community builders. Orbit offers data analytics, reporting, and insights across all the places your community exists in a single location. Orbit's origins are in the open-source and developer relations communities. And that continues today with an active open-source culture in an accessible and documented API. With thousands of communities currently relying on Orbit, they are rapidly growing their engineering team. The company is entirely remote-first with team members around the world. You can work from home, from an Orbit outpost in San Francisco or Paris, or find yourself a coworking spot in your city. The tech stack of the main orbit app is Ruby on Rails with JavaScript on the front end. 
If you're looking for your next role with an empathetic product-driven team that prides itself on work-life balance, professional development, and giving back to the larger community, then consider checking out the Orbit careers page for more information. Bonus points if working in a Ruby codebase with a Ruby-oriented team gives you a lot of joy. Find out more at orbit.love/weloveruby. CHRIS: To be clear, and this is to borrow on some of Charity Majors' comments around continuous deployment and whatnot, is a developer should stay very close to the code if they are merging it. Because if we're doing continuous deployment, that's going to go out to production. If anything's going to happen, I want that individual to be aware. So ideally, there's another set of optimizations that I need to make on top of this. So we've got the merge queue, and that'll be great. Really excited about that. But if we're going to lean into this, I want to optimize our CI pipeline and our deployment pipeline as much as possible such that even in the worst case where there's three different builds that are fighting for contention and trying to get out, the longest any developer might go between labeling a pull request and saying, "This is good to go," and it getting out to production, again, even if they're contending with other PRs, is say 10, 15 minutes, something like that. I want Slack to notify them and them to then re-engage and keep an eye on things, see if any errors pop up, anything like that that they might need to respond to. Because they're the one that's got the context on the code at that point, and that context is decaying. The minute you've just merged a pull request and you're walking away from that code, the next day, you're like, what did I work on? I don't remember that at all. That code doesn't exist anymore in my brain. And so,,, staying close to that context is incredibly important. So there's a handful of optimizations that I've looked at in terms of the CircleCI build. I've talked about my not rebuilding when it actually gets fast-forward merged because we've already done that build on the pull request. I'm being somewhat pointed in saying this has to build on a pull request. So if it did just build on a pull request, let's not rebuild it on main because it's identically the same commit. CircleCI, I'm looking at you. Give me a config button for that, please. I would really love that config button. But there are a couple of other things that I've looked at. There's RSpec::Retry from NoRedInk, which will allow for some retry semantics. Because it will be really frustrating if your build breaks and you fall out of the merge queue. So let's try a little bit of retry logic on there, particularly around feature specs, because that's where this might happen. There's Knapsack Pro which is a really interesting thing that I've looked at, which does parallelization of your RSpec test suite. But it does it in a different way than say Circle does. It actually runs a build queue, and each test gets sent over, and they have build agents on their side. And it's an interesting approach. I'm intrigued. I think it could use some nice speed-ups. There's esbuild on the Heroku side so that our assets build so much more quickly. There are lots of things. I want to make it very fast. But again, this is on the not allowed to do it list. [laughs] STEPH: I love how most of the world has a to-do list, and you have this not-allowed to-do list that you're adding items to. 
And I'm really curious what all is on the not allowed to touch lists or not allowed to-do list. [laughs] CHRIS: I think this might be inherent to being a developer is like when I see a problem, I want to fix it. I want to optimize it. I want to tweak it. I want to make it so that that never happens again. But plenty of things...coming back to German's post of Say No to More Process, some things shouldn't be fixed, or the cost of fixing is so much higher than the cost of just letting it happen again and dealing with it manually at that moment. And so I think my inherent nature as a developer there's a voice in my head that is like, fix everything that's broken. And I'm like, sorry. Sorry, brain, I do not have that kind of time. And so I have to be really choosy about where the time goes. And this extends to the team as well. We need to be intentional around what we're building. Actually, there's a feeling that I've been feeling more acutely than ever, but it's the idea of this trade-off or optimization between speed and getting features out into the world and laying the right fundamentals. We're still very early on in this project, and I want to make sure we're thinking about things intentionally. I've been on so many projects where it's many years after it started and when I ask someone, "Hey, why do your background jobs work that way? That's a little weird." And they're like, "Yeah, that was just a thing that happened, and then it never changed. And then, we copied it and duplicated, and that pattern just got reinforced deeply within the app. And at this point, it would cost too much to change." I've seen that thing play out so many times at so many different organizations that I'm overwhelmed with that knowledge in the back of my head. And I'm like, okay, I got to get it just right. But I can't take the time that is necessary to get it, quote, unquote, "Just right." I do not have that kind of time. I got to ship some features. And this tension is sort of the name of the game. It's the thing I've been doing for my entire career. But now, given the role that I have with a very early-stage startup, I've never felt it more acutely. I've never had to be equally as concerned with both sides of that. Both matter all the more now than they ever have before, and so I'm kind of existing in that space. STEPH: I really like that phrasing of that space because that deeply resonates with me as well. And that not allowed to-do list I have a similar list. For me, it's just called a wishlist. And so it's a wishlist that I will revisit every so often, but honestly, most things on there don't get done. And then I'll clear it out every so often when I feel it's not likely that I'm going to get to it. And then I'll just start fresh. So I also have a similar this is what I would like to do if I had the time. And I agree that there's this inclination to automate as well. As soon as we have to do something that felt painful once, then we feel like, oh, we should automate it. And that's a conversation that I often have with myself is at what point is the cost of automation worthwhile versus should we just do this manually until we get to that point? So I love those nuanced conversations around when is the right time to invest further in this, and what is the impact? And what is the cost of it? And what are the trade-offs? And making that decision isn't always clear. 
And so I think that's why I really enjoy these conversations because it's not a clear rubric as to like, this is when you invest, and this is when you don't. But I do feel like being a consultant has helped me hone those skills because I am jumping around to different teams, and I'm recognizing they didn't do this thing. Maybe they didn't address this or invest in it, and it's working for them. There are some oddities. Like you said, maybe I'll ask, "Why is this? It seems a little funky. What's the history?" And they'll be like, "Yeah, it was built in a hurry, but it works. And so there hasn't been any churn. We don't have any issues with it, so we have just left it." And that has helped reinforce the idea that just because something could be improved doesn't mean it's worthwhile to improve it. Circling back to your original quest where you are looking to improve the process for merging and ensuring that CI stays green, I do like that you highlighted the fact that we do need to just be able to override settings. So that's something that has happened recently this week for me and my client work where we have had PRs that didn't have a green build because we have some flaky tests that we are actively working on. But we recognize that they're flaky, and we don't want that to block us. I'm still shipping work. So I really appreciate the consideration where we want to optimize so that everyone has an easy merging experience. We know things are green. It's trustworthy. But then we also have the ability to still say, "No, I am confident that I know what I'm doing here, and I want to merge it anyways, but thank you for the warning." CHRIS: And the constant pendulum swing of over-correcting in various directions I've experienced that. And as you said, in the back of my mind, I'm like, oh, I know that this setting I'm going to need a way to turn this setting off. So I want to make sure that, most importantly, I'm not the only one on the team who can turn that off because the day that I am away on vacation and the build is broken, and we have a critical bug that we need to fix, somebody else needs to be able to do that. So that's sort of the story in my head. At the same time, though, I've worked on so many teams where they're like, oh yeah, the build has been broken for seven weeks. We have a ticket in the backlog to fix that. And it's like, no, the build has to not be broken for that long. And so I agree with what you were saying of consulting has so usefully helped me hone where I fall on these various spectrums. But I do worry that I'm just constantly over-correcting in one direction or the other. I'm never actually at an optimum. I am just constantly whatever the most recent thing was, which is really impacting my thinking on this. And I try to not do that, but it's hard. STEPH: Oh yeah. I'm totally biased towards my most recent experiences, and whatever has caused me the most pain or success recently. I'm definitely skewed in that direction. CHRIS: Yeah, I definitely have the recency bias, and I try to have a holistic view of all of the things I've seen. There's actually a particular one that I don't want to pat myself on the back for because it's not a good thing. But currently, our test suite, when it runs, there's just a bunch of noise. There's a bunch of other stuff that gets printed out, like a bunch of it. And I'm reminded of a tweet from Kevin Newton, a friend of the show, and I just pulled it up here. 
"Oh, the lengths I will go to avoid warnings in my terminal, especially in the middle of my green dots. Don't touch my dots." It's a beautiful beauty. He actually has a handful about the green dots. And I feel this feel. When I run my test suite, I just want a sea of green dots. That's all I want to see. But right now, our test suite is just noise. It's so much noise. And I am very proud of...I feel like this is a growth moment for me where I've been like, you know what? That is not the thing to fix today. We can deal with some noise amongst the green dots for now. Someday, I'm just going to lose it, and I'm going to fix it, and it's going to come back to green dots. [chuckles] STEPH: That sounds like such a wonderful children's book or Dr. Seuss. Oh, the importance of green dots or, oh, the places green dots will take you. CHRIS: Don't touch my dots. [laughter] STEPH: Okay. Maybe a slightly aggressive Dr. Seuss, but I still really like it. CHRIS: A little more, yeah. STEPH: On that note of our love of green dots, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. All: Byeeeeeee!!! Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

Equity
The pure hell of managing your JPEGs

Equity

Play Episode Listen Later Aug 27, 2021 27:26


Natasha and Alex and Grace and Chris were joined by none other than TechCrunch's own Mary Ann Azevedo, in her first-ever appearance on the show. She's pretty much the best person and we're stoked to have her on the pod. And it was good that Mary Ann was on the show this week, as she wrote about half the dang site, which meant that we got to include all sorts of her work in the rundown. Here's the agenda: Funding rounds from Ramp, which raised $300 million at a $3.9 billion valuation; NoRedInk, which put together an impressive $50 million Series B; and Playbook, which is building a sort of Dropbox for designers. Each company gave us something different to noodle on, be it the diverging strategies at Ramp and Brex, how NoRedInk is different from Grammarly, and why Dropbox is not the Dropbox for designers. Then we spun the globe to narrow our focus to Latin America, a booming startup scene that Mary Ann recently profiled for Extra Crunch. In a nutshell, venture capital is helping drive an enormous wave of startup activity in the region -- or perhaps a wave of startup activity is driving a boom in venture investment? -- leading to huge companies, and perhaps some tech-powered inclusion of more folks into the modern banking and digital economy. (For more, here are notes on the Brazilian market's rising exit tally! And Flink raised, which was worth chewing on as well.) We quickly pivoted to the hot-button issue of the moment for every startup (and business): hiring. Natasha noted how startups used to focus on runway, and now they are looking to fill empty seats amid the great resignation. Finally, we nattered about huge venture results from Boston, big numbers from Austin, and what increasingly feels like an everything bubble. Chicago is doing well, too. Pick a city; it's putting up big numbers. And that's a wrap, for, well, at least the next 5 seconds.

海外スタートアップラジオ
NoRedInk, a platform that teaches students how to write

海外スタートアップラジオ

Play Episode Listen Later Aug 26, 2021 9:18


For nearly a decade, San Francisco-based startup NoRedInk has been using software to support students who want to improve their writing. ★NoRedInk http://noredink.com/ ★Grammarly https://www.grammarly.com/ ★YouTube intro video https://www.youtube.com/watch?v=WaMonZpiY7Y ★TECH CRUNCH https://jp.techcrunch.com/2021/08/26/2021-08-24-noredink-raises-50-million-series-b-to-help-students-become-better-writers/ ★For career-change consultations, get in touch at: "d.kominato.914@gmail.com" ★Daijiro's Twitter (feedback welcome!) https://twitter.com/daijirostartup #Overseas #Startups #Writing #AI #Composition --- Send in a voice message: https://anchor.fm/daijirostartup/message

TechCrunch Startups – Spoken Edition
NoRedInk raises $50 million Series B to help students become better writers

TechCrunch Startups – Spoken Edition

Play Episode Listen Later Aug 25, 2021 4:39


“In order to become a better writer, read your written words out loud.” That's one of the first, and best, writing tips I ever received. I always found the advice ironic because it required me to change the medium of my writing to become a better writer. Still, all these years later, it's true: vocalizing […]

Modern Web
S08E014 Modern Web Podcast - Elm with Richard Feldman

Modern Web

Play Episode Listen Later Jul 23, 2021 58:12


In this episode, Lindsay Wardell talks with Richard Feldman about Elm, the delightful functional programming language for creating web applications. They discuss how Richard got into programming and his first experiences with Elm, then dive into some of the key features of Elm, such as no runtime exceptions and its helpful compiler. They then discuss where Elm is going, and some of the great community tools that exist in the Elm ecosystem.

Guest: Richard Feldman (@rtfeldman) - Head of Technology at NoRedInk and author of Elm in Action
Host: Lindsay Wardell (@lindsaykwardell) - Software Engineer at This Dot, co-host of Views on Vue

This episode is sponsored by Progress KendoReact & This Dot Labs.
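A small illustration of the "no runtime exceptions" point from the episode: Elm has no null and no thrown exceptions for missing values, so lookups return a Maybe and the compiler refuses to build until every case is handled. A minimal sketch (the data and names are purely illustrative):

    import Dict exposing (Dict)

    capitals : Dict String String
    capitals =
        Dict.fromList [ ( "France", "Paris" ), ( "Japan", "Tokyo" ) ]

    -- Dict.get returns a Maybe String, not a value that can blow up at runtime.
    describeCapital : String -> String
    describeCapital country =
        case Dict.get country capitals of
            Just city ->
                "The capital of " ++ country ++ " is " ++ city

            Nothing ->
                "No capital on record for " ++ country

Delete the Nothing branch and the helpful compiler discussed in the episode rejects the program, pointing at the missing case.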

Elm Radio
032: Elm's Universal Pattern

Elm Radio

Play Episode Listen Later Jun 7, 2021 74:32


Joël Quenneville (Twitter)
Joël's blog post Elm's Universal Pattern
map2
Maybe.map2
Metaphors
  Some common metaphors for Elm's Universal Pattern (Applicative Pattern):
  Mapping
  Combining
  Lifting
  Wrapping and unwrapping boxes
Blog post on Two ways of looking at map functions
Examples
  Random generators
  Apply mapping functions to vanilla value functions to keep things clean
Tips
  Separate branching code from doing code (discussed in depth in Joël's blog post Problem Solving with Maybe)
  Stay at one level of abstraction
Json decoders as combining functions
Scott Wlaschin's Railway Oriented Programming
Dillon's blog post Combinators - Inverting Top-Down Transforms
The JSON structure and the Elm type don't have to mirror each other - start with your ideal type and work backwards
Applicative pattern
  Applicative needs 1) a way to construct, 2) map2 or andMap
  The Json.Decode.Pipeline.required function
Record constructors
  Practice writing it with an anonymous function to convince yourself it's equivalent
  Record constructors are just plain old Elm functions
  map2 doesn't take a type, it takes a function
  NoRedInk/elm-json-decode-pipeline is a useful reference for implementing this kind of API on your own
Applicative Laws in Haskell
Monomorphic vs. polymorphic
Parser Combinators
  The elm/parser episode
Joël's blog posts on the thoughtbot blog
Joël's Random generators talk
Joël's Maybe talk
Some more blog posts by Joël that relate to Elm's Universal Pattern:
  Running out of maps
  Pipeline Decoders in Elm
Joël's journey to building a parser combinator:
  Nested cases - https://ellie-app.com/b9nGmZVp9Vca1
  Extracted Result functions - https://ellie-app.com/b9qtqTf8zYda1
  Introducing a Parser alias and map2 - https://ellie-app.com/b9MwZ3y4t8ra1
  Re-implementing with elm/parser - https://ellie-app.com/b9NZhkTGdfya1
Getting Unstuck with Elm JSON Decoders - because mapping is universal, you can solve equivalent problems with the same pattern (described in this post)
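To make the notes concrete, here is the universal pattern applied to JSON decoding, the same shape the episode describes for random generators and parsers. A minimal sketch; the User record is illustrative:

    import Json.Decode as Decode exposing (Decoder)
    import Json.Decode.Pipeline exposing (required)

    type alias User =
        { name : String
        , age : Int
        }

    -- map2 lifts the record constructor (a plain function) into Decoder-land.
    userDecoder : Decoder User
    userDecoder =
        Decode.map2 User
            (Decode.field "name" Decode.string)
            (Decode.field "age" Decode.int)

    -- The same decoder with NoRedInk/elm-json-decode-pipeline: succeed provides
    -- the way to construct, and each required step is an andMap-style combine.
    userDecoderPipeline : Decoder User
    userDecoderPipeline =
        Decode.succeed User
            |> required "name" Decode.string
            |> required "age" Decode.int

Note that map2 takes a function, not a type: User here is the two-argument constructor function the type alias generates, and an anonymous function like \name age -> { name = name, age = age } would work identically.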

Teach and Wine About It
Throwing Away 2020 And All That Stuff in Your Desk

Teach and Wine About It

Play Episode Listen Later Jan 13, 2021 23:53


This week, we're kicking off 2021 by dumping all those trinkets from our desks. In addition, we explore what we're taking with us in this new year both in the classroom and at home. Sean also makes the worst wine-related pun in the history of all puns. Check out Edpuzzle, Mote, and NoRedInk from this week's episode! --- Send in a voice message: https://anchor.fm/teachandwineaboutit/message

All Ruby Podcasts by Devchat.tv
RUBY 483: Unlocking the Power of Functional Programming and Elm with Richard Feldman

All Ruby Podcasts by Devchat.tv

Play Episode Listen Later Jan 5, 2021 57:36


Richard Feldman - author of Elm in Action - joins the Rogues to discuss the advantages of functional programming and using Elm. Elm is a functional programming language built for the front end that compiles to JavaScript. Due to its set of enforced assumptions, it leads to clean code and powerful programming constructs. Panel: John Epperson, Luke Stutters. Guest: Richard Feldman. Sponsors: Raygun | Click here to get started on your free 14-day trial. Links: Vue.js; GitHub - NoRedInk/elm-rails; Elm Homepage; Discourse Elm; Elm Slack; Built with Elm. Picks: John - GitHub: spree/spree; John - GitHub: solidusio/solidus; John - Merlin Series (The Lost Years by T.A.); Luke - PQINA | Designs and Builds Performant, Responsive, and Highly Polished Web Components; Richard - TV series: Battlestar Galactica; Richard - Frontend Masters; Richard - Barbell Medicine.

Dreams with Deadlines
Create high five situations

Dreams with Deadlines

Play Episode Listen Later Jun 8, 2020 48:58


Joining me is Blake Thomas, the Director of Engineering at NoRedInk. He’s learned a lot while working with engineers for over two decades. On this episode, you’ll hear us chat about the 2 modes of failure, why we experience issues with people updating their OKRs (and what we can do about it), how to prevent “OKR Theatre,” the universal reality of today’s workforce and more.

More and More Every Day
1.16. Connect First; Teach Second (with Stephanie Burke-Liggett)

More and More Every Day

Play Episode Listen Later May 5, 2020 38:14


It's National Teacher Appreciation Week! To celebrate, we are dropping an episode every day during the week of May 4-8th. Each episode will feature an interview with a classroom (K-12) teacher about his/her experiences teaching in the COVID19 era. Stephanie Burke-Liggett is a Middle School Language Arts Teacher at Woodrow Wilson Academy in Westminster, Colorado. She joined us to talk about migrating her classes to an online format. Her suggestion that we connect first; teach second demonstrates her approach to working with middle school students during the COVID19 era. See Stephanie's profile at https://southphoenixoralhistory.com/more-and-more-every-day/

Show notes:
Woodrow Wilson Academy is a free & public K-8 charter school with approximately 500 students. WWA is in the Jefferson County School District, the largest school district in the Denver metro area.
Some of Stephanie's suggestions for Language Arts teaching tools: No Red Ink and Quizlet
Stephanie's article suggestion: AP, "I just can't do this," April 23, 2020.

Connect with Stephanie:
Email: sburke-liggett@wwacademy.org
Instagram: sbl07
Twitter: @sburkeliggett07

Connect with us:
Click here to tell us your story.
Why is it called More and More Every Day? Click here to read our first More and More post.
Follow us on Instagram @smcchistory
Interview date: 4/23/20

Implementing Elm
101: Brian Hicks on Quill, custom elements, and how NoRedInk handles text editing

Implementing Elm

Play Episode Listen Later Jan 20, 2020 33:02


In this episode I chat with Brian Hicks from NoRedInk to discuss how they use a custom element to interface with Quill, a JavaScript rich text editor.
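
For a sense of what the Elm half of such an integration can look like: Elm renders a custom element like any other HTML node, and the JavaScript class registered for that tag owns the Quill instance. A minimal sketch — the tag and attribute names here are hypothetical, not NoRedInk's actual element:

    import Html exposing (Html, node)
    import Html.Attributes exposing (attribute)

    -- Elm treats the custom element as an ordinary node; the
    -- JavaScript behind "quill-editor" manages the editor itself.
    viewEditor : String -> Html msg
    viewEditor initialContent =
        node "quill-editor"
            [ attribute "content" initialContent ]
            []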

School Growth Mastery
53. The Real Way Technology Will Impact Our Classrooms, with Matt Greenfield

School Growth Mastery

Play Episode Listen Later Jan 9, 2020 30:02


From where do contemporary, resilient changes begin - from a top-down mandate, or from a bottom-up, tech-driven evolution? Today's guest, Matt Greenfield, is an investor in entrepreneurial companies that drive transformative social outcomes through the power of using technology in education. He sifts through fascinating trends and prophetic scenarios as we discuss everything from the Common Core to cloud-based platforms, bullying to virtual reality, and pedagogy to washing machines. If you are curious about how technology will continue to impact education, have a listen to the man who searches the horizon for innovation so that he can invest in change for the educational ecosystem.

Quotes:
12:39 "There is a whole range of needs that children have that have to be addressed one way or another if they are to have even the slightest chance of getting a decent education and carving out a place for themselves in the workforce of the 21st-century."
28:28 "Unlocking the passions and the creativity of the students is the key; the first thing you have to do is to ask them what they are passionately interested in or curious about. Everything has to start with that."

Here are some resources mentioned in our discussion:
Rethink - https://rethink.vc/
NoRedInk - https://www.noredink.com/
Bright Hive - https://brighthive.io/
Burning Glass Technologies - https://www.burning-glass.com/
Outschool - https://outschool.com/
Keith Rabois - https://podcastnotes.org/2019/03/05/rabois-5/
GoNoodle - https://www.gonoodle.com/
Most Likely to Succeed - https://teddintersmith.com/mltsfilm/
Naviance - https://www.naviance.com/services/professional-development

Where to learn more about the guest:
Matt at Rethink - https://rethink.vc/people/matt-greenfield/
Matt on Twitter - https://twitter.com/mattgreenfield
Matt on Linkedin - linkedin.com/in/matt-greenfield-07b96815
Matt at EdSurge - https://www.edsurge.com/writers/matt-greenfield

Where to learn more about Enrollhand:
Website: www.enrollhand.com
Our training on how to grow your school: https://webinar-replay.enrollhand.com
Our free Facebook group: https://www.facebook.com/groups/schoolgrowth/
You can always reach out by emailing hello@enrollhand.com

Elm Town
Elm Town 47 - A Cool, Easy Way To Start Learning Haskell

Elm Town

Play Episode Listen Later Sep 21, 2019 49:33


Stöffel talks about Jetpack, a simplified build tool that NoRedInk built to replace webpack, and how it started his journey to learn Haskell and eventually end up on the team behind NoRedInk's next-generation, Haskell-based server-side architecture.

One Year From Now
BSO Podcast S2, E2 - Leaving Space for Crafting a Career Through Your Business with Blake Thomas

One Year From Now

Play Episode Listen Later May 7, 2019 53:42


Today, we're chatting with Blake Thomas, Director of Engineering at No Red Ink. Blake has had a long career in technology, and the reason I want to share Blake with this community is that he is really practiced at making calculated career decisions. What does this mean for people who work for themselves? Well, it's easy to be business-centric and forget about the career we want and are crafting for ourselves through our business. We don't always have answers to the questions of what do I want to learn next? and what do I want to be creating? If you are in this for the long haul, it's important that you know what you want to learn and how you want to grow so you can thrive going forward and understand how you are crafting a career through your business. 

Devchat.tv Master Feed
JSJ 354: Elm with Richard Feldman

Devchat.tv Master Feed

Play Episode Listen Later Mar 5, 2019 37:56


Sponsors Kendo UI Sentry use the code “devchat” for $100 credit Clubhouse CacheFly Panel Joe Eames Aimee Knight Joined by special guest: Richard Feldman Episode Summary In this episode of JavaScript Jabber, Richard Feldman, primarily known for his work in Elm, the author of “Elm in Action” and Head of Technology at NoRedInk, talks about Elm 0.19 and the new features introduced in it. He explains how the development work is distributed between the Elm creator – Evan Czaplicki and the other members of the community and discusses the challenges on the way to Elm 1.0. Richard also shares some educational materials for listeners interested in learning Elm and gives details on Elm conferences around the world touching on the topic of having diversity among the speakers. He finally discusses some exciting things about Elm which would encourage developers to work with it. Links Elm in Action Frontend Masters – Introduction to Elm Frontend Masters – Advanced Elm Small Assets without the Headache Elm Guide ElmBridge San Francisco Renee Balmert Picks Aimee Knight: Most lives are lived by default Joe Eames: Thinkster Richard Feldman: Framework Summit 2018 – Keynote speech Nix Package Manager A Philosophy of Software Design

SPS Digital Learning Hour
Episode 52: UC Updates, Awesome Websites and Samantha Stolz

SPS Digital Learning Hour

Play Episode Listen Later Nov 6, 2018 23:39


We're back! It's been a rough couple of weeks, listen in to find out what we have been up to. I also talk about a bunch of sites that would be awesome to use in the classroom, any classroom! I also interview Samantha Stolz, who is doing some awesome things in the classroom.

Follow Me on Twitter, Instagram and Facebook: @beardedteched and @beardedtechedguy

OneNote Conference: https://learnonconference.com/

Follow these guys on Twitter for OneNote Tips:
@MSOneNote—Official Microsoft OneNote account
@OneNoteEDU—Official Microsoft OneNote in Education account
@mtholfsen—Product Manager on the #MicrosoftEDU team
@kurtsoeser—Owner of o365school and sponsor of this year's event
@Jared_DeCamp—Organizer and host of this conference

Links:
Blog Posts: https://my.springfieldpublicschools.com/welearn/SitePages/Blog.aspx
https://www.musictheory.net/
https://www.historypin.org/en/
http://www.scholastic.com/teachdearamerica/
https://schools.duolingo.com/
https://www.mathplayground.com/
http://en.childrenslibrary.org/
http://www.loyalbooks.com/genre/Children
https://storybird.com/
https://kids.nationalgeographic.com/
https://www.nasa.gov/

Links from Samantha Stolz:
https://www.zearn.org/
https://www.freckle.com/
https://www.noredink.com/
https://www.khanacademy.org/
https://learnzillion.com/p/
https://www.prodigygame.com/

Music: Ashamaluev Music https://www.ashamaluevmusic.com/royalty-free-music

Ruby Rogues
RR 385: “Ruby/Rails Testing” with Jason Swett

Ruby Rogues

Play Episode Listen Later Oct 23, 2018 62:03


Panel: Dave Kimura, Eric Berry, Nathan Hopkins, David Richards. Special Guest: Jason Swett. In this episode of Ruby Rogues, the panel talks with Jason Swett, who is the host of the podcast show Ruby Testing! Jason also teaches Rails testing at CodeWithJason.com. He currently resides in the Michigan area and works for Ben Franklin Labs. Check out today's episode, where the panelists and the guest discuss testing topics. Show Topics: 0:00 – Sentry.IO – Advertisement! Check out the code: DEVCHAT @ Sentry.io. 1:07 – I am David Kimura and here is the panel! Tell us what is going on? 1:38 – Jason: I started my own podcast and have been doing that for the past few months. That's one thing. I started a new site with CodeWithJason.com. 2:04 – You released a course? 2:10 – Jason: Total flop and it doesn't exist, but I am doing something else. 2:24 – I bet you learned a lot by creating the course? 2:34 – Jason: The endeavor of TEACHING it has helped me a lot. 2:50 – Tell us why we should drink the Kool-Aid? 3:02 – Jason: What IS testing? Good question. Whether it is manual testing or automated testing, we might as well automate it. 3:25 – If we are testing our code, what does that look like? 3:34 – Jason: Not sure what you mean, but I am doing tests at a fine grain vs. a coarser grain. 4:00 – Show of hands: who has...? 4:19 – What different tests are there? 4:20 – Jason: Good question. The same term can mean different things to different people. Let's start with unit tests vs. integration tests. Jason dives into the similarities and differences between these 2 tests (see above). There are different tests, such as: feature tests, acceptance tests, etc. 5:45 – What tests are THE best? 5:50 – Jason: Good question. The kind of tests you are writing depends on what type of coverage you are going for. If I had a sign-up page for a user, I would... 7:36 – What anti-patterns are you seeing? What is your narrative in teaching people how to use them? 8:07 – Jason talks first about his background and his interaction with one of his colleagues. 8:58 – Question. 9:00 – Jason continues with his answers from 8:07. 9:32 – Jason: Feel free to chime in. What have you done? 9:42 – I often ignore it until I feel bad and then I say: wait a minute, I am a professional. Then I realize I ignored the problem because I was acting cowardly. 10:29 – For me it depends on the test that it is. One gem that I found is: RSpec RETRY. 11:16 – Jason: The test is flapping because something is wrong with the database or something else. Since you asked about anti-patterns, let's talk about that! Rails and Angular are mentioned. 13:10 – Do you find that you back off of your unit testing when you are using integration? 13:22 – Jason: It depends on the context we are talking about. Jason talks about feature testing, model-level testing, and more. 13:58 – What is your view on using MOCKS or FAKES? What should we be doing there? 14:10 – Jason: Going to the Angular world, I understand Mocks better now. There was a parable that I think is applicable here about the young and the old fish. 16:23 – Jason continues talking about testing things in isolation. 16:36 – Question. 16:39 – I have been looking for an area to specialize in and I wrote an eBook. (Check out here to see the articles and books that Jason has authored.) Then I was looking around and I wanted to see what people's issues are with Rails. They have a hard time with testing. I wanted to help them feel competent with it. 18:03 – In your course you have how to choose a framework.
I know Ruby has several options on that front – how do you choose? 18:24 – Jason: There are 2 factors to consider. Jason tells us what those two factors are. Jason: Angular, React and Vue. 19:52 – Panelist: I had a conversation with a beginner and we were talking about the different tests. He said the DSL really appealed to him. The surface area of the API made it approachable for him. 20:27 – Jason: I wish I had figured out the DSL a little better. Understanding the concept of a block. The IT is just a function and you can put parentheses in different areas and... 21:01 – That makes sense. Let's revisit the Tweet you wrote. 21:35 – Jason: There are certain use cases where it makes sense. Where Gmail was the thing out there. At some point the Internet formed the opinion that... 22:39 – Old saying: Nobody gets fired for using Microsoft, and then it was IBM. Nothing wrong with those things if that's what you are trying to do. Sometimes we make decisions to not be criticized. We try to grab big frameworks and big codebases so we are not criticized. 23:48 – Jason: I think developers have this idea that OLD is OUTDATED. Not so. I think it's mature, not necessarily outdated. I think it's a pervasive idea. 24:31 – I think it suffers a bit when all the mind share gets lumped into one thing. The panelist continues... 24:53 – Jason: I don't know if I like this analogy. 26:00 – I agree with that sentiment. It's crazy that the complexity has become so pervasive. 26:18 – I think of SPAs as... 26:37 – Jason: Going back to the Tweet I wrote, I am pulling in JavaScript, but I am preferring to sprinkle JavaScript into Rails. 27:02 – Absolutely. I think that's where we agree. Late in 2017 we had the guest... "Use JavaScript sprinkles." 27:49 – Panelist chimes in. 28:37 – Jason: That makes sense. Use your preexisting... I am afraid of committing to a single framework. I don't have anything against JavaScript, but I am afraid of using only one thing when something else becomes fashionable. 29:30 – Have you found that the JavaScript sprinkles approach is easy to test? 29:38 – Jason: I think it's easier. Client server architecture... 30:10 – Advertisement: Get A Coder Job! 30:41 – Shout-out to the Rails team! What other testing frameworks are there? What if you are not the developer but you are the Quality Assurance (QA) person? They have been given the task of testing the application. 31:30 – Jason: So someone who is not a developer and they want to test the application. I don't want to get out of my role of expertise. I did talk to a QA engineer and I asked them: What do you do? All of his tests are manual. He does the same stuff as a Rails developer would do. 32:52 – Panelist talks about pseudo code. 34:07 – Jason: I am curious, Dave, about the non-programmer helping with tests – what is the team structure? 34:23 – Dave: You will have one QA per three developers. 34:44 – Jason: If you have a QA person, he is integrated within the team – that's what has been the case for me. 35:02 – Dave: It's a nice thing to have because we need to crank out some features and we have a good idea what is wrong with the app. We can go in there and see if our application is good, but they are combining different scenarios to do the unit tests and see what they are lacking. They are uncovering different problems that we hadn't thought of. 36:07 – The organization has to have the right culture for that to work. 36:35 – If it's a small team then it will help to see what everyone is doing – it's that engagement level.
If the team is too large then it could be a problem. 37:15 – Jason: Engagement between whom? 37:27 – Both. Panelist goes into detail about different engagement levels throughout the team. 38:10 – Jason: Yeah, that's a tough thing. 38:49 – It's interesting to see the things that are being created. Testing seems to help that out. We are getting bugs in that area, or we didn't design it well there... We see that we need some flexibility, and getting that input and having a way to solve the problems. 39:32 – Jason: Continuous deployment – let's segue into this topic. 41:17 – Panelist: Do you have recommendations on how often we should be deploying in that system per day/week? 41:40 – Jason: We would deploy several times a day, which was great. The more the better, because the more frequently you are deploying, the fewer things will go wrong. 42:21 – More frequently the better and more people involved. 42:45 – Jason continues this conversation. 42:51 – Panelist: Continuous integration – any time you would say to forgo tests or be less rigid? 43:14 – Jason: I don't test everything. I don't write tests for things that have little risk. 43:56 – I think it is a good segue into how you write your code. If you write code that is like spaghetti, then it will be a mess. Making things easier to test. 44:48 – Jason: This is fresh in my mind because I am writing an app called Green Field. 46:32 – Uniqueness Validations is mentioned by Jason. 47:00 – Anything else to add to testing a Rails application? 47:08 – Jason: Let's talk about 2 things: walking skeletons and small stories. This book is a great resource for automated testing. The last point that I want to talk about is small stories: continuous deployment and continuous delivery. If you make your stories smaller, then you are making your stories crisply defined. Have some bullet points to make it really easy to answer the question. Answer the question: is this story done or not done? Someone should be able to run through the bullet points and answer that question. 50:02 – I am in favor of small stories, too. Makes you feel more productive, too. 50:14 – Work tends to lend itself to these types of stories and running a sprint. 51:22 – You don't have to carry that burden when you go home. You might have too big of a chunk – it carries too much weight to it. 51:47 – Book: The Phoenix Project. Work in progress is a bad thing. That makes sense. You want to have fewer balls in the air. 52:17 – Anything else? 52:22 – Jason: You can find me at CodewithJason.com and also on Twitter! 52:45 – Advertisement – Fresh Books! 1:01:50 – Cache Fly!

Links:
Get a Coder Job Course
Erlang
Ruby
Ruby Motion
Ruby on Rails
Angular
Single Page Application (SPA)
RSpec – Retry
Ruby Testing Podcast
The Feynman Technique Model
Book: Growing Object-Oriented Software, Guided by Tests (1st edition)
Jason Swett's Twitter
Jason Swett's LinkedIn
Parable: Young Fish and Old Fish – What is Water?
Jason's articles and eBook
Jason's Website

Sponsors:
Sentry
Get a Coder Job Course
Fresh Books
Cache Fly

Picks:
David - This is Water; The Feynman Technique Model
Nate - Taking some time off; Pry Test
Eric - Fake App; Ruby Hack Conference
Dave - Brooks Shoes
Jason - The Food Lab; Growing Object-Oriented Software

Changelog Master Feed
Keepin' up with Elm (The Changelog #319)

Changelog Master Feed

Play Episode Listen Later Oct 17, 2018 65:56 Transcription Available


Jerod invites Richard Feldman back on the show to catch up on all things Elm. Did you hear? NoRedInk finally had a production runtime error, the community grew quite a bit (from ‘obscure’ to just ‘niche’), and Elm 0.19 added some killer new features around asset optimization.
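
For context on the asset-optimization features mentioned: Elm 0.19's compiler gained an --optimize flag that produces smaller production output (the official guide pairs it with a JavaScript minifier). A minimal invocation, with placeholder paths:

    elm make src/Main.elm --optimize --output=elm.js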

Elm Town
Elm Town 37 - Upgrading to Elm 0.19 with Luke Westby & Richard Feldman

Elm Town

Play Episode Listen Later Aug 24, 2018 48:13


Richard Feldman (No Red Ink) and Luke Westby (Ellie, No Red Ink) visit Elm Town to celebrate the just-released Elm 0.19, talk through No Red Ink's plan to upgrade its 250K lines of Elm to the new release, and revisit Luke and Richard's talks from Elm Europe on web components and data structures, respectively.

Cross Cutting Concerns Podcast
Podcast 069 - Correl Roush on Elm

Cross Cutting Concerns Podcast

Play Episode Listen Later Feb 11, 2018 14:37


Correl Roush is back to talk about Elm.

Show Notes:
This is Correl's second episode. Be sure to check out the Erlang episode.
Elm website
Haskell site
Evan Czaplicki on GitHub
Book: Seven More Languages in Seven Weeks
No Red Ink
Richard Feldman on GitHub
Text editors mentioned: Atom, Emacs
elm-package command-line tool
Book: Elm in Action by Richard Feldman
Elm Slack channel
Video: Time traveling debugger with a Mario platforming game
Correl Roush is on Twitter.

Want to be on the next episode? You can! All you need is the willingness to talk about something technical. Music is by Joe Ferg, check out more music on JoeFerg.com!

Fixate on Code | Weekly interviews on how to write better code, for frontend developers

Tereza is a front-end developer obsessed with the Elm language. NoRedInk, a platform helping millions of students learn how to write, nabbed her after spotting her skills. Tereza is also the creator of Elm Plot - a graph plotting library built in Elm.

Soft Skills Engineering
Episode 88: How To Dress For Interviews and Learning To Interview

Soft Skills Engineering

Play Episode Listen Later Dec 22, 2017 38:01


This week Jamison and Dave answer these questions: How do you dress for interviews? Full-on informal beach bum? Smart casual? Formal suit and tie? I'm a new developer and have been asked to interview incoming developers. How do I learn how to interview? This is the NoRedInk interview process. This is the blog post Jamison likes on getting data out of the technical portion of the interview. This is a slightly pessimistic look at pitfalls in the standard interview process. Google wrote a great article about structured interviewing that might also be helpful.

Javascript to Elm
8: Types with Help

Javascript to Elm

Play Episode Listen Later Oct 5, 2017 24:57


Compiler help. The promise of Elm is no runtime errors. This has been proven in large code bases like that of NoRedInk. But how can it make such a claim? Elm has a strong, static type system.
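
To make that concrete: operations that can fail return a Maybe, and the compiler rejects any program that forgets the Nothing case, which is how whole classes of runtime errors are ruled out. A minimal sketch; the function below is illustrative, not from the episode:

    -- List.head returns Maybe Int, so the empty-list case
    -- must be handled before this will compile.
    describeFirst : List Int -> String
    describeFirst numbers =
        case List.head numbers of
            Just n ->
                "First element: " ++ String.fromInt n

            Nothing ->
                "The list is empty"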

JAMstack Radio
Ep. #19, Elm: A Well Built Compile to JavaScript Language

JAMstack Radio

Play Episode Listen Later Aug 16, 2017 31:32


In this episode of JAMstack Radio, Brian is joined by Richard Feldman and Brooke Angel from NoRedInk for a discussion on Elm, a functional programming language that compiles to JavaScript.

EdTech.TV
048 – NoRedInk, Vocabulary.com, & Google Certification

EdTech.TV

Play Episode Listen Later Jul 4, 2017


Episode 48 of the EdTech TV Podcast revisits "older" technology that may not get highlighted as often in days when there's a new toy to play with every week. Also: a brief review of Google Certification. Revisiting Old Tech – NoRedInk, Vocabulary.com, Kahoot. There's a lot of great technology out there that can […]

All JavaScript Podcasts by Devchat.tv
MJS #010: Richard Feldman

All JavaScript Podcasts by Devchat.tv

Play Episode Listen Later Mar 23, 2017 59:38


Welcome to the 9th My JS Story! Today, Charles Max Wood welcomes Richard Feldman. Richard works at No Red Ink, and he is the author of Elm in Action. He was on JavaScript Jabber and talked about Elm with Evan Czaplicki in episode 175 and covered the same topic alone in episode 229. Stay tuned to My JS Story Richard Feldman to learn more about how he started in programming and what he's up to now.

The Web Platform Podcast
108 Elm Revisited

The Web Platform Podcast

Play Episode Listen Later Sep 30, 2016 67:23


Summary: It's been a while since we chatted about Elm Lang. Richard Feldman (@rtfeldman), Developer at No Red Ink, returns to the podcast with Conner Ruhl (@connerruhl), Developer Operations at Carfax, in his podcast debut. The two Elm fanatics chat about their experiences with Elm and how it's made their productivity exponentially better.

Resources:
Elm - http://elm-lang.org/
Richard's GitHub - https://github.com/rtfeldman
Elm in Action - https://www.manning.com/books/elm-in-action?a_aid=elm_in_action&a_bid=b15edc5c
JSAir: Typed Functional Programming in JS - http://audio.javascriptair.com/e/034-jsair-typed-functional-programming-in-javascript-with-alfonso-garcia-caro-richard-feldman-phil-freeman-and-jordan-walke/
JSJ 175: https://devchat.tv/js-jabber/175-jsj-elm-with-evan-czaplicki-and-richard-feldman
JSJ 229: https://devchat.tv/js-jabber/229-jsj-elm-with-richard-feldman

Devchat.tv Master Feed
229 JSJ Elm with Richard Feldman

Devchat.tv Master Feed

Play Episode Listen Later Sep 14, 2016 54:34


1:13 No Red Ink is hiring; Richard's book-in-progress
2:10 Frontend Masters Workshop
2:55 Elm's primary function
5:10 Using Elm over using Haskell, React, Javascript, etc.
9:15 Increased usability of Elm with each update
13:45 Striking differences between Elm and Javascript
16:08 Community reactions to Elm
20:21 First Elm conference in September
22:11 The approach for structuring an Elm app
23:45 Realistic time frame for building an app from scratch
32:20 Writing pure functions and immutable data; how Elm uses Side-Effects
38:20 Scaling a big FP application
44:15 What Javascript developers can take away from using Elm
48:00 Richard on Twitter

PICKS:
"In a World…" Movie
Building a Live-Validated Signup Form in Elm
Apple Cider Vinegar
CETUSA – Foreign exchange program
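
As a small illustration of the "pure functions and immutable data" segment at 32:20: in Elm, "updating" a record returns a new value rather than mutating the old one. A minimal sketch; the Counter type is invented for the example:

    type alias Counter =
        { count : Int }

    -- A pure function: same input, same output, no side effects.
    -- Record-update syntax builds a brand-new Counter; the
    -- original value is left untouched.
    increment : Counter -> Counter
    increment counter =
        { counter | count = counter.count + 1 }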

All JavaScript Podcasts by Devchat.tv
229 JSJ Elm with Richard Feldman

All JavaScript Podcasts by Devchat.tv

Play Episode Listen Later Sep 14, 2016 54:34


1:13 No Red Ink is hiring; Richard’s book-in-progress 2:10 Frontend Masters Workshop 2:55 Elm’s primary function 5:10 Using Elm over using Haskell, React, Javascript, etc. 9:15 Increased usability of Elm with each update 13:45 Striking differences between Elm and Javascript 16:08 Community reactions to Elm 20:21 First Elm conference in September 22:11 The approach for structuring an Elm app 23:45 Realistic time frame for building an app from scratch 32:20 Writing pure functions and immutable data; how Elm uses Side-Effects 38:20 Scaling a big FP application 44:15 What Javascript developers can take away from using Elm 48:00 Richard on Twitter PICKS “In a World…” Movie Building a Live-Validated Signup Form in Elm Apple Cider Vinegar CETUSA – Foreign exchange program

JavaScript Jabber
229 JSJ Elm with Richard Feldman

JavaScript Jabber

Play Episode Listen Later Sep 14, 2016 54:34


1:13 NoRedInk is hiring; Richard’s book-in-progress 2:10 Frontend Masters Workshop 2:55 Elm’s primary function 5:10 Using Elm over using Haskell, React, JavaScript, etc. 9:15 Increased usability of Elm with each update 13:45 Striking differences between Elm and JavaScript 16:08 Community reactions to Elm 20:21 First Elm conference in September 22:11 The approach for structuring an Elm app 23:45 Realistic time frame for building an app from scratch 32:20 Writing pure functions and immutable data; how Elm uses side effects 38:20 Scaling a big FP application 44:15 What JavaScript developers can take away from using Elm 48:00 Richard on Twitter PICKS “In a World…” Movie Building a Live-Validated Signup Form in Elm Apple Cider Vinegar CETUSA – Foreign exchange program

The Changelog
Elm and Functional Programming

The Changelog

Play Episode Listen Later Sep 2, 2016 87:51 Transcription Available


Evan Czaplicki, creator of Elm, and Richard Feldman of NoRedInk joined the show for a deeper conversation about Elm: the pains of CSS it solves, scaling the Elm architecture, reusable components, and more.
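On the "reusable components" point: in Elm these are typically not stateful components but plain functions that return Html. A hedged sketch of that style; labeledInput and its parameters are hypothetical names, not code from the episode:

```elm
module Form exposing (labeledInput)

import Html exposing (Html, div, input, label, text)
import Html.Attributes exposing (placeholder, value)
import Html.Events exposing (onInput)


-- A reusable "component" is just a function: the caller passes in the
-- current value and a message constructor, so all state stays in the
-- caller's model and nothing is hidden inside the component.


labeledInput : String -> String -> (String -> msg) -> Html msg
labeledInput title current toMsg =
    div []
        [ label [] [ text title ]
        , input [ value current, placeholder title, onInput toMsg ] []
        ]
```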

Changelog Master Feed
Elm and Functional Programming (The Changelog #218)

Changelog Master Feed

Play Episode Listen Later Sep 2, 2016 87:51 Transcription Available


Evan Czaplicki, creator of Elm, and Richard Feldman of NoRedInk joined the show for a deeper conversation about Elm: the pains of CSS it solves, scaling the Elm architecture, reusable components, and more.

Spacedojo Show
Replacing JS with Elm

Spacedojo Show

Play Episode Listen Later Mar 29, 2016 55:10


Josh Owens, Ben Strahan, and Ramsay Lanier talk to Richard Feldman about NoRedInk and using Elm.

The Changelog
Elm and Functional Programming

The Changelog

Play Episode Listen Later Jan 15, 2016 92:14


Richard Feldman from NoRedInk joined the show to talk about Elm and Functional Programming. Elm bills itself as “the best of functional programming in your browser” and boasts “no runtime exceptions.” We talked about the language, whether or not it’s really faster than React, JavaScript fatigue, and the best ways to get started with Elm.
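The "no runtime exceptions" boast follows mostly from the type system: there is no null, so a value that might be absent is a Maybe, and the compiler rejects any case expression that forgets a branch. A small hypothetical sketch (describeHead is an assumed name, not from the episode):

```elm
module Example exposing (describeHead)

-- List.head returns a Maybe because the list may be empty; the
-- compiler forces both branches to be handled, so there is no
-- "undefined is not a function" waiting at runtime.


describeHead : List String -> String
describeHead names =
    case List.head names of
        Just name ->
            "First up: " ++ name

        Nothing ->
            "Nobody here yet"
```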

Changelog Master Feed
Elm and Functional Programming (The Changelog #191)

Changelog Master Feed

Play Episode Listen Later Jan 15, 2016 92:14


Richard Feldman from NoRedInk joined the show to talk about Elm and Functional Programming. Elm bills itself as “the best of functional programming in your browser” and boasts “no runtime exceptions.” We talked about the language, whether or not it’s really faster than React, JavaScript fatigue, and the best ways to get started with Elm.

JavaScript Jabber
175 JSJ Elm with Evan Czaplicki and Richard Feldman

JavaScript Jabber

Play Episode Listen Later Sep 2, 2015 77:02


02:27 - Evan Czaplicki Introduction Twitter GitHub Prezi 02:32 - Richard Feldman Introduction Twitter GitHub NoRedInk 02:38 - Elm @elmlang 04:06 - Academic Ideas 05:10 - Functional Programming, Functional Reactive Programming & Immutability 16:11 - Constraints Faruk Ateş Modernizr The Beauty of Constraints Types / TypeScript 24:24 - Compilation 27:05 - Signals start-app 36:34 - Shared Concepts & Guarantees at the Language Level 43:00 - Elm vs React 47:24 - Integration Ports lunr.js 52:23 - Upcoming Features 54:15 - Testing Elm-Test elm-check 56:38 - Websites/Apps Built in Elm CircuitHub 58:37 - Getting Started with Elm The Elm Architecture Tutorial Elm Examples 59:41 - Canonical Uses? 01:01:26 - The Elm Community & Contributions The Elm Discuss Mailing List Elm user group SF Stack Overflow? The Sublime Text Plugin WebStorm Support for Elm? Coda grunt-elm gulp-elm Extras & Resources Evan Czaplicki: Let's be mainstream! User focused design in Elm @ Curry On 2015 Evan Czaplicki: Blazing Fast HTML: Virtual DOM in Elm Picks The Pragmatic Studio: What is Elm? Q&A (Aimee) Elm (Joe) Student Bodies (Joe) Mike Clark: Getting Started With Elm (Joe) Angular Remote Conf (Chuck) Stripe (Chuck) Alcatraz versus the Evil Librarians (Alcatraz, No. 1) by Brandon Sanderson (Chuck) Understanding Comics: The Invisible Art by Scott McCloud (Evan) The Glass Bead Game: (Magister Ludi) A Novel by Hermann Hesse (Evan) The Design of Everyday Things: Revised and Expanded Edition by Don Norman (Richard) Rich Hickey: Simple Made Easy (Richard) NoRedInk Tech Blog (Richard)
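The integration segment at 47:24 covers ports, Elm's doorway to JavaScript: data goes out through one port as a command and comes back through another as a subscription (the lunr.js mention is a JS search index driven this way). A hedged sketch of the Elm side using today's 0.19 port API rather than the signal-based interop discussed in 2015; the port names, the search wiring, and the JS counterpart are all hypothetical:

```elm
port module Search exposing (main)

import Browser
import Html exposing (Html, div, input, li, text, ul)
import Html.Events exposing (onInput)


-- Elm -> JS: ask the JavaScript side (e.g. a lunr.js index) to run a query.


port search : String -> Cmd msg


-- JS -> Elm: receive the matching results back as plain data.


port results : (List String -> msg) -> Sub msg


type alias Model =
    { matches : List String }


type Msg
    = QueryChanged String
    | GotResults (List String)


update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        QueryChanged query ->
            ( model, search query )

        GotResults matches ->
            ( { model | matches = matches }, Cmd.none )


view : Model -> Html Msg
view model =
    div []
        [ input [ onInput QueryChanged ] []
        , ul [] (List.map (\m -> li [] [ text m ]) model.matches)
        ]


main : Program () Model Msg
main =
    Browser.element
        { init = \_ -> ( { matches = [] }, Cmd.none )
        , update = update
        , subscriptions = \_ -> results GotResults
        , view = view
        }
```

On the JavaScript side you would subscribe to app.ports.search and call app.ports.results.send with the matches; that boundary is the only place the two languages touch.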

All JavaScript Podcasts by Devchat.tv
175 JSJ Elm with Evan Czaplicki and Richard Feldman

All JavaScript Podcasts by Devchat.tv

Play Episode Listen Later Sep 2, 2015 77:02


02:27 - Evan Czaplicki Introduction Twitter GitHub Prezi 02:32 - Richard Feldman Introduction Twitter GitHub NoRedInk 02:38 - Elm @elmlang 04:06 - Academic Ideas 05:10 - Functional Programming, Functional Reactive Programming & Immutability 16:11 - Constraints Faruk Ateş Modernizr The Beauty of Constraints Types / TypeScript 24:24 - Compilation 27:05 - Signals start-app 36:34 - Shared Concepts & Guarantees at the Language Level 43:00 - Elm vs React 47:24 - Integration Ports lunr.js 52:23 - Upcoming Features 54:15 - Testing Elm-Test elm-check 56:38 - Websites/Apps Built in Elm CircuitHub 58:37 - Getting Started with Elm The Elm Architecture Tutorial Elm Examples 59:41 - Canonical Uses? 01:01:26 - The Elm Community & Contributions The Elm Discuss Mailing List Elm user group SF Stack Overflow? The Sublime Text Plugin WebStorm Support for Elm? Coda grunt-elm gulp-elm Extras & Resources Evan Czaplicki: Let's be mainstream! User focused design in Elm @ Curry On 2015 Evan Czaplicki: Blazing Fast HTML: Virtual DOM in Elm Picks The Pragmatic Studio: What is Elm? Q&A (Aimee) Elm (Joe) Student Bodies (Joe) Mike Clark: Getting Started With Elm (Joe) Angular Remote Conf (Chuck) Stripe (Chuck) Alcatraz versus the Evil Librarians (Alcatraz, No. 1) by Brandon Sanderson (Chuck) Understanding Comics: The Invisible Art by Scott McCloud (Evan) The Glass Bead Game: (Magister Ludi) A Novel by Hermann Hesse (Evan) The Design of Everyday Things: Revised and Expanded Edition by Don Norman (Richard) Rich Hickey: Simple Made Easy (Richard) NoRedInk Tech Blog (Richard)

Devchat.tv Master Feed
175 JSJ Elm with Evan Czaplicki and Richard Feldman

Devchat.tv Master Feed

Play Episode Listen Later Sep 2, 2015 77:02


02:27 - Evan Czaplicki Introduction Twitter GitHub Prezi 02:32 - Richard Feldman Introduction Twitter GitHub NoRedInk 02:38 - Elm @elmlang 04:06 - Academic Ideas 05:10 - Functional Programming, Functional Reactive Programming & Immutability 16:11 - Constraints Faruk Ateş Modernizr The Beauty of Constraints Types / TypeScript 24:24 - Compilation 27:05 - Signals start-app 36:34 - Shared Concepts & Guarantees at the Language Level 43:00 - Elm vs React 47:24 - Integration Ports lunr.js 52:23 - Upcoming Features 54:15 - Testing Elm-Test elm-check 56:38 - Websites/Apps Built in Elm CircuitHub 58:37 - Getting Started with Elm The Elm Architecture Tutorial Elm Examples 59:41 - Canonical Uses? 01:01:26 - The Elm Community & Contributions The Elm Discuss Mailing List Elm user group SF Stack Overflow? The Sublime Text Plugin WebStorm Support for Elm? Coda grunt-elm gulp-elm Extras & Resources Evan Czaplicki: Let's be mainstream! User focused design in Elm @ Curry On 2015 Evan Czaplicki: Blazing Fast HTML: Virtual DOM in Elm Picks The Pragmatic Studio: What is Elm? Q&A (Aimee) Elm (Joe) Student Bodies (Joe) Mike Clark: Getting Started With Elm (Joe) Angular Remote Conf (Chuck) Stripe (Chuck) Alcatraz versus the Evil Librarians (Alcatraz, No. 1) by Brandon Sanderson (Chuck) Understanding Comics: The Invisible Art by Scott McCloud (Evan) The Glass Bead Game: (Magister Ludi) A Novel by Hermann Hesse (Evan) The Design of Everyday Things: Revised and Expanded Edition by Don Norman (Richard) Rich Hickey: Simple Made Easy (Richard) NoRedInk Tech Blog (Richard)

Devchat.tv Master Feed
212 RR Elm with Richard Feldman and Evan Czaplicki

Devchat.tv Master Feed

Play Episode Listen Later Jun 17, 2015 62:33


Get your Ruby Remote Conf tickets and check out the @rubyremoteconf Twitter feed for exciting updates about the conference.   03:09 - Evan Czaplicki Introduction Twitter GitHub Prezi 03:15 - Richard Feldman Introduction Twitter GitHub NoRedInk 03:42 - Elm @elmlang 04:18 - Elm vs JavaScript dreamwriter 06:52 - Reactivity 07:28 - Functional Principles Immutability Union Types 09:42 - “Side Effects” (Reactivity Cont’d) JavaScript Promises Signals React Flux Excel Spreadsheet Comparison Two-way Data Binding vs One-way 24:19 - Syntax and Semantics Haskell ML ML Family of Programming Languages Strict vs Lazy 30:45 - Testing Elm-Test elm-check Property-Based Testing elm-reactor 34:49 - Debugging Elm’s Time Traveling Debugger 42:12 - Next Release? 46:00 - Use Cases/Getting Started Resources elm-architecture-tutorial dreamwriter 48:45 - Why should Ruby devs care about Elm? Picks The Expanse (Avdi) Git LFS (Jessica) The City & The City by China Miéville (Jessica) Patterns (Coraline) Ruby Remote Conf (Chuck) Find a change of pace (Chuck) Listen to other people’s views (Chuck) Richard Feldman: Functional Frontend Frontier (Richard) EconTalk (Evan) elm-architecture-tutorial (Evan)
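Of the functional principles listed at 7:28, union types are the ones that most change day-to-day code for Ruby and JavaScript developers: a value is exactly one of a few named shapes, and every case expression must handle all of them. A hypothetical sketch, not code from the episode (RequestState and its constructors are assumed names):

```elm
module Request exposing (RequestState(..), describe)

-- A union type (a "custom type" in current Elm) makes invalid states
-- unrepresentable: a request is Loading, a Failure, or a Success,
-- never some half-initialized mixture of the three.


type RequestState
    = Loading
    | Failure String
    | Success (List String)


describe : RequestState -> String
describe state =
    case state of
        Loading ->
            "Loading…"

        Failure err ->
            "Something went wrong: " ++ err

        Success items ->
            String.fromInt (List.length items) ++ " results"
```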

Ruby Rogues
212 RR Elm with Richard Feldman and Evan Czaplicki

Ruby Rogues

Play Episode Listen Later Jun 17, 2015 62:33


Get your Ruby Remote Conf tickets and check out the @rubyremoteconf Twitter feed for exciting updates about the conference.   03:09 - Evan Czaplicki Introduction Twitter GitHub Prezi 03:15 - Richard Feldman Introduction Twitter GitHub NoRedInk 03:42 - Elm @elmlang 04:18 - Elm vs JavaScript dreamwriter 06:52 - Reactivity 07:28 - Functional Principles Immutability Union Types 09:42 - “Side Effects” (Reactivity Cont’d) JavaScript Promises Signals React Flux Excel Spreadsheet Comparison Two-way Data Binding vs One-way 24:19 - Syntax and Semantics Haskell ML ML Family of Programming Languages Strict vs Lazy 30:45 - Testing Elm-Test elm-check Property-Based Testing elm-reactor 34:49 - Debugging Elm’s Time Traveling Debugger 42:12 - Next Release? 46:00 - Use Cases/Getting Started Resources elm-architecture-tutorial dreamwriter 48:45 - Why should Ruby devs care about Elm? Picks The Expanse (Avdi) Git LFS (Jessica) The City & The City by China Miéville (Jessica) Patterns (Coraline) Ruby Remote Conf (Chuck) Find a change of pace (Chuck) Listen to other people’s views (Chuck) Richard Feldman: Functional Frontend Frontier (Richard) EconTalk (Evan) elm-architecture-tutorial (Evan)

All Ruby Podcasts by Devchat.tv
212 RR Elm with Richard Feldman and Evan Czaplicki

All Ruby Podcasts by Devchat.tv

Play Episode Listen Later Jun 17, 2015 62:33


Get your Ruby Remote Conf tickets and check out the @rubyremoteconf Twitter feed for exciting updates about the conference.   03:09 - Evan Czaplicki Introduction Twitter GitHub Prezi 03:15 - Richard Feldman Introduction Twitter GitHub NoRedInk 03:42 - Elm @elmlang 04:18 - Elm vs JavaScript dreamwriter 06:52 - Reactivity 07:28 - Functional Principles Immutability Union Types 09:42 - “Side Effects” (Reactivity Cont’d) JavaScript Promises Signals React Flux Excel Spreadsheet Comparison Two-way Data Binding vs One-way 24:19 - Syntax and Semantics Haskell ML ML Family of Programming Languages Strict vs Lazy 30:45 - Testing Elm-Test elm-check Property-Based Testing elm-reactor 34:49 - Debugging Elm’s Time Traveling Debugger 42:12 - Next Release? 46:00 - Use Cases/Getting Started Resources elm-architecture-tutorial dreamwriter 48:45 - Why should Ruby devs care about Elm? Picks The Expanse (Avdi) Git LFS (Jessica) The City & The City by China Miéville (Jessica) Patterns (Coraline) Ruby Remote Conf (Chuck) Find a change of pace (Chuck) Listen to other people’s views (Chuck) Richard Feldman: Functional Frontend Frontier (Richard) EconTalk (Evan) elm-architecture-tutorial (Evan)

#EdTech Radio
An Edtech Minute with NoRedInk

#EdTech Radio

Play Episode Listen Later Nov 6, 2014 3:36


Help your students improve their grammar and writing skills:
- Create assignments and quizzes without doing any grading
- Target Common Core skills using your students’ interests
- Provide students with unlimited help whenever they need it
- Track growth using our color-coded heat maps
URL: https://www.noredink.com/ Follow: @bamradionetwork

#EdTech Minute
An Edtech Minute with NoRedInk

#EdTech Minute

Play Episode Listen Later Nov 6, 2014 3:36


Help your students improve their grammar and writing skills:
- Create assignments and quizzes without doing any grading
- Target Common Core skills using your students’ interests
- Provide students with unlimited help whenever they need it
- Track growth using our color-coded heat maps
URL: https://www.noredink.com/ Follow: @bamradionetwork