Podcasts about SDKs

  • 696 PODCASTS
  • 1,341 EPISODES
  • 42m AVG DURATION
  • 1 DAILY NEW EPISODE
  • Oct 5, 2022 LATEST

POPULARITY

[Popularity chart, 2015–2022]




Best podcasts about SDKs

Show all podcasts related to SDKs

Latest podcast episodes about SDKs

IFTTD - If This Then Dev
#175 - Putting apps to music - Geoffrey Métais

IFTTD - If This Then Dev

Oct 5, 2022 · 59:48


"We have the same Kotlin code compiled 3 times to run natively on 3 platforms." This week's D.E.V. is Geoffrey Métais, Engineering Manager at Deezer. In the beginning, an Android app was written in Java. As Android made its way into very different devices, several paradigms had to change, until the eventual move to Kotlin and Jetpack Compose. Geoffrey joins us to tell this saga. We also talk about cross-platform technologies such as React or Flutter, and Geoffrey explains why Kotlin is today an excellent language and the foundation of Deezer's cross-platform SDK.

Links mentioned during the show:
  • C Programming Language
  • Java Concurrency in Practice
  • Androids: The Team that Built the Android Operating System

Social media:
  • Twitter: @ifthisthendev, @bibear, @geoffreymetais
  • LinkedIn: Geoffrey Métais's LinkedIn
  • Discord

** More dev content ** Find all our episodes on our website. We are also on Instagram, TikTok, Youtube, Twitch.

** Bitcoin is so 2018 ** Blockchain is a remarkable innovation, but cryptocurrencies are passé! Now, with RealT, you can invest in real estate via the blockchain. At last, a simple way to become an owner and generate passive income, combined with a practical application of the blockchain!

** Find the team of your dreams ** If you feel like changing jobs, try My Little Team, which lets you choose a team that suits you rather than a company where you don't quite know where you will land!

** The IFTTD shop!!! ** Show your appreciation for this podcast with goodies made with love in the shop, or clearly declare your camp: tabs or spaces.

** Support the podcast ** Or, to go even further, join the IFTTD Patreon.

** Take part in the next recording! ** Join us every Monday at 19:00 (and not only then) to watch the episode being recorded live and ask your questions during the episode :) We are live on Youtube, Twitch, LinkedIn and Twitter.

web3 for gen z
#9: The Music NFT Landscape with Charlie Durbin

web3 for gen z

Oct 4, 2022 · 40:57


This week's guest is Charlie Durbin (@cdurbinxyz), the CEO of Decent.xyz. Decent is a music NFT platform that enables fans and artists to build a deeper relationship through on-chain solutions such as royalty-backed NFTs and new minting and staking mechanisms. Announced this week, Decent has packaged these contracts into a modular SDK that enables artists and developers to deploy interesting multi-contract applications with simple JavaScript functions and a no-code creator studio. Charlie and I talk about the music NFT landscape: how it differs from record labels, why people buy music NFTs, and what the future of the industry holds. We also talk about his background and his experiences building Decent and going through Y Combinator.

Topics covered:
(1:10) Charlie's background and the birth of Decent
(3:32) How much money an artist makes from platforms like Spotify
(8:36) How royalties from music NFTs align incentives between fans and artists
(10:17) Comparing record labels with music NFTs
(12:56) Creating "index funds" for music NFTs
(17:28) Are music NFTs financial or cultural goods?
(22:49) Who's responsible for the future of music NFTs? Fans, artists, or platforms?
(25:32) What Decent is building next
(32:26) Choosing the right startup accelerator
(35:56) The best advice Charlie got from Y Combinator
(37:54) Advice for people interested in web3 and music NFTs

Follow @webforgenz on Twitter for the best clips from each episode.

Google Cloud Reader
Productivity unlocked with new Cloud SDK reference docs

Google Cloud Reader

Sep 29, 2022 · 6:54


Original blog post · List of reference docs by language · GitHub repositories · More articles at cloud.google.com/blog

Citizen Cosmos
Riley Edmunds, sustainability, Berkeley & liquid staking

Citizen Cosmos

Sep 29, 2022 · 54:39


In this episode, we talk to Riley Edmunds, co-founder of Stride Labs, a liquid staking protocol built with the Cosmos SDK & Tendermint. Stride allows users to liquid stake any IBC-compatible Cosmos SDK native appchain token. Under the hood, Stride leverages the Inter-Blockchain Communication protocol, Interchain Accounts, and Interchain Queries. Riley's Twitter (https://twitter.com/interchainriley)

We spoke to Riley about Stride, and:
  • Liquid staking
  • How to choose a validator & governance
  • IBC & tokenomics
  • Bridgewater & sustainable business models
  • The path towards blockchain & Blockchain at Berkeley
  • The team
  • Issues with launching the project
  • Advice on which mistakes to avoid
  • Motivation

The projects and people mentioned in this episode: Tendermint (https://tendermint.com/) | Cosmos (https://cosmos.network/) | Stride (https://stride.zone/) | Bridgewater (https://www.bridgewater.com/) | Juno (https://junonetwork.io/) | CoinGecko (https://www.coingecko.com/) | Osmosis (https://www.citizencosmos.space/osmosis) | QuickSilver (https://www.citizencosmos.space/quicksilver) | Berkeley (https://www.berkeley.edu/)

If you like what we do at Citizen Cosmos:
  • Stake with the Citizen Cosmos validator (https://www.citizencosmos.space/staking)
  • Help support the project via Gitcoin Grants (https://gitcoin.co/grants/1113/citizen-cosmos-podcast)
  • Listen to the YouTube version (https://youtu.be/2JcSAsQisJw)
  • Read our blog (https://citizen-cosmos.github.io/blog/)
  • Check out our GitHub (https://github.com/citizen-cosmos/)
  • Join our Telegram (https://t.me/citizen_cosmos)
  • Follow us on Twitter (https://twitter.com/cosmos_voice)
  • Sign up to the RSS feed (https://www.citizencosmos.space/rss)

Special Guest: Riley Edmunds.

PRONEWS
EDIUS X updated to version 10.33; Blackmagic RAW SDK updated

PRONEWS

Sep 28, 2022 · 0:40


"EDIUS X updated to version 10.33; Blackmagic RAW SDK updated." Grass Valley has released the "Version 10.33" update for EDIUS X Pro, available for download from the company's website. Details of the added features are as follows.

Podcasty Aktuality.sk
Dzurinda was not without flaws, but he would be an asset to Slovak politics, says Ivan Šimko

Podcasty Aktuality.sk

Sep 27, 2022 · 34:39


We have a generational instinct to side with those in power and not swim against the current, since "why make trouble for yourself". It is perhaps the price small nations pay for survival that there were always enough people here who collaborated. That is how Ivan Šimko, co-founder of the KDH and the SDK, explains our admiration for Putin's Russia and for strong political leaders. And what would Dzurinda bring to Slovak politics today? Boris Kollár opens a conservative conference while the leaders of OľaNO and SaS cannot stand each other. The Slovak right is fragmented into many parties, and conservatives wage all manner of "culture wars" with liberals. The governing coalition, which originally held a constitutional majority, is now literally fighting for its bare existence in parliament, and the once dominant force of the Slovak right, the Christian Democrats, is now entirely out of parliament. And finally, the former two-time prime minister Mikuláš Dzurinda is knocking on the door of the political scene. What state has the Slovak political right found itself in? Do all those culture wars, sometimes literally over nonsense, have any point at all? How do you govern in a minority and find compromises with ideological opponents? And would Dzurinda's return to politics benefit Slovak democracy? Topics and questions for the former multiple-time minister and co-founder of the KDH, as well as of the SDK and SDKÚ, Ivan Šimko. Democracy, in his words, is "hard graft" and daily toil. You are listening to Ráno Nahlas; Braňo Dobšinský wishes you a pleasant day and peace of mind.

Ráno Nahlas
Dzurinda would still be an asset to Slovak politics today, says his longtime companion Ivan Šimko

Ráno Nahlas

Sep 27, 2022 · 34:39


We have a generational instinct to side with those in power and not swim against the current, since "why make trouble for yourself". It is perhaps the price small nations pay for survival that there were always enough people here who collaborated. That is how Ivan Šimko, co-founder of the KDH and the SDK, explains our admiration for Putin's Russia and for strong political leaders. And what would Dzurinda bring to Slovak politics today? Boris Kollár opens a conservative conference while the leaders of OľaNO and SaS cannot stand each other. The Slovak right is fragmented into many parties, and conservatives wage all manner of "culture wars" with liberals. The governing coalition, which originally held a constitutional majority, is now literally fighting for its bare existence in parliament, and the once dominant force of the Slovak right, the Christian Democrats, is now entirely out of parliament. And finally, the former two-time prime minister Mikuláš Dzurinda is knocking on the door of the political scene. What state has the Slovak political right found itself in? Do all those culture wars, sometimes literally over nonsense, have any point at all? How do you govern in a minority and find compromises with ideological opponents? And would Dzurinda's return to politics benefit Slovak democracy? Topics and questions for the former multiple-time minister and co-founder of the KDH, as well as of the SDK and SDKÚ, Ivan Šimko. Democracy, in his words, is "hard graft" and daily toil. You are listening to Ráno Nahlas; Braňo Dobšinský wishes you a pleasant day and peace of mind.

The Bike Shed
356: The Value of Specialized Vocabulary

The Bike Shed

Sep 27, 2022 · 39:20


Joël chats with guest and fellow thoughtbotter Stephanie Minn about how the idea of specialized vocabulary came up during a discussion of the Ruby Science book. We have all these names for code smells and refactors. Before knowing these names, we often have a vague sense of the ideas, but having a name makes them more real. Names also give us ways to talk precisely about what we mean. However, there is a downside, since not everyone is familiar with the jargon. This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website): frictionless error monitoring and performance insight for your app stack. Stephanie's previous talk (https://www.youtube.com/watch?v=m0dC5RmxcFk) Nonviolent Communication (https://www.goodreads.com/book/show/71730.Nonviolent_Communication) RubyConfMini (http://www.rubyconfmini.com/) Ruby Science book (https://books.thoughtbot.com/books/ruby-science.html) Connascence as a vocabulary to discuss coupling (https://thoughtbot.com/blog/connascence-as-a-vocabulary-to-discuss-coupling) Wired series "5 levels of teaching" (https://www.youtube.com/playlist?list=PLibNZv5Zd0dyCoQ6f4pdXUFnpAIlKgm3N) Transcript: JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. And today, I'm joined with fellow thoughtbotter Stephanie Minn. STEPHANIE: Hey, Joël. JOËL: And together, we're here to share a little bit of what we've learned along the way. Stephanie, what is new in your world? STEPHANIE: Thanks for asking. I am on a new project I just started a few weeks ago, and I'm feeling the new project vibes, I think, kind of scoping out what's going on with the client with the work that they're doing. Trying to make a good impression. 
I think lately I've been in that mode of where can I find some work to do even when I'm just getting onboarded and learning the domain, trying to make those README updates in the areas that are a bit outdated, and yeah, just kind of along for the ride. One thing that has been surprising already is that in my second week, the project pivoted into a different direction than what I was expecting. So that has been kind of exciting and also pretty interesting to see sometimes this stuff happens. I was brought on thinking that we were working on rebuilding the front end in React and TypeScript, pulling out pieces of their 15-year-old Rails monolith. So that was kind of one area that they decided to start with. But recently, they actually decided to pivot to just revamping the look of the existing pages in the Rails app using the same templates and forms. So it's kind of shifted from more greenfield-esque work to figuring out what the heck's going on in this legacy codebase and making it a little bit more modern-looking and cleaning out the cobwebs, I suppose as we find them. JOËL: As a consultant, how do you deal with that kind of dramatic shift in expectations? STEPHANIE: I think it's tough because I necessarily wasn't in those conversations, and so I have to come at it with the understanding that they have a deep knowledge of the business and things that are going on behind the scenes that I don't, and I am coming in kind of with a fresh set of eyes. And it definitely raises some questions for me, right? Like, why now? What were the trade-offs that were made in the decisions? And I hope that as a consultant, I can poke and prod a little bit to help them with the transition and also figuring out its impact on the rest of the team in a way maybe someone who is more familiar with the situation and more tied to the politics of the org might not have that perspective. JOËL: I have a lot of questions here. 
But actually, I'm thinking that onboarding as a topic would probably make a good standalone episode. So maybe we'll have to bring you back for a future episode to talk about how to onboard well and how to deal with surprises. STEPHANIE: Yeah, I think that's a great idea. What about you, Joël? What's going on in your world? JOËL: I'm doing an integration with a third-party gem, and I am really struggling. And I've gotten to the point where I'm reading through the source of the gem to try to figure out some weird errors, some things that come up that are not documented. I think that's a really valuable skill. Ideally, you're not having to bring it out too often, but when you do, being able to drop into the code can really help unblock you or at least make some amount of progress. STEPHANIE: Are you having to dig into the gem's code because you weren't able to find what you needed from the documentation? JOËL: That's correct. I'm getting some cryptic errors where the gem is crashing, and I'm finding some lines in my logs that point back to the gem. And now I'm trying to reconstruct what is happening. Why is it not behaving the way it should be based on the documentation that I read? STEPHANIE: Oh, that's tough. Getting into gem code is uncharted territory. JOËL: It's nice when you're working with an open-source gem because the source is there, and you can just follow the stack trace and go through the code. Sometimes it's long and tedious, but it generally gives you results. It definitely is intimidating. STEPHANIE: Yeah. When you're facing this kind of problem where you have no idea where the light at the end of the tunnel might be, how long do you spend with it? At what point do you take away with what you've learned and try to figure out a different approach? JOËL: That's a good observation because digging through the source of a gem can definitely be a time sink. I think how much time I want to budget depends on a variety of other factors. 
How big of a problem is this? If I can't figure it out through reading the source, do I have alternate approaches to debug the problem, such as asking for help, opening an issue, reaching out to somebody else who's used it, complaining about it on The Bike Shed and hoping someone responds with an answer? There are other options that I can do that might leave me blocked but maybe eventually give me results. The advantage with reading the source is that you're at least feeling like you're making progress. STEPHANIE: Nice. Well, I wish you luck on that journey. [laughs] It sounds pretty tough. I'm sure that you'll get to one of those solutions and figure out how to get unblocked. JOËL: I hope so. I'm pursuing a few strategies in tandem. So I've asked for help, but I'm also reading the source code. And between the two of those, I hope I'll get to a good solution. So, Stephanie, last time you were on the show, you talked about your experience creating talk proposals for RubyConf. Have you heard back from them since then? STEPHANIE: I have. I will be speaking at RubyConf Mini this year. And I'm really excited because this will be my first IRL conference talk. So last time, I recorded my talk for RubyConf, and this time I will be up on a stage in front of real people. JOËL: That's really exciting. Congratulations. STEPHANIE: Thanks. JOËL: What is the topic of your talk? STEPHANIE: I will be talking about pair programming and specifically pair programming through the lens of a framework called Nonviolent Communication, which is a framework I learned about through a friend who recommended the canonical book on it. And it's a self-help book, to be totally frank, but I found it so illuminating. It really changed how I communicated in my relationships in my personal life. And the more time I spent with it, the more I realized that it would be a great application in pair programming because it's so collaborative and intimate. 
I've experienced the highs and lows of pair programming. You can feel so good when you are super connected with your pair. You make a lot of progress. You meet whatever professional goals that you might be meeting, and you have someone along for the ride the whole time. It can be just so rewarding. But it can also be really challenging when you are having more of those interpersonal conflicts, and navigating that can be tough. And so I'm really excited to share this style of communication that might help others who want to take their pair programming to the next level and get the most out of that experience no matter who they're pairing with. JOËL: I'm excited to hear this talk because pair programming has always been an important part of what we do at thoughtbot. And I think now that we're remote, we do a lot of remote pair programming. And the interpersonal interactions are a little bit different there than when you're physically in a room with each other, or you have to maybe pay a little bit more attention to them. I'm really excited to hear that. I think that's going to be really useful, not just for me but for a lot of the audience who are there. STEPHANIE: Thanks. Joël, you have a talk accepted at RubyConf Mini too. JOËL: Yes, I also had a talk accepted titled Teaching Ruby to Count. And it's going to be all about series, enumerators, enumerables, and ranges in Ruby and the cool things that you can do with them. So I'm really excited to share about that. I've done some deep dives on these topics, and I'm excited to share that with the broader Ruby community. STEPHANIE: Nice. I'm really excited to hear more about it. JOËL: Did you submit more than one proposal this year? STEPHANIE: This year, I didn't. But I would love to get to a point where I have a lot of content on the backburner and can pull it out when CFP season rolls around to just have some more options. Yeah, I have all these ideas in my head. 
I think we talked about how we come up with content in our last episode. But yeah, having a content bank sounds really nice for the future, so maybe when that season rolls around, it is a lot easier to get the ball rolling on submitting. What about you? Did you submit more than one? JOËL: I submitted two, but this is the one I was most excited about. I think the other idea was maybe a little bit more polished, but this one was a newer one I came up with towards the end of the CFP period. And I was like, ooh, I'm excited about this. I've just done a deep dive on enumerators, and I think there are some cool things to share with the community. And so that's what I'm excited to share about, and maybe that came through the proposal because that is what the committee picked. So I'm super happy to be talking about that. STEPHANIE: Nice. Yeah, I was just thinking the same, that your excitement about it was probably palpable to the committee. JOËL: For any of our viewers who are interested in coming to watch the talks by Stephanie and myself and plenty of other gifted speakers, this will be at RubyConf Mini in Providence, Rhode Island, from November 15th to 17th. And if you can't make it in person, the videos will be published online early in 2023. And we'll definitely share the links to that when they come out. So as we mentioned in your last episode, thoughtbot has a book club where we've been discussing the book Ruby Science, which goes through a lot of code smells and talks about some various refactoring patterns that can be used to fix them. Most recently, we looked at a code smell that has a very evocative name; it's called shotgun surgery. STEPHANIE: Yeah, it's a very visceral name for sure. I think that is what prompted this next topic that we're about to discuss because someone in the book club, another thoughtboter, mentioned that they were learning this term for the first time. 
But it made a lot of sense to them because they had experienced shotgun surgery and didn't have the term for it previously. Joël, do you mind giving the listeners a recap of what shotgun surgery is? JOËL: So shotgun surgery is when in order to make a change to a codebase, you have to make a bunch of little changes in a lot of different files, a lot of different locations. And that means that all of these little pieces are weirdly coupled to each other in a way that to make one change, you have to make a bunch of little changes in a lot of places. And that results in a very high churn diff, and that's a common symptom of this problem. STEPHANIE: Nice. Thanks for explaining. MID-ROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half. So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. 
From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today! STEPHANIE: I think I came away from that conversation thinking about the idea of learning new terms, especially technical ones, and the power that learning those terms can give you as a developer, especially when you're communicating with other people on your team. JOËL: So you mentioned the value in communication there. Some terms have a very precise meaning, and so that allows you to communicate a very specific idea. How do you balance having some jargon and some terminology that allows you to speak very precisely versus having to learn all the terms? Because the more narrow the term is, the more terms you need to talk about all the different things. STEPHANIE: That's a great question. I don't know if I have a great answer because I think I'm still on my journey. I have always noticed when developers I work with have that really precise, articulate technical vocabulary, probably because I don't. I am constantly referring to functions or classes as things, like, that thingy over there talks to this thing over here, and then does something. [laughs] And I think it's because I maybe didn't always have that exposure to very precise technical vocabulary. And so I had an understanding of how things worked in my head, but I couldn't necessarily map that to words. And I'm also from California, so, I don't know, maybe some of that is showing through a little bit. [laughs] But I've been trying to incorporate more technical terms when I speak and also in written form, too, such as in code review, because I want to be able to communicate more clearly my intentions and leave less room for ambiguity. 
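The shotgun-surgery smell defined a moment ago is easiest to see in code. Here is a minimal Ruby sketch; the Invoice, Receipt, and Money classes are invented for illustration, not taken from the Ruby Science book:

```ruby
# Before: the "$%.2f" formatting knowledge is duplicated across classes,
# so changing the currency display means a scattering of small edits in
# many files -- the shotgun-surgery smell.
class Invoice
  def initialize(total)
    @total = total
  end

  def summary
    "Total: $#{format('%.2f', @total)}"
  end
end

class Receipt
  def initialize(total)
    @total = total
  end

  def footer
    "Paid: $#{format('%.2f', @total)}"
  end
end

# After: the formatting knowledge lives in one object, so a currency
# change now touches a single class.
class Money
  def initialize(amount)
    @amount = amount
  end

  def to_s
    "$#{format('%.2f', @amount)}"
  end
end

class BetterInvoice
  def initialize(total)
    @total = Money.new(total)
  end

  def summary
    "Total: #{@total}"
  end
end
```

The "before" half is where the high-churn diff comes from: one formatting decision forces edits to every class that repeats it, while the "after" half confines that decision to a single file.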
Sometimes I've noticed when you do speak more casually about code, turns out other people can interpret it in different ways. And if you are using, like you said, that narrower technical term for it, that leaves less room for misunderstanding. But in the same vein, I think a lot of people are like me, where they might not know the technical terms for things. And when you start using a lot of jargon like that, it can be a bit exclusive to folks earlier in their career, especially since software as an industry we have folks from all different backgrounds. We don't necessarily have the expectation of assured formal training. We want to be inclusive of people who came to this career from different places and make sure that we are speaking the same language. And so it's been top of mind for me thinking about how we can balance those two things. I don't know, what do you think? JOËL: I want to speak to some of the value of precision first because I think there are a few different points. We have the value of precision, then we have the difficulty of learning vocabulary, and how are we inclusive of everyone. But on the topic of precision, I don't know if you saw not too long ago, another fellow thoughtboter, Matheus Sales, published an article on the thoughtbot blog about the concept of connascence. And he introduces this as a new set of vocabulary, not vocabulary that he's created but a vocabulary that is out there that not that many developers are aware of for different ways to talk about coupling. So it's easy in a pull request to just say, "Oh, well, that thing looks coupled. How about this other way?" And then I respond, "Well, that's also coupled in a different way." And then we just go back and forth like, "Well, mine is more coupled than yours is," or whatever. And connascence provides a more precise, narrow vocabulary to talk about the different ways that things are coupled and which ones are more coupled than others. 
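The connascence vocabulary from Matheus Sales's post can be made concrete with a small Ruby sketch; the shipping methods below are hypothetical examples, contrasting a stronger form of coupling (connascence of position) with a weaker one (connascence of name):

```ruby
# Connascence of position: every call site is coupled to the argument
# order; swapping the two parameters breaks callers silently.
def ship_positional(address, priority)
  "Shipping to #{address} (#{priority})"
end

# Connascence of name: call sites only depend on the parameter names,
# a weaker and easier-to-refactor form of coupling.
def ship_named(address:, priority:)
  "Shipping to #{address} (#{priority})"
end

ship_positional("12 Main St", "express")
ship_named(priority: "express", address: "12 Main St") # order no longer matters
```

Refactoring call sites from positional to keyword arguments moves them down the connascence ladder, which is one precise way of saying "this version is less coupled than that one" in a pull request.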
And so it allows us to break out of maybe those unproductive discussions because now we can talk about things in a more granular way. STEPHANIE: Yeah, I loved that blog post. It was really exciting for me to pick up a new term to describe something that I had experienced, or seen in codebases, or felt the pain of, and be able to describe it more accurately. I'm curious, Joël, if you were to use that term next time, how would you make sure that folks also have the same level of familiarity with it? JOËL: I think on a pull request, I would link to Matheus' article depending on...I might give a little bit of context in a comment. So I might say something like, "This area here is coupled. Here's a suggested refactor. It's also coupled but in a different way. It's because we've moved up this hierarchy of connascence from, you know, connascence of names to some other form" (I don't have them all memorized.) and then link to the article. And hopefully, that becomes the start of a productive discussion. But yeah, having the resources you can link to people is great. And that's one of the nice things about text communication on a pull request is that you can just link to external resources that people can find helpful. STEPHANIE: To continue talking about the value of precision and specialized vocabulary, Joël, I think you are a very articulate communicator. And I'm curious from your perspective if you have always been this way, if you've always wanted to collect technical terms to describe exactly what you want to convey, or if this was a bit of a journey for you to get to this level of clear communication in your technical speaking and writing. JOËL: It's definitely been a journey. I think there are sort of two components to this; one is being able to communicate clearly to others; make sure that they understand what you're talking about. So for that, it's really important to be able to put yourself in somebody else's shoes. 
So when I'm building a conference talk or writing up a blog post, I will try to read it or go through my slide deck and try to pretend that I am the audience. And then I ask myself the questions: where do I get confused? Where am I going to have questions? Maybe even where am I going to roll my eyes a little bit and be like, eh, I didn't agree with that leap of logic there; where are you going? And then shift back in author mode and say, how can I address these? How can I make my content speak to you in an area where maybe you disagreed, or you were confused? So I kind of jump between moving from the audience seat to back to the author and try to make that material as much as possible resonate with those people. STEPHANIE: Do you do that in more real-time communication, such as in meetings or in pairing? JOËL: I think that's a little bit harder to do. And then it's maybe a little bit more of asking directly, either pausing to let people interject, or you can ask the question directly and say, "Are you familiar with this term?" That can also sometimes be tricky to manage because you don't want to make it sound like you think they don't know anything. But you can also make it sound really natural in a conversation where you're like, "Oh, we're going to do this thing with a strategy pattern. Have you seen a strategy pattern before? Are you familiar with this? Great, let's keep moving." And if not, maybe it's like, "Hey, let's take a few minutes to talk about what the strategy pattern means." STEPHANIE: I think you are really great at asking the audience about their level of familiarity with the content, especially in book club. I have definitely experienced just as a developer pairing, or in meetings, or whatnot times when people don't pause and ask. And usually, I have to muster up the courage to interrupt and ask, "Hey, what is X, Y, and Z?" And that is tough sometimes. 
I am certainly comfortable with it in a space where there is trust developed in terms of I don't feel worried that people might question my level of familiarity or experience. And I can very enthusiastically say, "Hey, I don't know what this means. Could you please explain it?" But sometimes it can be a little tough when you might not have that relationship with someone, or you haven't talked about it, talked about assumptions about your knowledge or experience level upfront. And so I have found that to be a really good way to build that trust to make sure that we aren't excluding folks is to just talk about some of that stuff, even before we start pairing or before a meeting. And that can really help with some of those miscommunications that might come down later in the process. JOËL: It's interesting that you bring up miscommunication because I think sometimes, even though certain jargon can be very precise, sometimes people will not use it to mean exactly what its dictionary definition is. And so sometimes two people are using the same term, and you're not meaning quite the same thing. And so sometimes I'll be pairing with someone, and I'll have to sort of pause and say, "Hey, wait a minute, you're using the term adapter in a certain way that seems to be a little bit different than the way I'm using it. Can you maybe tell me what your personal definition is? And I'll tell you mine, and we can reconcile those two together." Sometimes that can also feel like a situation where maybe I'm hazy on the topic. Like, I have a vague sense of it, and maybe it does or does not align with the way the other person is using it. And so that's an opportunity for me to ask them to define the term for me without completely having to say, "I have no idea what this term is. Please, oh, great sage, explain the meaning." STEPHANIE: Are there times that you feel more or less comfortable doing that kind of reset? JOËL: I think sometimes the fear is in breaking flow. 
And so you're doing a thing, and then somebody is trying to explain something, and you don't want to break out of that. Or you're trying to explain something, and you have to decide: is it worth making sure to explain a term, or do you keep moving? So I think that is a big concern. And there is just the interpersonal concern: if there is less trust, do I want to put myself out there? Might somebody else not feel comfortable if you ask them to explain a term? Maybe they're using it wrong. It's not always good in a pairing situation to just come up and say, "Hey, that's not technically the adapter pattern; you're wrong. Let me pull out The Gang of Four book. You see on page 54..." That's not productive. STEPHANIE: Yeah, for sure. JOËL: So a lot of it, I think...and maybe this ties into your topic of communication while pairing. But ideally, you're working constructively with a person. And so debating definitions is not generally productive, but asking someone, "What do you mean when you say this?" I find is a very helpful way to lead into that type of conversation. STEPHANIE: Yeah, that's a great strategy because you're coming from a place of curiosity rather than a place of this is my definition, and it's the right definition, and so, therefore, you are wrong. [laughs] JOËL: It's interesting the place that jargon occupies in our imagination of expertise. If you've ever seen a movie where they're trying to show that somebody is technically competent, they usually demonstrate it by having them spout out a long chain of jargon, and that makes them sound smart. But I think, to a certain extent, we believe it in the industry as well. If somebody can use a lot of terms and talk about a system using very specific jargon, we tend to think that they're smart or at least look up to them a little bit. 
STEPHANIE: Yeah, which I think isn't always the best assumption because I've certainly worked with folks who did throw out a lot of jargon but weren't necessarily, like you were saying, using it the way that I understood it, and that made communicating with them challenging. I also think what true expertise really is is having the knowledge that when you use a jargony term that not everyone might be familiar with it, having the awareness to pause and ask someone how they're doing with the vocabulary and be able to tailor how you explain that term to that other person. I think that demonstrates a really deep level of understanding that doesn't get enough credit. JOËL: I 100% agree. Jargon, vocabulary, it's a means to an end, not an end in and of itself. So the goal is to communicate clearly to others and maybe to help yourself in your own learning. And if you're not accomplishing those goals, then what's the point? I guess maybe there is another personal goal which is to sound smart, but that's not really a good goal, [laughs] especially not when the way you do that is by confusing everybody else in the room because they don't understand you, to make you try to feel smarter than them. Like, that's bad communication. STEPHANIE: Yeah, for sure. I've definitely experienced listening to someone explain something and have to really think very hard about every single word that they're saying because they were using terms that are just less common. And so, in my brain, I had to map them to things that made sense to me, and things that I was familiar with that were the same concepts. Like, I was experienced enough to have that shared understanding, but just the words that they used required another layer of brain work. Maybe we could have found a happy medium between them communicating the way that they expressed themselves the best with my ability to understand easily and quickly so that we could get on the same page. 
JOËL: So you mentioned that there are sometimes situations where you're aware of a particular concept, but maybe you're just not aware that the term that somebody else is using maps to this concept you already understand. And I know that for me, oftentimes, being able to give a name to something that I understand is an incredibly powerful thing. Even though I already know the idea of passing objects to another object in this particular configuration, or of wrapping things in some way or whatever the thing that I'm trying to do, all of a sudden, instead of it being a more nebulous concept in my head or a list of 10 steps or something like that, now I have one thing I can just point to and say it is this. So that's been really helpful for me in my learning to be able to take a label and put it on something that I already know. And somehow, it cements the idea in my head and also then allows me to build on it to the next things that I want to learn. STEPHANIE: Yeah, absolutely. It's really exciting when you're able to have that light-bulb moment when you have that precise term, or you learn that precise term for something that you have been wrestling with or experiencing for a while now. I was just reminded of reading documentation. I have a very vivid memory of the first time I read; I don't know, even the Rails official docs, all of these terms that I didn't understand at the time. But then once I started digging into it, exploring, and just doing the work, when I revisited those docs, I could understand them a lot more comprehensively because I had experience with the things (There I am using things again.) [laughs] and seeing the terms for them and that helping solidify my understanding. JOËL: I'm curious, in your personal learning, do you find it easier to encounter a term first and then learn what it means, or do the reverse, learn the concept first and then cap it off by being able to give it a name? STEPHANIE: That's a good question. 
I think the latter because I've certainly spent a lot of time Googling terms and then reading whatever first search results came up and being like, okay, I think I got it, and then Googling the same term like two weeks later because I didn't really get it the first time. But whenever I come across a term for a concept I already am familiar with, it is like, oh yes, uh-huh! That really ends up sticking with me. Matheus Sales' blog post that you mentioned earlier is a really great example of that term really standing out to me because I didn't know it at the time, but I suppose was seeking out something to describe the concept of connascence. So that was really cool and really memorable. What about you? Do you have a preferred way of learning new technical terms? JOËL: I think there can be value to both approaches. But I'm with you; I think it generally is easier to add a name to a concept you already understand. And I experienced this pretty dramatically when I tried to get into functional programming. So several years ago, I tried to learn the language Haskell which is notorious for being difficult to learn and very abstract and technical. And the way that the Haskell community typically tries to teach things is learn the fundamentals first, very top-down, learn the theory, and then, later on, you can do things in practice. So it's like before you can write an actual program, let us teach you about applicatives, and monads, and all these things that are really difficult to learn. And they're kind of scary technical terms. So I choked out partway through, gave up on Haskell. A year later, got back into it, tried it again, choked out again. And then, eventually, I pivoted. I started getting into a similar language called Elm, which is similar syntax but compiles to JavaScript for the front end. And that community has the opposite philosophy when it comes to teaching. They want to get you productive as soon as possible. 
And you can learn some of the theory as you go along. And so with that, I felt like I was learning something new all the time and being productive as well, like, constantly adding new features to things in an application, and that's really exciting. And what's really beautiful there is that you eventually learn a lot of the same concepts that you would learn in something like Haskell because the two languages share a lot of similar concepts. But instead of saying first, you need to learn about monads as a general concept, and then you can build a program; Elm says, build a bunch of programs first. We'll show you the basic syntax. And after you've built a bunch of them, you'll start realizing, wait a minute, these things all kind of look alike. There are patterns I'm starting to recognize. And then you can just point to that and say, hey, that pattern that you started recognizing, and you see a bunch of times that's monad. You've known it all along, and now you can put a label on it. And you've gotten there. And so that's the way that I learned those concepts. And that was much easier for me than the approach of trying to learn the abstract concept first. STEPHANIE: Monad is literally the word I just Googled earlier this week and still have a very, very hazy understanding of. So maybe I'll have to go learn Elm now. [chuckles] JOËL: I recommend a lot of people to use that as their entry point into the statically typed functional programming world, just because of how much more shallow the learning curve is compared to alternatives. And I think a lot of it has to do with that approach of saying, let's get you productive quickly. Let's get you doing things. And eventually, patterns will emerge, and you can put names on them later. But we'll not make you learn all of the theory upfront, all the jargon. 
STEPHANIE: Now that you do understand all the technical jargon around functional programming, how do you approach communicating about it when you do talk about Elm or those kinds of concepts? JOËL: A lot of it depends on your audience. If you have an audience that already knows these concepts, then being able to use those names is really valuable because it's a shortcut. You can just say, oh yeah, this thing is a monad, and so, therefore, we can do these actions with it. And everybody in the audience just already knows monads have these properties. That's wonderful. Now I can follow to step two instead of having to have a slow build-up. So if I'm writing an article or giving a talk, or even just having a conversation with someone, if I knew they didn't know the term, I would have to really build up to it, and maybe I wouldn't introduce the term at all. I would just talk about some of the properties that are interesting for the purpose of this particular demo. But I would probably have to work up to it and say, "See, we have this simpler thing, and then this more complex thing. But here are the problems that we have with it. Here's a change we can make to our code that will make it work." And you walk through the process without necessarily getting into all of the theory. But with somebody else who did know, I could just say, "Oh, what we need here is monad." And they look at me, and they're like, "Oh, of course," and then we do it. STEPHANIE: What you just described reminds me a lot of the WIRED Video Series, five levels of teaching where they have an expert come in and teach the same concept to different-aged people starting from young kids to an expert in their field as well. 
And I really liked how you answered that question just with the awareness that you tailor how you explain something to your audience because we could all benefit from just having that intentionality when we communicate in order to get the most value out of our interactions, knowledge sharing, and collaborative work. JOËL: I think a theme that underlies a lot of what you and I have talked about today is just that communication, good communication is the fundamental value that we're going for here. And jargon and vocabulary can be something that empowers that, but, used poorly, it can also defeat that purpose. And most importantly, good communication starts with the audience, not with you. So when you work back from the audience, you can use the appropriate vocabulary and words that serve everybody and your ultimate goal of communicating. STEPHANIE: I love that. JOËL: So, Stephanie, thank you so much for joining us on The Bike Shed today. And as we wrap up, I wanted to ask you, what is a really fun piece of vocabulary that you've learned that you might want to share with the audience? STEPHANIE: So lately, I learned the term WYSIWYG, which stands for What You See Is What You Get, to describe text editing software that lets you see and edit the content as it would actually be displayed. So that was a fun little term that someone brought up when we were pairing and looking at some text editing code. And I was really excited because it sounds fun, and also, now I had just an opportunity to say it on a podcast. [laughs] JOËL: It's amazing that an acronym that is that long has enough vowels in the right places that you can just pronounce it. STEPHANIE: Oh yeah. JOËL: WYSIWYG. That's a fun word to say. STEPHANIE: 100%. I also try to pronounce all acronyms, regardless of how pronounceable they actually are. [laughs] Thanks for asking. JOËL: With that, shall we wrap up? STEPHANIE: Let's wrap up. JOËL: The show notes for this episode can be found at bikeshed.fm. 
This show is produced and edited by Mandy Moore. If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. If you have any feedback, you can reach us at @_bikeshed, or reach me at @joelquen on Twitter, or at hosts@bikeshed.fm via email. Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeeeeee!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

Syntax - Tasty Web Development Treats
Supper Club × Arc Browser with Hursh Agrawal

Syntax - Tasty Web Development Treats

Play Episode Listen Later Sep 23, 2022 46:04


In this supper club episode of Syntax, Wes and Scott talk with Hursh Agrawal of The Browser Company about Scott's favorite browser, Arc. How do you make a browser in 2022? Will there be a Windows version? And who is the target market for Arc? FireHydrant - Sponsor Incidents are hard. Managing them shouldn't be. FireHydrant makes it easy for anyone in your organization to respond to incidents efficiently and consistently. Intuitive, guided workflows provide turn-by-turn navigation for incident response, while thoughtful prompts and powerful integrations capture all of your incident data to drive useful retros and actionable analytics. Gatsby - Sponsor Today's episode was sponsored by Gatsby, the fastest frontend for the headless web. Gatsby is the framework of choice for content-rich sites backed by a headless CMS, as its GraphQL data layer makes it straightforward to source website content from anywhere. Gatsby's opinionated, React-based framework makes the hardest parts of building a performant website simpler. Visit Gatsby.dev/Syntax to get your first Gatsby site up in minutes and experience the speed. ⚡️ Auth0 - Sponsor Auth0 is the easiest way for developers to add authentication and secure their applications. It provides features like user management and multi-factor authentication, and you can even let users log in with device biometrics, such as their fingerprint. Not to mention, Auth0 has SDKs for your favorite frameworks like React, Next.js, and Node/Express. Make sure to sign up for a free account and give Auth0 a try with the link below. https://a0.to/syntax
Show Notes
00:34 Welcome - HurshAgrawal.com, @Hursh
02:53 What is Arc and why create another browser? - Arc browser, The Browser Company
05:36 What is different about Arc?
08:20 Who is the target market for Arc?
09:30 Sponsor: Auth0
10:39 How do you make a browser?
13:38 Will there be a Windows version of Arc?
15:57 Where did the CMD-T functionality come from?
19:27 Sponsor: FireHydrant
20:39 How do you build on top of the Chrome engine?
24:17 How does The Browser Company make money?
27:26 Do you mess with the user agent?
29:05 Why do you require account setup to use Arc?
32:58 Sponsor: Gatsby
33:59 How did you come up with your theming engine?
36:15 Supper Club Questions - Warp, Hacker News, Changelog, Every, Ben Thompson's Bundling and Unbundling
42:59 SIIIIICK ××× PIIIICKS ××× - Jabra speakerphone
Tweet us your tasty treats: Scott's Instagram, LevelUpTutorials Instagram, Wes' Instagram, Wes' Twitter, Wes' Facebook, Scott's Twitter. Make sure to include @SyntaxFM in your tweets.

Aktien, Börse & Co. - Der SdK Anlegerpodcast
Reina Becker's Fight for Tax Fairness | Finanzfrauen

Aktien, Börse & Co. - Der SdK Anlegerpodcast

Play Episode Listen Later Sep 23, 2022 8:05


Tax advisor Reina Becker from Westerstede sees a clear need for reform in German tax law. In her view, families, among others, are heavily disadvantaged by it. In this conversation, Carola Rinker of the SdK asks about the current state of the legal proceedings and how young people can start saving for retirement now. ⌚ Timestamps: (0:00) Intro (0:55) The fight for tax fairness (3:47) Media presence (4:48) Retirement-planning advice for young people (5:58) The current state of the proceedings

TerraSpaces
Building on Cosmos SDK

TerraSpaces

Play Episode Listen Later Sep 21, 2022 44:33


Today on the Ether we have Carbon hosting a space with Edward Sam discussing building on the Cosmos SDK. Recorded on September 20th 2022. If you enjoy the music at the end of the episodes, you can find the albums streaming on Spotify, and the rest of your favorite streaming platforms. Check out Project Survival, Virus Diaries, and Plan B wherever you get your music. Thank you to everyone in the community who supports TerraSpaces.

NTN » The DawgHouse - Motorcycling news, racing and analysis
The DawgHouse Motorcycle Racing Radio 690: What’s up with Sean Dylan Kelly ????

NTN » The DawgHouse - Motorcycling news, racing and analysis

Play Episode Listen Later Sep 21, 2022 44:48


A completely ad-lib conversation about SDK and why he is struggling. The future of American racers overseas. The highs and lows of Moto2. Marc Marquez, the one-man demolition derby. Oh yeah, and once again Phil was right; he called it months ago: Roczen and Honda are parting ways.

Decent Crypto Podcast
Deep Dive | Cosmos Ecosystem (Tendermint, IBC, Cosmos SDK, and more)

Decent Crypto Podcast

Play Episode Listen Later Sep 21, 2022 63:43


Join Matt and Karan as they break down the Cosmos Ecosystem! We discuss its history, founding, technical details, and fundamentals in this wide-ranging discussion. Be sure to like and subscribe if you like our content! Follow us on Twitter and everywhere you get your podcasts below: Twitter: https://twitter.com/decentcryptopod YouTube: https://www.youtube.com/channel/UCWQQLP0GR5Hhk2VE5wW1-pg Matthew Blumberg: https://mobile.twitter.com/matt_blumberg Karan Karia: https://mobile.twitter.com/karankaria_ Podcast Links: https://linktr.ee/DecentCryptoPodcast Timestamps --- Send in a voice message: https://anchor.fm/decentcryptopodcast/message

The Bike Shed
355: Test Performance

The Bike Shed

Play Episode Listen Later Sep 20, 2022 42:44


Guest Geoff Harcourt, CTO of CommonLit, joins Joël to talk about a thing that comes up a lot with clients: the performance of their test suite. It's often a concern because, until a test suite becomes a problem, people tend not to treat it very well, and then they ask for help making it faster. Geoff shares how he handles a scenario like this at CommonLit. This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website): frictionless error monitoring and performance insight for your app stack. Geoff Harcourt (https://twitter.com/geoffharcourt) CommonLit (https://www.commonlit.org/) Cuprite driver (https://cuprite.rubycdp.com/) Chrome DevTools Protocol (CDP) (https://chromedevtools.github.io/devtools-protocol/) Factory Doctor (https://test-prof.evilmartians.io/#/profilers/factory_doctor) Joël's RailsConf talk (https://www.youtube.com/watch?v=LOlG4kqfwcg) Formal Methods (https://www.hillelwayne.com/post/formally-modeling-migrations/) Rails multi-database support (https://guides.rubyonrails.org/active_record_multiple_databases.html) Knapsack pro (https://knapsackpro.com/) Prior episode with Eebs (https://www.bikeshed.fm/353) Shopify article on skipping specs (https://shopify.engineering/spark-joy-by-running-fewer-tests) Transcript: JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. And today, I'm joined by Geoff Harcourt, who is the CTO of CommonLit. GEOFF: Hi, Joël. JOËL: And together, we're here to share a little bit of what we've learned along the way. Geoff, can you briefly tell us what is CommonLit? What do you do? GEOFF: CommonLit is a 501(c)(3) non-profit that delivers a literacy curriculum in English and Spanish to millions of students around the world. Most of our tools are free. 
So we take a lot of pride in delivering great tools to teachers and students who need them the most. JOËL: And what does your role as CTO look like there? GEOFF: So we have a small engineering team. There are nine of us, and we run a Rails monolith. I'd say a fair amount of the time; I'm hands down in the code. But I also do the things that an engineering head has to do, so working with vendors, and figuring out infrastructure, and hiring, and things like that. JOËL: So that's quite a variety of things that you have to do. What is new in your world? What's something that you've encountered recently that's been fun or interesting? GEOFF: It's the start of the school year in America, so traffic has gone from a very tiny amount over the summer to almost the highest load that we'll encounter all year. So we're at a new hosting provider this fall. So we're watching our infrastructure and keeping an eye on it. The analogy that we've been using to describe this is like when you set up a bunch of plumbing, it looks like it all works, but until you really pump water through it, you don't see if there are any leaks. So things are in good shape right now, but it's a very exciting time of year for us. JOËL: Have you ever done some actual plumbing yourself? GEOFF: I am very, very bad at home repair. But I have fixed a toilet or two. I've installed a water filter but nothing else. What about you? JOËL: I've done a little bit of it when I was younger with my dad. Like, I actually welded copper pipes and that kind of thing. GEOFF: Oh, that's amazing. That's cool. Nice. JOËL: So I've definitely felt that thing where you turn the water source back on, and it's like, huh, let's see, is this joint going to leak, or are we good? GEOFF: Yeah, they don't have CI for plumbing, right? JOËL: [laughs] You know, test it in production, right? GEOFF: Yeah. [laughs] So we're really watching right now traffic starting to rise as students and teachers are coming back. 
And we're also figuring out all kinds of things that we want to do to do better monitoring of our application, so some of this is watching metrics to see if things happen. But some of this is also doing some simulated user activity after we do deploys. So we're using some automated browsers with Cypress to log into our application and do some user flows, and then report back on the results. JOËL: So is this kind of like a feature test in CI, except that you're running it in production? GEOFF: Yeah. Smoke test is the word that we've settled on for it, but we run it against our production server every time we deploy. And it's a small suite. It's nowhere as big as our big Capybara suite that we run in CI, but we're trying to get feedback in less than six minutes. That's sort of the goal. In addition to running tests, we also take screenshots with a tool called Percy, and that's a visual regression testing tool. So we get to see the screenshots, and if they differ by more than one pixel, we get a ping that lets us know that maybe our CSS has moved around or something like that. JOËL: Has that caught some visual bugs for you? GEOFF: Definitely. The state of CSS at CommonLit was very messy when I arrived, and it's gotten better, but it still definitely needs some love. There are some false positives, but it's been really, really nice to be able to see visual changes on our production pages and then be able to approve them or know that there's something we have to go back and fix. JOËL: I'm curious, for this smoke test suite, how long does it take to run? GEOFF: We run it in parallel. It runs on Buildkite, which is the same tool that we use to orchestrate our CI, and the longest test takes about five minutes. It signs in as a teacher, creates an account. It creates a class; it invites the student to that class. It then logs out, logs in as that student creates the student account, signs in as the student, joins the class. 
It then assigns a lesson to the student; then the student goes and takes the lesson. And then, when the student submits the lesson, the test is over. And that confirms all of the most critical flows, the ones we would want someone to drop what they were doing to fix if broken: account creation, class creation, lesson creation, and students taking a lesson. JOËL: So you're compressing the first few weeks of school into five minutes. GEOFF: Yes. And I pity the school that has thousands of fake teachers, all named Aaron McCarronson, at the school. JOËL: [laughs] GEOFF: But we go through and delete that data every once in a while. But we have a marketer who just started at CommonLit maybe a few weeks ago, and she thought that someone was spamming our signup form because she said, "I see hundreds of teachers named Aaron McCarronson in our user list." JOËL: You had to admit that you were the spammer? GEOFF: Yes, I did. [laughs] We now have some controls to filter those people out of reports. But it's always funny when you look at the list, and you see all these fake people there. JOËL: Do you have any rate limiting on your site? GEOFF: Yeah, we do quite a bit of it, actually. Some of it we do through Cloudflare. We have tools that limit a certain flow, like people trying credential stuffing on our user sign-in forms. But we also do some further stuff to prevent people from hitting key endpoints. We use Rack::Attack, which is a really nice framework. Have you had to do that in client work with clients setting that stuff up? JOËL: I've used Rack::Attack before. GEOFF: Yeah, it's got a reasonably nice interface that you can work with. And I always worry about accidentally setting those things up to be too sensitive, and then you get lots of stuff back. One issue that we sometimes find is that lots of kids at the same school are sharing an IP address. So that's not the thing that we want to use for rate limiting. 
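A throttle keyed on something other than the shared school IP might look like this in a Rack::Attack initializer. This is a sketch, not CommonLit's actual rules: the paths, limits, and the email-based discriminator are assumptions for illustration.

```ruby
# config/initializers/rack_attack.rb -- illustrative only
class Rack::Attack
  # Slow down credential stuffing on the sign-in form. Keying on the
  # submitted email rather than the IP avoids lumping together a whole
  # school behind one shared address.
  throttle("logins/email", limit: 5, period: 60) do |req|
    if req.path == "/session" && req.post?
      req.params["email"].to_s.downcase.presence
    end
  end

  # A coarser per-IP limit for everything else.
  throttle("requests/ip", limit: 300, period: 5.minutes) do |req|
    req.ip
  end
end
```

Returning `nil` from a throttle block (here, for non-login requests or a blank email) exempts the request from that rule.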
We want to use some other criteria for rate limiting. JOËL: Right, right. Do you ever find that you rate limit your smoke tests? Or have you had to bypass the rate limiting in the smoke tests? GEOFF: Our smoke tests bypass our rate limiting and our bot detection. So they've got some fingerprints they use to bypass that. JOËL: That must have been an interesting day at the office. GEOFF: Yes. [laughter] With all of these things, I think it's a big challenge to figure out, and it's similar when you're making tests for development, how to make tests that are high signal. So if a test is failing really frequently, even if it's testing something that's worthwhile, if people start ignoring it, then it stops having value as a piece of signal. So we've invested a ton of time in making our test suite as reliable as possible, but you sometimes do have these things that just require a change. I've become a really big fan of...there's a Ruby driver for Capybara called Cuprite, and it doesn't control chrome with Chrome Driver or with Selenium. It controls it with the Chrome DevTools protocol, so it's like a direct connection into the browser. And we find that it's very, very fast and very, very reliable. So we saw that our Capybara specs got significantly more reliable when we started using this as our driver. JOËL: Is this because it's not actually moving the mouse around and clicking but instead issuing commands in the background? GEOFF: Yeah. My understanding of this is a little bit hazy. But I think that Selenium and ChromeDriver are communicating over a network pipe, and sometimes that network pipe is a little bit lossy. And so it results in asynchronous commands where maybe you don't get the feedback back after something happens. And CDP is what Chrome's team and I think what Puppeteer uses to control things directly. So it's great. And you can even do things with it. Like, you can simulate different time zone for a user almost natively. 
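Switching Capybara over to Cuprite is a small configuration change. A sketch, assuming the cuprite gem; the window size and timeout values here are illustrative, not CommonLit's settings.

```ruby
# spec/support/cuprite.rb -- illustrative configuration
require "capybara/cuprite"

Capybara.register_driver(:cuprite) do |app|
  Capybara::Cuprite::Driver.new(
    app,
    window_size: [1440, 900],
    timeout: 10,
    # Run headless in CI; set to false locally to watch the browser.
    headless: true
  )
end

Capybara.javascript_driver = :cuprite
```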
You can speed up or slow down the traveling of time and the direction of time in the browser and all kinds of things like that. You can flip it into mobile mode so that the device reports that it's a touch browser, even though it's not. We have a set of mobile specs where we flip it with CDP into mobile mode, and that's been really good too. Do you find when you're doing client work that you have a demand to build mobile-specific specs for system tests? JOËL: Generally not, no. GEOFF: You've managed to escape it. JOËL: For something that's specific to mobile, maybe one or two tests that have a weird interaction that we know is different on mobile. But in general, we're not doing the whole suite under mobile and the whole suite under desktop. GEOFF: When you hand off a project...it's been a while since you and I have worked together. JOËL: For those who don't know, Geoff used to be with us at thoughtbot. We were colleagues. GEOFF: Yeah, for a while. I remember my very first thoughtbot Summer Summit; you gave a really cool lightning talk about Eleanor of Aquitaine. JOËL: [laughs] GEOFF: That was great. So when you're handing a project off to a client after your ending, do you find that there's a transition period where you're educating them about the norms of the test suite before you leave it in their hands? JOËL: It depends a lot on the client. With many clients, we're working alongside an existing dev team. And so it's not so much one big handoff at the end as it is just building that in the day-to-day, making sure that we are integrating with the team from the outset of the engagement. So one thing that does come up a lot with clients is the performance of their test suite. That's often a concern because the test suite until it becomes a problem, people tend to not treat it very well. And by the time that you're bringing on an external consultant to help, generally, that's one of the areas of the code that's been a little bit neglected. 
And so people ask for help on making their test suite faster. Is that something that you've had to deal with at CommonLit as well? GEOFF: Yeah, that's a great question. We have struggled a lot with the speed that our test suite...the time it takes for our test suite to run. We've done a few things to improve it. The first is that we have quite a bit of caching that we do in our CI suite around dependencies. So gems get cached separately from NPM packages and browser assets. So all three of those things are independently cached. And then, we run our suites in parallel. Our Jest specs get split up into eight containers. Our Ruby non-system tests...I'd like to say unit tests, but we all know that some of those are actually integration tests. JOËL: [laughs] GEOFF: But those tests run in 15 containers, and they start the moment gems are built. So they don't wait for NPM packages. They don't wait for assets. They immediately start going. And then our system specs as soon as the assets are built kick off and start running. And we actually run that in 40 parallel containers so we can get everything finished. So our CI suite can finish...if there are no dependency bumps and no asset bumps, our specs suite you can finish in just under five minutes. But if you add up all of that time, cumulatively, it's something like 75 minutes is the total execution as it goes. Have you tried FactoryDoctor before for speeding up test suites? JOËL: This is the gem from Evil Martians? GEOFF: Yeah, it's part of TestProf, which is their really, really unbelievable toolkit for improving specs, and they have a whole bunch of things. But one of them will tell you how many invocations of FactoryBot factories each factory got. So you can see a user factory was fired 13,000 times in the test suite. It can even do some tagging where it can go in and add metadata to your specs to show which ones might be candidates for optimization. 
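FactoryDoctor and its sibling FactoryProf are switched on via environment variables when running the suite; this is standard TestProf usage.

```shell
# Report specs that create factory records they never meaningfully use.
FDOC=1 bundle exec rspec

# Print per-factory invocation counts (e.g. how many times :user fired).
FPROF=1 bundle exec rspec
```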
JOËL: I gave a talk at RailsConf this year titled Your Tests Are Making Too Many Database Calls. GEOFF: Nice. JOËL: And one of the things I talked about was creating a lot more data via factories than you think that you are. And I should give a shout-out to FactoryProf for finding those. GEOFF: Yeah, it's kind of a silent killer with the test suite, and you really don't think that you're doing a whole lot with it, and then you see how many associations. How do you fight that tension between creating enough data that things are realistic versus the streamlining of not creating extraneous things or having maybe mystery guests via associations and things like that? JOËL: I try to have my base factories be as minimal as possible. So if there's a line in there that I can remove, and the factory or the model still saves, then it should be removed. Some associations, you can't do that if there's a foreign key constraint, and so then I'll leave it in. But I am a very hardcore minimalist, at least with the base factory. GEOFF: I think that makes a lot of sense. We use foreign keys all over the place because we're always worried about somehow inserting student data that we can't recover with a bug. So we'd rather blow up than think we recorded it. And as a result, sometimes setting up specs for things like a student answering a multiple choice question on a quiz ends up being this sort of if you give a mouse a cookie thing where it's you need the answer options. You need the question. You need the quiz. You need the activity. You need the roster, the students to be in the roster. There has to be a teacher for the roster. It just balloons out because everything has a foreign key. JOËL: The database requires it, but the test doesn't really care. It's just like, give me a student and make it valid. GEOFF: Yes, yeah. And I find that that challenge is really hard. 
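Joël's minimalist-factory approach can be sketched with FactoryBot. The model names below are illustrative stand-ins for the roster/quiz graph Geoff describes, not CommonLit's actual schema; the idea is that the base factory carries only what the database requires, and the fuller object graph becomes an opt-in trait:

```ruby
# Minimal base factory: only what the model needs to save.
# (Student, roster, question, answer are illustrative, not CommonLit's schema.)
FactoryBot.define do
  factory :student do
    name { "Test Student" }
    roster # kept only because a foreign key constraint requires it

    # Opt in to the fuller graph only where a test actually needs it.
    trait :with_answered_question do
      after(:create) do |student|
        question = create(:question) # builds quiz/activity up the chain
        create(:answer, student: student, question: question)
      end
    end
  end
end
```

With this shape, `create(:student)` stays cheap, and only the specs that exercise answering pay for `create(:student, :with_answered_question)`.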
And sometimes, you don't see how hard it is to enforce things like database integrity until you have a lot of concurrency going on in your application. It was a very rude surprise to me to find out that browser requests if you have multiple servers going on might not necessarily be served in the order that they were made. JOËL: [laughs] So you're talking about a scenario where you're running multiple instances of your app. You make two requests from, say, two browser tabs, and somehow they get served from two different instances? GEOFF: Or not even two browser tabs. Imagine you have a situation where you're auto-saving. JOËL: Oooh, background requests. GEOFF: Yeah. So one of the coolest features we have at CommonLit is that students can annotate and highlight a text. And then, the teachers can see the annotations and highlights they've made, and it's actually part of their assignment often to highlight key evidence in a passage. And those things all fire in the background asynchronously so that it doesn't block the student from doing more stuff. But it also means that potentially if they make two changes to a highlight really quickly that they might arrive out of order. So we've had to do some things to make sure that we're receiving in the right order and that we're not blowing away data that was supposed to be there. Just think about in a Heroku environment, for example, which is where we used to be, you'd have four dynos running. If dyno one takes too long to serve the thing for dyno two, request one may finish after request two. That was a very, very rude surprise to learn that the world was not as clean and neat as I thought. JOËL: I've had to do something similar where I'm making a bunch of background requests to a server. And even with a single dyno, it is possible for your request to come back out of order just because of how TCP works. 
So if it's waiting for a packet and you have two of these requests that went out not too long before each other, there's no guarantee that all the packets for request one come back before all the packets from request two. GEOFF: Yeah, what are the strategies for on the client side for dealing with that kind of out-of-order response? JOËL: Find some way to effectively version the requests that you make. Timestamp is an easy one. Whenever a request comes in, you take the response from the latest timestamp, and that wins out. GEOFF: Yeah, we've started doing some unique IDs. And part of the unique ID is the browser's timestamp. We figure that no one would try to hack themselves and intentionally screw up their own data by submitting out of order. JOËL: Right, right. GEOFF: It's funny how you have to pick something to trust. [laughs] JOËL: I'd imagine, in this case, if somebody did mess around with it, they would really only just be screwing up their own UI. It's not like that's going to then potentially crash the server because of something, and then you've got a potential vector for a denial of service. GEOFF: Yeah, yeah, that's always what we're worried about, and we have to figure out how to trust these sorts of requests as what's a valid thing and what is, as you're saying, is just the user hurting themselves as opposed to hurting someone else's stuff? MID-ROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half. So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! 
Airbrake's debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today! GEOFF: You were talking about test suites. What are some things that you have found are consistently problems in real-world apps, but they're really, really hard to test in a test suite? JOËL: Difficult to test or difficult to optimize for performance? GEOFF: Maybe difficult to test. JOËL: Third-party integrations. Anything that's over the network that's going to be difficult. Complex interactions that involve some heavy frontend but then also need a lot of backend processing potentially with asynchronous workers or something like that, there are a lot of techniques that we can use to make all those play together, but that means there's a lot of complexity in that test. GEOFF: Yeah, definitely. I've taken a deep interest in what I'm sure there's a better technical term for this, but what I call network hostile environments or bandwidth hostile environments. 
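Returning to the out-of-order responses discussed before the break: the latest-version-wins strategy can be sketched in plain Ruby. This is illustrative only (a real client would do this in JavaScript against server responses); each request carries a monotonically increasing version, and a response is applied only if it is newer than the last one applied:

```ruby
# Latest-version-wins store for out-of-order responses: stale responses
# (those carrying an older version stamp than one already applied) are dropped.
class LatestWins
  def initialize
    @applied_version = -1
    @state = nil
  end

  # version: the stamp the request carried when it went out
  def apply(version, payload)
    return @state if version <= @applied_version # stale response: ignore it

    @applied_version = version
    @state = payload
  end

  attr_reader :state
end
```

So if the response to the second auto-save arrives before the response to the third, the third still wins once it lands, and a late-arriving second is discarded.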
And we see this a lot with kids. Especially during the pandemic, kids would often be trying to do their assignments from home. And maybe there are five kids in the house, and they're all trying to do their homework at the same time. And they're all sharing a home internet connection. Maybe they're in the basement because they're trying to get some peace and quiet so they can do their assignment or something like that. And maybe they're not strongly connected. And the challenge of dealing with intermittent connectivity is such an interesting problem, very frustrating but very interesting to deal with. JOËL: Have you explored at all the concept of Formal Methods to model or verify situations like that? GEOFF: No, but I'm intrigued. Tell me more. JOËL: I've not tried it myself. But I've read some articles on the topic. Hillel Wayne is a good person to follow for this. GEOFF: Oh yeah. JOËL: But it's really fascinating when you'll see, okay, here are some invariants and things. And then here are some things that you set up some basic properties for a system. And then some of these modeling languages will then poke holes and say, hey, it's possible for this 10-step sequence of events to happen that will then crash your server. Because you didn't think that it's possible for five people to be making concurrent requests, and then one of them fails and retries, whatever the steps are. So it's really good at modeling situations that, as developers, we don't always have great intuition, things like parallelism. GEOFF: Yeah, that sounds so interesting. I'm going to add that to my list of reading for the fall. Once the school year calms down, I feel like I can dig into some technical topics again. I've got this book sitting right next to my desk, Designing Data-Intensive Applications. I saw it referenced somewhere on Twitter, and I did the thing where I got really excited about the book, bought it, and then didn't have time to read it. 
So it's just sitting there unopened next to my desk, taunting me. JOËL: What's the 30-second spiel for what is a data-intensive app, and why should we design for it differently? GEOFF: You know, that's a great question. I'd probably find out if I'd dug further into the book. JOËL: [laughs] GEOFF: I have found at CommonLit that we...I had a couple of clients at thoughtbot that dealt with data at the scale that we deal with here. And I'm sure there are bigger teams doing, quote, "bigger data" than we're doing. But it really does seem like one of our key challenges is making sure that we just move data around fast enough that nothing becomes a bottleneck. We made a really key optimization in our application last year where we changed the way that we autosave students' answers as they go. And it resulted in a massive increase in throughput for us because we went from trying to store updated versions of the students' final answers to just storing essentially a draft and often storing that draft in local storage in the browser and then updating it on the server when we could. And then, as a result of this, we're making key updates to the table where we store a student's answers much less frequently. And that has a huge impact because, in addition to being one of the biggest tables at CommonLit...it's got almost a billion recorded answers that we've gotten from students over the years. But because we're not writing to it as often, it also means that reads that are made from the table, like when the teacher is getting a report for how the students are doing in a class or when a principal is looking at how a school is doing, now, those queries are seeing less contention from ongoing writes. And so we've seen a nice improvement. JOËL: One strategy I've seen for that sort of problem, especially when you have a very write-heavy table but that also has a different set of users that needs to read from it, is to set up a read replica. 
So you have your main that is being written to, and then the read replica is used for reports and people who need to look at the data without being in contention with the table being written. GEOFF: Yeah, Rails multi-DB support now that it's native to the framework is excellent. It's so nice to be able to just drop that in and fire it up and have it work. We used to use a solution that Instacart had built. It was great for our needs, but it wasn't native to the framework. So every single time we upgraded Rails, we had to cross our fingers and hope that it didn't, you know, whatever private APIs of ActiveRecord it was using hadn't broken. So now that that stuff, which I think was open sourced from GitHub's multi-database implementation, so now that that's all native in Rails, it's really, really nice to be able to use that. JOËL: So these kinds of database tricks can help make the application much more performant. You'd mentioned earlier that when you were trying to make your test performant that you had introduced parallelism, and I feel like that's maybe a bit of an intimidating thing for a lot of people. How would you go about converting a test suite that's just vanilla RSpec, single-threaded, and then moving it in a direction of being more parallel? GEOFF: There's a really, really nice tool called Knapsack, which has a free version. But the pro version, I feel like if you're spending any money at all on CI, it's immediately worth the cost. I think it's something like $75 a month for each suite that you run on it. And Knapsack does this dynamic allocation of tests across containers. And it interfaces with several of the popular CI providers so that it looks at environment variables and can tell how many containers you're splitting across. It'll do some things, like if some of your containers start early and some of them start late, it will distribute the work so that they all end at the same time, which is really nice. 
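The native multi-database support Geoff mentions looks roughly like this. The database role names are illustrative; this is a config sketch of the Rails 6+ API, not CommonLit's actual setup:

```ruby
# ApplicationRecord routes writes to the primary and reads to a replica.
class ApplicationRecord < ActiveRecord::Base
  self.abstract_class = true

  connects_to database: { writing: :primary, reading: :primary_replica }
end

# Reporting code can then opt into the replica explicitly, so heavy report
# queries don't contend with the write-heavy answers table:
ActiveRecord::Base.connected_to(role: :reading) do
  # report queries run against the replica here
end
```

The nice part, as Geoff notes, is that this is framework-native: no private ActiveRecord APIs to break on upgrade.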
We've preferred CI providers that charge by the minute. So rather than just paying for a service that we might not be using, we've used services like Semaphore, and right now, we're on Buildkite, which charge by the minute, which means that you can decide to do as much parallelism as you want. You're just paying for the compute time as you run things. JOËL: So that would mean that two minutes of sequential build time costs just the same as splitting it up in parallel and doing two simultaneous minutes of build time. GEOFF: Yeah, that is almost true. There's a little bit of setup time when a container spins up. And that's one of the key things that we optimize. I guess if we ran 200 containers if we were like Shopify or something like that, we could technically make our CI suite finish faster, but it might cost us three times as much. Because if it takes a container 30 seconds to spin up and to get ready, that's 30 seconds of dead time when you're not testing, but you're paying for the compute. So that's one of the key optimizations that we make is figuring out how many containers do we need to finish fast when we're not just blowing time on starting and finishing? JOËL: Right, because there is a startup cost for each container. GEOFF: Yeah, and during the work day when our engineers are working along, we spin up 200 EC2 machines or 150 EC2 machines, and they're there in the fleet, and they're ready to go to run CI jobs for us. But if you don't have enough machines, then you have jobs that sit around waiting to start, that sort of thing. So there's definitely a tension between figuring out how much parallelism you're going to do. But I feel like to start; you could always break your test suite into four pieces or two pieces and just see if you get some benefit to running a smaller number of tests in parallel. JOËL: So, manually splitting up the test suite. 
GEOFF: No, no, using something like Knapsack Pro where you're feeding it the suite, and then it's dividing up the tests for you. I think manually splitting up the suite is probably not a good practice overall because I'm guessing you'll probably spend more engineering time on fiddling with which tests go where such that it wouldn't be cost-effective. JOËL: So I've spent a lot of time recently working to improve a parallel test suite. And one of the big problems that you have is trying to make sure that all of your parallel workers are being used efficiently, so you have to split the work evenly. So if you said you have 70 minutes worth of work, if you give 50 minutes to one worker and 20 minutes to the other, that means that your total test suite is still 50 minutes, and that's not good. So ideally, you split it as evenly as possible. So I think there are three evolutionary steps on the path here. So you start off, and you're going to manually split things out. So you're going to say our biggest chunk of tests by time are the feature specs. We'll make them almost like a separate suite. Then we'll make the models and controllers and views their own thing, and that's roughly half and half, and run those. And maybe you're off by a little bit, but it's still better than putting them all in one. It becomes difficult, though, to balance all of these because one might get significantly longer than the other; then you have to manually rebalance it. It works okay if you're only splitting it among two workers. But if you're having to split it among 4, 8, 16, and more, it's not manageable to do this, at least not by hand. If you want to get fancy, you can try to automate that process and record a timing file of how long every file takes. 
And then when you kick off the build process, look at that timing file and say, okay, we have 70 minutes, and then we'll just split the file so that we have roughly 70 divided by number of workers' files or minutes of work in each process. And that's what gems like parallel_tests do. And Knapsack's Classic mode works like this as well. That's decently good. But the problem is you're working off of past information. And so if the test has changed or just if it's highly variable, you might not get a balanced set of workers. And as you mentioned, there's a startup cost, and so not all of your workers boot up at the same time. And so you might still have a very uneven amount of work done by each worker by statically determining the work to be done via a timing file. So the third evolution here is a dynamic or a self-balancing approach where you just put all of the tests or the files in a queue and then just have every worker pull one or two tests when it's ready to work. So that way, if something takes a lot longer than expected, well, it's just not pulling more from the queue. And everybody else still pulls, and they end up all balancing each other out. And then ideally, every worker finishes work at exactly the same time. And that's how you know you got the most value you could out of your parallel processes. GEOFF: Yeah, there's something about watching all the jobs finish in almost exactly, you know, within 10 seconds of each other. It just feels very, very satisfying. I think in addition to getting this dynamic splitting where you're getting either per file or per example split across to get things finishing at the same time, we've really valued getting fast feedback. So I mentioned before that our Jest specs start the moment NPM packages get built. So as soon as there's JavaScripts that can be executed in test, those kick-off. As soon as our gems are ready, the RSpec non-system tests go off, and they start running specs immediately. 
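The queue-based "third evolution" can be sketched in plain Ruby, with threads standing in for CI containers and per-file costs standing in for runtimes (file names and numbers are illustrative). Each worker pulls the next file from a shared queue as soon as it is free, so a slow file simply means that worker pulls fewer files:

```ruby
# Self-balancing parallel split: workers pull spec files from a shared queue
# until it drains, instead of being assigned a static partition up front.
def run_in_parallel(files_with_cost, worker_count)
  queue = Queue.new
  files_with_cost.each { |f| queue << f }

  totals = Array.new(worker_count, 0) # "minutes" of work each worker ends up doing
  workers = worker_count.times.map do |i|
    Thread.new do
      loop do
        _file, cost = queue.pop(true) # non-blocking pop; raises when the queue is empty
        totals[i] += cost             # "run" the file by accruing its cost
      end
    rescue ThreadError
      # queue drained: this worker is done
    end
  end
  workers.each(&:join)
  totals
end
```

In a real CI setup this queue lives in a service (Knapsack Pro's Queue Mode works this way), but the balancing property is the same: all the work gets done, and no worker sits idle while another grinds through a statically assigned slow partition.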
So we get that really, really fast feedback. Unfortunately, the browser tests take the longest because they have to wait for the most setup. They have the most dependencies. And then they also run the slowest because they run in the browser and everything. But I think when things are really well-oiled, you watch all of those containers end at roughly the same time, and it feels very satisfying. JOËL: So, a few weeks ago, on an episode of The Bike Shed, I talked with Eebs Kobeissi about dependency graphs and how I'm super excited about it. And I think I see a dependency graph in what you're describing here in that some things only depend on the gem file, and so they can start working. But other things also depend on the NPM packages. And so your build pipeline is not one linear process or one linear process that forks into other linear processes; it's actually a dependency graph. GEOFF: That is very true. And the CI tool we used to use called Semaphore actually does a nice job of drawing the dependency graph between all of your steps. Buildkite does not have that, but we do have a bunch of steps that have to wait for other steps to finish. And we do it in our wiki. On our repo, we do have a diagram of how all of this works. We found that one of the things that was most wasteful for us in CI was rebuilding gems, reinstalling NPM packages (We use Yarn but same thing.), and then rebuilding browser assets. So at the very start of every CI run, we build hashes of a bunch of files in the repository. And then, we use those hashes to name Docker images that contain the outputs of those files so that we are able to skip huge parts of our CI suite if things have already happened. So I'll give an example if Ruby gems have not changed, which we would know by the Gemfile.lock not having changed, then we know that we can reuse a previously built gems image that has the gems that just gets melted in, same thing with yarn.lock. 
If yarn.lock hasn't changed, then we don't have to build NPM packages. We know that that already exists somewhere in our Docker registry. In addition to skipping steps by not redoing work, we also have started to experiment...actually, in response to a comment that Chris Toomey made in a prior Bike Shed episode, we've started to experiment with skipping irrelevant steps. So I'll give an example of this if no Ruby files have changed in our repository, we don't run our RSpec unit tests. We just know that those are valid. There's nothing that needs to be rerun. Similarly, if no JavaScript has changed, we don't run our Jest tests because we assume that everything is good. We don't lint our views with erb-lint if our view files haven't changed. We don't lint our factories if the model or the database hasn't changed. So we've got all these things to skip key types of processing. I always try to err on the side of not having a false pass. So I'm sure we could shave this even tighter and do even less work and sometimes finish the build even faster. But I don't want to ever have a thing where the build passes and we get false confidence. JOËL: Right. Right. So you're using a heuristic that eliminates the really obvious tests that don't need to be run but the ones that maybe are a little bit more borderline, you keep them in. Shaving two seconds is not worth missing a failure. GEOFF: Yeah. And I've read things about big enterprises doing very sophisticated versions of this where they're guessing at which CI specs might be most relevant and things like that. We're nowhere near that level of sophistication right now. But I do think that once you get your test suite parallelized and you're not doing wasted work in the form of rebuilding dependencies or rebuilding assets that don't need to be rebuilt, there is some maybe not low, maybe medium hanging fruit that you can use to get some extra oomph out of your test suite. 
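The lockfile-hashing trick Geoff describes can be sketched like this. The image prefix and the registry lookup are illustrative stand-ins (a real pipeline would shell out to something like a `docker manifest inspect`); the key idea is that the image name is derived from the lockfile contents, so an unchanged `Gemfile.lock` always maps to an image that already exists:

```ruby
require "digest"

# Content-addressed CI caching: name a prebuilt Docker image after a digest
# of the lockfile it was built from.
def image_tag_for(prefix, lockfile_contents)
  digest = Digest::SHA256.hexdigest(lockfile_contents)
  "#{prefix}:#{digest[0, 12]}"
end

# Stand-in for checking the registry; in a real pipeline this would query
# the Docker registry for the tag.
def reusable?(tag, known_tags)
  known_tags.include?(tag)
end
```

If `reusable?` comes back true, the bundle/yarn/asset step is skipped and the cached image is used directly; only a changed lockfile produces a new tag and forces a rebuild.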
JOËL: I really like that you brought up this idea of infrastructure and skipping. I think in my own way of thinking about improving test suites, there are three broad categories of approaches you can take. One variable you get to work with is that total number of time single-threaded, so you mentioned 70 minutes. You can make that 70 minutes shorter by avoiding database writes where you don't need them, all the common tricks that we would do to actually change the test themselves. Then we can change...as another variable; we get to work with parallelism, we talked about that. And then finally, there's all that other stuff that's not actually executing RSpec like you said, loading the gems, installing NPM packages, Docker images. All of those, if we can skip work running migrations, setting up a database, if there are situations where we can improve the speed there, that also improves the total time. GEOFF: Yeah, there are so many little things that you can pick at to...like, one of the slowest things for us is Elasticsearch. And so we really try to limit the number of specs that use Elasticsearch if we can. You actually have to opt-in to using Elasticsearch on a spec, or else we silently mock and disable all of the things that happen there. When you're looking at that first variable that you were talking about, just sort of the overall time, beyond using FactoryDoctor and FactoryProf, is there anything else that you've used to just identify the most egregious offenders in a test suite and then figure out if they're worth it? JOËL: One thing you can do is hook into Active Support notification to try to find database writes. And so you can find, oh, here's where all of the...this test is making way too many database writes for some reason, or it's making a lot, maybe I should take a look at it; it's a hotspot. GEOFF: Oh, that's really nice. There's one that I've always found is like a big offender, which is people doing negative expectations in system specs. 
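The Active Support notification trick Joël mentions hooks the `sql.active_record` event (via `ActiveSupport::Notifications.subscribe`), collecting each statement's SQL from the event payload. The counting idea itself can be sketched in plain Ruby over a captured list of SQL statements, so the sketch stays self-contained (the budget and names are illustrative):

```ruby
# Flag tests that issue more database writes than expected. In a Rails suite
# the statements would be captured from the "sql.active_record" notification;
# here we just classify a list of SQL strings.
WRITE_SQL = /\A\s*(INSERT|UPDATE|DELETE)\b/i

def count_writes(statements)
  statements.count { |sql| sql.match?(WRITE_SQL) }
end

def flag_wasteful(test_name, statements, budget: 5)
  writes = count_writes(statements)
  writes > budget ? "#{test_name}: #{writes} writes (budget #{budget})" : nil
end
```

Run per-example, this surfaces the hotspots Joël describes: a test that looks innocent but whose factories cascade into dozens of inserts.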
JOËL: Oh, for their Capybara wait time. GEOFF: Yeah. So there's a really cool gem, and the name of it is eluding me right now. But there's a gem that raises a special exception if Capybara waits the full time for something to happen. So it lets you know that those things exist. And so we've done a lot of like hunting for...Knapsack will report the slowest examples in your test suite. So we've done some stuff to look for the slowest files and then look to see if there are examples of these negative expectations that are waiting 10 seconds or waiting 8 seconds before they fail. JOËL: Right. Some files are slow, but they're slow for a reason. Like, a feature spec is going to be much slower than a model test. But the model tests might be very wasteful and because you have so many of them, if you're doing the same pattern in a bunch of them or if it's a factory that's reused across a lot of them, then a small fix there can have some pretty big ripple effects. GEOFF: Yeah, I think that's true. Have you ever done any evaluation of test suite to see what files or examples you could throw away? JOËL: Not holistically. I think it's more on an ad hoc basis. You find a place, and you're like, oh, these tests we probably don't need them. We can throw them out. I have found dead tests, tests that are not executed but still committed to the repo. GEOFF: [laughs] JOËL: It's just like, hey, I'm going to get a lot of red in my diff today. GEOFF: That always feels good to have that diff-y check-in, and it's 250 lines or 1,000 lines of red and 1 line of green. JOËL: So that's been a pretty good overview of a lot of different areas related to performance and infrastructure around tests. Thank you so much, Geoff, for joining us today on The Bike Shed to talk about your experience at CommonLit doing this. Do you have any final words for our listeners? GEOFF: Yeah. 
CommonLit is hiring a senior full-stack engineer, so if you'd like to work on Rails and TypeScript in a place with a great test suite and a great team. I've been here for five years, and it's a really, really excellent place to work. And also, it's been really a pleasure to catch up with you again, Joël. JOËL: And, Geoff, where can people find you online? GEOFF: I'm Geoff with a G, G-E-O-F-F Harcourt, @geoffharcourt. And that's my name on Twitter, and it's my name on GitHub, so you can find me there. JOËL: And we'll make sure to include a link to your Twitter profile in the show notes. The show notes for this episode can be found at bikeshed.fm. This show is produced and edited by Mandy Moore. If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. If you have any feedback, you can reach us at @_bikeshed or reach me at @joelquen on Twitter or at hosts@bikeshed.fm via email. Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeeeee!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

Data Science at Home
Predicting Out Of Memory Kill events with Machine Learning (Ep. 203)

Data Science at Home

Play Episode Listen Later Sep 20, 2022 19:33


Sometimes applications crash. Some other times applications crash because memory is exhausted. Such issues exist because of bugs in the code, or heavy memory usage for reasons that were not expected during design and implementation. Can we use machine learning to predict and eventually detect out of memory kills from the operating system? Apparently, the Netflix app many of us use on a daily basis leverages ML and time series analysis to prevent OOM-kills. Enjoy the show! Our Sponsors Explore the Complex World of Regulations. Compliance can be overwhelming. Multiple frameworks. Overlapping requirements. Let Arctic Wolf be your guide. Check it out at https://arcticwolf.com/datascience   Amethix works to create and maximize the impact of the world's leading corporations and startups, so they can create a better future for everyone they serve. We provide solutions in AI/ML, Fintech, Healthcare/RWE, and Predictive maintenance.   Transcript 1 00:00:04,150 --> 00:00:09,034 And here we are again with the season four of the Data Science at Home podcast. 2 00:00:09,142 --> 00:00:19,170 This time we have something for you if you want to help us shape the data science leaders of the future, we have created the Data Science at Home Ambassador program. 3 00:00:19,340 --> 00:00:28,378 Ambassadors are volunteers who are passionate about data science and want to give back to our growing community of data science professionals and enthusiasts. 4 00:00:28,534 --> 00:00:37,558 You will be instrumental in helping us achieve our goal of raising awareness about the critical role of data science in cutting edge technologies. 5 00:00:37,714 --> 00:00:45,740 If you want to learn more about this program, visit the Ambassadors page on our website at datascienceathome.com. 6 00:00:46,430 --> 00:00:49,234 Welcome back to another episode of Data Science at Home podcast. 7 00:00:49,282 --> 00:00:55,426 I'm Francesco, podcasting from the regular office of Amethix Technologies, based in Belgium. 
8 00:00:55,618 --> 00:01:02,914 In this episode, I want to speak about a machine learning problem that has been formulated at Netflix. 9 00:01:03,022 --> 00:01:22,038 And for the record, Netflix is not sponsoring this episode, though I still believe that this problem is a very well known problem, a very common one across sectors, which is how to predict out of memory kills in an application and formulate this problem as a machine learning problem. 10 00:01:22,184 --> 00:01:39,142 So this is something that, as I said, is very interesting, not just because of Netflix, but because it allows me to explain a few points that, as I said, are kind of invariant across sectors. 11 00:01:39,226 --> 00:01:56,218 Regardless of whether your application is a video streaming application or any other communication type of application, or a fintech application, or energy, or whatever, this out of memory kill still occurs. 12 00:01:56,314 --> 00:02:05,622 And what is an out of memory kill? Well, it's essentially the extreme event in which the machine doesn't have any more memory left. 13 00:02:05,756 --> 00:02:16,678 And so usually the operating system can start eventually swapping, which means using the SSD or the hard drive as a source of memory. 14 00:02:16,834 --> 00:02:19,100 But that, of course, will slow down a lot. 15 00:02:19,430 --> 00:02:45,210 And eventually when there is a bug or a memory leak, or if there are other applications running on the same machine, of course there is some kind of limiting factor that essentially kills the application; most of the time it is the operating system that kills it, in order to prevent the application from monopolizing the entire machine, the hardware of the machine. 16 00:02:45,710 --> 00:02:48,500 And so this is a very important problem. 
17 00:02:49,070 --> 00:03:03,306 Also, it is important to have an episode about this because there are some strategies that they've used at Netflix that are pretty much in line with what I believe machine learning should be about. 18 00:03:03,368 --> 00:03:25,062 And usually people would go for the fancy solution there, like these extremely accurate predictors or machine learning models that have a massive number of parameters and that try to figure out whatever is happening on that machine that is running that application. 19 00:03:25,256 --> 00:03:29,466 While the solution at Netflix is pretty straightforward, it's pretty simple. 20 00:03:29,588 --> 00:03:33,654 And so one would say, then why make an episode about this? Well. 21 00:03:33,692 --> 00:03:45,730 Because I think that we need more sobriety when it comes to machine learning, and I believe we still need to spend a lot of time thinking about what data to collect. 22 00:03:45,910 --> 00:03:59,730 Reasoning about what is the problem at hand and what is the data that can actually tickle the particular machine learning model, and then of course move to the actual prediction, that is, the actual model. 23 00:03:59,900 --> 00:04:15,910 Which most of the time doesn't need to be one of these super fancy things that you see in the news around chatbots or autonomous gaming agents or drivers and so on and so forth. 24 00:04:16,030 --> 00:04:28,518 So there are essentially two data sets that the people at Netflix focus on, which are consistently different, dramatically different in fact. 25 00:04:28,604 --> 00:04:45,570 These are data about device characteristics and capabilities and, of course, data that are collected at runtime and that give you a picture of what's going on in the memory of the device, right? So that's the so-called runtime memory data and out of memory kills. 
26 00:04:45,950 --> 00:05:03,562 So the first type of data is what I would consider very static, because it includes, for example, the device type ID, the version of the software development kit the application is running, cache capacities, buffer capacities and so on and so forth. 27 00:05:03,646 --> 00:05:11,190 So it's something that most of the time doesn't change across sessions, and that's why it's considered static. 28 00:05:12,050 --> 00:05:18,430 In contrast, the other type of data, the runtime memory data, as the name says, is runtime. 29 00:05:18,490 --> 00:05:24,190 So it varies across the life of the session; it's collected at runtime. 30 00:05:24,250 --> 00:05:25,938 So it's very dynamic data. 31 00:05:26,084 --> 00:05:36,298 And examples of these records are, for example, profile, movie details, playback information, current memory usage, et cetera, et cetera. 32 00:05:36,334 --> 00:05:56,086 So this is the data that actually moves, in the sense that it changes depending on how the user is actually using the Netflix application, what movie or what profile description, what movie detail has been loaded for that particular movie and so on and so forth. 33 00:05:56,218 --> 00:06:15,094 So of course the first difficulty, the first challenge, that the people at Netflix had to deal with was how to combine these two things, very static and usually small tables versus very dynamic and usually large tables or views. 34 00:06:15,142 --> 00:06:36,702 Well, there is some sort of join on key that is performed by the people at Netflix in order to put together these different data resolutions, right, which is data of the same phenomenon but from different sources and carrying very different signals in there. 35 00:06:36,896 --> 00:06:48,620 So the device capabilities are usually captured by the static data, and of course the other data, the runtime memory and out of memory kill data. 
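The Netflix tech post doesn't publish code, but the join-on-key step described here could be sketched like this in Python. All field names and values below are invented for illustration; the real schema is not public:

```python
# Illustrative sketch: joining static device capabilities with dynamic
# runtime memory records on a shared device key. Field names are made up
# for illustration; the actual Netflix schema is not public.

device_static = {
    "dev-42": {"sdk_version": "6.1", "cache_capacity_mb": 256},
    "dev-77": {"sdk_version": "5.9", "cache_capacity_mb": 128},
}

runtime_events = [
    {"device_id": "dev-42", "t": 0, "mem_used_mb": 180},
    {"device_id": "dev-42", "t": 1, "mem_used_mb": 210},
    {"device_id": "dev-77", "t": 0, "mem_used_mb": 90},
]

def join_on_device(static, events):
    """Attach the static device record to every runtime sample."""
    return [{**ev, **static[ev["device_id"]]} for ev in events]

rows = join_on_device(device_static, runtime_events)
print(rows[0])
```

In practice this is a plain relational join; at Netflix scale it would run on a big data platform rather than in in-memory Python, but the shape of the operation is the same.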
36 00:06:48,950 --> 00:07:04,162 These are also, as I said, the data that will describe pretty accurately how the user is using that particular application on that particular hardware. 37 00:07:04,306 --> 00:07:17,566 Now of course, when it comes to data engineering, there is nothing new that the people at Netflix have introduced: dealing with missing data, for example, or incorporating knowledge of devices. 38 00:07:17,698 --> 00:07:26,062 It's all stuff that is part of the so-called data cleaning and data collection strategy, right? Or data preparation. 39 00:07:26,146 --> 00:07:40,782 That is, whatever you're going to do in order to make that data, or a combination of these data sources, let's say, compatible with the way your machine learning model will understand or will read that data. 40 00:07:40,916 --> 00:07:58,638 So if you think of a big data platform, the first step, the first challenge you have to deal with, is how can I, first of all, collect the right amount of information, the right data, but also how to transform this data for my particular big data platform. 41 00:07:58,784 --> 00:08:12,798 And that's something that, again, is nothing new, nothing fancy, just the basics, what we are used to seeing now for the last decade or more; that's exactly what they do. 42 00:08:12,944 --> 00:08:15,222 And now let me tell you something important. 43 00:08:15,416 --> 00:08:17,278 Cybercriminals are evolving. 44 00:08:17,374 --> 00:08:22,446 Their techniques and tactics are more advanced, intricate and dangerous than ever before. 45 00:08:22,628 --> 00:08:30,630 Industries and governments around the world are fighting back, dealing new regulations meant to better protect data against this rising threat. 46 00:08:30,950 --> 00:08:39,262 Today, the world of cybersecurity compliance is a complex one, and understanding the requirements your organization must adhere to can be a daunting task. 
47 00:08:39,406 --> 00:08:42,178 But not when the pack has your back. Arctic 48 00:08:42,214 --> 00:08:53,840 Wolf, the leader in security operations, is on a mission to end cyber risk by giving organizations the protection, information and confidence they need to protect their people, technology and data. 49 00:08:54,170 --> 00:09:02,734 The new interactive compliance portal helps you discover the regulations in your region and industry and start the journey towards achieving and maintaining compliance. 50 00:09:02,902 --> 00:09:07,542 Visit arcticwolf.com/datascience to take your first step. 51 00:09:07,676 --> 00:09:11,490 That's arcticwolf.com/datascience. 52 00:09:12,050 --> 00:09:18,378 Now to what I think is the most important part, though I think all parts are actually equally important. 53 00:09:18,464 --> 00:09:26,854 But the way they treat runtime memory data and out of memory kill data is by using sliding windows. 54 00:09:26,962 --> 00:09:38,718 So that's something that is really worth mentioning, because the way you would frame this problem is: something is happening at some point in time and I have to kind of predict that event. 55 00:09:38,864 --> 00:09:49,326 That is usually an outlier, in the sense that these events are quite rare, fortunately, because otherwise Netflix would not be as usable as we believe it is. 56 00:09:49,448 --> 00:10:04,110 So you would like to predict these weird events by looking at a historical view, a historical amount of records that you have before this particular event, which is the kill of the application. 57 00:10:04,220 --> 00:10:12,870 So the concept of the sliding window, the sliding window approach, is something that comes as the most natural thing anyone would do. 58 00:10:13,040 --> 00:10:18,366 And that's exactly what the researchers at Netflix have done. 59 00:10:18,488 --> 00:10:25,494 So, not unexpectedly in my opinion, they treated this problem as a time series, which is exactly what it is. 60 00:10:25,652 --> 00:10:26,190 Now. 
61 00:10:26,300 --> 00:10:26,754 They. 62 00:10:26,852 --> 00:10:27,330 Of course. 63 00:10:27,380 --> 00:10:31,426 Use this sliding window with different horizons. 64 00:10:31,558 --> 00:10:32,190 Five minutes. 65 00:10:32,240 --> 00:10:32,838 Four minutes. 66 00:10:32,924 --> 00:10:33,702 Two minutes. 67 00:10:33,836 --> 00:10:36,366 As close as possible to the event. 68 00:10:36,548 --> 00:10:38,886 Because maybe there are some. 69 00:10:39,008 --> 00:10:39,762 Let's say. 70 00:10:39,896 --> 00:10:45,678 Other dynamics that can arise when you are very close to the event or when you are very far from it. 71 00:10:45,704 --> 00:10:50,166 Like five minutes far from the out of memory kill. 72 00:10:50,348 --> 00:10:51,858 Might have some other. 73 00:10:51,944 --> 00:10:52,410 Let's say. 74 00:10:52,460 --> 00:10:55,986 Diagrams or shapes in the data. 75 00:10:56,168 --> 00:11:11,310 So for example, you might have a certain number of allocations that keep growing and growing, but eventually they grow with a certain curve or a certain rate that you can measure when you are five to ten minutes far from the out of memory kill. 76 00:11:11,420 --> 00:11:16,566 When you are two minutes far from the out of memory kill, probably this trend will change. 77 00:11:16,688 --> 00:11:30,800 And so probably what you would expect is that the memory is already half or more saturated and therefore, for example, the operating system starts swapping or other things are happening that you are going to measure in this window. 78 00:11:31,550 --> 00:11:39,730 And that would give you a much better picture of what's going on in the, let's say, closest neighborhood of that event, the time window. 79 00:11:39,790 --> 00:11:51,042 The sliding window and time window approach is definitely worth mentioning because this is something that you can apply, if you think about it, pretty much anywhere right now. 80 00:11:51,116 --> 00:11:52,050 What they did. 
81 00:11:52,160 --> 00:12:04,146 In addition to having a time window, a sliding window, they also assign different levels to memory readings that are closer to the out of memory kill. 82 00:12:04,208 --> 00:12:10,062 And usually these levels are higher and higher as we get closer and closer to the out of memory kill. 83 00:12:10,136 --> 00:12:15,402 So this means that, for example, for a five minute window we would have a level one. 84 00:12:15,596 --> 00:12:22,230 Five minutes means five minutes far from the out of memory kill; four minutes would be a level two. 85 00:12:22,280 --> 00:12:37,234 Three minutes, which is much closer, would be a level three, and two minutes would be a level four, which is like the severity of the event as we get closer and closer to the actual moment when the application is actually killed. 86 00:12:37,342 --> 00:12:51,474 So looking at this approach, nothing new there; I would say even a not particularly seasoned data scientist would have understood that using a sliding window is the way to go. 87 00:12:51,632 --> 00:12:55,482 I'm not saying that Netflix engineers are not seasoned enough. 88 00:12:55,556 --> 00:13:04,350 Actually they do a great job every day to keep giving us a video streaming platform that actually never fails, or almost never fails. 89 00:13:04,910 --> 00:13:07,460 So spot on there, guys, good job. 90 00:13:07,850 --> 00:13:27,738 But looking at this sliding window approach, the direct consequence of this is that they can plot, they can do some sort of graphical analysis of the out of memory kills versus the memory usage, which can give the reader or the data scientist a very nice picture of what's going on there. 91 00:13:27,824 --> 00:13:39,330 And so you would have, for example, and I would definitely report some of the pictures, some of the diagrams and graphs, in the show notes of this episode on the official website, datascienceathome.com. 
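The sliding-window framing and the level labelling described above can be put together in a short sketch. This is not Netflix's code: the window width and sampling rate are invented, and only the level mapping (5 minutes to level 1, down to 2 minutes to level 4) follows the episode's example:

```python
# Illustrative sketch: build labelled sliding windows from a memory-usage
# time series that ends in an out-of-memory (OOM) kill. One reading per
# minute; all numbers are invented.

# Level mapping from the episode's example: readings closer to the kill
# get a higher level (5 min -> 1, 4 min -> 2, 3 min -> 3, 2 min -> 4).
LEVELS = {5: 1, 4: 2, 3: 3, 2: 4}

def window_before_kill(series, kill_t, horizon, width):
    """Readings in the `width`-minute window ending `horizon` minutes before the kill."""
    start, end = kill_t - horizon - width, kill_t - horizon
    return [(t, mem) for t, mem in series if start <= t < end]

def labelled_windows(series, kill_t, width=3):
    """One (level, window) pair per horizon in the mapping, closest horizon first."""
    return [
        (level, window_before_kill(series, kill_t, horizon, width))
        for horizon, level in sorted(LEVELS.items())
    ]

# Memory grows steadily toward a kill at t = 10 minutes.
series = [(t, 100 + 12 * t) for t in range(11)]
for level, window in labelled_windows(series, kill_t=10):
    print(level, window)
```

Each window then becomes one training sample, with the level (or the kill itself) as its label.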
92 00:13:39,500 --> 00:13:48,238 But essentially what you can see there is that there might be premature peaks at, let's say, a lower memory reading. 93 00:13:48,334 --> 00:14:08,958 And usually these are some kind of false positives or anomalies that should not be there; then it's possible to set a threshold, the threshold at which to start lowering the memory usage, because after that threshold something nasty can happen, and usually happens, according to your data. 94 00:14:09,104 --> 00:14:18,740 And then of course there is another graph about the Gaussian distribution, or in fact no sharp peak at all. 95 00:14:19,250 --> 00:14:21,898 That is, kills, or out of memory 96 00:14:21,934 --> 00:14:33,754 kills, are more or less distributed in a normalized fashion, and then of course there are the genuine peaks that indicate kills near, let's say, the threshold. 97 00:14:33,802 --> 00:14:38,758 And so usually you would see that after that particular threshold of memory usage. 98 00:14:38,914 --> 00:14:42,142 You see most of the out of memory kills. 99 00:14:42,226 --> 00:14:45,570 Which makes sense because given a particular device. 100 00:14:45,890 --> 00:14:48,298 Which means a certain amount of memory. 101 00:14:48,394 --> 00:14:50,338 Certain memory characteristics. 102 00:14:50,494 --> 00:14:53,074 A certain version of the SDK and so on and so forth. 103 00:14:53,182 --> 00:14:53,814 You can say. 104 00:14:53,852 --> 00:14:54,090 Okay. 105 00:14:54,140 --> 00:15:10,510 Well, for this device type I have this memory usage threshold, and after this I see that I have a relatively high number of out of memory kills immediately after this threshold. 106 00:15:10,570 --> 00:15:18,150 And this means that probably that is the threshold you would like to consider as the critical threshold you should never or almost never cross. 
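A back-of-the-envelope version of that per-device threshold analysis might look like the following. The numbers are invented, and the simple low percentile stands in for whatever statistic Netflix actually derives from its plots:

```python
# Illustrative sketch: for each device type, estimate a critical
# memory-usage threshold from the usage readings observed at kill time,
# here as the level below which only ~10% of the observed kills occurred.

def critical_threshold(kill_usages, pct=0.10):
    """Usage level such that about `pct` of observed kills fall below it."""
    s = sorted(kill_usages)
    return s[int(pct * len(s))]

kills_by_device = {
    "typeA": [880, 900, 910, 915, 920, 925, 930, 940, 950, 960],  # MB at kill
    "typeB": [400, 430, 445, 450, 455, 460, 470, 475, 480, 490],
}

thresholds = {dev: critical_threshold(u) for dev, u in kills_by_device.items()}
print(thresholds)  # per-device levels to stay safely below
```

A memory monitor can then compare live usage against the threshold for that device type and start freeing memory before the critical region is reached.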
107 00:15:18,710 --> 00:15:38,758 So once you have this picture in front of you, you can start thinking of implementing some mechanisms that can monitor the memory usage and kind of preemptively deallocate things, or keep the memory usage as low as possible with respect to the critical threshold. 108 00:15:38,794 --> 00:15:53,446 So you can start implementing some logic that prevents the application from being killed by the operating system, so that you would in fact reduce the rate of out of memory kills overall. 109 00:15:53,578 --> 00:16:11,410 Now, as always, and as the engineers also state in their blog post, in the technical post, they say: well, it's much more important for us to predict with a certain amount of false positives rather than false negatives. 110 00:16:11,590 --> 00:16:18,718 False negatives means missing an out of memory kill that actually occurred but did not get predicted. 111 00:16:18,874 --> 00:16:40,462 If you are a regular listener of this podcast, that statement should resonate with you, because this is exactly what happens, for example, in healthcare applications, which means that doctors or algorithms that operate in healthcare would definitely prefer to have a few more false positives rather than more false negatives. 112 00:16:40,486 --> 00:16:54,800 Because missing that someone is sick means that you are not providing a cure and you're just sending the patient home when he or she is sick, right? That's the false positive, it's the miss. 113 00:16:55,130 --> 00:16:57,618 Sorry, that's a false negative, it's the miss. 114 00:16:57,764 --> 00:17:09,486 But having a false positive, what can go wrong with having a false positive? Well, probably you will undergo another test to make sure that the first test is confirmed or not. 115 00:17:09,608 --> 00:17:16,018 So having a false positive in this case is relatively okay with respect to having a false negative. 
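The asymmetry between the two error types ultimately comes down to where you put the decision threshold of the predictor. A toy illustration, with invented scores and labels:

```python
# Illustrative sketch: lowering the decision threshold of a probabilistic
# OOM predictor trades false negatives (missed kills, the costly error
# here) for extra false positives (spurious alarms).

def fp_fn(scores, labels, threshold):
    """False positives and false negatives at a given decision threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

scores = [0.10, 0.30, 0.45, 0.60, 0.80, 0.90]  # predicted kill probability
labels = [0, 0, 1, 0, 1, 1]                    # 1 = an OOM kill followed

print(fp_fn(scores, labels, threshold=0.7))  # strict: misses the 0.45 kill
print(fp_fn(scores, labels, threshold=0.4))  # lenient: catches it, one extra alarm
```

At threshold 0.7 the result is (0, 1): no false alarms but one missed kill. At 0.4 it becomes (1, 0): one spurious alarm, no missed kills, which is the side of the trade-off the engineers say they prefer.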
116 00:17:16,054 --> 00:17:19,398 And that's exactly what happens to the Netflix application. 117 00:17:19,484 --> 00:17:32,094 Now, I don't want to say that the Netflix application is as critical as, for example, an application that predicts a cancer on an X-ray, or some disorder or disease of some sort. 118 00:17:32,252 --> 00:17:48,090 But what I'm saying is that there are some analogies when it comes to machine learning and artificial intelligence, and especially data science, the old school data science; there are several things that are, let's say, invariant across sectors. 119 00:17:48,410 --> 00:17:56,826 And so, you know, two worlds like media streaming or video streaming and healthcare are of course very different from each other. 120 00:17:56,888 --> 00:18:05,274 But when it comes to machine learning and data science applications, well, there are a lot of analogies there. 121 00:18:05,372 --> 00:18:06,202 And indeed. 122 00:18:06,286 --> 00:18:10,234 In terms of the models that they use at Netflix to predict. 123 00:18:10,342 --> 00:18:24,322 Once they have the sliding window data, and essentially they have the ground truth of where this out of memory kill happened and what happened before to the memory of the application or the machine. 124 00:18:24,466 --> 00:18:24,774 Well. 125 00:18:24,812 --> 00:18:30,514 Then the models they use to predict these events are artificial neural networks. 126 00:18:30,622 --> 00:18:31,714 XGBoost. 127 00:18:31,822 --> 00:18:36,742 AdaBoost, or adaptive boosting, Elastic Net with softmax, and so on and so forth. 128 00:18:36,766 --> 00:18:39,226 So nothing fancy. 129 00:18:39,418 --> 00:18:45,046 As you can see, XGBoost is probably one of the most used; I would have expected even random forest. 130 00:18:45,178 --> 00:18:47,120 Probably they do, they've tried that. 
131 00:18:47,810 --> 00:18:58,842 But XGBoost is probably one of the most used models in Kaggle competitions for a reason: because it works, and it leverages a lot 132 00:18:58,916 --> 00:19:04,880 the data preparation step, which already solves more than half of the problem. 133 00:19:05,810 --> 00:19:07,270 Thank you so much for listening. 134 00:19:07,330 --> 00:19:11,910 I also invite you, as always, to join the Discord channel. 135 00:19:12,020 --> 00:19:15,966 You will find a link on the official website, datascienceathome.com. 136 00:19:16,148 --> 00:19:17,600 Speak with you next time. 137 00:19:18,350 --> 00:19:21,382 You've been listening to the Data Science at Home podcast. 138 00:19:21,466 --> 00:19:26,050 Be sure to subscribe on iTunes, Stitcher, or Podbean to get new, fresh episodes. 139 00:19:26,110 --> 00:19:31,066 For more, please follow us on Instagram, Twitter and Facebook, or visit our website at datascienceathome.com   References https://netflixtechblog.com/formulating-out-of-memory-kill-prediction-on-the-netflix-app-as-a-machine-learning-problem-989599029109

API Intersection
Implementing “APIs as a Product” Strategy feat. Olivia Califano, Sr. Product Manager at Procore Technologies

API Intersection

Play Episode Listen Later Sep 15, 2022 25:42


This week on the API Intersection podcast, we spoke with Olivia Califano, Senior Product Manager, API & Developer Tools at Procore Technologies, a construction management software company. Procore is focused on connecting all the stakeholders in a construction project, from owners to general contractors. Olivia focuses on API governance and developer tools, ensuring that teams are building with consistency and following best practices and standards. She primarily focuses on internal tooling to support their larger API strategy and external tools such as their API reference documentation and SDKs to help developers onboard. To subscribe to the podcast, visit https://stoplight.io/podcast. Do you have a question you'd like answered, or a topic you want to see in a future episode? Let us know here: stoplight.io/question/

Windows Weekly (MP3)
WW 794: The Faffinator - Notifications rant, Bonnie Ross leaves, Goldeneye 007, listener questions!

Windows Weekly (MP3)

Play Episode Listen Later Sep 14, 2022 137:22 Very Popular


Windows 11 Last week, Build 25197 came to the Dev channel and brings back the tablet-optimized Taskbar This week, we got build 25201 with an expanded widgets view that's almost full-screen and ISOs.   Separately, Dev channel Insiders got two updates: the Calculator app is now ARM64 native, and the Media Player app got a new shortcut so you can edit the current video in Clipchamp.  Builds 22621.598 and 22622.598 headed to the Beta channel earlier this week, removing the ability to uninstall apps with dependencies. Surface  Google kills Pixelbook as part of a cost-cutting measure. Should Microsoft kill Surface? Consider: Google needed to establish Chromebook as a viable laptop alternative. Microsoft did not.  Microsoft Ignite You can now register for Microsoft Ignite, which is happening October 12-14. Dev You can now install .NET runtimes and SDKs using the Windows Package Manager Xbox The head of 343 Industries abruptly steps down. Coincidence? Sony chief: Microsoft lied about Activision Blizzard and Call of Duty Microsoft is experimenting with the Xbox Dashboard Discord voice chat is now available on Xbox consoles Goldeneye 007, the original console first-person shooter, is coming to Xbox Game Pass Tips and picks Tip of the week: Remix your Teams ringtone App pick of the week: Voxel Doom Enterprise pick of the week: Azure Space: The family is expanding Enterprise pick No. 2 of the week: PatchTuesday.com Beer pick of the week: Grimm Festooning Hosts: Leo Laporte, Mary Jo Foley, and Paul Thurrott Download or subscribe to this show at https://twit.tv/shows/windows-weekly Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Check out Paul's blog at thurrott.com Check out Mary Jo's blog at AllAboutMicrosoft.com The Windows Weekly theme music is courtesy of Carl Franklin. Sponsors: infrascale.com/TWIT CDW.com/LenovoClient ClickUp.com use code WINDOWS

The Bike Shed
354: The History of Computing

The Bike Shed

Play Episode Listen Later Sep 13, 2022 31:16


Why does the history of computing matter? Joël and Sara Jackson, Developer at thoughtbot, ponder this and share some cool stories (and trivia!!) behind the tools we use in the industry. This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website). Visit them for frictionless error monitoring and performance insight for your app stack. Sara on Twitter (https://twitter.com/csarajackson) UNIX philosophy (https://en.wikipedia.org/wiki/Unix_philosophy) Hillel Wayne on why we ask linked list questions (https://www.hillelwayne.com/post/linked-lists/) Transcript: JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. And today, I'm joined by fellow thoughtboter, Team Lead, and Developer Sara Jackson. SARA: Hello, happy to be here. JOËL: Together, we're here to share a little bit of what we've learned along the way. So, Sara, what's new in your world? SARA: Well, Joël, you might know that recently our team had a small get-together in Toronto. JOËL: And our team, for those who are not aware, is fully remote, distributed across multiple countries. So this was a chance to get together in person. SARA: Yes, correct. This was a chance for those on the Boost team to get together and work together as if we had a physical office. JOËL: Was this your first time meeting some members of the team? SARA: It was my second, for the most part. So I joined thoughtbot, but after thoughtbot had already gone remote. Fortunately, I was able to meet many other thoughtboters in May at our summit. JOËL: Had you worked at a remote company before coming to thoughtbot? SARA: Yes, I actually started working remotely in 2019, but even then, that wasn't my first time working remotely. I actually had a full year of internship in college that was remote. 
JOËL: So you were a pro at this long before the pandemic made us all try it out. SARA: I don't know about that, but I've certainly dealt with the idiosyncrasies that come with remote work for longer. JOËL: What do you think are some of the challenges of remote work as opposed to working in person in an office? SARA: I think definitely growing and maintaining a culture. When you're in an office, it's easy to create ad hoc conversations and have events that are small that build on the culture. But when you're remote, it has to be a lot more intentional. JOËL: That definitely rings true for me. One of the things that I really appreciated about in-person office culture was the serendipity that you have those sort of random meetings at the water cooler, those conversations, waiting for coffee with people who are not necessarily on the same team or the same project as you are. SARA: I also really miss being able to have lunch in person with folks where I can casually gripe about an issue I might be having, and almost certainly, someone would have the answer. Now, if I'm having an issue, I have to intentionally seek help. [chuckles] JOËL: One of the funny things that often happened, at least the office where I worked at, was that lunches would often devolve into taxonomy conversations. SARA: I wish I had been there for that. [laughter] JOËL: Well, we do have a taxonomy channel on Slack to somewhat continue that legacy. SARA: Do you have a favorite taxonomy lunch discussion that you recall? JOËL: I definitely got to the point where I hated the classifying a sandwich. That one has been way overdone. SARA: Absolutely. JOËL: There was an interesting one about motorcycles, and mopeds, and bicycles, and e-bikes, and trying to see how do you distinguish one from the other. Is it an electric motor? Is it the power of the engine that you have? Is it the size? SARA: My brain is already turning on those thoughts. I feel like I could get lost down that rabbit hole very easily. 
[laughter] JOËL: Maybe that should be like a special anniversary episode for The Bike Shed, just one long taxonomy ramble. SARA: Where we talk about bikes. JOËL: Ooh, that's so perfect. I love it. One thing that I really appreciated during our time in Toronto was that we actually got to have lunch in person again. SARA: Yeah, that was so wonderful. Having folks coming together that had maybe never worked together directly on clients just getting to sit down and talk about our day. JOËL: Yeah, and talk about maybe it's work-related, maybe it's not. There's a lot of power to having some amount of deeper interpersonal connection with your co-workers beyond just the we work on a project together. SARA: Yeah, it's like camaraderie beyond the shared mission of the company. It's the shared interpersonal mission, like you say. Did you have any in-person pairing sessions in Toronto? JOËL: I did. It was actually kind of serendipitous. Someone was stuck with a weird failing test because somehow the order factories were getting created in was not behaving in the expected way, and we paired on it, dug into it, found some weird thing with composite primary keys, and solved the issue. SARA: That's wonderful. I love that. I wonder if that interaction would have happened or gotten solved as quickly if we hadn't been in person. JOËL: I don't know about you, but I feel like I sometimes struggle to ask for help or ask for a pair more when I'm online. SARA: Yeah, I agree. It's easier to feel like you're not as big of an impediment when you're in person. You tap someone on the shoulder, "Hey, can you take a look at this?" JOËL: Especially when they're on the same team as you, they're sitting at the next desk over. I don't know; it just felt easier. Even though it's literally one button press to get Tuple to make a call, somehow, I feel like I'm interrupting more. SARA: To combat that, I've been trying to pair more frequently and consistently regardless of if I'm struggling with a problem. 
JOËL: Has that worked pretty well? SARA: It's been wonderful. The only downside has been pairing fatigue. JOËL: Pairing fatigue is real. SARA: But other than that, problems have gotten solved quickly. We've all learned something for those that I've paired with. It goes faster. JOËL: So it was really great that we had this experience of doing our daily work but co-located in person; we have these experiences of working together. What would you say has been one of the highlights for you of that time? SARA: 100% karaoke. JOËL: [laughs] SARA: Only two folks did not attend. Many of the folks that did attend told me they weren't going to sing, but they were just going to watch. By the end of the night, everyone had sung. We were there for nearly three and a half hours. [laughs] JOËL: It was a good time all around. SARA: I saw a different side to Chad. JOËL: [laughs] SARA: And everyone, honestly. Were there any musical choices that surprised you? JOËL: Not particularly. Karaoke is always fun when you have a group of people that you trust to be a little bit foolish in front of to put yourself out there. I really appreciated the style that we went for, where we have a private room for just the people who were there as opposed to a stage in a bar somewhere. I think that makes it a little bit more accessible to pick up the mic and try to sing a song. SARA: I agree. That style of karaoke is a lot more popular in Asia, having your private room. Sometimes you can find it in major cities. But I also prefer it for that reason. JOËL: One of my highlights of this trip was this very sort of serendipitous moment that happened. Someone was asking a question about the difference between a Mac and Linux operating systems. And then just an impromptu gathering happened. And you pulled up a chair, and you're like, gather around, everyone. In the beginning, there was Multics. It was amazing. SARA: I felt like some kind of historian or librarian coming out from the deep. 
Let me tell you about this random operating system knowledge that I have. [laughs] JOËL: The ancient lore. SARA: The ancient lore in the year 1969. JOËL: [laughs] And then yeah, we had a conversation walking the history of operating systems, and why we have macOS and Linux, and why they're different, and why Windows is a totally different kind of family there. SARA: Yeah, macOS and Linux are sort of like cousins coming from the same tree. JOËL: Is that because they're both related through Unix? SARA: Yes. Linux and macOS are both built based off of different versions of Unix. Over the years, there's almost like a family tree of these different Nix operating systems as they're called. JOËL: I've sometimes seen asterisk N-I-X. This is what you're referring to as Nix. SARA: Yes, where the asterisk is like the RegEx catch-all. JOËL: So this might be Unix. It might be Linux. It might be... SARA: Minix. JOËL: All of those. SARA: Do you know the origin of the name Unix? JOËL: I do not. SARA: It's kind of a fun trivia piece. So, in the beginning, there was Multics spelled M-U-L-T-I-C-S, standing for the Multiplexed Information and Computing Service. Dennis Ritchie and Ken Thompson of Bell Labs famous for the C programming language... JOËL: You may have heard of it. SARA: You may have heard of it maybe on a different podcast. They were employees at Bell Labs when Multics was being created. They felt that Multics was very bulky and heavy. It was trying to do too many things at once. It did have a few good concepts. So they developed their own smaller Unix originally, Unics, the Uniplexed Information and Computing Service, Uniplexed versus Multiplexed. We do one thing really well. JOËL: And that's the Unix philosophy. SARA: It absolutely is. The Unix philosophy developed out of the creation of Unix and C. Do you know the four main points? JOËL: No, is it small sharp tools? It's the main one I hear. SARA: Yes, that is the kind of quippy version that has come out for sure. 
JOËL: But there is a formal four-point manifesto. SARA: I believe it's evolved over the years. But it's interesting looking at the Unix philosophy and seeing how relevant it is today in web development. The four points being make each program do one thing well. To this end, don't add features; make a new program. I feel like we have this a lot in encapsulation. JOËL: Hmm, maybe even the open-closed principle. SARA: Absolutely. JOËL: Similar idea. SARA: Another part of the philosophy is expecting output of your program to become input of another program that is yet unknown. The key being don't clutter your output; don't have extraneous text. This feels very similar to how we develop APIs. JOËL: With a focus on composability. SARA: Absolutely. Being able to chain commands together like you see in Ruby all the time. JOËL: I love being able to do this, for example, the enumerable API in Ruby and just being able to chain all these methods together to just very nicely do some pretty big transformations on an array or some other data structure. SARA: 100% agree there. That ability almost certainly came out of following the tenets of this philosophy, maybe not knowingly so but maybe knowingly so. [chuckles] JOËL: So is that three or four? SARA: So that was two. The third being what we know as agile. JOËL: Really? SARA: Yeah, right? The '70s brought us agile. Design and build software to be tried early, and don't hesitate to throw away clumsy parts and rebuild. JOËL: Hmmm. SARA: Even in those days, despite waterfall style still coming on the horizon. It was known for those writing software that it was important to iterate quickly. JOËL: Wow, I would never have known. SARA: It's neat having this history available to us. It's sort of like a lens at where we came from. 
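The enumerable chaining Joël describes can be sketched in a few lines of Ruby (the data here is invented for illustration; the methods are core Enumerable API):

```ruby
# Each enumerable method does one small thing and returns a new
# collection, so transformations compose like a Unix pipeline.
doubled_evens = [3, 1, 4, 1, 5, 9, 2, 6]
  .select(&:even?)    # keep only the even numbers -> [4, 2, 6]
  .map { |n| n * 10 } # transform each element     -> [40, 20, 60]
  .sort               # order the results          -> [20, 40, 60]
```

Each step mirrors the philosophy's first two tenets: a small single-purpose operation whose output becomes the input of the next.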
Another piece of this history that might seem like a more modern concept but was a very big part of the movement in the '70s and the '80s was using tools rather than unskilled help or trying to struggle through something yourself when you're lightening a programming task. We see this all the time at thoughtbot. Folks do this many times there is an issue on a client code. We are able to generalize the solution, extract into a tool that can then be reused. JOËL: So that's the same kind of genesis as a lot of thoughtbot's open-source gems, so I'm thinking of FactoryBot, Clearance, Paperclip, the old-timey file upload gem, Suspenders, the Rails app generator, and the list goes on. SARA: I love that in this last point of the Unix philosophy, they specifically call out that you should create a new tool, even if it means detouring, even if it means throwing the tools out later. JOËL: What impact do you think that has had on the way that tooling in the Unix, or maybe I should say *Nix, ecosystem has developed? SARA: It was a major aspect of the Nix environment community because Unix was available, not free, but very inexpensively to educational institutions. And because of how lightweight it was and its focus on single-use programs, programs that were designed to do one thing, and also the way the shell was allowing you to use commands directly and having it be the same language as the shell scripting language, users, students, amateurs, and I say that in a loving way, were able to create their own tools very quickly. It was almost like a renaissance of Homebrew. JOËL: Not Homebrew as in the macOS package manager. SARA: [laughs] And also not Homebrew as in the alcoholic beverage. JOËL: [laughs] So, this kind of history is fun trivia to know. Is it really something valuable for us as a jobbing developer in 2022? SARA: I would say it's a difficult question. 
If you are someone that doesn't dive into the why of something, especially when something goes wrong, maybe it wouldn't be important or useful. But what sparked the conversation in Toronto was trying to determine why we as thoughtbot tend to prefer using Macs to develop on versus Linux or Windows. There is a reason, and the reason is in the history. Knowing that can clarify decisions and can give meaning where it feels like an arbitrary decision. JOËL: Right. We're not just picking Macs because they're shiny. SARA: They are certainly shiny. And the first thing I did was to put a matte case on it. JOËL: [laughs] So no shiny in your office. SARA: If there were too many shiny things in my office, boy, I would never get work done. The cats would be all over me. MID-ROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers, that can actually help cut your debugging time in half. So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. 
Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today! JOËL: So we've talked a little bit about Unix or *Nix, this evolution of systems. I've also heard the term POSIX thrown around when talking about things that seem to encompass both macOS and Linux. How does that fit into this history? SARA: POSIX is sort of an umbrella of standards around operating systems that was based on Unix and the things that were standard in Unix. It stands for the Portable Operating System Interface. This allowed for compatibility between OSs, very similar to USB being the standard for peripherals. JOËL: So, if I was implementing my own Unix-like operating system in the '80s, I would try to conform to the POSIX standard. SARA: Absolutely. Now, not every Nix operating system is POSIX-compliant, but most are or at least 90% of the way there. JOËL: Are any of the big ones that people tend to think about not compliant? SARA: A major player in the operating system space that is not generally considered POSIX-compliant is Microsoft Windows. JOËL: [laughs] It doesn't even try to be Unix-like, right? It's just its own thing, SARA: It is completely its own thing. I don't think it even has a standard necessarily that it conforms to. JOËL: It is its own standard, its own branch of the family tree. SARA: And that's what happens when your operating system is very proprietary. This has caused folks pain, I'm sure, in the past that may have tried to develop software on their computers using languages that are more readily compatible with POSIX operating systems. 
JOËL: So would you say that a language like Ruby is more compatible with one of the POSIX-compatible operating systems? SARA: 100% yes. In fact, to even use Ruby as a development tool in Windows, prior to Windows 10, you needed an additional tool. You needed something like Cygwin or MinGW, which were POSIX-compliant programs, almost like a shell inside your Windows computer, that would allow you to run those commands. JOËL: Really? For some reason, I thought that they had some executables that you could run just on Windows by itself. SARA: Now they do, fortunately, to the benefit of Ruby developers everywhere. As of Windows 10, we now have WSL, the Windows Subsystem for Linux that's built-in. You don't have to worry about installing or configuring some third-party software. JOËL: I guess that kind of almost cheats by just having a POSIX system embedded in your non-POSIX system. SARA: It does feel like a cheat, but I think it was born out of demand. The Windows NT kernel, for example, is mostly POSIX-compliant. JOËL: Really? SARA: As a result of it being used primarily for servers. JOËL: So you mentioned that the Ruby and Rails ecosystem tends to run better and much more frequently on the various Nix systems. Did it have to be that way? Or is it just kind of an accident of history that we happen to end up with Ruby and Rails in this ecosystem, but just as easily, it could have evolved in the Windows world? SARA: I think it is an amalgam of things. For example, Unix and Nix operating systems being developed earlier, being widely spread due to being license-free oftentimes, and being widely used in the education space. Also, because it is so lightweight, it is the operating system of choice. For most servers in the world, they're running some form of Unix, Linux, or macOS. JOËL: I don't think I've ever seen a server that runs macOS; I've exclusively seen it on dev machines. 
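One place this Windows-versus-POSIX split surfaces in Ruby itself is platform detection. As a rough sketch (the case branches here are illustrative, but RbConfig is the real stdlib module), gems often inspect the host OS string to special-case Windows:

```ruby
require 'rbconfig'

# RbConfig::CONFIG['host_os'] names the platform this Ruby was built for.
host_os = RbConfig::CONFIG['host_os']

os_family =
  case host_os
  when /mswin|mingw|cygwin/ then :windows # non-POSIX (or a POSIX shim)
  when /darwin/             then :macos   # certified Unix, POSIX-compliant
  when /linux/              then :linux   # mostly POSIX-compliant
  else                           :other
  end
```

On any of the Nix family the code paths converge; it's the Windows branch that historically needed shims like Cygwin or MinGW.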
SARA: If you go to an animation company, they have server farms of macOS machines because they're really good at rendering. This might not be the case anymore, but it was at one point. JOËL: That's a whole other world that I've not interacted with a whole lot. SARA: [chuckles] JOËL: It's a fun intersection between software, and design, and storytelling. That is an important part for the software field. SARA: Yeah, it's definitely an aspect that deserves its own deep dive of sorts. If you have a server that's running a Windows-based operating system like NT and you have a website or a program that's designed to be served under a Unix-based server, it can easily be hosted on the Windows server; it's not an issue. The reverse is not true. JOËL: Oh. SARA: And this is why programming on a Nix system is the better choice. JOËL: It's more broadly compatible. SARA: Absolutely. Significantly more compatible with more things. JOËL: So today, when I develop, a lot of the tooling that I use is open source. The open-source movement has created a lot of the languages that we know and love, including Ruby, including Rails. Do you think there's some connection between a lot of that tooling being open source and maybe some of the Unix family of operating systems and movements that came out of that branch of the operating system family tree? SARA: I think that there is a lot of tie-in with today's open-source culture and the computing history that we've been talking about, for example, people finding something that they dislike about the tools that are available and then rolling their own. That's what Ken Thompson and Dennis Ritchie did. Unix was not an official Bell development. It was a side project for them. JOËL: I love that. SARA: You see this happen a lot in the software world where a program gets shared widely, and due to this, it gains traction and gains buy-in from the community. 
If your software is easily accessible to students, folks that are learning, and breaking things, and rebuilding, and trying, and inventing, it's going to persist. And we saw that with Unix. JOËL: I feel like this background on where a lot of these operating systems came but then also the ecosystems, the values that evolved with them has given me a deeper appreciation of the tooling, the systems that we work with today. Are there any other advantages, do you think, to trying to learn a little bit of computing history? SARA: I think the main benefit that I mentioned before of if you're a person that wants to know why, then there is a great benefit in knowing some of these details. That being said, you don't need to deep dive or read multiple books or write papers on it. You can get enough information from reading or skimming some Wikipedia pages. But it's interesting to know where we came from and how it still affects us today. Ruby was written in C, for example. Unix was written in C as well, originally Assembly Language, but it got rewritten in C. And understanding the underlying tooling that goes into that that when things go wrong, you know where to look. JOËL: I guess that that is the next question is where do you look if you're kind of interested? Is Wikipedia good enough? You just sort of look up operating system, and it tells you where to go? Or do you have other sources you like to search for or start pulling at those threads to understand history? SARA: That's a great question. And Wikipedia is a wonderful starting point for sure. It has a lot of the abbreviated history and links to better references. I don't have them off the top of my head. So I will find them for you for the show notes. But there are some old esoteric websites with some of this history more thoroughly documented by the people that lived it. JOËL: I feel like those websites always end up being in HTML 2; your very basic text, horizontal rules, no CSS. SARA: Mm-hmm. 
And those are the sites that have many wonderful kernels of knowledge. JOËL: Uh-huh! Great pun. SARA: [chuckles] Thank you. JOËL: Do you read any content by Hillel Wayne? SARA: I have not. JOËL: So Hillel produces a lot of deep dives into computing history, oftentimes trying to answer very particular questions such as when and why did we start using reversing a linked list as the canonical interview question? And there are often urban legends around like, oh, it's because of this. And then Hillel will do some research and go through actual archives of messages on message boards or...what is that protocol? SARA: BBS. JOËL: Yes. And then find the real answer, like, do actual historical methodology, and I love that. SARA: I had not heard of this before. I don't know how. And that is all I'm going to be doing this weekend is reading these. That kind of history speaks to my heart. I have a random fun fact along those lines that I wanted to bring to the show, which was that the echo command that we know and love in the terminal was first introduced by the Multics operating system. JOËL: Wow. So that's like the most common piece of Multics that as an everyday user of a modern operating system that we would still touch a little bit of that history every day when we work. SARA: Yeah, it's one of those things that we don't think about too much. Where did it come from? How long has it been around? I'm sure the implementation today is very different. But it's like etymology, and like taxonomy, pulling those threads. JOËL: Two fantastic topics. On that wonderful little nugget of knowledge, let's wrap up. Sara, where can people find you online? SARA: You can find me on Twitter at @csarajackson. JOËL: And we will include a link to that in the show notes. SARA: Thank you so much for having me on the show and letting me nerd out about operating system history. JOËL: It's been a pleasure. The show notes for this episode can be found at bikeshed.fm. 
This show is produced and edited by Mandy Moore. If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes. It really helps other folks find the show. If you have any feedback, you can reach us at @_bikeshed or reach me @joelquen on Twitter or at hosts@bikeshed.fm via email. Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeeee!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

Quarkus Insights
Quarkus Insights #102: Quarkiverse Extension Spotlight: Operator SDK

Quarkus Insights

Play Episode Listen Later Sep 12, 2022 58:19


Chris Laprun & Attila Mészáros join us for "Quarkiverse Extension Spotlight: Operator SDK". Get an introduction to Kubernetes operators written in Java and how the extension brings them to Quarkus.

Windows Weekly (MP3)
WW 793: AMD's Circular Slide Rule - Mica Alt, USB4 Version 2.0, Teams Rooms Pro, Halo Infinite Co-op

Windows Weekly (MP3)

Play Episode Listen Later Sep 7, 2022 118:34 Very Popular


Mica Alt, USB4 Version 2.0, Teams Rooms Pro, Halo Infinite Co-op

Windows 11
Windows 11 22H2 inches towards release with SDK, WDK/EWDK releases
Windows 11 to get new visual effect
Xbox subscription info coming to Settings in Windows 11

PCs/hardware
PC sales will fall in 2022 and 2023 now
AMD announces new naming convention for PC mobile chips
Lenovo announces 16-inch ThinkPad X1 Fold
Here comes USB4 Version 2.0
Army to get its first HoloLens delivery

Microsoft 365
Microsoft splits Teams Rooms into free/paid tiers
Microsoft is killing its Scheduler meeting coordination service

Xbox
UK CMA complains about Activision Blizzard acquisition, Microsoft makes concessions
Microsoft announces Elite Controller Core ... which explains the white version rumors
Good news/bad news for Halo Infinite
Xbox Game Pass Friends & Family is real. Just not here.
Here are the first Game Pass titles for September
Xbox party chat gets AI-based noise reduction

Tips & Picks
Tip of the week: Customize your Windows 11 privacy settings
App pick of the week: PowerToys 0.62
Enterprise pick of the week: Microsoft Stream mobile app gets a do-over
Enterprise pick No. 2 of the week: Microsoft eCDN
Beer pick of the week: Apple2 (squared)

Hosts: Leo Laporte, Mary Jo Foley, and Paul Thurrott
Download or subscribe to this show at https://twit.tv/shows/windows-weekly
Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit
Check out Paul's blog at thurrott.com
Check out Mary Jo's blog at AllAboutMicrosoft.com
The Windows Weekly theme music is courtesy of Carl Franklin.
Sponsors: canary.tools/twit - use code: TWIT UnifyMeeting.com code WW tanium.com/twit


Syntax - Tasty Web Development Treats
Potluck - How to Pick a Tech Stack × useEffect × setTimeout × Staying Focused

Syntax - Tasty Web Development Treats

Play Episode Listen Later Sep 7, 2022 62:45 Very Popular


In this episode of Syntax, Wes and Scott answer your questions about picking the right tech stack, whether useEffect is still useful, whether there's any benefit to using setTimeout, and more!

Linode - Sponsor: Whether you're working on a personal project or managing enterprise infrastructure, you deserve simple, affordable, and accessible cloud computing solutions that allow you to take your project to the next level. Simplify your cloud infrastructure with Linode's Linux virtual machines and develop, deploy, and scale your modern applications faster and easier. Get started on Linode today with $100 in free credit for listeners of Syntax. You can find all the details at linode.com/syntax. Linode has 11 global data centers and provides 24/7/365 human support with no tiers or hand-offs regardless of your plan size. In addition to shared and dedicated compute instances, you can use your $100 in credit on S3-compatible object storage, Managed Kubernetes, and more. Visit linode.com/syntax and click on the “Create Free Account” button to get started.

LogRocket - Sponsor: LogRocket lets you replay what users do on your site, helping you reproduce bugs and fix issues faster. It's an exception tracker, a session re-player, and a performance monitor. Get 14 days free at logrocket.com/syntax.

Auth0 - Sponsor: Auth0 is the easiest way for developers to add authentication and secure their applications. It provides features like user management and multi-factor authentication, and you can even enable users to log in with device biometrics like their fingerprint. Not to mention, Auth0 has SDKs for your favorite frameworks like React, Next.js, and Node/Express. Make sure to sign up for a free account and give Auth0 a try with the link below. a0.to/syntax

Show Notes
00:23 Welcome
02:39 What's the best way of comparing the efficiency of object literals created from a factory function vs objects created by new'ing a class? Perf.link
06:54 How can I always see the full signature in VS Code?
10:40 What's your process for picking a stack when starting a project?
14:41 Sponsor: Linode
15:23 Is snapshot testing really worth it? TS QuickFixes
20:54 What are your thoughts on ISR Incremental Static Regeneration?
25:20 Is useEffect public enemy #1? Goodbye, useEffect: David Khourshid
29:02 Sponsor: LogRocket
30:17 Is there any benefit to using setTimeout instead of setInterval? MongoDB Prisma
37:13 HTML to PDF: a great solution I use is gotenberg.dev
40:12 Although async/await might make for code that is easier to grok, I find it worse for chaining functions. Pipeline Operator proposal
45:07 How do you guys stay focused for meaningful periods of time?
48:36 How should code formatters be configured and combined? Prettier ES Lint Editor Config No-Sweat™ Eslint and Prettier Setup
51:56 What's your opinion on the latest SvelteKit changes with load, file based routing, and more? Major Svelte Kit API Change - Fixing load, and tightening up SvelteKit's design before 1.0 Astro Nano Store
55:53 Sponsor: Auth0
56:47 SIIIIICK ××× PIIIICKS ×××
