Podcasts about pragmatic programmers

  • 30 podcasts
  • 42 episodes
  • 50m average duration
  • Infrequent episodes
  • Latest: Dec 17, 2024

POPULARITY

[Popularity chart, 2017–2024]



Latest podcast episodes about pragmatic programmers

República Web
Deploying and scaling Elixir applications with Ellie Fairholm and Josep Giralt D'Lacoste

Dec 17, 2024 · 67:00


For this episode of the podcast, Andros Fenollosa has the good fortune to be joined by Ellie Fairholm and Josep Giralt D'Lacoste, authors of Engineering Elixir Applications, recently published by The Pragmatic Programmers. Besides being genuine enthusiasts of the Elixir language, Ellie and Josep are behind the software consultancy BeamOps, which specializes in simplifying BEAM projects. In the Elixir ecosystem, BEAM projects are tied directly to the BEAM virtual machine, the foundation on which the Erlang and Elixir programming languages run. Their book is aimed at advanced beginners and intermediate programmers who are familiar with Elixir but feel stuck when it comes to deploying and scaling their applications. They present the BEAMOps paradigm (a more specialized take on DevOps) focused on the development of BEAM applications; its core principles include environment integrity, scalability, and infrastructure as code. It is undoubtedly an ambitious book, but one the authors have worked to make accessible to anyone who develops applications in Elixir. Their goal is for you to take charge of everything needed to deploy and scale Elixir applications in a modern development environment, taking advantage of the best tools to achieve maximum efficiency for you or your development team. The authors have shared a promotional code with us (valid starting 22/12/2024) to purchase the book on The Pragmatic Programmers website with the code engineer2024. Since the book is in pre-release and print sales are limited to the North American market, we recommend the digital edition, which with the promotion comes down to a very attractive price.

GOTO - Today, Tomorrow and the Future
Effective Haskell • Rebecca Skinner & Emily Pillmore

Apr 5, 2024 · 44:24 · Transcription available


This interview was recorded for the GOTO Book Club: http://gotopia.tech/bookclub. Read the full transcription of the interview here.
Rebecca Skinner - Author of "Effective Haskell", Lead Software Engineer at Mercury & Member of the Haskell.org Committee
Emily Pillmore - Head of Core Engineering at Kadena & Board Member of the Haskell Foundation
RESOURCES
Rebecca
https://twitter.com/cercerilla
https://www.linkedin.com/in/

Hacking Postgres
S2E1: Andrew Atkinson, Software Engineer and Author

Apr 4, 2024 · 35:32


Andrew Atkinson is a software engineer who specializes in building high-performance web applications using PostgreSQL and Ruby on Rails. He wrote the book "High Performance PostgreSQL for Rails", published by Pragmatic Programmers in 2024.
Our discussion with Andrew spans the technical challenges of sharding and the concurrent evolution of Rails and Postgres. We pay homage to influential resources like RailsCasts, debate Rails' database tooling limitations, and discover tips from Andrew's new book.
In this episode we explore:
  • Why newer developers favor Postgres over MySQL
  • How Postgres might become a multi-primary database in the future
  • The complexities of database decisions in a Rails environment
  • Postgres innovations, such as composite primary keys and common table expressions, supported by Active Record – the ORM for Ruby on Rails
  • Key insights from writing "High Performance PostgreSQL for Rails"
Links mentioned:
  • Andrew Atkinson on LinkedIn
  • Andrew's Blog
  • Newsletter
  • PGCasts
  • RailsCasts
  • "High Performance PostgreSQL for Rails" by Andrew Atkinson
  • GitHub: rideshare
  • Postgres FM
  • Andrew Atkinson's interview on Postgres FM
  • Andrew Atkinson's interview on Remote Ruby
  • Remote Ruby Podcast
  • GitHub doc: clarify logical decoding's deadlock of system tables
  • GitHub doc: fix grammatical errors for enable_partitionwise_aggregate
  • GitHub: Convert README to Markdown

GOTO - Today, Tomorrow and the Future
Become an Effective Software Engineering Manager • James Stanier & Gergely Orosz

Mar 29, 2024 · 46:48 · Transcription available


This interview was recorded for the GOTO Book Club: http://gotopia.tech/bookclub. Read the full transcription of the interview here.
James Stanier - Director of Engineering at Shopify & Author of "Become an Effective Software Engineering Manager"
Gergely Orosz - Writer of The Pragmatic Engineer & Author of "The Software Engineer's Guidebook"
RESOURCES
James
https://twitter.com/jstanier
https://www.linkedin.com/in/jstanier
https://github.com/jstanier
https://www.theengineeringmanager.com
https://www.theengineeringmanager.com/management-101/contracting
Gergely
https://twitter.com/gergelyorosz
https://www.linkedin.com/in/gergelyorosz
https://www.pragmaticengineer.com
https://github.com/gergelyorosz
DESCRIPTION
Software startups make global headlines every day. As technology companies succeed and grow, so do their engineering departments. In your career, you may suddenly get the opportunity to lead teams: to become a manager. But this is often uncharted territory. How do you decide whether this career move is right for you? And if you do, what do you need to learn to succeed? Where do you start? How do you know that you're doing it right? What does "it" even mean? And isn't management a dirty word? This book will share the secrets you need to know to manage engineers successfully.
* Book description: © Pragmatic Programmers
RECOMMENDED BOOKS
James Stanier • Become an Effective Software Engineering Manager
James Stanier • Effective Remote Work
Gergely Orosz • The Software Engineer's Guidebook
Gergely Orosz • Building Mobile Apps at Scale
David Farley • Modern Software Engineering
William B. Irvine • A Guide to the Good Life
Twitter • Instagram • LinkedIn • Facebook
Looking for a unique learning experience? Attend the next GOTO conference near you! Get your ticket: gotopia.tech
SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted daily!

GOTO - Today, Tomorrow and the Future
Programming Phoenix LiveView • Sophie DeBenedetto, Bruce Tate & Steven Nunez

Mar 8, 2024 · 43:25 · Transcription available


This interview was recorded for the GOTO Book Club: http://gotopia.tech/bookclub. Read the full transcription of the interview here.
Sophie DeBenedetto - Staff Software Engineer at GitHub & Co-Author of "Programming Phoenix LiveView"
Bruce Tate - President at Groxio & Co-Author of "Programming Phoenix LiveView" & many more books
Steven Nunez - Staff Software Engineer at GitHub
RESOURCES
Sophie
http://sophiedebenedetto.nyc
https://twitter.com/sm_debenedetto
https://linkedin.com/in/sophiedebenedetto
https://github.com/SophieDeBenedetto
Bruce
https://grox.io
http://twitter.com/redrapids
https://www.linkedin.com/in/bruce-tate
Steven
http://hostiledeveloper.com
https://www.linkedin.com/in/steven-nunez-6947817
http://twitter.com/_StevenNunez
https://github.com/octosteve
https://www.twitch.tv/octosteve
https://genserver.social/Steven
DESCRIPTION
The days of the traditional request-response web application are long gone, but you don't have to wade through oceans of JavaScript to build the interactive applications today's users crave. The innovative Phoenix LiveView library empowers you to build applications that are fast and highly interactive, without sacrificing reliability. This definitive guide to LiveView isn't a reference manual. Learn to think in LiveView. Write your code layer by layer, the way the experts do. Explore techniques with experienced teachers to get the best possible performance.
* Book description: © Pragmatic Programmers
The interview is based on the book "Programming Phoenix LiveView"
RECOMMENDED BOOKS
Sophie DeBenedetto & Bruce Tate • Programming Phoenix LiveView
Sean Moriarity • Genetic Algorithms in Elixir
Sean Moriarity • Machine Learning in Elixir
Bruce Tate • Programmer Passport: Elixir
Bruce Tate • Programmer Passport: Prolog
Bruce Tate, Ian Dees, Frederic Daoud & Jack Moffitt • Seven More Languages in Seven Weeks
Bruce Tate • Seven Languages in Seven Weeks
Twitter • Instagram • LinkedIn • Facebook
Looking for a unique learning experience? Attend the next GOTO conference near you! Get your ticket: gotopia.tech
SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted daily!

Arguing Agile Podcast
AA137 - Estimation is Evil by Ron Jeffries

Nov 8, 2023 · 54:08 · Transcription available


We're reviewing Ron Jeffries' article "Estimation is Evil" from the February 2013 issue of the Pragmatic Programmers magazine. This article is mostly known for being the source most people quote when they say the inventor of story points is sorry he created them, or that the inventor of story points doesn't like story points. We dig into this claim, reading the article, summarizing it, and pontificating on key points. Join Enterprise Agility Coach Om Patel and Product Manager Brian Orlando as they time-travel back to the not-so-good old days of 2013 (and before)!
The source article: https://ronjeffries.com/articles/021-01ff/estimation-is-evil/
0:00 Topic Intro: Estimation is Evil by Ron Jeffries
1:14 Article Opening: A Historical Snapshot
7:07 Undistinguished Teams
8:45 "All Our Requirements" is Wrong
11:06 80/20 Rule in Requirements
14:11 Forcing the Answer
16:05 Contracts in Agile
18:11 Forecasting & Stretch Goals
24:52 Asking for "Better" Estimates
28:28 Financing Software
31:41 How Much Do You Want to Spend?
35:24 Relative Estimation (the Soundbite Section)
40:17 Two Ways to Go
44:58 Article Conclusion
52:33 Summary
54:02 Wrap-Up
Watch it on YouTube. Please subscribe to our YouTube channel: https://www.youtube.com/@arguingagile
Apple Podcasts: https://podcasts.apple.com/us/podcast/agile-podcast/id1568557596
Spotify: https://open.spotify.com/show/362QvYORmtZRKAeTAE57v3
Amazon Music: https://music.amazon.com/podcasts/ee3506fc-38f2-46d1-a301-79681c55ed82/Agile-Podcast

GOTO - Today, Tomorrow and the Future
Genetic Algorithms in Elixir • Sean Moriarity & Bruce Tate

Oct 20, 2023 · 41:42 · Transcription available


This interview was recorded for the GOTO Book Club: gotopia.tech/bookclub. Read the full transcription of the interview here.
Sean Moriarity - Author of "Genetic Algorithms in Elixir" & "Machine Learning in Elixir"
Bruce Tate - President at Groxio & Author of many books
RESOURCES
Sean
seanmoriarity.com
@sean_moriarity
github.com/seanmor5
Bruce
grox.io
@redrapids
linkedin.com/in/bruce-tate
DESCRIPTION
From finance to artificial intelligence, genetic algorithms are a powerful tool with a wide array of applications. But you don't need an exotic new language or framework to get started; you can learn about genetic algorithms in a language you're already familiar with. Join us for an in-depth look at the algorithms, techniques, and methods that go into writing a genetic algorithm. From introductory problems to real-world applications, you'll learn the underlying principles of problem solving using genetic algorithms.
* Book description: © The Pragmatic Bookshelf
The interview is based on the book "Genetic Algorithms in Elixir"
RECOMMENDED BOOKS
Sean Moriarity • Genetic Algorithms in Elixir
Sean Moriarity • Machine Learning in Elixir
Bruce Tate • Programmer Passport: Elixir
Bruce Tate • Programmer Passport: Prolog
Bruce Tate, Ian Dees, Frederic Daoud & Jack Moffitt • Seven More Languages in Seven Weeks
Bruce Tate • Seven Languages in Seven Weeks
Svilen Gospodinov • Concurrent Data Processing in Elixir
Ian Goodfellow, Yoshua Bengio & Aaron Courville • Deep Learning
Francois Chollet • Deep Learning with Python
Twitter • Instagram • LinkedIn • Facebook
Looking for a unique learning experience? Attend the next GOTO conference near you! Get your ticket: gotopia.tech
SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted almost daily

The Technically Human Podcast
Designing Data Governance

Sep 8, 2023 · 50:30


In this episode of the show, I continue my deep dive into data, human values, and governance with an interview featuring Lauren Maffeo. We talk about the future of data governance, its possibilities, and the catastrophe that Lauren thinks our society may need to experience in order to turn the corner on data governance and ethics. Lauren Maffeo is an award-winning designer and analyst who currently works as a service designer at Steampunk, a human-centered design firm serving the federal government. She is also a founding editor of Springer's AI and Ethics journal and an adjunct lecturer in Interaction Design at The George Washington University. Her first book, Designing Data Governance from the Ground Up, is available from The Pragmatic Programmers. Lauren has written for Harvard Data Science Review, Financial Times, and The Guardian, among other publications. She is a fellow of the Royal Society of Arts, a former member of the Association for Computing Machinery's Distinguished Speakers Program, and a member of the International Academy of Digital Arts and Sciences, where she helps judge the Webby Awards.

Giant Robots Smashing Into Other Giant Robots
475: Designing Data Governance From the Ground Up with Lauren Maffeo

May 18, 2023 · 48:45


Lauren Maffeo is the author of Designing Data Governance from the Ground Up. Victoria talks to Lauren about human-centered design work, data stewardship and governance, and writing a book anybody can use regardless of industry or team size. Designing Data Governance from the Ground Up (https://www.amazon.com/Designing-Data-Governance-Ground-Data-Driven/dp/1680509802) Follow Lauren Maffeo on LinkedIn (https://www.linkedin.com/in/laurenmaffeo/) or Twitter (https://twitter.com/LaurenMaffeo). Follow thoughtbot on Twitter (https://twitter.com/thoughtbot) or LinkedIn (https://www.linkedin.com/company/150727/). Become a Sponsor (https://thoughtbot.com/sponsorship) of Giant Robots! Transcript: VICTORIA: Hey there. It's your host Victoria. And I'm here today with Dawn Delatte and Jordyn Bonds from our Ignite team. We are thrilled to announce the summer 2023 session of our new incubator program. If you have a business idea that involves a web or mobile app, we encourage you to apply for our 8-week program. We'll help you validate the market opportunity, experiment with messaging and product ideas, and move forward with confidence towards an MVP. Learn more and apply at tbot.io/incubator. Dawn and Jordyn, thank you for joining and sharing the news with me today. JORDYN: Thanks for having us. DAWN: Yeah, glad to be here. VICTORIA: So, tell me a little bit more about the incubator program. This will be your second session, right? JORDYN: Indeed. We are just now wrapping up the first session. We had a really great 8 weeks, and we're excited to do it again. VICTORIA: Wonderful. And I think we're going to have the person from your program on a Giant Robots episode soon. JORDYN: Wonderful. VICTORIA: Maybe you can give us a little preview. What were some of your main takeaways from this first round? JORDYN: You know, as ever with early-stage work, it's about identifying your best early adopter market and user persona, and then learning as much as you possibly can about them to inform a roadmap to a product. VICTORIA: What made you decide to start this incubator program this year with thoughtbot? DAWN: We had been doing work with early-stage products and founders, as well as some innovation leads or research and development leads in existing organizations. We had been applying a lot of these processes, like the customer discovery process, Product Design Sprint process to validate new product ideas. And we've been doing that for a really long time. And we've also been noodling on this idea of exploring how we might offer value even sooner to clients that are maybe pre-software product idea. Like many of the initiatives at thoughtbot, it was a little bit experimental for us. We decided to sort of dig into better understanding that market, and seeing how the expertise that we had could be applied in the earlier stage. It's also been a great opportunity for our team to learn and grow. We had Jordyn join our team as Director of Product Strategy. Their experience with having worked at startups and being an early-stage startup founder has been so wonderful for our team to engage with and learn from. And we've been able to offer that value to clients as well. VICTORIA: I love that. So it's for people who have identified a problem, and they think they can come up with a software solution. But they're not quite at the point of being ready to actually build something yet. Is that right? DAWN: Yeah. We've always championed the idea of doing your due diligence around validating the right thing to build. 
And so that's been a part of the process at thoughtbot for a really long time. But it's always been sort of in the context of building your MVP. So this is going slightly earlier with that idea and saying, what's the next right step for this business? It's really about understanding if there is a market and product opportunity, and then moving into exploring what that opportunity looks like. And then validating that and doing that through user research, and talking to customers, and applying early product and business strategy thinking to the process. VICTORIA: Great. So that probably sets you up for really building the right thing, keeping your overall investment costs lower because you're not wasting time building the wrong thing. And setting you up for that due diligence when you go to investors to say, here's how well I vetted out my idea. Here's the rigor that I applied to building the MVP. JORDYN: Exactly. It's not just about convincing external stakeholders, so that's a key part. You know, maybe it's investors, maybe it's new team members you're looking to hire after the program. It could be anyone. But it's also about convincing yourself. Really, walking down the path of pursuing a startup is not a small undertaking. And we just want to make sure folks are starting with their best foot forward. You know, like Dawn said, let's build the right thing. Let's figure out what that thing is, and then we can think about how to build it right. That's a little quote from a book I really enjoy, by the way. I cannot take credit for that. [laughs] There's this really great book about early-stage validation called The Right It by Alberto Savoia. He was an engineer at Google, started a couple of startups himself, failed in some ways, failed to validate a market opportunity before marching off into building something. And the pain of that caused him to write this book about how to quickly and cheaply validate some market opportunity, market assumptions you might have when you're first starting out. The way he frames that is let's figure out if it's the right it before we build it right. And I just love that book, and I love that framing. You know, if you don't have a market for what you're building, or if they don't understand that they have the pain point you're solving for, it doesn't matter what you build. You got to do that first. And that's really what the focus of this incubator program is. It's that phase of work. Is there a there there? Is there something worth the hard, arduous path of building some software? Is there something there worth walking that path for before you start walking it? VICTORIA: Right. I love that. Well, thank you both so much for coming on and sharing a little bit more about the program. I'm super excited to see what comes out of the first round, and then who gets selected for the second round. So I'm happy to help promote. Any other final takeaways for our listeners today? DAWN: If this sounds intriguing to you, maybe you're at the stage where you're thinking about this process, I definitely encourage people to follow along. We're trying to share as much as we can about this process and this journey for us and our founders. So you can follow along on our blog, on LinkedIn. We're doing a LinkedIn live weekly with the founder in the program. We'll continue to do that with the next founders. And we're really trying to build a community and extend the community, you know, that thoughtbot has built with early-stage founders, so please join us. We'd love to have you. 
VICTORIA: Wonderful. That's amazing. Thank you both so much. INTRO MUSIC: VICTORIA: This is the Giant Robots Smashing Into Other Giant Robots Podcast, where we explore the design, development, and business of great products. I'm your host, Victoria Guido. And with me today is Lauren Maffeo, Author of Designing Data Governance from the Ground Up. Lauren, thank you for joining us. LAUREN: Thanks so much for having me, Victoria. I'm excited to be here. VICTORIA: Wonderful. I'm excited to dive right into this topic. But first, maybe just tell me what led you to start writing this book? LAUREN: I was first inspired to write this book by my clients, actually. I was working as a service designer at Steampunk, which is a human-centered design firm serving the federal government. I still do work for Steampunk. And a few years ago, I was working with a client who had a very large database containing millions of unique data points going back several centuries. And I realized throughout the course of my discovery process, which is a big part of human-centered design work, that most of their processes for managing the data in this database were purely manual. There was no DevSecOps integrated into their workflows. These workflows often included several people and took up to a week to complete. And this was an organization that had many data points, as mentioned, in its purview. They also had a large team to manage the data in various ways. But they still really struggled with an overall lack of processes. And really, more importantly, they lacked quality standards for data, which they could then automate throughout their production processes. I realized that even when organizations exist to have data in their purview and to share it with their users, that doesn't necessarily mean that they actually have governance principles that they abide by. And so that led me to really consider, more broadly, the bigger challenges that we see with technology like AI, machine learning, large language models. We know now that there is a big risk of bias within these technologies themselves due to the data. And when I dug deeper, first as a research analyst at Gartner and then as a service designer at Steampunk, I realized that the big challenge that makes this a reality is lack of governance. It's not having the quality standards for deciding how data is fit for use. It's not categorizing your data according to the top domains in your organization that produce data. It's lack of clear ownership regarding who owns which data sets and who is able to make decisions about data. It's not having things like a data destruction policy, which shows people how long you hold on to data for. So that knowledge and seeing firsthand how many organizations struggle with that lack of governance that's what inspired me to write the book itself. And I wanted to write it from the lens of a service designer. I have my own bias towards that, given that I am a practicing service designer. But I do believe that data governance when approached through a design thinking lens, can yield stronger results than if it is that top-down IT approach that many organizations use today unsuccessfully. VICTORIA: So let me play that back a little bit. So, in your experience, organizations that struggle to make the most out of their data have an issue with defining the authority and who has that authority to make decisions, and you refer to that as governance. So that when it comes down to it, if you're building things and you want to say, is this ethical? 
Is this right? Is this secure? Is it private enough? Someone needs to be responsible [laughs] for answering that. And I love that you're bringing this human-centered design approach into it. LAUREN: Yeah, that's exactly right. And I would say that ownership is a big part of data governance. It is one of the most crucial parts. I have a chapter in my book on data stewards, what they are, the roles they play, and how to select them and get them on board with your data governance vision. The main thing I want to emphasize about data stewardship is that it is not just the technical members of your team. Data scientists, data architects, and engineers can all be exceptional data stewards, especially because they work with the data day in and day out. The challenge I see is that these people typically are not very close to the data, and so they don't have that context for what different data points mean. They might not know offhand what the definitions per data piece are. They might not know the format that the data originates in. That's information that people in non-technical roles tend to possess. And so, data stewardship and governance is not about turning your sales director into a data engineer or having them build ETL pipelines. But it is about having the people who know that data best be in positions where they're able to make decisions about it, to define it, to decide which pieces of metadata are attached to each piece of data. And then those standards are what get automated throughout the DevSecOps process to make better life cycles that produce better-quality data faster, at speed with fewer resources. VICTORIA: So, when we talk about authority, what we really mean is, like, who has enough context to make smart decisions? LAUREN: Who has enough context and also enough expertise? I think a big mistake that we as an industry have made with data management is that we have given the responsibility for all data in an organization to one team, sometimes one person. So, typically, what we've done in the past is we've seen all data in an organization managed by IT. They, as a department, make top-down decisions about who has access to which data, what data definitions exist, where the data catalog lives, if it exists in an organization at all. And that creates a lot of blockers for people if you always have to go through one team or person to get permission to use data. And then, on top of that, the IT team doesn't have the context that your subject matter experts do about the data in their respective divisions. And so it really is about expanding the idea of who owns data and who is in a position of authority to make decisions about it by collaborating across silos. This is very challenging work to do. But I would actually say that for smaller organizations, they might lack the resources in, time, and money, and people to do data governance at scale. But what they can do is start embedding data governance as a core principle into the fabric of their organizations. And ultimately, I think that will power them for success in a way that larger organizations were not able to because there is a lot of technical debt out there when it comes to bad data. And one way to avoid that in the future or to at least mitigate it is to establish data governance standards early on. VICTORIA: Talk me through what your approach would be if you were working with an organization who wants to build-in this into the fabric of how they work. 
What would be your first steps in engaging with them and identifying where they have needs in part of that discovery process? LAUREN: In human-centered design, the discovery process occurs very early in a project. This is where you are working hand in hand with your client to figure out what their core needs are and how you can help them solve those core needs. And this is important to do because it's not always obvious what those needs are. You might get a contract to work on something very specific, whether it's designing the user interface of a database or it's migrating a website. Those are technical challenges to solve. And those are typically the reason why you get contracted to work with your client. But you still have to do quite a bit of work to figure out what the real ask is there and what is causing the need for them to have hired you in the first place. And so, the first thing I would do if I was walking a client through this is I would start by asking who the most technical senior lead in the organization is. And I would ask how they are managing data today. I think it's really important, to be honest about the state of data in your organization today. The work that we do designing data governance is very forward-thinking in a lot of ways, but you need a foundation to build upon. And I think people need to be honest about the state of that foundation in their organization. So the first thing I would do is find that most-senior data leader who is responsible for making decisions about data and owns the data strategy because that person is tasked with figuring out how to use data in a way that is going to benefit the business writ large. And so, data governance is a big part of what they are tasked to do. And so, in the first instance, what I would do is I would host a workshop with the client where I would ask them to do a few things. They would start by answering two questions: What is my company's mission statement, and how do we use data to fulfill that mission statement? These are very baseline questions. And the first one is so obvious and simple that it might be a little bit off-putting because you're tempted to think, as a senior leader, I already know what my company does. Why do I need to answer it like this? And you need to answer it like this because just like we often get contracts to work on particular technical problems, you'd be surprised by how many senior leaders cannot articulate their company's mission statements. They'll talk to you about their jobs, the tools they use to do their jobs, who they work with on a daily basis. But they still aren't ultimately answering the question of how their job, how the technology they use fulfills a bigger organizational need. And so, without understanding what that organizational need is, you won't be able to articulate how data fulfills that mission. And if you're not able to explain how data fulfills your company's mission, I doubt you can explain which servers your data lives on, which file format it needs to be converted to, who owns which data sets, where they originate, what your DevSecOps processes are. So answering those two questions about the company mission and how data is used to fulfill that mission is the first step. The second thing I would do is ask this senior leader, let's say the chief data officer, to define the data domains within their organization. And when we talk about data domains, we are talking about the areas of the business that are the key areas of interest. 
This can also be the problem spaces that your organization addresses. It also can have a hand in how your organization is designed as is; in other words, who reports to whom? Do you have sales and marketing within one part of the organization, or are they separate? Do you have customer success as its own wing of the organization separate from product? However your organization is architected, you can draw lines between those different teams, departments, and the domains that your organization works in. And then, most importantly, you want to be looking at who leads each domain and has oversight over the data in that domain. This is a really important aspect of the work because, as mentioned, stewards play a really key role in upholding and executing data governance. You need data stewards across non-technical and technical roles. So defining not just what the data domains are but who leads each domain in a senior role is really important to mapping out who your data stewards will be and to architect your first data governance council. And then, finally, the last thing I would have them do in the first instance is map out a business capability map showing not only what their data domains are but then the sub-domains underneath. So, for example, you have sales, and that can be a business capability. But then, within the sales data domain, you're going to have very different types of sales data. You're going to have quarterly sales, bi-annual sales, inbound leads versus outbound leads. You're going to have very different types of data within that sales data domain. And you want to build those out as much as you possibly can across all of your data domains. If you are a small organization, it's common to have about four to six data domains with subdomains underneath, each of those four to six. But it varies according to each startup and organization and how they are structured. Regardless of how your organization is structured, there's always value in doing those three things. So you start by identifying what your organization does and how data fulfills that goal. You define the core data domains in your organization, including who owns each domain. And then, you take that information about data domains, and you create a capability map showing not just your core data domains but the subdomains underneath because you're going to use all of that information to architect a future data governance program based on what you currently have today. VICTORIA: I think that's a great approach, and it makes a lot of sense. Is that kind of, like, the minimum that people should be doing for a data governance program? Like, what's the essentials to do, like, maybe even your due diligence, say, as a health tech startup company? LAUREN: This is the bare minimum of what I think every organization should do. The specifics of that are different depending on industry, depending on company size, organizational structure. But I wrote this book to be a compass that any organization can use. There's a lot of nuance, especially when we get into the production environment an organization has. There's a lot of nuance there depending on tools, all of that. And so I wanted to write a book that anybody could use regardless of industry size, team size, all of that information. I would say that those are the essential first steps. And I do think that is part of the discovery process is figuring out where you stand today, and no matter how ugly it might be. 
Because, like we've mentioned, there is more data produced on a daily basis than ever before. And you are not going into this data governance work with a clean slate. You already have work in your organization that you do to manage data. And you really need to know where there are gaps so that you can address those gaps. And so, when we go into the production environment and thinking about what you need to do to be managing data for quality on a regular basis, there are a couple of key things. The first is that you need a plan for how you're going to govern data throughout each lifecycle. So you are very likely not using a piece of data once and never again. You are likely using it through several projects. So you always want to have a plan for governance in production that includes policies on data usage, data archiving, and data destruction. Because you want to make sure that you are fulfilling those principles, whatever they are, throughout each lifecycle because you are managing data as a product. And that brings me to the next thing that I would encourage people working in data governance to consider, which is taking the data mesh principle of managing data as a product. And this is a fundamental mind shift from how big data has been managed in the past, where it was more of a service. There are many detriments to that, given the volume of data that exists today and given how much data environments have changed. So, when we think about data mesh, we're really thinking about four key principles. The first is that you want to manage your data according to specific domains. So you want to be creating a cloud environment that really accounts for the nuance of each data domain. That's why it's so important to define what those data domains are. You're going to not just document what those domains are. You're going to be managing and owning data in a domain-specific way. The second thing is managing data as a product. And so, rather than taking the data as a service approach, you have data stewards who manage their respective data as products within the cloud environment. And so then, for instance, rather than using data about customer interactions in a single business context, you can instead use that data in a range of ways across the organization, and other colleagues can use that data as well. You also want to have data available as a self-service infrastructure. This is really important in data mesh. Because it emphasizes keeping all data on a centralized platform that manages your storage, streaming, pipelines, and anything else, and this is crucial because it prevents data from leaving in disparate systems on various servers. And it also erases or eases the need to build integrations between those different systems and databases. And it also gives each data steward a way to manage their domain data from the same source. And then the last principle for data mesh is ecosystem governance. And really, what we're talking about here is reinforcing the data framework and mission statement that you are using to guide all of your work. It's very common in tech for tech startups to operate according to a bigger vision and according to principles that really establish the rationale for why that startup deserves to exist in the world. And likewise, you want to be doing all of your production work with data according to a bigger framework and mission that you've already shared. 
And you want to make sure that all of your data is formatted, standardized, and discoverable against equal standards that govern the quality of your data. VICTORIA: That sounds like data is your biggest value as a company and your greatest source of liability [laughs] and in many ways. And, I'm curious, you mentioned just data as a product, if you can talk more about how that fits into how company owners and founders should be thinking about data and the company they're building. LAUREN: So that's a very astute comment about data as a liability. That is absolutely true. And that is one of the reasons why governance is not just nice to have. It's really essential, especially in this day and age. The U.S. has been quite lax when it comes to data privacy and protection standards for U.S. citizens. But I do think that that will change over the next several years. I think U.S. citizens will get more data protections. And that means that organizations are going to have to be more astute about tracking their data and making sure that they are using it in appropriate ways. So, when we're talking to founders who want to consider how to govern data as a product, you're thinking about data stewards taking on the role of product managers and using data in ways that benefits not just them and their respective domains but also giving it context and making it available to the wider business in a way that it was not available before. So if you are architecting your data mesh environment in the cloud, what you might be able to do is create various domains that exist on their own little microservice environments. And so you have all of these different domains that exist in one environment, but then they all connect to this bigger data mesh catalog. And from the catalog, that is where your colleagues across the business can access the data in your domain. Now, you don't want to necessarily give free rein for anybody in your organization to get any data at any time. You might want to establish guardrails for who is able to access which data and what those parameters are. And the data as a product mindset allows you to do that because it gives you, as the data steward/pseudo pm, the autonomy to define how and when your data is used, rather than giving that responsibility to a third-party colleague who does not have that context about the data in your domain. VICTORIA: I like that about really giving the people who have the right context the ability to manage their product and their data within their product. That makes a lot of sense to me. Mid-Roll Ad: As life moves online, bricks-and-mortar businesses are having to adapt to survive. With over 18 years of experience building reliable web products and services, thoughtbot is the technology partner you can trust. We provide the technical expertise to enable your business to adapt and thrive in a changing environment. We start by understanding what's important to your customers to help you transition to intuitive digital services your customers will trust. We take the time to understand what makes your business great and work fast yet thoroughly to build, test, and validate ideas, helping you discover new customers. Take your business online with design-driven digital acceleration. Find out more at tbot.io/acceleration or click the link in the show notes for this episode. VICTORIA: What is it like to really bring in this culture of design-thinking into an organization that's built a product around data? LAUREN: It can be incredibly hard. 
I have found that folks really vary in their approach to this type of work. I think many people that I talk to have tried doing data governance to some degree in the past, and, for various reasons, it was not successful. So as a result, they're very hesitant to try again. I think also for many technical leaders, if they're in CIO, CDO, CTO roles, they are not used to design thinking or to doing human-centered design work. That's not the ethos that was part of the tech space for a very long time. It was all about the technology, building what you could, experimenting and tinkering, and then figuring out the user part later. And so this is a real fundamental mindset shift to insist on having a vision for how data benefits your business before you start investing money and people into building different data pipelines and resources. It's also a fundamental shift for everyone in an organization because we, in society writ large, are taught to believe that data is the responsibility of one person or one team. And we just can't afford to think like that anymore. There is too much data produced and ingested on a daily basis for it to fall to one person or one team. And even if you do have a technical team who is most adept at managing the cloud environment, the data architecture, building the new models for things like fraud detection, that's all the purview of maybe one team that is more technical. But that does not mean that the rest of the organization doesn't have a part to play in defining the standards for data that govern everything about the technical environment. And I think a big comparison we can make is to security. Many of us… most of us, even if we work in tech, are not cybersecurity experts. But we also know that employees are the number one cause of breaches at organizations. There's no malintent behind that, but people are most likely to expose company data and cause a breach from within the company itself. And so organizations know that they are responsible for creating not just secure technical environments but educating their employees and their workforce on how to be stewards of security. And so, even at my company, we run constant tests to see who is going to be vulnerable to phishing? Who is going to click on malicious links? They run quarterly tests to assess how healthy we are from a cybersecurity perspective. And if you click on a phishing attempt and you fall for it, you are directed to a self-service education video that you have to complete, going over the aspects of this phishing test, what made it malicious. And then you're taught to educate yourself on what to look for in the future. We really need to be doing something very similar with data. And it doesn't mean that you host a two-hour training and then never talk about data again. You really need to look at ways to weave data governance into the fabric of your organization so that it is not disruptive to anybody's day. It's a natural part of their day, and it is part of working at your organization. Part of your organizational goals include having people serve as data stewards. And you emphasize that stewardship is for everyone, not just the people in the technology side of the business. VICTORIA: I love that. And I think there's something to be said for having more people involved in the data process and how that will impact just the quality of your data and the inclusivity of what you're building to bring those perspectives together. LAUREN: I agree. And that's the real goal. 
And I think this is, again, something that's actually easier for startups to do because startups are naturally more nimble. They find out what works, what doesn't work. They're willing to try things. They have to be willing to try things. Because, to use a really clichéd phrase, if they're not innovating, then they're going to get stale and go out of business. But the other benefit that I think startups have when they're doing this work is the small size. Yes, you don't have the budget or team size of a company like JP Morgan, that is enormous, or a big bank. But you still have an opportunity to really design a culture, an organizational culture that puts data first, regardless of role. And then you can architect the structure of every role according to that vision. And I think that's a really exciting opportunity for companies, especially if they are selling data or already giving data as a product in some way. If they're selling, you know, data as a product services, this is a really great approach and a unique approach to solving data governance and making it everyone's opportunity to grow their own roles and work smarter. VICTORIA: Right. And when it's really the core of your business, it makes sense to pay more attention to that area [laughs]. It's what makes it worthwhile. It's what makes potential investors know that you're a real company who takes things seriously. [laughs] LAUREN: That's true. That's very true. VICTORIA: I'm thinking, what questions...do you have any questions for me? LAUREN: I'm curious to know, when you talk to thoughtbot clients, what are the main aspects of data that they struggle with? I hear a variety of reasons for data struggles when I talk to clients, when I talk to people on the tech side, either as engineers or architects. I'm curious to hear what the thoughtbot community struggles with the most when it comes to managing big data. VICTORIA: I think, in my experience, in the last less than a year that I've been with thoughtbot, one challenge which is sort of related to data...but I think for many small companies or startups they don't really have an IT department per se. So, like, what you mentioned early on in the discovery process as, like, who is the most senior technical person on your team? And that person may have little to no experience managing an IT operations group. I think it's really bringing consulting from the ground up for an organization on IT operations, data management, user and access management. Those types of policies might just be something they hadn't considered before because it's not in their background and experience. But maybe once they've gotten set up, I think the other interesting part that happens is sometimes there's just data that's just not being managed at all. And there are processes and bits and pieces of code in app that no one really knows what they are, who they're used for, [laughs] where the data goes. And then, you know, the connections between data. So everything that you're mentioning that could happen when you don't do data governance, where it can slow down deployment processes. It can mean that you're giving access to people who maybe shouldn't have access to production data. It can mean that you have vulnerabilities in your infrastructure. That means someone could have compromised your data already, and you just don't know about it. Just some of the issues that we see related to data across the spectrum of people in their lifecycle of their startups. 
LAUREN: That makes total sense, I think, especially when you are in a startup. If you're going by the typical startup model, you have that business-minded founder, and then you likely have a more technical co-founder. But we, I think, make the assumption that if you are, quote, unquote, "technical," you, therefore, know how to do anything and everything about every system, every framework, every type of cloud environment. And we all know that that's just not the case. And so it's easy to try to find the Chief Technology Officer or the Chief Information Officer if one exists and to think, oh, this is the right person for the job. And they might be the most qualified person given the context, but that still doesn't mean that they have experience doing this work. The reality is that very few people today have deep hands-on experience making decisions about data with the volume that we see today. And so it's a new frontier for many people. And then, on top of that, like you said as well, it's really difficult to know where your data lives and to track it. And the amount of work that goes into answering those very basic questions is enormous. And that's why documentation is so important. That's why data lineage in your architecture is so important. It really gives you a snapshot of which data lives where, how it's used. And that is invaluable in terms of reducing technical debt. VICTORIA: I agree. And I wonder if you have any tips for people facilitating conversations in their organization about data governance. What would you tell them to make it less scary and more fun, more appealing to work on? LAUREN: I both love and hate the term data governance. Because it's a word that you say, and whether you are technical or not, many people tune out as soon as they hear it because it is, in a way, a scary word. It makes people think purely of compliance, of being told what they can't do. And that can be a real challenge for folks. So I would say that if you are tasked with making a data governance program across your organization, you have to invest in making it real for people. You have to sell them on stewardship by articulating what folks will gain from serving as stewards. I think that's really critical because we are going to be asking folks to join a cause that they're not going to understand why it affects them or why it benefits them at first. And so it's really your job to articulate not only the benefits to them of helping to set up this data stewardship work but also articulating how data governance will help them get better at their jobs. I also think you have to create a culture where you are not only encouraging people to work across party lines, so to speak, to work across silos but to reward them for doing so. You are, especially in the early months, asking a lot of people who join your data stewardship initiatives and your data governance council you're asking them to build something from the ground up, and that's not easy work. So I think any opportunity you can come up with to reward stewards in the form of bonuses or in terms of giving them more leeway to do their jobs more of a title bump than they might have had otherwise. Giving them formal recognition for their contributions to data governance is really essential as well. Because then they see that they are rewarded for contributing to the thought leadership that helps the data governance move forward. VICTORIA: I'm curious, what is your favorite way to be rewarded at work, Lauren? LAUREN: So I am a words person. 
When we talk about love languages, one of them is words of affirmation. And I would say that is the best way to quote, unquote, "reward me." I save emails and screenshots of text messages and emails that have really meant a lot to me. If someone sends me a handwritten card that really strikes a chord, I will save that card for years. My refrigerator is filled with holiday cards and birthday cards, even from years past. And so any way to recognize people for the job they're doing and to let someone know that they're seen, and their work is seen and valued really resonates with me. I think this is especially important in remote environments because I love working from home, and I am at home alone all day. And so, especially if you are the only person of your kind, of your role on your team, it's very easy to feel insular and to wonder if you're hitting the mark, if you're doing a good job. I think recognition, whether verbally or on Slack, of a job well done it really resonates with me. And that's a great way to feel rewarded. VICTORIA: I love that. And being fully remote with thoughtbot, I can feel that as well. We have a big culture of recognizing people. At least weekly, we do 15Five as a tool to kind of give people high-fives across the company. LAUREN: Yep, Steampunk does...we use Lattice. And people can submit praise and recognition for their colleagues in Lattice. And it's hooked up to Slack. And so then, when someone submits positive feedback or a kudos to a colleague in Lattice, then everyone sees it in Slack. And I think that's a great way to boost morale and give people a little visibility that they might not have gotten otherwise, especially because we also do consulting work. So we are knee-deep in our projects on a daily basis, and we don't always see or know what our colleagues are working on. So little things like that go a long way towards making people feel recognized and valued as part of a bigger company. But I'm also curious, Victoria, what's your favorite way to get rewarded and recognized at work? VICTORIA: I think I also like the verbal. I feel like I like giving high-fives more than I like receiving them. But sometimes also, like, working at thoughtbot, there are just so many amazing people who help me all throughout the day. I start writing them, and then I'm like, well, I have to also thank this person, and then this person. And then I just get overwhelmed. [laughs] So I'm trying to do more often so I don't have a backlog of them throughout the week and then get overwhelmed on Friday. LAUREN: I think that's a great way to do it, and I think it's especially important when you're in a leadership role. Something that I'm realizing more and more as I progress in my career is that the more senior you are, the more your morale and attitude sets the tone for the rest of the team. And that's why I think if you are in a position to lead data governance, your approach to it is so crucial to success. Because you really have to get people on board with something that they might not understand at first, that they might resent it first. This is work that seems simple on the surface, but it's actually very difficult. The technology is easy. The people are what's hard. And you really have to come in, I think, emphasizing to your data stewards and your broader organization, not just what governance is, because, frankly, a lot of people don't care. But you really have to make it tangible for them. 
And you have to help them see that governance affects everyone, and everyone can have a hand in co-creating it through shared standards. I think there's a lot to be learned from the open-source community in this regard. The open-source community, more than any other I can think of, is the model of self-governance. It does not mean that it's perfect. But it does mean that people from all roles, backgrounds have a shared mission to build something from nothing and to make it an initiative that other people will benefit from. And I think that attitude is really well-positioned for success with data governance. VICTORIA: I love that. And great points all around on how data governance can really impact an organization. Are there any final takeaways for our listeners? LAUREN: The biggest takeaway I would say is to be thoughtful about how you roll out data governance in your organization. But don't be scared if your organization is small. Again, it's very common for people to think my business is too small to really implement governance. We don't have the budget for, you know, the AWS environment we might need. Or we don't have the right number of people to serve as stewards. We don't actually have many data domains yet because we're so new. And I would say start with what you have. If you are a business in today's day and age, I guarantee that you have enough data in your possession to start building out a data governance program that is thoughtful and mission-oriented. And I would really encourage everyone to do that, regardless of how big your organization is. And then the other takeaway I would say is, if you remember nothing else about data governance, I would say to remember that you automate your standards. Your standards for data quality, data destruction, data usage are not divorced from your technical team's production environments; it's the exact opposite. Your standards should govern your environment, and they should be a lighthouse when you are doing that work. And so you always want to try to integrate your standards into your production environment, into your ETL pipelines, into your DevSecOps. That is where the magic happens. Keeping them siloed won't work. And so I'd love for people, if you really enjoyed this episode and the conversation resonated with you, too, get a copy of the book. It is my first book. And I was really excited to work with the Pragmatic Programmers on it. So if readers go to pragprog.com, they can get a copy of the book directly through the publisher. But the book is also available at Target, Barnes & Noble, Amazon, and local bookstores. So I am very grateful as a first-time author for any and all support. And I would really also love to hear from thoughtbot clients and podcast listeners what you thought of the book because version two is not out of the question. VICTORIA: Well, looking forward to it. Thank you again so much, Lauren, for joining us today. You can subscribe to the show and find notes along with a complete transcript for this episode at giantrobots.fm. If you have questions or comments, email us at hosts@giantrobots.fm. And you can find me on Twitter @victori_ousg. This podcast is brought to you by thoughtbot and produced and edited by Mandy Moore. ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com. Special Guest: Lauren Maffeo.

More Than Just Code podcast - iOS and Swift development, news and advice
Episode 360: The Curious Case of Daniel Steinberg

More Than Just Code podcast - iOS and Swift development, news and advice

Play Episode Listen Later Apr 15, 2023 64:38


This week Tim sits down with Daniel Steinberg to talk about his work, teaching, and his new book The Curious Case of the Async Cafe. The book explores Swift concurrency. They also discuss Daniel's experience as a broadcast radio host, his upcoming Top 40 format app, and his thoughts on WWDC 2023. Special Guest: Daniel Steinberg.

Software Engineering Radio - The Podcast for Professional Software Developers
SE Radio 558: Michael Fazio on Modern Android Development

Software Engineering Radio - The Podcast for Professional Software Developers

Play Episode Listen Later Apr 5, 2023 70:53


Michael Fazio, Engineering Manager (Android) at Albert and author of Kotlin and Android Development featuring Jetpack from the Pragmatic Programmers, speaks with SE Radio's Gavin Henry about how the Android ecosystem looks today, and why it's an excellent time to write native Android apps. They explore a wide range of topics about modern Android development, including when to go native, how to keep a lot of decisions in your back-end API, Kotlin coroutines, Jetpack and Jetpack Compose, the MVVM design pattern, and threads, as well as activities, fragments, Dagger, Room, navigation, Flutter, and improvements in simulators. They also examine details such as IDEs, API selection, how to choose a list of supported devices, Java vs Kotlin, handset manufacturers, XML layouts, and why Jetpack is a safe bet for all your future Android development.

Wealth, Actually
EP.105 THE PROGRAMMER’S JOURNEY with ANDY HUNT

Wealth, Actually

Play Episode Listen Later Feb 20, 2022 43:05


All of us have become more reliant on technology and software to solve problems and improve our lives. However, many of us understand less and less about the programming that goes into these solutions. Further, we forget or ignore the human element of this problem solving. In this episode, I speak with ANDY HUNT, who has devoted his life to these issues. Andy Hunt is a programmer turned consultant, author and publisher. He's authored a dozen books including the best-selling “THE PRAGMATIC PROGRAMMER,” was one of the 17 authors of the AGILE MANIFESTO and one of the founders of the Agile Alliance, and co-founded the PRAGMATIC BOOKSHELF, publishing award-winning and critically acclaimed books for software developers. He's currently writing science fiction (see conglommora.com) and experimenting with The GROWS Method®. We talk about his early days of programming and the power of community and process in the world of software design as we go into his experience with broader consulting at the enterprise level. Further on, Andy dives into what he considers to be important in the future of programming and advice for young programmers. Finally, as a consummate Renaissance Man, Andy discusses how his hobbies in writing science fiction, music production and woodworking excite his brain and inform his problem solving ability. Andy started in the do-it-yourself days of CP/M and the S100 bus, of Heathkits and Radio Electronics. Andy wrote his first real program, a combination text editor and database manager, for an Ohio Scientific Challenger 4P. It was a great era for tinkering. Andy started hacking in 6502 assembler, modifying operating systems, and wrote his first commercial program (a Manufacturing Resources Planning system) in 1981. He taught himself Unix and C, and began to design and architect larger, more connected systems. Working at large companies, Andy kept an ear on Usenet, and started his early email habit via a direct bang-path to ihnp4. Next he settled into electronic pre-press and computer graphics, and worked on that wondrous eye-candy that was Silicon Graphics machines. By now, a firm command of several flavors of Unix, from BSD to System V, led Andy to try consulting in the early 1990s. His knack for stirring things up really began to come in handy, and it soon became obvious that many of his clients each suffered similar problems—problems that Andy had already seen and fixed before. Andy joined up with Dave Thomas and they wrote the seminal software development book, The Pragmatic Programmer, followed a year later by the original Programming Ruby: The Pragmatic Programmer's Guide, which introduced the Western world to this new language from Japan. Together they founded The Pragmatic Programmers and are well known as founders of the agile movement and authors of the Agile Manifesto, as well as proponents of Ruby and more flexible programming paradigms. They founded the Pragmatic Bookshelf publishing business in 2003, helping keep developers at the top of their game. Andy is a founder of the Pragmatic Programmers, a founder of the Agile Alliance and one of the 17 authors of the Agile Manifesto, and author of a dozen or so books on programming, agile methods and learning, as well as science fiction and adventure. He is an active musician and woodworker, and continues looking for new areas where he can stir things up.

Comments from Andy on his early career:
- Where did your initial interest in programming develop? (The Do-It-Yourself days)
- What were the types of problems that attracted your attention?
- What languages did you gravitate toward?

Working in larger, more complex organizations:
- What were the big lessons you picked up in those work environments?
- Programming is often considered by laymen to be a solitary pursuit; was there an adjustment in delegating work or collaborating?
- When does management of strategy get in the way of execution?

Expedition Arbeit
Expedition Arbeit #78 - kne:buster >> Lernen und Kompetenz mit Jungwirth & Knecht

Expedition Arbeit

Play Episode Listen Later Nov 4, 2021 31:59


Learning and competence: being able to do something is like doing it, only with a clue. In this kne:buster episode, "Lernen und Kompetenz" (Learning and Competence), Alexander Jungwirth and Stefan Knecht talk about exactly that: how competence develops and how beginners become experts. Or rather, how they can become experts, provided a few boundary conditions are right.

"Competence does not develop through insight but through emotional destabilization: entering into relationships, opening up, having the trust to do something you would not otherwise do." (Arnold 2015)

TL;DR ("too long, didn't read")? There is a short teaser post on digitalien.org: "Die Guglhupf-Analogie" (the Guglhupf analogy: how do people learn?).

Sources and literature:

Arnold, Rolf. 2015. "Wie man führt, ohne zu dominieren - Wie man lehrt, ohne zu belehren." https://youtu.be/5CdcCFd7JGY (48 min, talk at the 5th KATA Practitioner Day, 20 November 2015, Stuttgart).

Benner, Patricia E., Christine Tanner, and Catherine Chesla. 1992. "From Novice to Expert: Excellence and Power in Clinical Nursing Practice." Advances in Nursing Science, 14(3), 13–28.

Bloom, Benjamin S., and Lauren A. Sosniak, eds. 1985. Developing Talent in Young People. 1st ed. New York: Ballantine Books. ISBN 978-0-345-31951-7, 978-0-345-31509-0.

Dreyfus, H. & Dreyfus, St. (1986/87). Künstliche Intelligenz. Von den Grenzen der Denkmaschine und dem Wert der Intuition. Reinbek b. Hamburg: Rowohlt. (Original: Mind Over Machine. The Power of Human Intuition and Expertise in the Era of the Computer. New York: The Free Press, 1986.)

Gobet, F. & Charness, N. (2018). Expertise in chess. In K. A. Ericsson, R. R. Hoffman, A. Kozbelt & A. M. Williams (eds.), The Cambridge Handbook of Expertise and Expert Performance, 2nd ed. (597–615). Cambridge: Cambridge University Press.

Gruber, H., Harteis, C. & Rehrl, M. (2006). Professional Learning: Erfahrung als Grundlage von Handlungskompetenz. Bildung und Erziehung, 59, 193–203.

Hakkarainen, K., Palonen, T., Paavola, S. & Lehtinen, E. (2004). Communities of Networked Expertise: Educational and Professional Perspectives. Amsterdam: Elsevier.

Hayes, John R. 1981. The Complete Problem Solver. Philadelphia, Pa: Franklin Institute Press. ISBN 978-0-89168-028-4.

Honecker, Erich, quoted in the ceremonial address for the 40th anniversary of the GDR, 7 October 1989, glasnost.de; see also https://www.youtube.com/watch?v=VphPebctAsM

Hunt, Andrew. 2008. Pragmatic Thinking and Learning: Refactor Your "Wetware". Pragmatic Programmers. Raleigh: Pragmatic. (The Dreyfus story quoted below is taken from here, pp. 22ff.)

"In the 1970s, the brothers Dreyfus (Hubert and Stuart) began doing their seminal research on how people attain and master skills."

"Once upon a time, two researchers (brothers) wanted to advance the state of the art in artificial intelligence. They wanted to write software that would learn and attain skills in the same manner that humans learn and gain skill (or prove that it couldn't be done). To do that, they first had to study how humans learn."

"The Dreyfus brothers looked at highly skilled practitioners, including commercial airline pilots and world-renowned chess masters. Their research showed that quite a bit changes as you move from novice to expert. You don't just "know more" or gain skill. Instead, you experience fundamental differences in how you perceive the world, how you approach problem solving, and the mental models you form and use. How you go about acquiring new skills changes. External factors that help your performance — or hinder it — change as well. Unlike other models or assessments that rate the whole person, the Dreyfus model is applicable per skill. In other words, it's a situational model and not a trait or talent model."

Kruger, Justin, and David Dunning. n.d. "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments." ResearchGate. Accessed 9 February 2017. https://www.researchgate.net/publication/12688660_Unskilled_and_Unaware_of_It_How_Difficulties_in_Recognizing_One's_Own_Incompetence_Lead_to_Inflated_Self-Assessments.

Lehmann, A. C. & Gruber, H. (2006). Music. In K. A. Ericsson, N. Charness, R. R. Hoffman & P. J. Feltovich (eds.), The Cambridge Handbook of Expertise and Expert Performance (457–470). Cambridge: Cambridge University Press.

Neuweg, Georg Hans. 2020. "Etwas können. Ein Beitrag zu einer Phänomenologie der Könnerschaft." In: Georg Hans Neuweg, Rico Hermkes & Tim Bonowski (eds.), Implizites Wissen. Berufs- und wirtschaftspädagogische Annäherungen. ISBN 9783763965953 (e-book PDF: ISBN 9783763965953), DOI: 10.3278/6004682w, wbv-open-access.de.

Schön, D. A. (1983). The Reflective Practitioner. How Professionals Think in Action. New York: Basic Books.

Williams, A. M., Ford, P. R., Hodges, N. J. & Ward, P. (2018). Expertise in sport: Specificity, plasticity, and adaptability in high-performance athletes. In K. A. Ericsson, R. R. Hoffman, A. Kozbelt & A. M. Williams (eds.), The Cambridge Handbook of Expertise and Expert Performance, 2nd ed. (653–673). Cambridge: Cambridge University Press.

Educative Sessions
#72: "Stumbling into Dev Management" with James Stanier of Shopify | Educative Sessions

Educative Sessions

Play Episode Listen Later Aug 16, 2021 15:18


Dr. James Stanier never intended to become a manager, let alone a Director or VP. He certainly never intended to publish a book about it either! Dr. Stanier shares his story from doing a Ph.D. to failing to get an academic job to joining a start-up which grew rapidly through numerous venture capital investment rounds. After reading lots of books on management in the "business" world, he applied what he knew to help shape the Engineering department at Brandwatch and now Shopify. Watch the YouTube HERE: https://youtu.be/ud3yST_f0qQ ABOUT OUR GUEST   Dr. James Stanier is (soon to be) Director of Engineering @ Shopify, and before that was SVP Engineering at Brandwatch. He runs theengineeringmanager.com and has published Become An Effective Software Engineering Manager with Pragmatic Programmers, which will soon be a course on Educative. There's another book on remote working in the works too. Don't forget to subscribe to Educative Sessions on YouTube! ►► https://www.youtube.com/c/EducativeSessions   ABOUT EDUCATIVE   Educative (educative.io) provides interactive and adaptive courses for software developers. Whether it's beginning to learn to code, grokking the next interview, or brushing up on frontend coding, data science, or cybersecurity, Educative is changing how developers continue their education. Stay relevant through our pre-configured learning environments that adapt to match a developer's skill level. Educative provides the best author platform for instructors to create interactive and adaptive content in only a few clicks.   More Videos from Educative Sessions: https://www.youtube.com/c/EducativeSessions/   Episode 72: "Stumbling into Dev Management" with James Stanier of Shopify | Educative Sessions

The Bike Shed
294: Perfect Duplication

The Bike Shed

Play Episode Listen Later May 25, 2021 45:31


On this week's episode, Steph and Chris respond to a listener question about how to know if we're improving as developers. They discuss the heuristics they think about when it comes to improving, how they've helped the teams they've worked with plan for and measure their growth, and some specific tips for improving.

Links mentioned:
- Rails Autoscale (https://railsautoscale.com/)
- Rubular regex playground (https://rubular.com/)
- The Pragmatic Programmer (https://pragprog.com/titles/tpp20/the-pragmatic-programmer-20th-anniversary-edition/)
- Go Ahead, Make a Mess by Sandi Metz (https://www.youtube.com/watch?v=xi3DClfGuqQ)
- Confident Code - Avdi Grimm (https://www.youtube.com/watch?v=T8J0j2xJFgQ)
- Therapeutic Refactoring - Katrina Owen (https://www.youtube.com/watch?v=J4dlF0kcThQ)
- Refactoring, Good to Great - Ben Orenstein (https://www.youtube.com/watch?v=DC-pQPq0acs)

Transcript:

CHRIS: There's something intriguing about the fact that we're having this conversation, but the thing that's recorded just starts at this arbitrary point in time, and it's usually us rambling about golden roads. But, I don't know; there's something existential about that. STEPH: It's usually when someone says something very funny or starts singing [laughs], and then that's when we immediately: record, record! CHRIS: I've never sung on the mic. That doesn't sound like a thing I would do. STEPH: [laughs] CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So Steph, how's your week going? STEPH: Hey Chris, it's going really well. Normally I'm always like, wow, it's been such an exciting week, and it's been a pretty calm, chill week. It's been lovely. CHRIS: That sounds nice actually in contrast to the "Well, it's been a week," that sort of intro of "I don't know, it's been fine. It can be really nice." STEPH: By the time we get to this moment of the week, I either have stuff that I'm so excited to talk about and have a little bit of a therapy session with you or share something new that I've learned. I agree; it's nice to be like, yeah, it's been smooth sailing this whole week. In fact, it was smooth sailing enough that I decided to take on something that I've been meaning to tackle for a while but have just been avoiding it because I have strong feelings about this, which you know but we haven't talked about yet. But it comes down to managing emails and how many emails one should have that are either unread or just existing. And I fall into the category of where I am less scrupulous about how many unread or managed emails that I have. But I decided that I'd had enough. So I used a really nice filter in Gmail where I said I want all emails that are before 2021 and also don't have a user label, so it's has:nouserlabels because then I know those are all the emails that I haven't labeled or assigned to a particular...I want to say folder, but they're not truly folders; they just look like folders. So they're essentially like untriaged or just emails that I've left hanging out in the ether. And then I just started deleting, and I got rid of all of those that hadn't been organized up until that point. And I was just like yep, you know if I haven't looked at it, it's that old, and I haven't given it a label by this point, I'm just going to move on. If it's important, it will bubble back up. And I feel really good about it.
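For anyone who wants to run the same cleanup, the search Steph describes is roughly the following (the date cutoff is whatever you choose; has:nouserlabels is Gmail's operator for messages with no user-applied label):

    before:2021/01/01 has:nouserlabels

Selecting everything that search returns and deleting it is the sweep she describes; as Chris notes next, deleted mail sits in Trash for roughly 30 days before Gmail removes it for good.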
CHRIS: Wow, that is -- I like how you backed me into a corner. Obviously, I'm on the other side where I'm fastidiously managing my email, which I am, but you backed me into that corner here. So, yeah, that's true. Although the approach that you're taking of just deleting all the old email that's a different one than I would have taken [chuckles] so, I like it. It's the nuclear option. STEPH: Okay, so now I need to qualify. When you delete an email, initially, I'm thinking it's going to trash, and so it's still technically there if I need to retrieve it and go back and find it. But you just said nuclear option, so maybe they're actually getting deleted. CHRIS: They're going into the trash for 30 days; I think is the timeline. But after that, they will actually delete them. The archive is supposed to be the place where you put stuff I don't want to see you anymore. But did you archive or delete? STEPH: Oh, I deleted. CHRIS: Oh, wow. Yeah. All right, you went for it. [laughter] STEPH: Yeah, and that's cool. And it's in trash. So I basically have a 30-day window where I'm like, oh, I made a mistake, and I need to search for something and find something and bring it back into my world; I can find it. If I haven't searched for it by then in 30 days, then I say, you know, thanks for the email, goodbye. [chuckles] And it'll come back if it needs to. CHRIS: I like the approach. It would not be my approach, but I like the commitment to the cause. Although you still have...how many emails are still in your inbox now? STEPH: Why do we have to play the numbers game? CHRIS: [laughs] STEPH: Can't we just talk about the progress that I have made? CHRIS: What wonderful progress you've made, Steph. [laughter] Like, it doesn't matter what I think. What do you think about this? Are you happy with this? Does this make you feel more joy when you look into your email in the Marie Kondo sense? STEPH: It does. I am excited that I went ahead and cleared all this because it just felt like craft. So I have taken what may be a very contentious approach to my email, where I treat it as this searchable space. So as things come in, I triage them, and I will label them, I will star them. I will either snooze them to make sure I don't miss the high actionable emails or something that's very important to me to act on quickly. But for the most part, then a lot of stuff will sit in that inbox area. So it becomes like this junk drawer. It's a very searchable junk drawer, thanks to Google. They've done a great job with that. And it feels nice to clear out that junk drawer. But I do have such an aversion to that very strong email inbox zero. I respect the heck out of it, but I have an aversion; I think from prior jobs where I was on a team, and we could easily get like 800 emails a day. My day all day was just triaging and responding to emails and writing emails. And so I think that just left a really bitter experience where now I just don't want to have to live that life where I'm constantly catering to what's in my inbox. CHRIS: That's so many emails. STEPH: It was so many emails. We were a team. It was a team inbox. So there were three of us managing this inbox. So if someone stepped away or if someone was away on vacation, we all had access to the same emails. But still, it was a lot of emails. CHRIS: Yeah, inbox zero in a shared inbox that is a level that I have not gotten to but getting to inbox zero and actually maintaining that is very much a labor of love and something that I've had to invest in. 
And it's probably not worth it for most people. You could convince me that it is not worth it for me, that the effort I'm putting in is too much effort for not enough reward. Well, it's one of those things where I find the framing that it puts on it, like, okay, I need to process my email and get it to zero at least once a day. Having that lens makes me think about email in a different way. I unsubscribe from absolutely everything. The only things that are allowed to come into my email are things that I will act on that actually deserve my attention, and so it forces that, which I really like. And then it forces me to think about things. I have a tendency to really hold off on decisions. So I'm like, ah, okay. I can go see friends on Saturday or I can do something else. Friends like actual humans, not the TV show, although for the past year, it's definitely more of the TV show than the real people. But let's say there's a potential thing that I could do on the weekend and I have to decide on that. I have a real tendency to drag my feet and to wait for some magical information from the universe to help this decision be obvious to me. But it's never going to be obvious, and at some point, I just need to pick. And so for inbox zero, one of the things that comes out of it for me is that pressure and just forcing me to be like, dude, there's no perfect answer here, just pick something. You got to just pick something and not wasting multiple cycles rethinking the same decision over and over because that's my natural tendency. So in a way, it's, I don't know, almost like a meditative practice sort of thing. There's utility there for me, but it is an effort, and it's, again, arguably not worth it. Still, I do it. I like it. I'm a fan. I think it's worth it. STEPH: I like how you argued both sides. I'm with you. I think it depends on the value that you get out of it. And then, as long as you are effective with whichever strategy you take, then that's really what matters. And I do appreciate the lens that it applies where if you are getting to inbox zero every day, then you are going to be very strict about who can send you emails about notifications that you're going to receive because you are trying to reduce the work that then you have to get to inbox zero. So I do very much admire that because there are probably -- I'm wasting a couple of minutes each day deleting notifications from chats or stuff that I know I'm not necessarily directly involved in and don't need action from me. And then I do get frustrated when I can't adjust those notification settings for that particular application, and I'm just subscribed to all of it. So some of it I feel like I can't change, and then some of it, I probably am wasting a few minutes. So I think there's totally value in both approaches. And I'm also saying that to try to justify my approach of my searchable inbox. [laughs] CHRIS: There are absolutely reasons to go either way. And also, to come back to what I was saying a minute ago, it may have sounded like I'm a person who's just on top of this. I may have given that impression briefly. I think the only time this has actually worked in my life is when Gmail introduced snooze both in the mobile app and on the desktop. So this is sometime after Google's inbox product came out, and that was eventually shut down. So it's relatively recent because, man, I just snooze everything. 
That is the actual secret to achieving inbox zero, just to reach the end of the day and be like, nah, and just send all the emails to future me. And then future me wakes up and is like, "You know, it's first thing in the morning. I got a nice cup of coffee, and this is what you're going to do to me, past me?" So there's a little bit of internal strife there within my one human. But yeah, the snoozing is actually incredibly useful and probably the only way that I actually get things done and the same within any task management system that I have; maybe future me will do this. STEPH: I think you and I both subscribed to the that's a future me problem. We just do it in very different ways. But switching gears a bit, how's your week been? CHRIS: It's been good, pretty normal, doing some coding, normal developer things. Actually, there's one tool that I was revisiting this week that I'm not sure that we've actually talked about on the show before, but it's Rails Autoscale. Have you used that before? STEPH: I don't think I have. It sounds very familiar, but I don't think I've used it. CHRIS: It's a very nice, straightforward Heroku add-on that does exactly what you want it to do. It monitors your web and worker dynos and will scale up. But it uses a different heuristic than -- So Heroku has built-in autoscaling, but theirs is based on response time, which is, I think, a little bit laggier of a metric. Like if your response time has gotten bad, then you're already in trouble, whereas Rails Autoscale uses queue time. So how long is a request waiting before? I think it's at the Heroku router; it goes onto the dyno that's actually going to process the request? So I think that's what they're monitoring. I may be wrong on that. But from the website, they're looking at that, and you can configure it. They actually have a really nice configuration dashboard for configure between this range, so one to five dynos at most, and scale in this way up and in this way down. So like, how long should it wait? What's the threshold of queue time? Those sorts of things. So they have a default like just do the smart thing for me, and then they give you more control if your app happens to have a different shape of data, which is all really nice. And then I've been using that for a while, but I recently this week actually just turned on the worker side. And so now the workers will autoscale up and down as the Sidekiq queue -- I think for the Sidekiq side, it's also the queue time, so how long a job sits in the queue before getting picked up. And there are some extra niceties. It can actually infer the different queue names that you have. So if you have a critical, and then a mailer, and then a general as the three queues that Sidekiq is managing, you really want critical to not back up. So you can tell it to watch that one but ignore the normal one and only use -- Like, when critical is actually getting backed up, and all the other stuff is taken over then -- Again, it's got nice knobs and things, but mostly you can just say, "Turn it on and do the normal thing," and it'll do a very smart thing." STEPH: That does sound really helpful. Just to revisit, so Heroku for autoscaling, when you turn that on, I think Heroku does it based on response times. So if you get into a specific percentile, then Heroku is going to scale up for you to then bring down that response time. But it sounds like with this tool, with Rails autoscaling, then you have additional knobs like the Sidekiq timing that you'd referenced. 
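As a rough sketch of the queue setup Chris is describing, here is an ordinary weighted Sidekiq configuration using the queue names from the conversation. This is not Rails Autoscale's own configuration (that lives in the add-on's dashboard and gem settings); it only illustrates the queues such a tool would watch, with queue latency (how long a job waits before being picked up) as the scaling signal:

    # config/sidekiq.yml
    # Weighted queues: "critical" is polled most often, "general" least.
    # An autoscaling add-on can be told to watch latency on "critical"
    # and ignore the lower-priority queues.
    :queues:
      - [critical, 4]
      - [mailer, 2]
      - [general, 1]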
Are there some other knobs that you found really helpful? CHRIS: Basically, there are two different sides of it. So web and background jobs are going to be handled differently within this tool, and you can actually turn them on or off individually, and you can also, within them, the configurations are specific to that type of thing. So for the web side, you have different values that you can set as the thresholds than you do on the Sidekiq side. Overall, the queue name only makes sense on the Sidekiq side, whereas on the web side, it's just like the web requests all of them 'Please make sure they're not spending too much time waiting for a dyno to actually start processing them.' But yeah, again, it's just a very straightforward tool that does the thing that it says on the tin. I enjoy it. It's one of those simple additions where it's like, yeah, I think I'm happy to pay for this because you're just going to save me a bunch of money every month, in theory. And actually, that side of it is certainly interesting, but more of my app will be responsive if there is any spike in traffic. There's still plenty of other performance things under the hood that I need to make better, but it was nice to just turn those on and be like, yeah, okay. I think everything's going to run a little better now. That seems nice. But yeah, otherwise, for me, a very straightforward week. So I think actually shifting gears again, we have a listener question that we wanted to chat about. And this is one that both of us got very interested to chat about because there's a lot to this topic, but I'm happy to read it here. So the overall topic is improving as a developer, and the question goes, "How do you know you're improving as developers? Is your improvement consistent? Are there regressions? I find myself having very different views about code than I did even a year ago. In some cases, I write code now in a way that I would have criticized not too long ago. For example, I started writing a lot more comments. I used to think a well-named variable obviated the need for comments. While it feels like I'm improving, I have no way of measuring the improvement. It's only a gut feeling. Thanks. Love the show." And this comes from Tom. Thank you, Tom. Glad you enjoy the show. So, Steph, are you improving as a developer? STEPH: I love this question. Thanks, Tom, for sending it in because it is one that I think about but haven't really verbalized, and so I'm really excited to dive into this. So am I improving as a developer? It comes down to, I mean, we first have to talk through definitions. Like, what does it mean to become a better developer? And then, we can talk through metrics and understanding how we're getting there. I also love the other questions, which I know we'll get to. I'm just excited. But are there any regressions? And also, in my mind, they already answered their own question. But I'm getting ahead of myself. So let me actually back up. So how do you know you're improving as a developer? There are a couple of areas that come to mind. And for me, these are probably more in that space of they still have a little bit of a gut feeling to them, but I'm going to try hard to walk that back into a more measurable state. So one of them could be that you're becoming more comfortable with the work that you're doing, so if you are implementing a new email flow or running task on production or writing tests that become second nature, those types of activities are starting to feel more comfortable. 
To me, that is already a sign of progress, that you are getting more comfortable in that area. It could be that time estimates are becoming more accurate. So perhaps, in the beginning, they're incredibly -- like, you don't have any idea. But as you are gaining experience and you're improving as a developer, you can provide more accurate estimates. I also like to use the metric of how many people are coming to you for help, not necessarily in hard numbers, but I tend to notice when someone on a team is the person that everybody else goes to for help, maybe it's just on a specific topic, maybe it's for the application in general. But I take that as a sign that someone is becoming very knowledgeable in the area, and that way, they're showing that they're improving as a developer, and other people are noticing that and then going to them for help. Those are a couple of the ones that I have. I have some more, but I'd love to hear your thoughts. CHRIS: I think if nothing else, starting with how would we even measure this? Because I do agree it's going to be a bit loose. Unfortunately, I don't believe that there are metrics that we can use for this. So the idea of how many thousand lines of codes do you write a month? Like, that's certainly not the one I want to go with. Or, how many pull requests? Anything like that is going to get gamified too quickly. And so it's really hard to actually define truly quantifiable metrics. I have three in mind that scale the feedback loop length of time. So the first is just speed. Like, how quickly are you able to do the same tasks? So I need to build out a page in Rails. I need a route; I need a controller. I need a feature spec, those sort of things. Those tasks that come up over and over: are you getting faster with those? That's a way to measure. And there's an adage that I think comes from biking, professional cycling, that it never gets any easier; you just go faster. And so the idea is you're doing the same work over time, but you just get a little bit faster, and you're always trying that edge of your capabilities. And so that idea of it never gets any easier, but you are getting faster. I like that framing. We should be doing the same work. We should never get too good for building a crud app. That's my official stance on the matter; thank you very much. But yeah, so that's speed. I think that is a meaningful thing to keep an eye on and your ability to actually deliver features in a timely fashion. The next one would be how robust are the things that you're building? What's the bug count? How regularly do you have to revisit something that you've built to change it, to tweak it either because it doesn't exactly match the intent of the feature that you're developing or because there's an actual bug in it? It turns out this thing that we do is very hard. There are so many moving pieces and getting the design right and getting the functionality just right and handling user input, man, that's tricky. Users will just send anything. And so that core idea of robustness that's going to be more on a week scale sort of thing. So there's a little bit of latency in that measure, whereas speed that's a pretty direct measure. The third one is…I don't know how to frame this, but the idea of being able to revisit your code either yourself or someone else. So if you've written some code, you tried to solve a problem; you tried to encode whatever knowledge you had at the given time in the code. 
And then when you come back three months later, how easy is it to revisit that code, to change it, to extend it either for yourself (because at that point you've forgotten everything) or for someone else on the team? And so the more that you're writing code that is very easy to extend, that is very easy to revisit and reload that context into your head, how closely the code maps to the actual domain context I think that's a measure as well that I'm really interested in, but there's the most lag in that one. It's like, yeah, months later, did you do a good job? And so the more time you spend, the more you'll have a measure of that, but that's definitely the laggiest of the measures that I have in mind. STEPH: I love that adage that you shared that it never gets easier, but you get faster. That feels so relevant. I really like that. And then I hadn't considered the robustness. That's a really nice one, too, in terms of how often do you have to go back and revisit issues that you've added? CHRIS: You just write code without bugs; that's why you don't think about it. STEPH: [laughs] Oh, if only that were true. CHRIS: Yeah, if only that were true of any of us. STEPH: To keep adding to the list, there are a couple more that come to mind too. I'd mentioned the idea that certain tasks become easier. There's also the capability or the level of comfort in taking on that new, big, scary, unknown task. So there is something on the Teams' board where you're like, I have no idea how to do that, but I have confidence that I can figure it out. I think that is a really big sign that you are growing as a developer because you understand the tools that'll get you to that successful point. And maybe that means persuading someone else to help you; maybe it means looking elsewhere for resources. But you at least know how to get there, which then follows up on your ability to unblock yourself. So if you are in that state of I just don't know what to do next, maybe it's Googling, or maybe it is reaching out for help, but either way, you keep something moving forward instead of just letting it sit there. Another area that I've seen myself and other people grow as developers is our ability to reason about quality and speed. It's something that I feel you, and I talk about pretty often here on the show, but it comes down to our ability to not just write code but then to also make good decisions on behalf of the company that we are working for and the team that we're working with and understanding what matters in terms of what features really need to be part of this MVP? Where can we make compromises? And then figuring out where can we make compromises to get this out to market? But what's really important then for circling back to your idea of revisiting the code, we want code that we can still come back and trust and then easily maintain and make updates to. And then I feel like I'm rambling, but I have a couple more. Shall I keep going? CHRIS: Keep going. Those are great. STEPH: All right. So for the others, there's an increase in responsibilities that I notice. So, in addition to people coming to you more often for help, then it could be that you are receiving more responsibilities. Maybe you are taking on specific ownership of the codebase or a particular part of the team processes. Then that also shows that you are improving and that people would like you to take leadership or ownership of certain areas. And then this one, I am throwing it in here, but your ability to run a meeting. 
Because I think that's an important part of being a good developer is to also be able to run a meeting with your colleagues and for that to be a productive meeting. CHRIS: Cool. I like that one. I think I want to build on that because I think the core idea of being able to run a meeting well is communication. And I think there's one level of doing this job where it's just about doing the job. It's just about writing the code, maybe some amount of translating a specification or a ticket or whatever it is into the actual code that you need to write. But then how well can you communicate back out? How well when someone in project management says, "Hey, we want to build an aggregated search across the system that searches across our users, and our accounts, and our products, and our orders, and our everything." And you're like, "Okay. We can do that, but it will be hard. And let's talk about the trade-offs inherent in that and the different approaches and why we might pick one versus the other," being able to have that conversation requires a depth of knowledge in the technical but then also being able to understand the business needs and communicate across that boundary. And I think that's definitely an axis on which I enjoy pushing on as I'm continuing to work as a developer. STEPH: Yeah, I'm with you. And I think being a consultant and working at thoughtbot heavily influences my concept of improving as a developer because as developers, it's not just our job to write code but to also be able to communicate and help make good decisions for the team and then collaborate with everyone else in the company versus just implement certain features as they come down the pipeline. So communication is incredibly important. And so I love that that's one of the areas that you highlighted. CHRIS: Actually speaking of the communication thing, there's obviously the very human-centric part of that, but there's, I think, another facet of technical communication that is API design. When you're writing your code, what do you choose to expose and make accessible to collaborators? And I don't just mean API in terms of a REST API that people are hitting, but I mean a class that you have in your system. What are the private methods, and what are the public methods? And how do you think about the shape of it? What data do you expose? What do you not expose? And that can be really impactful because it allows how can you change things over time? The more that you hide, the more you can change. But then, if you don't allow your collaborators to access the bits that they need to be able to work with your system, that's an interesting one that comes to mind. It also aligns with, I don't think you were saying this exactly, but the idea of taking on more amorphous projects. So like, are you working within a system and adding a new feature, or are you designing a system? Are you architecting? The word architect that role can sometimes be complicated within organizations, but that idea of I'm starting fresh, and I'm building a system that others will then work within I think this idea of API design becomes really interesting in that context. What shape do you give to the system that we're working within, and what affordances? And all of that. And that's a very hard thing to get right. So it comes from experience of being like, I used some stuff in the past, and I hated it, so when I am the architect, I will build it better. And then you try, and you fail, and you're like, well, okay, but now I've learned.
And then you try it, and then you fail for different reasons. But the seventh time you try, it may be just that time you get the public API just right on the first go. STEPH: Seven times's a charm. That's how that goes, right? CHRIS: That is my understanding, yes. STEPH: I think something that is related to the idea of are you working in a structured space versus working in a new space and then how you develop that API for other people to work with. And then how do you identify when to write a test and what to test? That's another area that you were just making me think of is that I can tell when someone has experience with testing because they know what to test and what feels important to test. And essentially, it comes down to can I deploy with confidence? But there are a lot of times, especially if you're new to testing, that you're going to test everything, and you're going to have a lot of probably useless slow tests. But over time, you will start to realize what's really important. And I think that's one of the areas where then it does start to get harder to measure yourself as a developer because all of our jobs are different, and we work with different tech stacks, and we all have our unique responsibilities and goals. So it may be hard to say specifically like, "Oh, you're really good at X, Y, and Z, and that's how you know that you're improving as a developer." But I have more thoughts on that, which we'll get to in a moment where Tom mentioned that they don't have a way of measuring improvement. Shall I go ahead and jump ahead to I have no way of measuring that improvement, or shall we talk about regressions next? CHRIS: I'm interested in your thoughts on the regressions question because it's not something that I've really thought about. But now that he's asked the question, I'm thinking about it. So yeah, what are your thoughts on that? STEPH: My very quick answer is yes, [laughs] that there are regressions mainly because I respect that our brain can only make so much knowledge readily available to us, and then everything else goes into long-term storage. We can access it at some point, but it takes additional time, or maybe it takes some practice to recall that skill. So I do think there are regressions, and I think that's totally fine that we should be focused on what is serving us most at the moment and be okay with letting go of some of those other skills until we need to refine them again. CHRIS: Yeah. I think there's definitely a truth to true knowledge and experience with, say, a framework or a language that can fade. So if I spend a lot of time away from JavaScript, and then I come back, I'm going to hit my head on a few low ceilings every once in a while for the first couple of days or weeks or whatever it is. It was interesting actually that Tom highlighted the idea of he used to not write comments, and now he writes more comments, and so that transition -- I think we've talked about comments enough so our general thinking on it. But I think it's totally reasonable for there to be a pendulum swing, and maybe there's a slight overcorrection. And you read some blog posts that tell you the truth of the world, and suddenly, absolutely no comments ever that's the rule. And then, later on, you're like, you know, I could really use a comment here. And so you go that way, and then you decide you know what? Comments are good, and you start writing a bunch of them. And so it's sort of weaving back and forth. Ideally, you're honing in on your own personal truth about comments. 
But that's just an interesting example to me because I certainly wouldn't consider that one a regression. But then there's the bigger story of like, how do we approach building software? Ideally, that's what this podcast does at its best. We're not really a podcast about Rails or JavaScript or whatever it is we're talking about that week, but we're talking about how to build software well. And I think those core ideas feel like they're more permanent for me, or I feel like I'm changing those less. If anything, I feel like I'm ratcheting in on what I believe about good software. And there are some core ideas that I'm just refining over time, not done by any means, but it's that I don't feel like I'm fundamentally reevaluating those core ideas. Whereas I am picking up a new language and approaching a new framework and taking a different approach to what tools I'm using, that sort of thing. STEPH: Yeah, I agree. The core concepts definitely feel more important and more applicable to all the future situations that we're going to be in. So those skills that may fall into the regression category feel appropriate because we are focused on the bigger picture versus how well do I remember this regex library or something that won't serve us as well? So I agree. I am often focused more on how can I take this lesson and then apply it to other tech stacks or other teams and keep that with me? And I don't want that to regress. But it's okay if those other smaller, easily Google-able skills fall to the side. [laughs] CHRIS: Wait, are you implying that you can't write regexes just off the top of your head or what's…? STEPH: I don't think I could write any regexes off the top of my head. [laughter] CHRIS: Fair. All right. You just go to rubular.com, hit enter, and then we iterate. STEPH: Oh yeah. I don't want to use up valuable space for maintaining that sort of information. Rubular has it for me. I'm just going to go there. CHRIS: I mean, as long as you have the index of the places you go on the internet to find the truth, then you don't need to store that truth. STEPH: A moment ago, you mentioned where Tom highlights that they have different views about code that they wrote, even code that they wrote just like a year ago. And to me, that's a sign of growth in terms that you can look back on code that you have written and be like, well, maybe this would be different, or maybe this is still a good idea, but the fact that you are changing and then reevaluating, I think that is awesome because otherwise, if we aren't able to do that, then that is just a sign of being stagnant to me. We are sticking to the knowledge that we had a year ago, and we haven't grown since then versus that already shows that they have taken in new knowledge. So then that way, they can assess should I be adding comments? When should I add comments? Maybe I should swing away from that idea of this is a hard line of don't ever do this. I think I just have to mention it because there is one that I always feel so deeply about, DRY. DRY is the concept that gives me the most grief in terms that people just overuse it to the point that they do make code very hard to change. All right, that's my bit. I'll get off my pedestal. But DRY and comments are two things [chuckles] that both have their places. CHRIS: I don't know if your experience was similar, but around DRY, I definitely have had the pendulum swing of how I feel about it. And I think again, that honing in thing.
But initially, I think I read The Pragmatic Programmers, and they told me that DRY is important. And then I was like, absolutely, there will be no duplication anywhere, and then I felt some pain from that. And I've been in other systems and experienced places where people did remove duplication. I was like, oh, maybe it would have been better, and so I slowly got out of that mindset. But now I'm just in the place of like, I don't know, copy and paste not now, there was a period where I was like, just copy and paste everything. And then I was like, all right, I think there's a subtle line. There's a perfect amount of duplication, and that's the goal is to figure out that just perfect level. But for me, it really has been that evolution, and I was on one side, and then I was on the other side, and then I'm honing back in. And now I have my personal truth about duplication. STEPH: Oh, me too. And I feel like I can be a little more negative about it because I was in the same spot. Because it's a rule, it's a rule that you can apply that when you are new to software development, there aren't that many rules that are so easy to apply to your codebase, but DRY is one of them. You can say, oh, that is duplication. I know exactly what that is, and I can extract it. And then it takes time for you to realize, okay, I can identify it, but just because it's there, it doesn't mean it's a bad thing. Perfect duplication, I like it. CHRIS: Coming back to the idea of when we look back on our code six months, a year later, something like that, I think I believe the statement that we should always look back on our code and be like, oh, what was I doing there? But I think that arc should change over time. So early on in my career, six months later, I look back at my code, and I'm like, oh, goodness, what was happening there? I was very much a self-taught or blog internet-taught programmer just working on my own. I had no one else to talk to. So the stuff that I wrote early on was not good is how I will describe it. And then I got better, and then I got better, and I hope that I'm still getting better. And it's something that probably draws me to software development is I feel like there's always room to get a little bit better. Again, even back to that adage of it doesn't get any easier; you just go faster. Like, that's a version of getting better in my mind. So I hope that I can continue to feel that improvement and that ratcheting up. But I also hope that that arc is leveling off. There is an asymptotic approach to "good software developer." People in the audience, you can't see my air quotes, but I made air quotes there around good software developer. But that idea of I shouldn't look back probably this far into my career and look back at code from three months ago and be like, that's awful. That dude should be fired. I hope I'm not there. And so if you're measuring over time, what does your three months ago look back feel like? Oh, I feel like it's a little better. Still, you should look back and be like, oh, I probably would do that a little bit different given what I know now, what I've learned, but less so, I think. I don't know, what do you think about that? STEPH: Yeah, that makes sense. And I'm also realizing I haven't looked back at my code that much since I am changing projects, and then I don't always have the opportunity to go back to that project and then revisit some of the code. 
But I do agree with the idea that if you're looking back at code that you've written a couple of months ago that you can see areas that you would improve, but I agree that you wouldn't want it to be something drastic. Like, you wouldn't want to see something that was more of an obvious security hole or performance issue. I think there are maybe certain metrics that I would use. I think they can still happen for sure because we're always learning, but there's also -- I may be taking this in a slightly different direction than you meant, but there's also a kindness filter that I also want us to apply to ourselves where if you're looking back three months ago to six years ago and you're like, oh, that's some rough code, Stephanie. But it's also like, yeah, but that code got me to where I am today, and I'm continuing to progress. So I appreciate who I was in the past, and I have continued to progress to who I am today and then who I will be. CHRIS: What a wonderfully positive lens to put on it. Actually, that makes me think of one of -- We may be getting into rant territory here, but we talk a lot about imposter syndrome in the software development world. And I think there's a lot of utility because this is something that almost everyone experiences. But I think there's a corollary to it that we should talk about, which is a lot of people are coming into this industry, and they're like one year in, and the expectation that one year into a career that -- The thing that we do is not easy as far as I can tell. I haven't figured out how to make it easy. And the expectation that someone's going to be an expert that early on is just completely unreasonable in my mind. In my previous career, I was a mechanical engineer, and I went to school for four years. I actually went to school for five years, not because I was bad at school, but because I went to a place that had a co-op. And so I had both three different six months experiences working and four years of classroom education before I even got any job. And then I started doing things, and that's normal in that world. Whereas in the development world, it is so accessible, and I really feel like that's an absolutely wonderful thing. But the counterpoint of that is folks can jump into this career path very early on in their learning, and the expectation that they can immediately become experts or even in the short order I don't think is realistic. I think sometimes, when we talk about imposter syndrome, we may do a disservice. Like, it's not imposter syndrome. You're just new, and that's totally fine. And I hope you're working in an organization that is supportive of that and that has space for that and can help you grow in a purposeful way. In my mind, it's not realistic to expect everyone to be an expert a year in—end rant. STEPH: Well, I would love to plus-one your rant and add to it a little bit because I completely agree. I also love the phrasing that you just said where it's not that you have imposter syndrome; it's just that you are new and that team should be supportive of people that are new and helping them grow and level up. I also think that's true for senior developers in terms that you are very good at certain skills, but there's always going to be some area of the web or some area of software development that you are new to, and that is also not imposter syndrome. But it's fine to assess your own skills and say, "That's something that I don't know how to do." And sometimes, I think that gets labeled as imposter syndrome, but it's not. 
It's someone just being genuine and reflecting on their current skills and saying, "I am good at a lot of stuff, but I don't know this one, and I am new to this area." And I think that's an important distinction to make because I still want -- even if you are not new in the sense that you are new to being a software engineer, but you still have that space to be new to something. CHRIS: Yeah, it's an interesting, constantly evolving space. And so giving ourselves a little bit of permission to be beginners on various topics and for me, that's been an experience that's been continual. I think being a consultant, being a freelancer that impacts it a little bit. But nonetheless, even when I go into organizations, I'm like, oh, years in technology that only came out two years ago. That's pretty fresh. And so it's really hard to be an expert on something that's that new. STEPH: Yeah. I think being new to a team has its own superpower. I don't know if we've talked about that before; if we haven't, we should talk about, it but I won't do that now. But being new is its own superpower. But I do want to pivot back to where Tom mentioned that I have no way of measuring that improvement. And I think that's a really great thing to recognize that you're not sure how to measure something. And my very first honest suggestion if you are feeling that way is to go ask your manager and ask them how they are measuring your improvement because that is their job is to understand where you're at and to understand your path as a developer on the team and then helping you set goals. So since I'm a manager at thoughtbot, I'll go first, and I can share some ways that I help my team measure their own improvement. So one of the ways is that each time that we meet to discuss work, I listen to their challenges, and I take notes; I'm a heavy note-taker. And so once I have all those notes, then I can see are there any particular challenges that resurface? Are there any patterns, any areas where they continuously get stuck on? Or are they actually gaining confidence, and maybe something that would have given them trouble a couple of weeks ago is suddenly no big deal? And then I also see if they're able to unblock themselves. So a lot of what I do is far more listening, and I'm happy to then provide suggestions. But I am often just a space for someone to share what they are thinking, what they're going through, and then to walk through ideas and then provide suggestions if they would like some, and then they choose a suggestion that works best for them. And then we can revisit how did it go? So their ability to unblock themselves is also something that I'm looking for in terms of growth. And then together, we also set goals together, and then we measure that progress together. So it's all very transparent. And what areas would you like to improve, and then what areas would it be helpful for thoughtbot or as a consultant for you to improve? And then if I am fortunate enough to be on a project with them and see how they reason about quality and speed, how they communicate the type of features they're most comfortable to work on, and which tasks are more challenging for them, I also look to see do people enjoy working with them? That's a big area of growth and reflects communication, and reliability, and trust. And those are important areas for us to grow as developers. So those are some of the areas that I look to when I'm helping someone else measure their own improvement. 
CHRIS: I really like that, the structured framing of it, and the way that you're able to give feedback and have that as a constant, continuous way to evaluate, define, measure, and then try and drive towards it. Flipping things around, I want to offer a slightly different thing, which isn't necessarily specifically in the question, but I think it's very close to the question of how do we actually improve as developers? What are the specific things that we can try and do? I'm going to offer a handful of ideas. I'd be super interested to hear what your ideas are. But one of the things that has been really valuable for me is exploring different languages and frameworks. I, without fail, find something in every new language or framework that I then bring back to the core things that I'm working with. And I've continued to work with Rails basically throughout my career, but everything else that I'm doing has informed the way that I work with Rails and the way that I think about building code. As specific examples, functional programming is a really interesting frame of mind, and Elm as a language is such a wonderful, gentle, friendly, fun introduction to functional programming because functional programming can get very abstract very easily. I've also worked with Haskell and Scala and other languages like that, and I find them much more difficult to work with. But Elm has a set of constraints and a user-centric approach that is just absolutely wonderful. So even if you never plan to build a production Elm application, I recommend Elm to absolutely everyone. In terms of frameworks, depending on what you're using, maybe try and find the thing that's the exact opposite. If you're in the JavaScript space, I highly recommend Svelte. I think it's been very informative to me and altered a number of my opinions. A lot of those opinions were formed by React. And it's been interesting to observe my own thinking evolve in that space. But yeah, I think exploring, trying out, -- Have you ever used Lisp? Personally, I haven't, but that's one of the things that's on my list of that seems like it's got some different ideas in it. I wonder what I would learn from that. And so continually pushing on those edges and then bringing that back to the core work you're doing that's one of my favorite things. Another is… It's actually two-fold here. Teaching is one, and I don't mean that in the grand sense; you don't have to be an instructor at a bootcamp or anything like that but even just within your organization trying to host a lunch and learn and teach a concept. Without fail, you have to understand something all the better to be able to teach it. Or as you try and teach something, someone may ask you a question that just shakes the foundation of what you know, and you're like, wow, I hadn't thought about it that way. And so teaching for me has just been this absolutely incredible forcing function for understanding something and being able to communicate about it again, that being one of the core things that I'm thinking about. And then the other facet sort of a related idea is pairing, pair with another developer, pair with a developer who is more senior than you on the team, pair with someone who is more junior than you, pair with someone who's at the same level, pair with the designer, pair with the developer, pair with a product manager, pair with everyone. I cannot get enough pairing. Well, I can, actually. I read a blog post recently about 100% pairing, and I've never gotten anywhere close to that number. 
But I think a better way to put it is I think pairing applies in so many more contexts than people may traditionally think of it. People sometimes like to compartmentalize and like, pairing is great for big architecture design, but that's about it. And my stance would be pairing is actually great at everything. It is very high bandwidth. It is exhausting, but I have found immense value in every pairing session I've ever had. So, yeah, those are some loose thoughts off the top of my head. Do you have any how to get better protips? STEPH: Yeah, that's a wonderful list. And I'm not sure if this exactly applies because it's been a while since I have seen this talk, but there is a wonderful talk by Sandi Metz. I mean, all of her talks are wonderful, but this one is Go Ahead, Make a Mess. And I believe that Sandi refers to or highlights the idea of trying something new and then reflecting on how did it go? And that was one of the areas that I learned early on, one of the ways to help me progress quickly as a developer. Outside of the suggestions that you've already shared around lots of pairing that was one of the ways that I leveled up quickly is to iterate quickly. So I used to really focus on the code that I was writing, and I thought it needed to be perfect before my colleagues could review it. But then I realized that the sooner that I would push something out for feedback, then the faster I would get other more experienced developers' input, and then that helped me learn at an accelerated rate and then also ship more frequently. So I'd also encourage you to just go ahead and iterate quickly. We talk about with software in general, we want to iterate on the code that we are pushing up for other people to look at and then give us feedback on and then reflect on how did it go? What did we learn? What are some areas that we can improve? I feel like that self-evaluation is huge, and it's something that I know that I frankly don't do enough because one, it also prompts us to appreciate the progress that we have made but then also highlights areas where I feel strong in this area, but these are other areas that I want to work on. CHRIS: While we're on the topic of talks that have been impactful in our journeys of leveling up as developers, I want to quickly list three that just always come to mind for me: Avdi Grimm's Confident Code, Katrina Owen's Therapeutic Refactoring, and Ben Orenstein's Refactoring from Good to Great. There's a theme if you look across those three talks. They're all about refactoring, which is interesting. That tells you some stories about what I believe about how good software is made. It's not made; it's refactored. That's my official belief, but yeah. STEPH: Love it. That's also another great list. [laughs] For additional ways to level up, there are some very specific areas where it could be maybe do code katas or code exercises, or maybe you subscribe to certain newsletters, stay up to date with a language, new features that are being released. But outside of those very specific things, and if folks find this helpful, then maybe you and I can make a fun list, and then we could share that on Twitter as well. But I always go back to the idea of regardless of what level you're at in your career is to think about your specific goals, maybe if you are new to a team and you're new to software development, then maybe you just have very incremental goals of like, I want to learn how to write a test, or I want to learn how to get better at PR review or something very specific. 
But to have real growth, I think you have to first consider where it is that you want to go and then figure out a way to measure to get there. Circling back to some of the ways that I help my teammates measure that growth, that's one of the things that we talk about. If someone says, "Well, I want to get better at PR review," I'm like, "Great. What does that mean to you? Like, how do you get better at PR review? How can we actually measure this and make it something actionable versus just having this vague feeling of am I better?" I think I've ended up taking this a bit more broad as you were providing more specific examples on how to level up. But I like the examples that you've already provided around education and then trying something outside of your comfort zone. So what's coming to mind are more of those broad strategies of goal setting. CHRIS: I think generally, you need that combination. You need how do I set the measure? How do I think about improvement? And then also ideally a handful of tactics that you can try out. So hopefully, we provided a nice balanced summary here in this episode. And hopefully, Tom, if you're listening, you have gotten some useful things out of this conversation. STEPH: Yeah, this was fun. We managed to take this topic and make a whole episode out of this. So thanks, Tom, for sending in such a great topic. CHRIS: Frankly, when I saw the topic, I was certain this was going to happen. [chuckles] This was an obvious one that was going to fill up the time for us. But yeah, with that, I think we've probably covered plenty here. Should we wrap up? STEPH: I'm sure there's more, but sure, let's wrap up. CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed or reach me @SViccari on Twitter. CHRIS: And I'm @christoomey. STEPH: Or hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. Both: Byeeeeeee. Announcer: This podcast was brought to you by thoughtbot. Thoughtbot is your expert design and development partner. Let's make your product and team a success.

JavaScript Jabber
JSJ 479: Practical Microservices with Ethan Garofolo

JavaScript Jabber

Play Episode Listen Later Apr 13, 2021 77:49


Ethan Garofolo is the author of Practical Microservices with Pragmatic Programmers. He starts out debunking the ideas behind pulling parts of a monolith into different services and changing function calls into HTTP calls. Instead, he presents an approach that keeps things moving for development teams and solves several productivity issues. He breaks down the ways to move functionality around and which approaches make sense for breaking your application up into pieces that are easy to work on and approachable for multiple teams. Panel Aimee Knight AJ O'Neal Steve Edwards Guest Ethan Garofolo Sponsors Dev Influencers Accelerator JavaScript Error and Performance Monitoring | Sentry Links Super Guitar Bros Under Desk UREVO Treadmill Practical Microservices by Ethan Garofolo XKCD Flow Charts Picks Aimee- The 3 Mindsets to Avoid as a Senior Software Developer AJ- The Movie Great Pyramid K 2019 AJ- Postgres Cheat Sheet AJ- Jim Kwik 10 Morning Habits Ethan- GitHub | message-db/message-db Ethan- Eventide Project Ethan- GitHub | mpareja/gearshaft Ethan- Unlock | Space Cowboys Ethan- Practical Microservices by Ethan Garofolo Ethan- Practical Microservices Steve- Bytes by U;

Smart Software with SmartLogic
Sophie DeBenedetto on Programming Phoenix LiveView

Smart Software with SmartLogic

Play Episode Listen Later Mar 4, 2021 48:20


As users increasingly demand interactivity from their web experiences, Phoenix LiveView is becoming the dominant way of writing interactive Elixir applications without a loss to reliability. Today we welcome back an old friend of the show and GitHub engineer Sophie DeBenedetto to talk about her upcoming book, Programming Phoenix LiveView. We open our conversation with Sophie by hearing about her work at GitHub, as well as what we can expect from the Code BEAM V conference. As she doesn’t always get to use Elixir at her job, Sophie then discusses how coders can sharpen their Elixir skills when not at work. After exploring how companies can begin adopting Elixir through event-driven design, we dive into Sophie’s book. She unpacks the value of LiveView when building efficient web applications before touching on how her book can best help people to learn LiveView. We ask Sophie how intertwined the future of Elixir is to the success of LiveView and how this might impact Phoenix. Her answers highlight LiveView’s role in pushing Elixir adoption while also making Elixir easier to learn. We wrap up our discussion by chatting about the challenges of technical writing and Sophie’s experience working with the wonderful Pragmatic Programmers publishing house. Tune in to hear more insights into programming LiveView; if you believe the hype, it’s “one of the most important new frameworks of our generation.” Key Points From This Episode: We catch up with guest Sophie DeBenedetto and hear about the Code BEAM V conference. Sophie shares her feelings on coding in Go. How Sophie engages with Elixir when it’s not a key part of her day job. What Flatiron School did to work towards Elixir adoption. Exploring the concept of event-driven design. Insights into the eventing system used at GitHub. Sophie talks about her experience as an engineering manager. Why Sophie transitioned from being a manager to being an individual contributor. How work-from-home has impacted meeting expectations. Hear the elevator pitch for Sophie’s upcoming book. Thoughts on how beginner-friendly Elixir is as a language. Whether Phoenix LiveView is the future of Elixir. How the attention placed on LiveView limits access to Phoenix resources and tutorials. We ask Sophie if LiveView will make it easier or more difficult to learn Elixir. Advice on writing technical books and the writing support offered by Pragmatic Programmers. 
Links Mentioned in Today’s Episode: SmartLogic — https://smartlogic.io/ Elixir Wizards Discord — https://smr.tl/wizards-discord Elixir Wizards Email — podcast@smartlogic.io Sophie DeBenedetto — http://sophiedebenedetto.nyc/ Sophie DeBenedetto on LinkedIn — https://www.linkedin.com/in/sophiedebenedetto/ Sophie DeBenedetto on Twitter — https://twitter.com/smdebenedetto Programming Phoenix LiveView — https://www.pragprog.com/titles/liveview/programming-phoenix-liveview/ Beam Radio — https://www.beamrad.io/ Code BEAM V — https://codesync.global/conferences/code-beam-sto/ Bruce Tate — https://twitter.com/redrapids José Valim — https://twitter.com/josevalim Nx — https://dashbit.co/blog/nx-numerical-elixir-is-now-publicly-available Alex Koutmos — https://twitter.com/akoutmos EMPEX — https://empex.co/nyc.html Flatiron School — https://flatironschool.com/ ‘What is the difference between Event Driven and Event Sourcing?’ — https://softwareengineering.stackexchange.com/questions/385375/what-is-the-difference-between-event-driven-and-event-sourcing Chris Keithley — https://twitter.com/chriskeathley GitHub — https://github.com/ Steven Nuñez — https://twitter.com/StevenNunez ‘Shipping Greenfield Elixir in a Legacy World’ — https://codesync.global/conferences/code-beam-v-america-2021/training/#145shipping-greenfield-elixir-in-a-legacy-world Ruby on Rails Tutorial: Learn Web Development with Rails — https://www.amazon.com/Ruby-Rails-Tutorial-Addison-Wesley-Professional-ebook/dp/B01N779HKK Toran Billups — https://twitter.com/toranb The Pragmatic Programmers — https://pragprog.com/ Chris McCord — https://twitter.com/chris_mccord/ Dave Thomas — https://twitter.com/pragdave/ Andy Hunt — https://twitter.com/PragmaticAndy/ Zenni — https://www.zennioptical.com/ Warby Parker — https://www.warbyparker.com/ Special Guests: Sophie DeBenedetto and Sundi Myint.

Software Process and Measurement Cast
SPaMCAST 639 - Chaos Engineering, A Conversation With Mikolaj Pawlikowski

Software Process and Measurement Cast

Play Episode Listen Later Feb 21, 2021 34:07


Chaos Engineering comes to the Software Process and Measurement Cast this week delivered (chaotically) by Mikolaj Pawlikowski. Miko and I talked about the definition of chaos engineering, why chaos is not scary, and most importantly his new book Chaos Engineering, Site reliability through controlled disruption. One of the most important side effects of chaos engineering is uninterrupted sleep caused by things not going bump in the night! Mikolaj Pawlikowski is a recognized authority on chaos engineering. He is the creator of the Kubernetes chaos engineering tool PowerfulSeal, and the networking visibility tool Goldpinger. https://mikolajpawlikowski.com/ https://twitter.com/mikopawlikowski https://www.linkedin.com/in/mikolajpawlikowski/ I have discount codes for the listeners of the Software Process and Measurement Cast, ping me @tcagley@tomcagley.com and I will share them with you! The Software Process and Measurement Cast is a proud media sponsor of the DevOps Online Summit. Not to put too fine a point on it, one of the best ways to get your message heard is to speak. The crew at the DevOps Online Summit provides a phenomenal platform to network with fellow practitioners from all over the world. Start the journey to speaking at the DevOps Online Summit 2021 by submitting at https://bit.ly/3syp2c5 Re-Read Saturday News  Today we dive into the main part of Fixing Your Scrum, Practical Solutions to Common Scrum Problems, by Ryan Ripley and Todd Miller, published in 2020 by The Pragmatic Programmers. In this installment of Re-read Saturday, we tackle both Chapter 1: A Brief Introduction To Scrum and Chapter 2: Why Scrum Goes Bad. If you have not bought your copy -- what are you waiting for? Fixing Your Scrum: Practical Solutions to Common Scrum Problems  This Week’s Installment  Week 1: Re-read Logistics and Front Matter - https://bit.ly/3mgz9P6  Week 2: A Brief Introduction To Scrum, and Why Scrum Goes Bad - https://bit.ly/37w4Dv9  Next SPaMCAST Next week, we will explore the goals of Communities of Practice. Organizations are increasingly becoming more diverse and distributed, while at the same time pursuing mechanisms to increase collaboration between groups and consistency of knowledge and practice, hence the rapid growth of Communities of Practice. What are their goals? Also, Tony Timbol brings his brand new column to the podcast. “To Tell A Story” will explore the nuances of User Stories led by a practitioner, consultant, entrepreneur, and science fiction author. Tony does it all. 

Software Process and Measurement Cast
SPaMCAST 638 - Cybersecurity and IoT, A Conversation With Paul Clayson

Software Process and Measurement Cast

Play Episode Listen Later Feb 14, 2021 31:07


This week we connect with Paul Clayson to talk about cybersecurity in the IoT space. When you are dealing with extreme constraints, being secure is complicated and doing nothing is a really bad option. Paul also weighs in on the CMMC (new security model) and serial entrepreneurism. Paul’s Bio: CEO of AgilePQ. 98% of IoT devices are currently unsecured. On average a data breach costs $8.2 million. AgilePQ was born to address these issues. AgilePQ has raised a total of $10M. Paul has extensive experience in launching and growing early-stage companies. He has launched four disruptive technology companies and served as CEO to five early-stage companies in nanotechnology, automotive, graphene, carbon nanotube, PCB, microprocessor, and other advanced technologies. Connect with Paul at linkedin.com/in/paulclayson The Software Process and Measurement Cast is a proud media sponsor of the DevOps Online Summit. Not to put too fine a point on it, one of the best ways to get your message heard is to speak. The crew at the DevOps Online Summit provides a phenomenal platform to network with fellow practitioners from all over the world. Start the journey to speaking at the DevOps Online Summit 2021 by submitting at https://bit.ly/3syp2c5 Re-Read Saturday News  Today we begin the re-read of Fixing Your Scrum, Practical Solutions to Common Scrum Problems, by Ryan Ripley and Todd Miller, published in 2020 by The Pragmatic Programmers. Over the next 14 or 15 weeks we will have fun exploring the concepts in the book and sharing ideas on how to use the information Ryan and Todd deliver. If you have not bought your copy -- what are you waiting for? Fixing Your Scrum: Practical Solutions to Common Scrum Problems  This Week’s Installment  Week 1: Re-read Logistics and Front Matter - https://bit.ly/3mgz9P6  Next SPaMCAST In the next podcast, I talk to Mikolaj Pawlikowski. Mikolaj and I discuss his new book from Manning Publications, Chaos Engineering, Site reliability through controlled disruption. Our discussion covered the definition of chaos engineering and how it can be applied to improve value and reliability.  

The Cynical Developer
Episode 123 - "20yrs of Pragmatic Programmers"

The Cynical Developer

Play Episode Listen Later Sep 16, 2019 46:19


In this episode we talk to Andy Hunt and Dave Thomas about Pragmatic Programming, and the release of the 20th Anniversary Edition of the Pragmatic Programmer   Contact Andy Hunt: Twitter: Website:   Contact Dave Thomas: Website: Twitter:   Other Links: The Pragmatic Bookshelf: The Pragmatic Programmer, 20th Anniversary Edition:

Ruby Rogues
RR 423: The Well-Grounded Rubyist with David A. Black & Joseph Leo III

Ruby Rogues

Play Episode Listen Later Jul 30, 2019 49:12


Sponsors Sentry use code “devchat” for $100 credit Cloud 66 - Pain Free Rails Deployments: Try Cloud 66 Rails for FREE & get $66 free credits with promo code RubyRogues Panel Charles Max Wood Andrew Mason With Special Guests: David A. Black and Joseph Leo III Episode Summary David A. Black has been a Ruby user for 19 years and has been writing books about Ruby for the last 14 years. Joseph spent 12 years in software and started the company Def Method Inc. Together, they co-authored the book The Well-Grounded Rubyist, which will soon have its third edition released. They give some of the history behind The Well-Grounded Rubyist. Joseph talks about his experience being brought into the project. David and Joseph talk about how The Well-Grounded Rubyist is different from other books on Ruby. This book is helpful because a lot of people begin by understanding Ruby more than Rails, and this book talks about ways to think about Ruby and understand how it's structured. Joseph and David talk about how The Well-Grounded Rubyist 3rd edition differs from the 2nd edition. The book has been updated so that a lot of the code and solutions for the exercises are available online, and there is an additional chapter in part 3 about Ruby dynamics and how one would write functional programming with Ruby. The panel discusses how important it is to learn Ruby before starting a job in Rails. They agree that if you are a Ruby developer, even if you're working on Rails apps, you should know your tools. They discuss how far down that road The Well-Grounded Rubyist would get readers. The panelists talk about other books that are a natural prequel or sequel to The Well-Grounded Rubyist. Joseph and David talk about their approach to reading books and how The Well-Grounded Rubyist should be read. Their goal in making the book was not to have people work on an overarching application while reading the book, but rather there are exercises and examples that you are encouraged to work through. There are some things in the book that you won't write often, but you still need to know how to do them. While the book doesn't have everything about Ruby, the examples are designed to give you the best returns for your study. David and Joseph conclude by giving their final thoughts on the book. Links The Well-Grounded Rubyist, Third Edition Perl Programming Ruby 1.9 & 2.0: The Pragmatic Programmers' Guide (The Facets of Ruby) 4th Edition Practical Object-Oriented Design: An Agile Primer Using Ruby (2nd Edition) by Sandi Metz String mutability Follow DevChat on Facebook and Twitter Picks Andrew Mason: Default Gems Charles Max Wood: Good to Great: Why Some Companies Make The Leap and Others Don't by Jim Collins David A. Black: Pragmatic Programmer 2nd edition Davidablack.net and @david_a_black on Twitter Joseph Leo III: Barbarians at the Gate Firehydrant.io @jleo3 and defmethod.com

devpath.fm
Pragmatic Programmers Andy Hunt and Dave Thomas

devpath.fm

Play Episode Listen Later Jul 12, 2019 61:20


Andy Hunt and Dave Thomas are the authors of The Pragmatic Programmer, one of the most influential software engineering books of all time. Andy and Dave are also original members of the agile movement and have been writing code for several decades. They are programmers. Andy and Dave's internet home: https://pragprog.com/

The Changelog
The Pragmatic Programmers

The Changelog

Play Episode Listen Later Jul 11, 2019 78:40 Transcription Available


Dave Thomas and Andy Hunt, best known as the authors of The Pragmatic Programmer and founders of The Pragmatic Bookshelf, joined the show today to talk about the 20th anniversary edition of The Pragmatic Programmer. This is a beloved book to software developers all over the world, so we wanted to catch up with Andy and Dave to talk about how this book came to be, some of the wisdom shared in its contents, as well as the impact it’s had on the world of software. Also, the beta book is now “fully content complete” and is going to production. If you decide to pick up the ebook, you’ll get a coupon for 50% off the hardcover when it comes out this fall.

CoRecursive - Software Engineering Interviews
Learning to Think with Andy Hunt - Pragmatic Programmers guide to being productive

CoRecursive - Software Engineering Interviews

Play Episode Listen Later Apr 15, 2019 54:51


Andy Hunt is a celebrity in the world of software development. Or at least he is one to me. The Pragmatic Programmer is a classic book on software development. He is an author of the Agile Manifesto and started the book company that has published many great books, including several by recent guests. Today I talk to Andy about how software engineers can get better at thinking and learning. How can we develop this meta-skill, and how can being aware of the common mistakes our brains make help us be more productive? Show notes: The Pragmatic Programmer  Pragmatic Thinking and Learning  Conglommora  Webpage for Episode  

Test & Code - Python Testing & Development
69: Andy Hunt - The Pragmatic Programmer

Test & Code - Python Testing & Development

Play Episode Listen Later Mar 21, 2019 48:34


Andy Hunt and Dave Thomas wrote the seminal software development book, The Pragmatic Programmer. Together they founded The Pragmatic Programmers and are well known as founders of the agile movement and authors of the Agile Manifesto. They founded the Pragmatic Bookshelf publishing business in 2003. The Pragmatic Bookshelf published its most important book, in my opinion, in 2017 with the first pytest book (https://pragprog.com/book/bopytest/python-testing-with-pytest) available from any publisher. Topics: * The Pragmatic Programmer (https://pragprog.com/book/tpp/the-pragmatic-programmer), the book * The Manifesto for Agile Software Development (https://agilemanifesto.org/) * Agile methodologies and lightweight methods * Some issues with "Agile" as it is now. * The GROWS Method (https://growsmethod.com/) * Pragmatic Bookshelf (https://pragprog.com/), the publishing company * How Pragmatic Bookshelf is different, and what it's like to be an author (http://write-for-us.pragprog.com/) with them. * Reading and writing sci-fi novels, including Conglommora (https://conglommora.com/), Andy's novels. * Playing music (https://andyhunt.bandcamp.com/). Special Guest: Andy Hunt.

iteration
Pragmatic Paranoia

iteration

Play Episode Listen Later Jun 8, 2018 32:46


Chapter 4 - Pragmatic Paranoia

Tip 30: You Can't Write Perfect Software
Perfect software doesn't exist. The "defensive driving" analogy: as a programmer, you shouldn't trust YOURSELF either, haha. "Pragmatic Programmers code in defenses against their own mistakes." John: To me this means testing and never assuming the user is wrong.

Tip 31: Design with Contracts (long section alert)
https://github.com/egonSchiele/contracts.ruby
"You can think of contracts as assert on steroids." This says that double expects a number and returns a number. Here's the full code:

require 'contracts'

class Example
  include Contracts::Core
  include Contracts::Builtin

  Contract Num => Num
  def double(x)
    x * 2
  end
end

puts Example.new.double("oops")

Be strict in what you will accept before you begin, and promise as little as possible in return. Remember, if your contract indicates that you'll accept anything and promise the world in return, then you've got a lot of code to write! What is a "correct" program? "What is a correct program? One that does no more and no less than it claims to do. Documenting and verifying that claim is the heart of Design by Contract." The idea of "designing by contract" is that a program should do no more and no less than promised; this is kind of like testing. Ruby doesn't have a "contract" system built into its design; obviously, we have a Ruby gem for it! hah. The reason this is supposedly more powerful than plain old assertions is that contracts can propagate down the inheritance hierarchy: given some precondition that must be true (i.e., must be a positive integer), the postcondition will be satisfied.

Tip 32: Crash Early
Don't have an "it can't happen" mentality; code defensively. A Pragmatic Programmer tells themself that if there is an error, something very bad has happened. Err on the side of crashing earlier! When you don't, your program may continue with corrupted data. "It's much easier to find and diagnose the problem by crashing early, at the site of the problem." John: In Ruby, using rescue too aggressively just pushes the problem up: not crashing, but not working properly. When your code discovers that something that was supposed to be impossible just happened, your program is no longer viable. A dead program normally does a lot less damage than a crippled one. This brings into the discussion being able to handle errors gracefully, which is very much a UX question as well.

Tip 33: If it can't happen, use assertions to ensure that it won't
"This application will never be used abroad, so why internationalize it?" Let's not practice this kind of self-deception, particularly when coding. This cuts me deep. When you're this confident, you should write tests to absolutely ensure that you're right. John: Write tests to prove it won't be used in a certain way. I assumed there would always be money in the Stripe account. Think through how the world will screw things up. Write tests against it.

Tip 34: Use exceptions for exceptional problems
Our good friend, the JavaScript try...catch. Ask yourself: "Will this code still run if I remove all of the exception handlers?" If the answer is "no," then maybe exceptions are being used in nonexceptional circumstances. John: An error and an exception are two different things. Very loosely: one is based on incorrect inputs, the other is an error in a process. Programs that use exceptions as part of their normal processing suffer from all the readability and maintainability problems of classic spaghetti code. These programs break encapsulation: routines and their callers are more tightly coupled via exception handling.

Tip 35: Finish what you start
John: Garbage collection. We are lucky, as most major frameworks do garbage collection for us. Resources that devs manage: memory, transactions, threads, files, timers. These resources need memory allocated, THEN deallocated. The problem is that devs don't have a plan for dealing with allocation AND deallocation. Basically, don't forget to garbage collect; not doing so may lead to memory leaks. Don't forget to do things like close files. John: I currently have this problem with Action Cable WebSocket connections. I am opening them and not managing the closing of these connections well, so it's leading to performance issues. Email sending: make sure it delivered. Handle the exception, finish what you started!

Picks
John: RailsConf talks are live. I will update with my blog post of top picks here. Polymorphism
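As a rough companion to Tip 35, here is a minimal Ruby sketch (ours, not from the episode; the method name, file path, and message are invented for illustration) of pairing resource allocation with guaranteed cleanup via ensure, so the file is closed even when writing raises:

# A minimal sketch of "finish what you start": the routine that opens a
# resource is also responsible for releasing it, even if an exception is raised.
def append_log_line(path, message)
  file = File.open(path, "a")   # allocate the resource
  file.puts(message)            # use it; this line could raise
ensure
  file&.close                   # always deallocate, on success or failure
end

# The block form of File.open does the same bookkeeping automatically:
File.open("events.log", "a") { |f| f.puts("connection closed") }

The same ownership idea applies to the Action Cable connections mentioned above: whichever piece of code opens a connection should also be the one that closes it.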

The iPhreaks Show
iPS 239: Xcode Treasures with Chris Adamson

The iPhreaks Show

Play Episode Listen Later May 10, 2018 75:22


Panel: Gui Rambo Andrew Madsen Erica Sadun Special Guest: Chris Adamson In today's episode, the iPhreaks panelists speak with Chris Adamson, a freelance iOS and Mac developer from Grand Rapids, Michigan. Also, Chris is an author and co-author of a number of books, including Xcode Treasures. Chris is on the show to talk about his book about Xcode, called Xcode Treasures. This is a great episode to learn about another avenue of valid information on the inner workings of Xcode. In particular, we dive pretty deep on: Book Xcode Treasures Negativity about Xcode Tools Documentation Code Warrior Hardware 32 bit issues What are the biggest frustrations with Xcode as a developer? What are the things you love about Xcode? Xcode project format Xcode not savvy with version control Apple addressing these issues Interface Builder What did you learn about Xcode when writing the book? Code Signing Sandboxing apps Git control VeraCode Fonts Who needs to buy your book? Mid-level and up iOS developers need this book. Pragmatic Programmers  Beta Program When are we going to see the book? Xcode for iPad? Xcode as an IDE Core Audio talk and updates And much much more! Links: http://subfurther.com/blog https://www.linkedin.com/in/invalidname// https://www.oreilly.com/pub/au/1045 Xcode Treasures https://developer.apple.com/xcode/ Picks: Gui OS Log API Andrew Online Swift Playground Juiced.gs Erica Snippity Chris We are X

IT Career Energizer
Software Development Is About People with Brian P Hogan

IT Career Energizer

Play Episode Listen Later Jan 28, 2018 23:28


Brian P Hogan is a web developer, teacher, book editor and musician. He is also author of several books including, “Exercises For Programmers”. Brian is currently a Technical Editor at Digital Ocean and is also a development editor at the Pragmatic Programmers as well as a panellist on the Ruby Rogues podcast. In this episode Brian tells us why software development is mostly about people and that it’s about solving problems. Brian also talks about making the most of opportunities to engage with smart people, why the answer is always ‘No’ unless you say ‘Yes’, and why just having a backup of your data is not enough. To find out more about this episode, visit the show notes page at www.itcareerenergizer.com/e39

/dev/hell
Episode 65: Entrepremurder

/dev/hell

Play Episode Listen Later Sep 18, 2015


Episode 65: slow roasted for sweet mesquite flavor! Derick Bailey of WatchMeCode fame joins us for an awesome discussion of the trials and tribulations of developer entrepreneurship. Derick talks about why Signal Leaf is going away, and we swap terrifying PayPal stories. Do these things! Check out our sponsors: Roave and WonderNetwork Buy stickers at devhell.info/shop Follow us on Twitter here Rate us on iTunes here Listen Download now (MP3, 77.2MB, 1:28:12 ) Links and Notes Derick Bailey WatchMeCode Signal Leaf Brian P Hogan EntreProgrammers Podcast Derick’s series of videos for Pragmatic Programmers 30x500 Amy Hoy’s blog Gary Vaynerchuck’s TED talk DevChat.tv 5by5.tv

The iPhreaks Show
118 iPS Tutorials For Developers and Gamers with Ray Wenderlich, Mic Pringle, and Greg Heo

The iPhreaks Show

Play Episode Listen Later Aug 20, 2015 53:33


01:14 - Ray Wenderlich Introduction Twitter Blog 01:23 - Mic Pringle Introduction Twitter GitHub The raywenderlich.com Podcast 01:40 - Greg Heo Introduction Twitter GitHub Blog 01:47 - The Conception of Ray’s Tutorial Website and The Tutorial Team Ray Wenderlich: Tutorials for iPhone / iOS / Developers and Gamers EA Games 05:58 - Maintaining High Quality and Consistency 07:26 - Tips & Advice for Writing a Tutorial 09:12 - Avoiding User Frustration 11:54 - Writing Books 13:00 - Traditional vs Self-Publishing Pragmatic Programmers 14:31 - Book Content vs Site Tutorials 15:41 - Starter Kits 16:33 - Transitioning to Swift 19:47 - Error/Bug Support 21:06 - Branching Out Into Other Technologies 22:21 - Selling Introductory vs Advanced Tutorials 25:15 - Choosing Topics 26:08 - RWDevCon 31:48 - Working with Ray and on raywenderlich.com 36:31 - Maximizing Marketing Opportunities 39:46 - Writing Tutorials for Mainstream Apps & Games 41:08 - Highlights Video Tutorials iOS Games by Tutorials Second Edition: Beginning 2D iOS Game Development with Swift iOS Animations by Tutorials by Marin Todorov   Episode Resources The Freelancers' Show Episode #164: Teaching and Learning Courses with Breanne Dyck The Freelancers' Show Episode #165: Strategy and Project Management with Marie Poulin JavaScript Jabber Episode #173: Online Learning with Gregg Pollack Picks WinObjC (Andrew) Friday Q&A 2015-07-31: Tagged Pointer Strings by Mike Ash (Andrew) The Web Platform Podcast Episode 56: Building Your Brand with Charles Max Wood (Chuck) Developer On Fire Episode 017 - Charles Max Wood - Get Involved and Try New Things (Chuck) Skype’s Inside Out Emojis (Chuck) little bites of cocoa (Ray) The insanely slow road to building a blog (and why most people give up) by Belle Beth Cooper (Ray) What the Best College Teachers Do by Ken Bain (Greg) What’s Your Learning Style? Quiz (Greg) Brian Gilham's WatchKit Resources (Mic) Exploding Kittens (Mic)

Reboot
1: Gordon (Bartender > Film Editor > iOS Developer)

Reboot

Play Episode Listen Later Nov 17, 2014 62:52


In episode 1 of the first season of Reboot, Adarsh talks with Gordon Fontenot, an iOS developer at thoughtbot, about his career path, moving from college dropout, to bartender, to film editor and finally iOS developer. Gordon Fontenot on Twitter Gordon’s podcast with Mark Adams on iOS development Waltham, Massachusetts Avid Video Editor Central Booking AppleScript Pragmatic Programmers Stack Overflow Careers Imposter Syndrome Dog typing on computer thoughtbot

Devchat.tv Master Feed
Investing in Your Knowledge Portfolio

Devchat.tv Master Feed

Play Episode Listen Later Jul 15, 2011 20:47


The Pragmatic Programmer talks about your knowledge portfolio and recommends that you invest in it regularly. In fact, it draws the analogy of a stock or financial investor and how they invest. I discuss my experience in investing and my thoughts on the content of the Pragmatic Programmers book. The only major difference or disagreement I have between their suggestions and my experience is that today most of the content you’d find in books or trade magazines is available online in blogs, videos, and other media. However, in some cases, the best documentation is in a recently written and maintained book. Here are some of the things I mentioned in the podcast: Ruby Reloaded Peter Cooper’s Ruby Course Teach Me To Code Academy – Ruby on Rails Course The Pragmatic Programmer book

Devchat.tv Master Feed
TMTC 18 – Dave Thomas Interview – Part 2

Devchat.tv Master Feed

Play Episode Listen Later May 4, 2010 35:24


In this episode of the Teach Me To Code podcast, Dave talks us through the process he and Andy Hunt went through in founding the Pragmatic Programmers book series and publishing company. Dave also talks about the advantages that they have had by not holding onto or being mired down by the way things have always been done and their growth in non-conventional book selling channels. He also mentioned that if you would like them to come do training where you're at, contact Mike Clark and find people who are willing to sit in on the course. I think my favorite part of the interview was his explanation of where the Agile Manifesto came from. We also got to talk about what Agile development really is. Dave explains the correlation between his musical interests and his programming interests. He figures that at least 30-40% of speakers at any conference would have some sort of musical background. The structure and the way things come together in music actually applies to software. You create patterns or structures that work well together at multiple levels. Toward the beginning of The Pragmatic Programmer, Dave and Andy recommend learning a new language every year. He discusses his hobby of picking up new programming languages and investing in yourself. Finally, I asked Dave about running a business and how to get one started. He gave some terrific advice regarding building your own application and business. He wrapped up the episode by pointing out that programming is exceptionally hard. You have a huge amount of information you have to know in order to get into programming. On top of it, the world is complicated and makes the problems we have to solve hard. So, ultimately, make it fun! Download this Episode

Devchat.tv Master Feed
RC 17 – Interview with Dave Thomas from the Pragmatic Programmers – Part 1

Devchat.tv Master Feed

Play Episode Listen Later Apr 20, 2010 31:30


Dave Thomas is one of the founders of the Pragmatic Programmers. He is a signatory of the Agile Manifesto. He's written several books, including: The Pragmatic Programmer, Programming Ruby (The Pickaxe Book), and Agile Web Development with Rails. This discussion covered a wide variety of topics, including how he picked up Ruby, learning new languages, and building businesses. I think one of my favorite parts was his description of how he came to write his books Programming Ruby and the Pragmatic Programmer. For me it was valuable to get that type of view into some of the early documentation on my primary programming language. I also appreciated his insight into building code better, rather than building better code. He offered insight into code that is appropriate to the task that is being built. He offered the following questions as qualifying whether you're building code better: Does it do what the customer wanted? Can it continue to provide value in the future? This sort of purpose-driven development is really the whole point of what we do as programmers. Thank you Dave for pointing out that the important thing is keeping the practices that allow us to adapt to changes in the ecosystem our applications run in. Dave also shared with us that talent in programming is important. Like musicians, you need talent to be able to perform. You can only get so far pushing your way through programming. Can you think about things as explicitly as a computer? More importantly, rather than the introverted programmer who doesn't communicate, a good programmer has the ability to translate the customer's requirements into computer instructions. You need the ability to communicate clearly and represent the computer and its capabilities to the customer. One of the most important things you can do is find a good set of mentors. Someone who can teach you what you're doing right and what you're doing wrong. Dave shared a terrific example where he said the right thing in the wrong way and explained how his mentor approached him and what to look for in a great mentor. Here is what Dave recommends in looking for a mentor: Spend some time getting to know them. Look for people around you. Look at what they do, since you'll be modeling yourself after them. Ask them to be your mentor. If they're not willing, they're not a good mentor. Oddly enough, the person I approached after this podcast is also named Dave. If you want to know where the Pragmatic Programmer came from, Dave tells us toward the end of this episode. We pick up the discussion next week talking about his businesses and entrepreneurship. Download this Episode

Devchat.tv Master Feed
RC 16 – The DRY Principle (Don’t Repeat Yourself)

Devchat.tv Master Feed

Play Episode Listen Later Apr 13, 2010 20:28


The DRY principle is a guiding principle behind frameworks like Ruby on Rails. Its basic tenet as provided by the Pragmatic Programmers is: EVERY PIECE OF KNOWLEDGE MUST HAVE A SINGLE, UNAMBIGUOUS, AUTHORITATIVE REPRESENTATION WITHIN A SYSTEM. This basically means that between your database schema, code, architecture, etc. you should only have one representation of each piece of knowledge that applies to your system. This goes far beyond your basic avoidance of Copy/Paste Programming. Your code does not have to be identical to be a duplicate. For example, you may have code that tells you how to build an address. This may use a business name, address fields, city, state, and zip. You may also have code that builds an address for a user, using the user's name, address fields, city, state, and zip. This is a simple example that shows code duplication. But what about mathematical algorithms? Or, an example that I just worked through at work, we're using Flash and HTML authentication. Both systems need to be able to authenticate. So, how do we consolidate our code so that authentication knowledge is only managed in one place? I've also seen instances where duplicate code is hard to generalize to match all cases. In those circumstances, I ask myself the following questions. Do all of these processes need to be maintained for consistency? Or in other words, if I change class A's behavior, do I need to change class B's behavior? Is this the same process in both cases regardless of dependencies? Is there a case where one process will need to be modified to significantly deviate from the other? Am I creating more work by combining these processes than I would by simply maintaining them as they are? I'm fully aware that after making my decision, I may not get back to modifying this code, so I have to make the best decision I can. The main concern I have is maintainability. If I can maintain things in one place, for example building code generation off of a configuration file that fans out to multiple parts of the system, keeping the implementation details in the config file (think about a SOAP WSDL file), or if I can build a configuration off of some code implementation, or generate some documentation from the code, then I can avoid circumstances where I break my code in one place by changing it in another. That's the true power of the DRY principle. Download this Episode
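To make the address example above concrete, here is a small Ruby sketch (not from the episode; the class and method names are invented for illustration): the knowledge of how an address is formatted lives in one authoritative place, and both the business and user callers share it instead of each carrying a near-copy.

# A minimal illustration of DRY: one representation of "how an address
# is formatted", reused by every caller instead of duplicated.
class AddressFormatter
  def self.format(name:, street:, city:, state:, zip:)
    [name, street, "#{city}, #{state} #{zip}"].join("\n")
  end
end

business_label = AddressFormatter.format(
  name: "Acme Hardware", street: "12 Main St",
  city: "Springfield", state: "IL", zip: "62704"
)

user_label = AddressFormatter.format(
  name: "Jane Doe", street: "9 Elm Ave",
  city: "Springfield", state: "IL", zip: "62704"
)

puts business_label
puts user_label

If the format ever changes, say by adding a second address line, the change happens in exactly one place, which is the maintainability payoff described above.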