Podcasts about Nathen Harvey

  • 30 PODCASTS
  • 44 EPISODES
  • 45m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Apr 17, 2025 LATEST

POPULARITY

[Popularity chart: 2017-2024]


Best podcasts about Nathen Harvey

Latest podcast episodes about Nathen Harvey

Generationship
Ep. #34, Together with Nathen Harvey

Generationship

Apr 17, 2025 · 26:11


In episode 34 of Generationship, Nathen Harvey brings data, humor, and heart to a conversation about AI, DevOps, open source, and developer experience. He and Rachel dive into how AI is influencing software engineering, the role of platform engineering, metrics for assessing performance, and broader reflections on engineering culture and career growth.

ai devops nathen harvey
The New Stack Podcast
What's the Future of Platform Engineering?

The New Stack Podcast

Mar 27, 2025 · 26:44


Platform engineering was meant to ease the burdens of Devs and Ops by reducing cognitive load and repetitive tasks. However, building internal developer platforms (IDPs) has proven challenging. Despite this, Gartner predicts that by 2026, 80% of software engineering organizations will have a platform team.

In a recent New Stack Makers episode, Mallory Haigh of Humanitec and Nathen Harvey of Google discussed the current state and future of platform engineering. Haigh emphasized that many organizations rush to build IDPs without understanding why they need them, leading to ineffective implementations. She noted that platform engineering is 10% technical and 90% cultural change, requiring deep introspection and strategic planning.

AI-driven automation, particularly agentic AI, is expected to shape platform engineering's future. Haigh highlighted how AI can enhance platform orchestration and optimize GPU resource management. Harvey compared platform engineering to generative AI: both aim to reduce toil and improve efficiency. As AI adoption grows, platform teams must ensure their infrastructure supports these advancements.

Learn more from The New Stack about platform engineering:
Platform Engineering on the Brink: Breakthrough or Bust?
Platform Engineers Must Have Strong Opinions
The Missing Piece in Platform Engineering: Recognizing Producers

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

DevOps Diaries
051 — Nathen Harvey: How to Measure Success in the Age of AI

DevOps Diaries

Mar 6, 2025 · 55:14


How do DORA metrics apply to the unique challenges of Salesforce development? Join Jack McCurdy and Nathen Harvey as they dive into the evolving landscape of software delivery, exploring the intersection of platform engineering, AI, and human performance. They discuss practical strategies for implementing DORA metrics, navigating the shift to agile, and reducing developer burnout. In this episode they uncover how to foster empathy, improve team collaboration, and leverage AI to enhance your Salesforce development processes. Expect insights on effective tooling, communication strategies, and the importance of questioning the status quo to drive innovation.

Learn more:
- Everything you need to know about Agentforce
- Insights from the latest DORA report
- How to apply DORA metrics to Salesforce

About DevOps Diaries: Salesforce DevOps Advocate Jack McCurdy chats to members of the Salesforce community about their experience in the Salesforce ecosystem. Expect to hear and learn from inspirational stories of personal growth and business success, whilst discovering all the trials, tribulations, and joy that comes with delivering Salesforce for companies of all shapes and sizes. New episodes bi-weekly on YouTube as well as on your preferred podcast platform.

Podcast produced and sponsored by Gearset. Learn more about Gearset.
Subscribe to Gearset's YouTube channel: https://grst.co/4cTAAxm
LinkedIn: https://www.linkedin.com/company/gearset
X/Twitter: https://x.com/GearsetHQ
Facebook: https://www.facebook.com/gearsethq

About Gearset: Gearset is the leading Salesforce DevOps platform, with powerful solutions for metadata and CPQ deployments, CI/CD, automated testing, sandbox seeding and backups. It helps Salesforce teams apply DevOps best practices to their development and release process, so they can rapidly and securely deliver higher-quality projects. Get full access to all of Gearset's features for free with a 30-day trial.

Chapters:
00:00 Introduction to DORA and Its Importance
02:57 Understanding Software Delivery Performance Metrics
06:01 Challenges in Measuring Metrics in Salesforce
09:04 The Role of Platform Engineering
12:04 AI's Impact on Software Delivery
17:57 Navigating AI in Development and Deployment
29:38 Understanding Customer Needs in AI Development
30:43 The Journey of Continuous Improvement
32:45 Collaboration in Salesforce Teams
36:07 Effective Communication Strategies
40:12 The Shift from Waterfall to Continuous Improvement
42:25 Reducing Developer Burnout
48:24 Building Empathy with Users
53:10 Challenging the Status Quo

Dev Interrupted
Observability as a Success Catalyst | Momento's Co-Founder & CTO Daniela Miao

Dev Interrupted

Oct 8, 2024 · 42:05 · Transcription Available


This week, co-host Conor Bronsdon sits down with Daniela Miao, co-founder and CTO of Momento, to discuss her journey from DynamoDB at AWS to founding the real-time data infrastructure platform Momento. Daniela covers the importance of observability, the decision to rebuild Momento's stack with Rust, and how observability can speed up development cycles. They also explore strategies for aligning technical projects with business objectives, building team trust, and the critical role of communication in achieving success. Tune in for valuable insights on leadership, technical decision-making, and startup growth.

Topics:
02:01 Why is observability often treated as an auxiliary service?
06:14 Making a push for observability
13:32 Picking the right metrics to observe and pay attention to
15:49 Has the technical shift to Rust paid off?
19:23 How did you create trust and buy-in from your team to make a switch?
26:31 What could other teams learn from Momento's move to Rust?
38:15 What advice would you give other technical founders?

Links:
Daniela Miao
The Momento Blog
Momento: An enterprise-ready serverless platform for caching and pub/sub
Unpacking the 2023 DORA Report w/ Nathen Harvey of Google Cloud
Google SRE
Rust Programming Language

Support the show:
Subscribe to our Substack
Leave us a review
Subscribe on YouTube
Follow us on Twitter or LinkedIn

Offers:
Learn about Continuous Merge with gitStream
Get your DORA Metrics free forever

S.R.E.path Podcast
#56 Resolving DORA Metrics Mistakes

S.R.E.path Podcast

Sep 4, 2024 · 26:47


We're already well into 2024, and it's sad that people still have enough fuel to complain about various aspects of their engineering life. DORA seems to be turning into one of those problem areas. Not at every organization, but some places are turning it into a case of "hitting metrics" without caring for the underlying capabilities and conversations.

Nathen Harvey is no stranger to this problem. He used to talk a lot about SRE at Google as a developer advocate. Then he became the lead advocate for DORA when Google acquired it in 2018. His focus has been on questions like: how do we help teams get better at delivering and operating software? You and I can agree that this is an important question to ask. I'd listen to what he has to say about DORA because he's got a wealth of experience behind him, having also run community engineering at Chef Software.

Before we continue, let's explore "What is DORA?" in Nathen's (paraphrased) words: DORA is a software research program that's been running since 2015. This research program looks to figure out: how do teams get good at delivering, operating, building, and running software? The researchers were able to draw out the concept of the metrics by correlating teams that have good technology practices with highly robust software delivery outcomes. They found that this positively impacted organizational outcomes like profitability, revenue, and customer satisfaction. Essentially, all those things that matter to the business.

One of the challenges the researchers found over the last decade was working out: how do you measure something like software delivery? It's not the same as a factory system where you can go and count the widgets that we're delivering. The unfortunate problem is that the factory mindset, I think, still leaks in. I've personally noted some silly metrics over the years, like lines of code. Imagine being asked constantly: "How many lines of code did you write this week?" You might not have to imagine. It might be a reality for you.

DORA's researchers agreed that the factory mode of metrics cannot determine whether or not you are a productive engineer. They settled on and validated four key measures for software delivery performance. Nathen elaborated that two of these measures look at throughput, and they really ask two questions:

* How long does it take for a change of any kind, whether it's a code change, configuration change, whatever, to go from the developer's workstation right through to production?
* How frequently are you updating production?

In plain English, these two metrics are:

* Deployment Frequency: how often code is deployed to production. This metric reflects the team's ability to deliver new features or updates quickly.
* Lead Time for Changes: the time it takes from code being committed to being deployed to production.

Nathen recounted his experience of working at organizations that differed in how often they update production, from once every six months to multiple times a day. Those are very different types of organizations, so their perspectives on throughput metrics will be wildly different. This has some implications for the speed of software delivery. Of course, everyone wants to move faster, but there's this other thing that comes in, and that's stability. And so, the other two stability-oriented metrics look at what happens when you do update production and... something's gone horribly wrong: "Yeah, we need to roll that back quickly or push a hot fix." In plain English, they are:

* Change Failure Rate: the percentage of deployments that cause a failure in production (e.g., outages, bugs).
* Failed Deployment Recovery Time: how long it takes to recover from a failure in production.

You might be thinking the same thing as me. These stability metrics might be a lot more interesting to reliability folks than the first two throughput metrics. But keep in mind, it's about balancing all four metrics. Nathen believes it's fair to say that today, many organizations look at these concepts of throughput and stability as trade-offs of one another: we can either be fast, or we can be stable. But the interesting thing the DORA researchers have learned from their decade of collecting data is that throughput and stability aren't trade-offs of one another. They tend to move together.

They've seen organizations of every shape and size, in every industry, doing well across all four of those metrics. They are the best performers. The interesting thing is that the size of your organization doesn't matter, nor does the industry you're in. Whether you're working in a highly regulated or unregulated industry, it doesn't matter.

The key insight Nathen thinks we should be searching for is: how do you get there? To him, it's about shipping smaller changes. When you ship small changes, they're easier to move through your pipeline. They're easier to reason about. And when something goes wrong, they're easier to recover from and restore service. But along with those small changes, we need to think about feedback cycles. Every line of code that we write is in reality a little bit of an experiment. We think it's going to do what we expect and help our users in some way, but we need to get feedback on that as quickly as possible.

Underlying all of this, both small changes and fast feedback, is a real climate for learning. Nathen drew up a few thinking points from this: What is the learning culture like within our organization? Is there a climate for learning? And are we using things like failures as opportunities to learn, so that we can be ever improving?

I don't know if you're thinking the same as me already, but we're already learning that DORA is a lot more than just metrics. To Nathen (and me), the metrics should be one of the least interesting parts of DORA, because it digs into useful capabilities, like small changes and fast feedback. That's what truly helps determine how well you're going to do against those performance metrics. Not saying, "We are a low to medium performer. Now go and improve the metrics!"

I think the issue is that a lot of organizations emphasize the metrics because it's something that can sit on an executive dashboard. But the true reason we have metrics is to help drive conversations. Through those conversations, we drive improvement. That's important because, according to Nathen, an unfortunately noticeable number of organizations are currently doing this:

I've seen organizations [where it's like]: "Oh, we're going to do DORA. Here's my dashboard. Okay, we're done. We've done DORA. I can look at these metrics on a dashboard." That doesn't change anything. We have to go a step further and put those metrics into action.

We should be treating the metrics as a kind of compass on a map. You can use those metrics to help orient yourself and understand, "Where are we heading?" But then you have to choose how you are going to make progress toward whatever your goal is. The capabilities enabled by the DORA framework should help answer questions like:

* Where are our bottlenecks?
* Where are our constraints?
* Do we need to do some improvement work as a team?

We also talked about the SPACE framework, which is a follow-on tool from DORA metrics. It is a framework for understanding developer productivity. It encourages teams or organizations to look at five dimensions when trying to measure something from a productivity perspective. It stands for:

* S: satisfaction and well-being
* P: performance
* A: activity
* C: communication and collaboration
* E: efficiency and flow

What the SPACE framework recommends is that you first pick metrics from two to three of those five categories. (You don't need a metric from every one of the five, but find something that works well for your team.) Then write down those metrics and start measuring them. Here's the interesting thing: DORA is an implementation of SPACE. You can correlate each metric with the SPACE acronym:

* Lead time for changes is a measure of Efficiency and flow
* Deployment frequency is an Activity
* Change failure rate is about Performance
* Failed deployment recovery time is about Efficiency and flow

Keep in mind that SPACE itself has no metrics. It is a framework for identifying metrics. Nathen reiterated that you can't use "the SPACE metrics" because there is no such thing.

I mentioned earlier how DORA is a means of identifying the capabilities that can improve the metrics. These can be technical practices, like using continuous integration. But they can also be capabilities like collaboration and communication. As an example, you might look at what your change approval process looks like. You might look at how collaboration and communication have failed when you've had to send changes off to an external approval board like a CAB (change approval board). DORA's research backs the above up:

What our research has shown through collecting data over the years is that, while they do exist, on the whole an external change approval body will slow you down. That's no surprise. So your change lead time is going to increase, and your deployment frequency will decrease. But, at best, they have zero impact on your change fail rate. In most cases, they have a negative impact on your change fail rate. So you're failing more often. It goes back to the idea of smaller changes, faster feedback, and being able to validate that. Building in audit controls and so forth.

This is something that reliability-focused engineers should be able to help with, because one of the things Sebastian and I talk about a lot is embracing and managing risk effectively, not trying to mitigate it through stifling measures like CABs. In short, DORA and software reliability are not mutually exclusive concepts. They're certainly in the same universe. Nathen went as far as to say that some SRE practices necessarily go a little deeper than the capability level DORA works at and provide even more specific guidance on how to do things.

He clarified a doubt I had, because a lot of people have argued with me (mainly at conferences) that DORA is this thing that developers do, earlier in the SDLC, and that SRE is completely different because it focuses on the production side. The worst possible situation could be turning to developers and saying, "These two throughput metrics, they're yours. Make sure they go up no matter what," and then turning to our SREs and saying, "Those stability metrics, they're yours. Make sure they stay good." All that does is put false incentives in place, and we're just fighting against each other.

We talked a little more about the future of DORA in our podcast episode (player/link right at the top of this post) if you want to hear about that. Here are some useful links from Nathen for further research:

* DORA online community of practice
* DORA homepage
* [Article] The SPACE of Developer Productivity
* Nathen Harvey's Linktree

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit read.srepath.com
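To make the four measures above concrete, here is a minimal, illustrative Python sketch that computes them from a list of deployment records. The record shape and field names are assumptions made for illustration; DORA defines the measures themselves, not any particular schema or tooling.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median

@dataclass
class Deployment:
    deployed_at: datetime                # when the change reached production
    commit_times: list[datetime]         # commits shipped in this deploy
    failed: bool = False                 # did this deploy degrade production?
    restored_at: datetime | None = None  # when service was restored, if it failed

def dora_metrics(deploys: list[Deployment], window_days: int = 30) -> dict:
    """Compute the four DORA software delivery measures over a window."""
    if not deploys:
        raise ValueError("no deployments in the window")

    # Throughput 1: deployment frequency (deploys per day in the window).
    frequency = len(deploys) / window_days

    # Throughput 2: lead time for changes, median commit-to-production time.
    lead_time = median(d.deployed_at - c for d in deploys for c in d.commit_times)

    # Stability 1: change failure rate, the share of deploys causing a failure.
    failures = [d for d in deploys if d.failed]
    failure_rate = len(failures) / len(deploys)

    # Stability 2: failed deployment recovery time, median time to restore.
    restores = [d.restored_at - d.deployed_at for d in failures if d.restored_at]
    recovery_time = median(restores) if restores else None

    return {
        "deploys_per_day": frequency,
        "median_lead_time": lead_time,        # a timedelta
        "change_failure_rate": failure_rate,  # 0.0 to 1.0
        "median_recovery_time": recovery_time,
    }
```

The arithmetic is the easy part; as argued above, the value comes from treating the numbers as a compass for conversations about capabilities like smaller changes and faster feedback, not as a scoreboard.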

DevOps Diaries
032 — Nathen Harvey: Using DORA to measure your team performance with confidence!

DevOps Diaries

May 30, 2024 · 37:05


Nathen Harvey is a Developer Advocate and the lead for DORA at Google Cloud, their DevOps Research and Assessment unit. For ten years Nathen has spearheaded tech communities and authored several reports that now form the industry standard for measuring DevOps performance. He was once a CRM system administrator too, so he knows his stuff and the challenges we all face!

Nathen joins Jack on the DevOps Diaries podcast to discuss all things metrics. In an enticing and insightful conversation, Nathen shares how the DORA metrics came to be, what they are, and why measuring performance is important for all teams to drive their businesses in the right direction.

Nathen also shares some of the common pitfalls of metrics, including Goodhart's Law, and what we can do to avoid them. If you're unsure of where to start when it comes to measuring performance, or how to improve, this is the episode for you. Nathen and Jack also discuss the wider market trends and how engineering teams can up their game using the SPACE framework.

Learn more:
DORA 2023 report
The State of Salesforce DevOps 2024 report
The common pitfalls when measuring performance
How to align metrics with business performance

Connect with Nathen: LinkedIn, X/Twitter
Connect with Jack: X/Twitter, LinkedIn

Podcast produced and sponsored by Gearset, the complete Salesforce DevOps platform. Try Gearset free for 30 days.

Giant Robots Smashing Into Other Giant Robots
507 - Scaling New Heights: Innovating in Software Development with Merico's Founders Henry Yin and Maxim Wheatley

Giant Robots Smashing Into Other Giant Robots

Jan 11, 2024 · 44:42


In this episode of the "Giant Robots Smashing Into Other Giant Robots" podcast, host Victoria Guido delves into the intersection of technology, product development, and personal passions with her guests Henry Yin, Co-Founder and CTO of Merico, and Maxim Wheatley, the company's first employee and Community Leader. They are joined by Joe Ferris, CTO of thoughtbot, as a special guest co-host. The conversation begins with a casual exchange about rock climbing, revealing that both Henry and Victoria share this hobby, which provides a unique perspective on their professional roles in software development. Throughout the podcast, Henry and Maxim discuss the journey and evolution of Merico, a company specializing in data-driven tools for developers. They explore the early stages of Merico, highlighting the challenges and surprises encountered while seeking product-market fit and the strategic pivot from focusing on open-source funding allocation to developing a comprehensive engineering metric platform. This shift in focus led to the creation of Apache DevLake, an open-source project contributed to by Merico and later donated to the Apache Software Foundation, reflecting the company's commitment to transparency and community-driven development. The episode also touches on future challenges and opportunities in the field of software engineering, particularly the integration of AI and machine learning tools in the development process. Henry and Maxim emphasize the potential of AI to enhance developer productivity and the importance of data-driven insights in improving team collaboration and software delivery performance. Joe contributes to the discussion with his own experiences and perspectives, particularly on the importance of process over individual metrics in team management.

Merico (https://www.merico.dev/)
Follow Merico on GitHub (https://github.com/merico-dev), LinkedIn (https://www.linkedin.com/company/merico-dev/), or X (https://twitter.com/MericoDev).
Apache DevLake (https://devlake.apache.org/)
Follow Henry Yin on LinkedIn (https://www.linkedin.com/in/henry-hezheng-yin-88116a52/).
Follow Maxim Wheatley on LinkedIn (https://www.linkedin.com/in/maximwheatley/) or X (https://twitter.com/MaximWheatley).
Follow thoughtbot on X (https://twitter.com/thoughtbot) or LinkedIn (https://www.linkedin.com/company/150727/).
Become a Sponsor (https://thoughtbot.com/sponsorship) of Giant Robots!

Transcript:

VICTORIA: This is the Giant Robots Smashing Into Other Giant Robots podcast, where we explore the design, development, and business of great products. I'm your host, Victoria Guido. And with me today is Henry Yin, Co-Founder and CTO of Merico, and Maxim Wheatley, the first employee and Community Leader of Merico, creating data-driven developer tools for forward-thinking devs. Thank you for joining us. HENRY: Thanks for having us. MAXIM: Glad to be here, Victoria. Thank you. VICTORIA: And we also have a special guest co-host today, the CTO of thoughtbot, Joe Ferris. JOE: Hello. VICTORIA: Okay. All right. So, I met Henry and Maxim at the 7CTOs Conference in San Diego back in November. And I understand that Henry, you are also an avid rock climber. HENRY: Yes. I know you were also in Vegas during Thanksgiving. And I sort of have [inaudible 00:49] of a tradition to go to Vegas every Thanksgiving to Red Rock National Park. Yeah, I'd love to know more about how was your trip to Vegas this Thanksgiving. VICTORIA: Yes. I got to go to Vegas as well. We had a bit of rain, actually.
So, we try not to climb on sandstone after the rain and ended up doing some sport climbing on limestone around the Blue Diamond Valley area; a little bit light on climbing for me, actually, but still beautiful out there. I loved being in Red Rock Canyon outside of Las Vegas. And I do find that there's just a lot of developers and engineers who have an affinity for climbing. I'm not sure what exactly that connection is. But I know, Joe, you also have a little bit of climbing and mountaineering experience, right? JOE: Yeah. I used to climb a good deal. I actually went climbing for the first time in, like, three years this past weekend, and it was truly pathetic. But you have to [laughs] start somewhere. VICTORIA: That's right. And, Henry, how long have you been climbing for? HENRY: For about five years. I like to spend my time in nature when I'm not working: hiking, climbing, skiing, scuba diving, all of the good outdoor activities. VICTORIA: That's great. And I understand you were bouldering in Vegas, right? Did you go to Kraft Boulders? HENRY: Yeah, we went to Kraft also Red Spring. It was a surprise for me. I was able to upgrade my outdoor bouldering grade to B7 this year at Red Spring and Monkey Wrench. There was always some surprises for me. When I went to Red Rock National Park last year, I met Alex Honnold there who was shooting a documentary, and he was really, really friendly. So, really enjoying every Thanksgiving trip to Vegas. VICTORIA: That's awesome. Yeah, well, congratulations on B7. That's great. It's always good to get a new grade. And I'm kind of in the same boat with Joe, where I'm just constantly restarting my climbing career. So [laughs], I haven't had a chance to push a grade like that in a little while. But that sounds like a lot of fun. HENRY: Yeah, it's really hard to be consistent on climbing when you have, like, a full-time job, and then there's so much going on in life. It's always a challenge. VICTORIA: Yeah. But a great way to like, connect with other people, and make friends, and spend time outdoors. So, I still really appreciate it, even if I'm not maybe progressing as much as I could be. That's wonderful. So, tell me, how did you and Maxim actually meet? Did you meet through climbing or the outdoors? MAXIM: We actually met through AngelList, which I really recommend to anyone who's really looking to get into startups. When Henry and I met, Merico was essentially just starting. I had this eagerness to explore something really early stage where I'd get to do all of the interesting kind of cross-functional things that come with that territory, touching on product and marketing, on fundraising, kind of being a bit of everything. And I was eager to look into something that was applying, you know, machine learning, data analytics in some really practical way. And I came across what Hezheng Henry and the team were doing in terms of just extracting useful insights from codebases. And we ended up connecting really well. And I think the previous experience I had was a good fit for the team, and the rest was history. And we've had a great time building together for the last five years. VICTORIA: Yeah. And tell me a little bit more about your background and what you've been bringing to the Merico team. MAXIM: I think, like a lot of people in startups, consider myself a member of the Island of Misfit Toys in the sense that no kind of clear-cut linear pathway through my journey but a really exciting and productive one nonetheless. 
So, I began studying neuroscience at Georgetown University in Washington, D.C. I was about to go to medical school and, in my high school years had explored entrepreneurship in a really basic way. I think, like many people do, finding ways to monetize my hobbies and really kind of getting infected with that bug that I could create something, make money from it, and kind of be the master of my own destiny, for lack of less cliché terms. So, not long after graduating, I started my first job that recruited me into a seed-stage venture capital, and from there, I had the opportunity to help early-stage startups, invest in them. I was managing a startup accelerator out there. From there, produced a documentary that followed those startups. Not long after all of that, I ended up co-founding a consumer electronics company where I was leading product, so doing lots of mechanical, electrical, and a bit of software engineering. And without taking too long, those were certainly kind of two of the more formative things. But one way or another, I've spent my whole career now in startups and, especially early-stage ones. It was something I was eager to do was kind of take some of the high-level abstract science that I had learned in my undergraduate and kind of apply some of those frameworks to some of the things that I do today. VICTORIA: That's super interesting. And now I'm curious about you, Henry, and your background. And what led you to get the idea for Merico? HENRY: Yeah. My professional career is actually much simpler because Merico was my first company and my first job. Before Merico, I was a PhD student at UC Berkeley studying computer science. My research was an intersection of software engineering and machine learning. And back then, we were tackling this research problem of how do we fairly measure the developer contributions in a software project? And the reason we are interested in this project has to do with the open-source funding problem. So, let's say an open-source project gets 100k donations from Google. How does the maintainers can automatically distribute all of the donations to sometimes hundreds or thousands of contributors according to their varying level of contributions? So, that was the problem we were interested in. We did research on this for about a year. We published a paper. And later on, you know, we started the company with my, you know, co-authors. And that's how the story began for Merico. VICTORIA: I really love that. And maybe you could tell me just a little bit more about what Merico is and why a company may be interested in trying out your services. HENRY: The product we're currently offering actually is a little bit different from what we set out to build. At the very beginning, we were building this platform for open-source funding problem that we can give an open-source project. We can automatically, using algorithm, measure developer contributions and automatically distribute donations to all developers. But then we encountered some technical and business challenges. So, we took out the metrics component from the previous idea and launched this new product in the engineering metric space. And this time, we focus on helping engineering leaders better understand the health of their engineering work. So, this is the Merico analytics platform that we're currently offering to software engineering teams. JOE: It's interesting. I've seen some products that try to judge the health of a codebase, but it sounds like this is more trying to judge the health of the team. 
MAXIM: Yeah, I think that's generally fair to say. As we've evolved, we've certainly liked to describe ourselves as, you know, I think a lot of people are familiar with observability tools, which help ultimately ascertain, like, the performance of the technology, right? Like, it's assessing, visualizing, chopping up the machine-generated data. And we thought there would be a tremendous amount of value in being, essentially, observability for the human-generated data. And I think, ultimately, what we found on our journey is that there's a tremendous amount of frustration, especially in larger teams, not in looking to use a tool like that for any kind of, like, policing type thing, right? Like, no one's looking if they're doing it right, at least looking to figure out, like, oh, who's underperforming, or who do we need to yell at? But really trying to figure out, like, where are the strengths? Like, how can we improve our processes? How can we make sure we're delivering better software more reliably, more sustainably? Like how are we balancing that trade-off between new features, upgrades and managing tech debt and bugs? We've ultimately just worked tirelessly to, hopefully, fill in those blind spots for people. And so far, I'm pleased to say that the reception has been really positive. We've, I think, tapped into a somewhat subtle but nonetheless really important pain point for a lot of teams around the world. VICTORIA: Yeah. And, Henry, you said that you started it based on some of the research that you did at UC Berkeley. I also understand you leaned on the research from the DevOps research from DORA. Can you tell me a little bit more about that and what you found insightful from the research that was out there and already existed? MAXIM: So, I think what's really funny, and it really speaks to, I think, the importance in product development of just getting out there and speaking with your potential users or actual users, and despite all of the deep, deep research we had done on the topic of understanding engineering, we really hadn't touched on DORA too much. And this is probably going back about five years now. Henry and I were taking a customer meeting with an engineering leader at Yahoo out in the Bay Area. He kind of revealed this to us basically where he's like, "Oh, you guys should really look at incorporating DORA into this thing. Like, all of the metrics, all of the analytics you're building super cool, super interesting, but DORA really has this great framework, and you guys should look into it." And in hindsight, I think we can now [chuckles], honestly, admit to ourselves, even if it maybe was a bit embarrassing at the time where both Henry and I were like, "What? What is that? Like, what's Dora?" And we ended up looking into it and since then, have really become evangelists for the framework. And I'll pass it to Henry to talk about, like, what that journey has looked like. HENRY: Thanks, Maxim. I think what's cool about DORA is in terms of using metrics, there's always this challenge called Goodhart's Law, right? So, whenever a metric becomes a target, the metric cease to be a good metric because people are going to find ways to game the metric. So, I think what's cool about DORA is that it actually offers not just one metric but four key metrics that bring balance to covering both the stability and velocity. So, when you look at DORA metrics, you can't just optimize for velocity and sacrificing your stability. 
But you have to look at all four metrics at the same time, and that's harder to game. So, I think that's why it's become more and more popular in the industry as the starting point for using metrics for data-driven engineering. VICTORIA: Yeah. And I like how DORA also represents it as the metrics and how they apply to where you are in the lifecycle of your product. So, I'm curious: with Merico, what kind of insights do you think engineering leaders can gain from having this data that will unlock some of their team's potential? MAXIM: So, I think one of the most foundational things before we get into any detailed metrics is I think it's more important than ever, especially given that so many of us are remote, right? Where the general processes of software engineering are generally difficult to understand, right? They're nuanced. They tend to kind of happen in relative isolation until a PR is reviewed and merged. And it can be challenging, of course, to understand what's being done, how consistently, how well, like, where are the good parts, where are the bad parts. And I think that problem gets really exasperated, especially in a remote setting where no one is necessarily in the same place. So, on a foundational level, I think we've really worked hard to solve that challenge, where just being able to see, like, how are we doing? And to that point, I think what we've found before anyone even dives too deep into all of the insights that we can deliver, I think there's a tremendous amount of appetite for anyone who's looking to get into that practice of constant improvement and figuring out how to level up the work they're doing, just setting close benchmarks, figuring out, like, okay, when we talk about more nebulous or maybe subjective terms like speed, or quality, what does good look like? What does consistent look like? Being able to just tie those things to something that really kind of unifies the vocabulary is something I always like to say, where, okay, now, even if we're not focused on a specific metric, or we don't have a really particular goal in mind that we want to assess, now we're at least starting the conversation as a team from a place where when we talk about quality, we have something that's shared between us. We understand what we're referring to. And when we're talking about speed, we can also have something consistent to talk about there. And within all of that, I think one of the most powerful things is it helps to really kind of ground the conversations around the trade-offs, right? There's always that common saying: the triangle of trade-offs is where it's, like, you can have it cheap; you can have it fast, and you can have it good, but you can only have two. And I think with DORA, with all of these different frameworks with many metrics, it helps to really solidify what those trade-offs look like. And that's, for me at least, been one of the most impactful things to watch: is our global users have really started evolving their practices with it. HENRY: Yeah. And I want to add to Maxim's answer. But before that, I just want to quickly mention how our products are structured. So, Merico actually has an open-source component and a proprietary component. So, the open-source component is called Apache DevLake. It's an open-source project we created first within Merico and later on donated to Apache Software Foundation. And now, it's one of the most popular engineering metrics tool out there. 
And then, on top of that, we built a SaaS offering called DevInsight Cloud, which is powered by Apache DevLake. So, with DevLake, the open-source project, you can set up your data connections, connect DevLake to all of the dev tools you're using, and then we collect data. And then we provide many different flavors of dashboards for our users. And many of those dashboards are structured, and there are different questions engineering teams might want to ask. For example, like, how fast are we responding to our customer requirement? For that question, we will look at like, metrics like change lead time, or, like, for a question, how accurate is our planning for the sprint? In that case, the dashboard will show metrics relating to the percentage of issues we can deliver for every sprint for our plan. So, that's sort of, you know, based on the questions that the team wants to answer, we provide different dashboards that help them extract insights using the data from their DevOps tools. JOE: It's really interesting you donated it to Apache. And I feel like the hybrid SaaS open-source model is really common. And I've become more and more skeptical of it over the years as companies start out open source, and then once they start getting competitors, they change the license. But by donating it to Apache, you sort of sidestep that potential trust issue. MAXIM: Yeah, you've hit the nail on the head with that one because, in many ways, for us, engaging with Apache in the way that we have was, I think, ultimately born out of the observations we had about the shortcomings of other products in the space where, for one, very practical. We realized quickly that if we wanted to offer the most complete visibility possible, it would require connections to so many different products, right? I think anyone can look at their engineering toolchain and identify perhaps 7, 9, 10 different things they're using on a day-to-day basis. Oftentimes, those aren't shared between companies, too. So, I think part one was just figuring out like, okay, how do we build a framework that makes it easy for developers to build a plugin and contribute to the project if there's something they want to incorporate that isn't already supported? And I think that was kind of part one. Part two is, I think, much more important and far more profound, which is developer trust, right? Where we saw so many different products out there that claimed to deliver these insights but really had this kind of black-box approach, right? Where data goes in, something happens, insights come out. How's it doing that? How's it weighting things? What's it calculating? What variables are incorporated? All of that is a mystery. And that really leads to developers, rightfully, not having a basis to trust what's actually being shown to them. So, for us, it was this perspective of what's the maximum amount of transparency that we could possibly offer? Well, open source is probably the best answer to that question. We made sure the entirety of the codebase is something they can take a look at, they can modify. They can dive into the underlying queries and algorithms and how everything is working to gain a total sense of trust in how is this thing working? And if I need to modify something to account for some nuanced details of how our team works, we can also do that. 
And to your point, you know, I think it's definitely something I would agree with that one of the worst things we see in the open-source community is that companies will be kind of open source in name only, right? Where it's really more of marketing or kind of sales thing than anything, where it's like, oh, let's tap into the good faith of open source. But really, somehow or another, through bait and switch, through partial open source, through license changes, whatever it is, we're open source in name only but really, a proprietary, closed-source product. So, for us, donating the core of DevLake to the Apache Foundation was essentially our way of really, like, putting, you know, walking the talk, right? Where no one can doubt at this point, like, oh, is this thing suddenly going to have the license changed? Is this suddenly going to go closed-source? Like, the answer to that now is a definitive no because it is now part of that ecosystem. And I think with the aspirations we've had to build something that is not just a tool but, hopefully, long-term becomes, like, foundational technology, I think that gives people confidence and faith that this is something they can really invest in. They can really plumb into their processes in a deep and meaningful way with no concerns whatsoever that something is suddenly going to change that makes all of that work, you know, something that they didn't expect. JOE: I think a lot of companies guard their source code like it's their secret sauce, but my experience has been more that it's the secret shame [laughs]. HENRY: [laughs] MAXIM: There's no doubt in my role with, especially our open-source product driving our community we've really seen the magic of what a community-driven product can be. And open source, I think, is the most kind of a true expression of a community-driven product, where we have a Slack community with nearly 1,000 developers in it now. Naturally, right? Some of those developers are in there just to ask questions and answer questions. Some are intensely involved, right? They're suggesting improvements. They're suggesting new features. They're finding ways to refine things. And it really is that, like, fantastic culture that I'm really proud that we've cultivated where best idea ships, right? If you've got a good idea, throw it into a GitHub issue or a comment. Let's see how the community responds to it. Let's see if someone wants to pick it up. Let's see if someone wants to submit a PR. If it's good, it goes into production, and then the entire community benefits. And, for me, that's something I've found endlessly exciting. HENRY: Yeah. I think Joe made a really good point on the secret sauce part because I don't think the source code is our secret sauce. There's no rocket science in DevLake. If we break it down, it's really just some UI UX plus data pipelines. I think what's making DevLake successful is really the trust and collaboration that we're building with the open-source community. When it comes to trust, I think there are two aspects. First of all, trust on the metric accuracy, right? Because with a lot of proprietary software, you don't know how they are calculating the metrics. If people don't know how the metrics are calculated, they can't really trust it and use it. And secondly, is the trust that they can always use this software, and there's no vendor lock-in. 
And when it comes to collaboration, we were seeing many of our data sources and dashboards contributed not by our core developers but by the community. And the communities really, you know, bring in their insights and their use cases into DevLake and make DevLake, you know, more successful and more applicable to more teams in different areas of software engineering. MID-ROLL AD: Are you an entrepreneur or start-up founder looking to gain confidence in the way forward for your idea? At thoughtbot, we know you're tight on time and investment, which is why we've created targeted 1-hour remote workshops to help you develop a concrete plan for your product's next steps. Over four interactive sessions, we work with you on research, product design sprint, critical path, and presentation prep so that you and your team are better equipped with the skills and knowledge for success. Find out how we can help you move the needle at tbot.io/entrepreneurs. VICTORIA: I understand you've taken some innovative approaches on using AI in your open-source repositories to respond to issues and questions from your developers. So, can you tell me a little bit more about that? HENRY: Absolutely. I self-identify as a builder. And one characteristic of a builder is to always chase after the dream of building infinite things within a finite lifespan. So, I was always thinking about how we can be more productive, how we can, you know, get better at getting better. And so, this year, you know, AI is huge, and there are so many AI-powered tools that can help us achieve more in terms of delivering software. And then, internally, we had a hackathon, and one project that came out of it is an AI-powered coding assistant called DevChat. And we have made it public at devchat.ai. But we've been closely following, you know, what are the other AI-powered tools that can make, you know, software developers' or open-source maintainers' lives easier? And we've been observing that there are more and more open-source projects adopting AI chatbots to help them respond to GitHub issues. So, I recently did a case study on a pretty popular open-source project called LangChain. It's the hot kid in the AI space right now. And it's using a chatbot called Dosu to help respond to issues. I had some interesting findings from the case study. VICTORIA: In what ways was that chatbot really helpful, and in what ways did it not really work that well? HENRY: Yeah, I was thinking of how to measure the effectiveness of that chatbot. And I realized that there is a feature built into GitHub, which is the reaction to a comment. So, how the chatbot works is, whenever there is a new issue, the chatbot would basically run a retrieval-augmented generation pipeline and then use an LLM to generate a response to the issue. And then people leave reactions to that comment by the chatbot; mostly, it's thumbs up and thumbs down. So, what I did is I collected all of the issues from the LangChain repository and looked at how many thumbs up and thumbs down the Dosu chatbot got, you know, from all of the comments it left on the issues. And what I found is that across 2,600 issues that the Dosu chatbot helped with, it got around 900 thumbs up and 1,300 thumbs down. So, then it comes to how do we interpret this data, right? Just because it got more thumbs down than thumbs up doesn't mean that it's actually not useful, or that it's harmful to the developers.
So, to answer that question, I actually looked at some examples of thumbs-up and thumbs-down comments. And what I found is the thumbs down don't mean that the chatbot is harmful. It's mostly the developers signaling to the open-source maintainers that your chatbot is not helping in this case, and we need human intervention. But with the thumbs up, the chatbot is actually helping a lot. There's one issue where people post a question, and the chatbot just wrote the code and then basically made a suggestion on how to resolve the issue. And the human response is, "Damn, it worked." And that was very surprising to me, and it made me consider, you know, adopting similar technology and AI-powered tools for our own open-source project. VICTORIA: That's very cool. Well, I want to go back to the beginning of Merico. And when you first got started, and you were trying to understand your customers and what they need, was there anything surprising in that early discovery process that made you change your strategy? HENRY: So, one challenge we faced when we first explored the open-source funding allocation problem space is that our algorithm looks at the Git repository. But with software engineering, especially with open-source collaboration, there are so many activities that are happening outside of open-source repos on GitHub. For example, I might be an evangelist, and my day-to-day work might be, you know, engaging in community work, talking about the open-source project at conferences. And all of those things were not captured by our algorithm, which was only looking at the GitHub repository at the time. So, that was one of the technical challenges that we faced and led us to switch over to more of the system-driven metrics side. VICTORIA: Gotcha. Over the years, how has Merico grown? What has changed between when you first started and today? HENRY: So, one thing is the team size. When we just got started, we only had, you know, the three co-founders and Maxim. And now we have grown to a team of 70 team members, and we have a fully distributed team across multiple continents. So, that's pretty interesting dynamics to handle. And we learned a lot about how to build an effective and cohesive team along the way. And in terms of product, DevLake now, you know, has more than 900 developers in our Slack community, and we track over 360 companies using DevLake. So, definitely went a long way since we started the journey. And yeah, tomorrow, actually, Maxim and I are going to host our end-of-year Apache DevLake Community Meetup, featuring Nathen Harvey, Google's DORA team lead. Yeah, definitely made some progress since we've been working on Merico for four years. VICTORIA: Well, that's exciting. Well, say hi to Nathen for me. I helped take over DevOps DC with some of the other organizers; he was running it way back in the day, so [laughs] that's great. What challenges do you see on the horizon for Merico and DevLake? MAXIM: One of the challenges I think about a lot, and I think it's front of mind for many people, especially with software engineering, but at this point, nearly every profession, is what does AI mean for everything we're doing? What does the future look like where developers are maybe producing the majority of their code through prompt-based approaches versus code-based approaches, right? How do we start thinking about how we coherently assess that?
Like, how do you maybe redefine what the value is when there's a scenario where perhaps all coders, you know, if we maybe fast forward a few years, like, what if the AI is so good that the code is essentially perfect? What does success look like then? How do you start thinking about what is a good team if everyone is shooting out 9 out of 10 PRs nearly every time because they're all using a unified framework supported by AI? So, I think that's certainly kind of one of the challenges I envision in the future. I think, really, practically, too, many startups have been contending with the macroclimate within the fundraising climates. You know, I think many of the companies out there, us included, had better conditions in 2019, 2020 to raise funds at more favorable valuations, perhaps more relaxed terms, given the climate of the public markets and, you know, monetary policy. I think that's, obviously, we're all experiencing and has tightened things up like revenue expectations or now higher kind of expectations on getting into a highly profitable place or, you know, the benchmark is set a lot higher there. So, I think it's not a challenge that's unique to us in any way at all. I think it's true for almost every company that's out there. It's now kind of thinking in a more disciplined way about how do you kind of meet the market demands without compromising on the product vision and without compromising on the roadmap and the strategies that you've put in place that are working but are maybe coming under a little bit more pressure, given kind of the new set of rules that have been laid out for all of us? VICTORIA: Yeah, that is going to be a challenge. And do you see the company and the product solving some of those challenges in a unique way? HENRY: I've been thinking about how AI can fulfill the promise of making developers 10x developer. I'm an early adopter and big fan of GitHub Copilot. I think it really helps with writing, like, the boilerplate code. But I think it's improving maybe my productivity by 20% to 30%. It's still pretty far away from 10x. So, I'm thinking how Merico's solutions can help fill the gap a little bit. In terms of Apache DevLake and its SaaS offering, I think we are helping with, like, the team collaboration and measuring, like, software delivery performance, how can the team improve as a whole. And then, recently, we had a spin-off, which is the AI-powered coding assistant DevChat. And that's sort of more on the empowering individual developers with, like, testing, refactoring these common workflows. And one big thing for us in the future is how we can combine these two components, you know, team collaboration and improvement tool, DevLake, with the individual coding assistant, DevChat, how they can be integrated together to empower developers. I think that's the big question for Merico ahead. JOE: Have you used Merico to judge the contributions of AI to a project? HENRY: [laughs] So, actually, after we pivot to engineering metrics, we focus now less on individual contribution because that sometimes can be counterproductive. Because whenever you visualize that, then people will sometimes become defensive and try to optimize for the metrics that measure individual contributions. So, we sort of...nowadays, we no longer offer that kind of metrics within DevLake, if that makes sense. MAXIM: And that kind of goes back to one of Victoria's earlier questions about, like, what surprised us in the journey. 
Early on, we had this very benevolent perspective, you know, I would want to kind of underline that, that we never sought to be judging individuals in a negative way. We were looking to find ways to make it useful, even to a point of finding ways...like, we explored different ways to give developers badges and different kind of accomplishment milestones, like, things to kind of signal their strengths and accomplishments. But I think what we've found in that journey is that...and I would really kind of say this strongly. I think the only way that metrics of any kind serve an organization is when they support a healthy culture. And to that end, what we found is that we always like to preach, like, it's processes, not people. It's figuring out if you're hiring correctly, if you're making smart decisions about who's on the team. I think you have to operate with a default assumption within reason that those people are doing their best work. They're trying to move the company forward. They're trying to make good decisions to better serve the customers, better serve the company and the product. With that in mind, what you're really looking to do is figure out what is happening within the underlying processes that get something from thought to production. And how do you clear the way for people? And I think that's really been a big kind of, you know, almost like a tectonic shift for our company over the years is really kind of fully transitioning to that. And I think, in some ways, DORA has represented kind of almost, like, a best practice for, like, processes over people, right? It's figuring out between quality and speed; how are you doing? Where are those trade-offs? And then, within the processes that account for those outcomes, how can you really be improving things? So, I would say, for us, that's, like, been kind of the number one thing there is figuring out, like, how do we keep doubling down on processes, not people? And how do we really make sure that we're not just telling people that we're on their side and we're taking a, you know, a very humanistic perspective on wanting to improve the lives of people but actually doing it with the product? HENRY: But putting the challenge on measuring individual contributions aside, I'm as curious as Joe about AI's role in software engineering. I expect to see more and more involvement of AI and gradually, you know, replacing low-level and medium-level and, in the future, even high-level tasks for humans so we can just focus on, like, the objective instead of the implementation. VICTORIA: I can imagine, especially if you're starting to integrate AI tools into your systems and if you're growing your company at scale, some of the ability to have a natural intuition about what's going on it really becomes a challenge, and the data that you can derive from some of these products could help you make better decisions and all different types of things. So, I'm kind of curious to hear from Joe; with your history of open-source contribution and being a part of many different development teams, what kind of information do you wish that you had to help you make decisions in your role? JOE: Yeah, that's an interesting question. I've used some tools that try to identify problem spots in the code. But it'd be interesting to see the results of tools that analyze problem spots in the process. Like, I'd like to learn more about how that works. HENRY: I'm curious; one question for Joe. 
What is your favorite non-AI-powered code-scanning tool that you find useful for yourself or your team?

JOE: The most common static analysis I do is finding the Git churn in a repository. Partly that's because I've mostly worked on projects with dynamic languages, so there's a limit to how much static analysis you can do of a Ruby or Python codebase. But just analyzing which parts of the application change the most helps you find which parts are likely to be the buggiest and the most complex. Every application tends to have some central model. If you're making an e-commerce site, products and purchases will probably carry a lot of the core logic. Identifying those centers of gravity through Git statistics has helped me find places that need to be reworked.

HENRY: That's really interesting. Is it something like a hotspot analysis? And when you find a hotspot, would you invest more resources in refactoring it to make it more maintainable?

JOE: Right, exactly. You can use the statistics to see which files you should look at. And then, usually, when you actually go into the files, especially if you look at some of the changes to them, it's pretty clear that, for example, a class has become too large or something has become too tightly coupled.

HENRY: Gotcha.

VICTORIA: Yeah. And so, if you could go back in time five years and give yourself some advice when you first started on this journey, what would it be?

MAXIM: I'll answer in two ways: first for the company and then for myself personally. For the company, especially when you're in that pre-product-market-fit space and struggling to figure out how to solve a challenge that really matters, you need to think carefully about how you yourself would use your product. And if you're finding reasons you wouldn't, pay really careful attention to those. Early in our journey, we found ourselves asking: okay, we're a smaller, earlier-stage team. Perhaps small improvements in productivity or quality aren't going to move the needle; maybe that's one reason we're not using this. Maybe our developers are already at bandwidth, so it's not a question of unlocking more bandwidth or finding bottlenecks at that level, but of dialing in our own processes to let the whole team function more effectively. The more we started thinking through that lens of what's useful to us and what's solving a pain point for us, the more it shaped the product; in many ways, DevLake was born out of exactly that thinking. And now DevLake is used by hundreds of companies around the world and has a community of nearly a thousand developers supporting it. I think that's a testament to the power of that approach. For me personally, if I were to go back five years, I'm grateful to say there isn't a whole lot I would change. But if there's anything, it would be to consistently be braver in sharing ideas.
I think Merico has done a great job, and it's something I'm proud of for us as a team, of really embracing new ideas and making sure the best idea ships. There isn't a title or a level of seniority that determines whether someone has the right to suggest or improve something. With that in mind, as a technical person but not a member of the technical staff, so to speak, there were many occasions where I felt that maybe I shouldn't weigh in on certain things. What I've found, and it's a trust-building thing as well, is that even if you're wrong, even if your suggestion misunderstands something or isn't quite on target, there's still tremendous value in sharing a perspective and a recommendation and pushing it out there. So I would encourage myself, and everybody else in a healthy company, to feel comfortable to keep sharing, because ultimately it's an accuracy-by-volume game to a certain degree. If I come up with one idea, I've got one swing at the bat. But if we as a collective come up with 100 ideas that we consider intelligently, we've got a much higher chance of a handful of those really pushing us forward. That's the advice I would give myself and anybody else.

HENRY: I'll follow the same structure: advice at the company level, then advice to myself as an individual. At the company level, my advice would be to fail fast, because every company has to go through an exploration phase to find product-market fit, and it will have to test a couple of ideas before finding the right one; the same was true for us. I wish we had put more structure around exploring those ideas, with deadlines and milestones to quickly test and filter out bad ideas and accelerate the exploration process. So fail fast would be my suggestion at the company level. At the individual level, it's more about adapting to my CTO role. When I started the company, I still had that graduate-student hustle mindset. I love writing code myself, and it was okay to spend 100% of my time writing code when the company was five people. But it's not okay [chuckles] when we have a team of 40 engineers. I wish I'd had that realization earlier and transitioned to a real CTO role sooner, focusing more on technical evangelism and on building out the technical and non-technical infrastructure to help my engineering teams be successful.

VICTORIA: Well, I really appreciate that. Is there anything else you'd like to promote today?

HENRY: If you're an engineering leader looking to measure metrics and adopt a more data-driven approach to improving your software delivery performance, check out Apache DevLake. It's an open-source project, free to use, with great dashboards and support for various data sources. And join our community; we have a pretty vibrant community on Slack, with a lot of developers and engineering leaders discussing how to get more value out of data and metrics and improve software delivery performance.

MAXIM: Yeah.
And to add to that, something we've found consistently is that there are plenty of data skeptics out there, rightfully so. A lot of analytics of every kind are really not very good, and people are rightfully frustrated, even traumatized, by them. For the data skeptics out there, I would invite you to dive into the DevLake community and pose your challenges. If you think this stuff doesn't make sense, or you have concerns about it, come join the conversation, because the most productive discussions don't come from people mutually high-fiving each other over a successful implementation of DORA. The really exciting moments come from the people in the community who challenge it and say, "Here's where I don't think something is useful, or where I think it could be improved." It's not up to us as individuals to bless or deny that; those discussions are where the community gets really exciting. So if you're a data skeptic, come and dive in and, so long as you're respectful, challenge it. By doing so, you'll hopefully help not only yourself but everybody, which is what I love about this stuff so much.

JOE: I'm curious, does Merico use Merico?

HENRY: Yes, we've been dogfooding a lot, and many of our product improvement ideas come from our own dogfooding process. For example, at one point we were looking at a dashboard showing issue change lead time, and we found that our lead time had gone up over the past few months. We were trying to interpret whether that was a good thing or a bad thing, because a single metric doesn't tell the story behind its own change. So we improved the dashboard to include some covariates, other related metrics, to help explain the trend. Dogfooding is always useful for improving the product.

VICTORIA: That's great. Well, thank you all so much for joining. I really enjoyed our conversation. You can subscribe to the show and find notes along with a complete transcript for this episode at giantrobots.fm. If you have questions or comments, email us at hosts@giantrobots.fm. And you can find me on Twitter @victori_ousg. This podcast is brought to you by thoughtbot and produced and edited by Mandy Moore. Thanks for listening. See you next time.
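For listeners who want to try the Git-churn hotspot analysis Joe describes above, here is a minimal sketch in Python. It assumes only a local Git checkout and the git CLI on your PATH; the one-year window and ten-file cutoff are illustrative choices, not anything DevLake or Merico prescribes.

```python
import subprocess
from collections import Counter

def churn_by_file(repo_path=".", since="1 year ago"):
    """Count how many commits touched each file: a rough churn signal."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}",
         "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Each commit prints its changed paths, one per line, with blanks between.
    return Counter(line for line in log.splitlines() if line.strip())

if __name__ == "__main__":
    # The most frequently changed files are candidate hotspots to refactor.
    for path, commits in churn_by_file().most_common(10):
        print(f"{commits:5d}  {path}")
```

Cross-referencing the top of that list with a simple complexity signal, such as file length, is a common next step: files that are both large and frequently changed are usually the ones worth refactoring first.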

It's 5:05! Daily cybersecurity and open source briefing
Episode #265: Edwin Kwan: Who Should Bear the Cost of Invoice Scam?; Marcel Brown: This Day in Tech History; Olimpiu Pop: DORA Metrics - an agile, emotionally safe culture is the way; Shannon Lietz: Security in the DORA Report; Nathen Harvey: Insights on

It's 5:05! Daily cybersecurity and open source briefing

Play Episode Listen Later Nov 3, 2023 17:27


Free, ungated access to all 265+ episodes of “It's 5:05!” on your favorite podcast platforms: https://bit.ly/505-updates. You're welcome to

The Engineering Enablement Podcast
Key findings from the 2023 State of DevOps Report | Nathen Harvey (DORA at Google)

The Engineering Enablement Podcast

Play Episode Listen Later Oct 25, 2023 45:59


This week's episode dives into the DORA research program and this year's State of DevOps Report. Nathen Harvey, who leads DORA at Google, shares the key findings from the research and what's changed since previous reports. Discussion points: (1:10) What DORA focuses on (2:17) Where the DORA metrics fit (4:35) Introduction to user-centric software development (8:05) Impact of user-centricity on software delivery (9:40) Team performance vs. organizational performance (13:50) Importance of internal documentation (15:19) Methodology for designing surveys (19:52) Impact of documentation on software delivery (23:11) Reemergence of the Elite cluster (25:55) Advice for leaders leveraging benchmarks (28:30) Redefining MTTR (33:45) Changing how Change Failure Rate is measured (36:45) Impact of AI on software delivery (41:25) Impact of code review speed. Mentions and links: Connect with Nathen on LinkedIn; listen to the previous episode with Nathen; read the 2023 State of DevOps Report; try the DORA Quick Check; blog post: "Documentation is like sunshine"; join the DORA community; DevEx: What Actually Drives Productivity

0800-DEVOPS
2023 State of DevOps Report with Nathen Harvey

0800-DEVOPS

Play Episode Listen Later Oct 22, 2023 36:15


This episode is a special one for me since, for the first time, I have a returning guest on the show: my friend Nathen Harvey, the godfather of the DORA community! We talked about the community, the inaugural DORA Summit, and the freshly published 2023 State of DevOps Report. Nathen touched on a couple of very interesting insights and shared a ton of advice! One interesting insight is that trunk-based development influences burnout. The community is working hard to explain this, so feel free to join the DORA community trunk-based development discussion scheduled for Thursday, December 7 at 6 PM UTC. Join the DORA Community of Practice for details and an invitation to the discussion. Please leave a review on your favorite podcast platform or Podchaser, and subscribe to the 0800-DEVOPS newsletter here. This interview is featured in 0800-DEVOPS #54 - 2023 State of DevOps Report with Nathen Harvey. [Check out podcast chapters if available on your podcast platform, or use the links below.] (0:00) Introduction (1:18) DORA community (6:49) 2023 State of DevOps Report (28:25) DORA research and service organizations (31:21) Accelerate, second edition (34:36) Follow recommendations

Dev Interrupted
Unpacking DORA's State of DevOps Report w/ Nathen Harvey of Google Cloud

Dev Interrupted

Play Episode Listen Later Oct 17, 2023 45:04


What does this year's Accelerate State of DevOps Report 2023 mean for your team? LinearB & DORA have officially joined forces. On this week's episode of Dev Interrupted, co-host Conor Bronsdon interviews Nathen Harvey, head of Google Cloud's DORA team. With data gathered from over 36,000 global professionals, this year's report investigated how top DevOps performers integrate technical, process, and cultural capabilities into their practices for success. Listen to learn how your team can focus on the three core outcomes of DevOps: enhancing organizational value, boosting team innovation and collaboration, and promoting team member well-being and productivity. Show notes: DORA's 2023 State of DevOps Report; get your DORA Metrics free forever; LinearB's 2023 Software Engineering Benchmarks Report. Support the show: subscribe to our Substack; leave us a review; subscribe on YouTube; follow us on Twitter or LinkedIn. Offers: learn about Continuous Merge with gitStream; want to try LinearB? Book a demo and use discount code "Dev Interrupted Podcast"

Lenny's Podcast: Product | Growth | Career
How to measure and improve developer productivity | Nicole Forsgren (Microsoft Research, GitHub, Google)

Lenny's Podcast: Product | Growth | Career

Play Episode Listen Later Jul 30, 2023 76:17


This episode is brought to you by DX, a platform for measuring and improving developer productivity.

Dr. Nicole Forsgren is a developer productivity and DevOps expert who works with engineering organizations to make work better. Best known as co-author of the Shingo Publication Award-winning book Accelerate and the DevOps Handbook, 2nd edition, and author of the State of DevOps Reports, she has helped some of the biggest companies in the world transform their culture, processes, tech, and architecture. Nicole is currently a Partner at Microsoft Research, leading developer productivity research and strategy, and a technical founder/CEO with a successful exit to Google. In a previous life, she was a software engineer, sysadmin, hardware performance engineer, and professor. She has published several peer-reviewed journal papers, has been awarded public and private research grants (funders include NASA and the NSF), and has been featured in the Wall Street Journal, Forbes, Computerworld, and InformationWeek.

In today's podcast, we discuss: • Two frameworks for measuring developer productivity: DORA and SPACE • Benchmarks for what good and great look like • Common mistakes to avoid when measuring developer productivity • Resources and tools for improving your metrics • Signs your developer experience needs attention • How to improve your developer experience • Nicole's Four-Box framework for thinking about data and relationships

Find the full transcript at: https://www.lennyspodcast.com/how-to-measure-and-improve-developer-productivity-nicole-forsgren-microsoft-research-github-goo/#transcript

Where to find Nicole Forsgren: • Twitter: https://twitter.com/nicolefv • LinkedIn: https://www.linkedin.com/in/nicolefv/ • Website: https://nicolefv.com/

Where to find Lenny: • Newsletter: https://www.lennysnewsletter.com • Twitter: https://twitter.com/lennysan • LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover: (00:00) Nicole's background (07:55) Unpacking the terms "developer productivity," "developer experience," and "DevOps" (10:06) How to move faster and improve practices across the board (13:43) The DORA framework (18:54) Benchmarks for success (22:33) Why company size doesn't matter (24:54) How to improve DevOps capabilities by working backward (29:23) The SPACE framework and choosing metrics (32:51) How SPACE and DORA work together (35:39) Measuring satisfaction (37:52) Resources and tools for optimizing metrics (41:29) Nicole's current book project (45:43) Common pitfalls companies run into when rolling out developer productivity optimizations (47:42) How the DevOps space has progressed (50:07) The impact of AI on the developer experience and productivity (54:04) First steps to take if you're trying to improve the developer experience (55:15) Why Google is an example of a company implementing DevOps solutions well (56:11) The importance of clear communication (57:32) Nicole's Four-Box framework (1:05:15) Advice on making decisions (1:08:56) Lightning round

Referenced: • Chef: https://www.chef.io/ • DORA: https://dora.dev/ • GitHub: https://github.com/ • Microsoft Research: https://www.microsoft.com/en-us/research/ • What is DORA?: https://devops.com/what-is-dora-and-why-you-should-care/ • Dustin Smith on LinkedIn: https://www.linkedin.com/in/dustin-smith-b0525458/ • Nathen Harvey on LinkedIn: https://www.linkedin.com/in/nathen/ • What is CI/CD?: https://about.gitlab.com/topics/ci-cd/ • Trunk-based development: https://cloud.google.com/architecture/devops/devops-tech-trunk-based-development • DORA DevOps Quick Check: https://dora.dev/quickcheck/ • Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations: https://www.amazon.com/Accelerate-Software-Performing-Technology-Organizations/dp/1942788339 • The SPACE of Developer Productivity: https://queue.acm.org/detail.cfm?id=3454124 • DevOps Metrics: Nicole Forsgren and Mik Kersten: https://queue.acm.org/detail.cfm?id=3182626 • How to Measure Anything: Finding the Value of Intangibles in Business: https://www.amazon.com/How-Measure-Anything-Intangibles-Business/dp/1118539273/ • GitHub Copilot: https://github.com/features/copilot • Tabnine: https://www.tabnine.com/the-leading-ai-assistant-for-software-development • Nicole's Decision-Making Spreadsheet: https://docs.google.com/spreadsheets/d/1wItAODkhZ-zKnnFbyDERCd8Hq2NQ03WPvCfigBQ5vpc/edit?usp=sharing • How to do linear regression and correlation analysis: https://www.lennysnewsletter.com/p/linear-regression-and-correlation-analysis • Good Strategy/Bad Strategy: The difference and why it matters: https://www.amazon.com/Good-Strategy-Bad-difference-matters/dp/1781256179/ • Designing Your Life: How to Build a Well-Lived, Joyful Life: https://www.amazon.com/Designing-Your-Life-Well-Lived-Joyful/dp/1101875321 • Ender's Game: https://www.amazon.com/Enders-Game-Ender-Quintet-1/dp/1250773024/ref=tmm_pap_swatch_0 • Suits on Netflix: https://www.netflix.com/title/70195800 • Ted Lasso on AppleTV+: https://tv.apple.com/us/show/ted-lasso • Never Have I Ever on Netflix: https://www.netflix.com/title/80179190 • Eight Sleep: https://www.eightsleep.com/ • COSRX face masks: https://www.amazon.com/COSRX-Advanced-Secretion-Hydrating-Moisturizing/dp/B08JSL9W6K/

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com. Lenny may be an investor in the companies discussed. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe

The Engineering Enablement Podcast
A masterclass on DORA – research program, common pitfalls, and future direction | Nathen Harvey (Google)

The Engineering Enablement Podcast

Play Episode Listen Later Jan 25, 2023 54:45


Nathen Harvey, who leads DORA at Google, explains what DORA is, how it has evolved in recent years, the common challenges companies face as they adopt DORA metrics, and where the program may be heading in the future. Discussion points: (1:48) What DORA is today and how it exists within Google (3:37) The vision for Google and DORA coming together (5:20) How the DORA research program works (7:53) Who participates in the DORA survey (9:28) How the industry benchmarks are identified (11:05) How the reports have evolved over recent years (13:55) How reliability is measured (15:19) Why the 2022 report didn't have an Elite category (17:11) The new Slowing, Flowing, and Retiring clusters (19:25) How to think about applying the benchmarks (20:45) Challenges with how DORA metrics are used (24:02) Why comparing teams' DORA metrics is an antipattern (26:18) Why 'industry' doesn't matter when comparing organizations to benchmarks (29:32) Moving beyond DORA metrics to optimize organizational performance (30:56) Defining different DORA metrics (36:27) Measuring deployment frequency at the team level, not the organizational level (38:29) The capabilities: there's more to DORA than the four metrics (43:09) How DORA and SPACE are related (47:58) DORA's capabilities assessment tool (49:26) Where DORA is heading. Mentions and links: Follow Nathen on LinkedIn or Twitter; Engineering Enablement episode with Dr. Nicole Forsgren; 2022 State of DevOps report; Bryan Finster's How to Use & Abuse DORA Metrics (and Abi's summary of the paper); Engineering Enablement episode with Dr. Margaret-Anne Storey; join the DORA community for discussion and events: dora.community

GOTO - Today, Tomorrow and the Future
97 Things Every Cloud Engineer Should Know • Emily Freeman, Nathen Harvey & Chris Williams

GOTO - Today, Tomorrow and the Future

Play Episode Listen Later Jan 13, 2023 43:33 Transcription Available


This interview was recorded for the GOTO Book Club: gotopia.tech/bookclub. Read the full transcription of the interview here. Emily Freeman - Head of DevOps Product Marketing, Head of Community Engagement at AWS, and Co-Editor of "97 Things Every Cloud Engineer Should Know". Nathen Harvey - Developer Advocate at Google Cloud and Co-Editor of "97 Things Every Cloud Engineer Should Know". Chris Williams - Cloud Therapist at World Wide Technology. DESCRIPTION: Migrating to the cloud has become a "sine qua non" these days. The compact articles in 97 Things Every Cloud Engineer Should Know inspect the entirety of cloud computing, including fundamentals, architecture, and migration. You'll go through security and compliance, operations and reliability, and software development, and examine networking, organizational culture, and more. Find out the story behind the benefits of curating such a community-driven book from the co-editors Emily Freeman, head of DevOps product marketing at AWS, Nathen Harvey, developer advocate at Google Cloud, and Chris Williams, cloud therapist and principal cloud solutions architect for World Wide Technologies. The interview is based on Emily's and Nathen's co-edited book "97 Things Every Cloud Engineer Should Know". RECOMMENDED BOOKS: Emily Freeman & Nathen Harvey • 97 Things Every Cloud Engineer Should Know; Emily Freeman • DevOps For Dummies; Martin Kleppmann • Designing Data-Intensive Applications; Emil Stolarsky & Jaime Woo • 97 Things Every SRE Should Know; Kevlin Henney & Trisha Gee • 97 Things Every Java Programmer Should Know; Kevlin Henney • 97 Things Every Programmer Should Know; Henney & Monson-Haefel • 97 Things Every Software Architect Should Know; Kasun Indrasiri & Danesh Kuruppu • gRPC: Up and Running. Twitter LinkedIn Facebook. Looking for a unique learning experience? Attend the next GOTO conference near you! Get your ticket: gotopia.tech. SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted almost daily

0800-DEVOPS
2022 State of DevOps Report with Nathen Harvey

0800-DEVOPS

Play Episode Listen Later Nov 22, 2022 43:20


Nathen Harvey needs little introduction in the DevOps community. He is a Developer Advocate at Google, co-author of the State of DevOps Report, and co-author of a great book called "97 Things Every Cloud Engineer Should Know". We talked about insights and surprises from this year's State of DevOps Report, and Nathen shared his view on recent hot takes that "DevOps is dead".

Google Cloud Platform Podcast
2022 State of DevOps Survey with Nathen Harvey and Derek DeBellis

Google Cloud Platform Podcast

Play Episode Listen Later Oct 5, 2022 44:07


On the show this week, we're talking updated DevOps practices for 2022 with hosts Stephanie Wong and Chloe Condon and our guests Nathen Harvey and Derek DeBellis. Nathen and Derek start the show with a thorough discussion of DORA, the research program dedicated to helping organizations improve software delivery and operations, and the State of DevOps report that Google publishes every year. This year, the DevOps research team strengthened their focus on security and discovered that one of the biggest predictors of security-practice adoption is company culture. Open, communicative, and trustful company cultures are some of the best for accepting and implementing optimized security practices. Derek tells us how company cultures are measured and scored for this purpose, and Nathen talks about team and individual burnout and its effects on culture. Low, medium, high, and elite teams are another indicator of culture, and Nathen explains how teams earn their label through four keys of software delivery performance. Each year, the data has shown these four clusters of team performance, but this year there were only three; Derek talks more about this phenomenon and why the elite cluster seemed to disappear. When operational performance analysis was added, the four clusters reemerged and were renamed to better suit the new analysis metrics. Nathen details these four new clusters: starting, teams that perform neither well nor poorly and may be just starting out; flowing, teams that perform well across throughput, stability, and operational performance; slowing, teams that don't have high throughput but excel in other areas; and retiring, teams that are reliable but not actively developing projects. We discuss how companies may shift from one cluster to another and how much context can affect this shift. We talk about key findings in the 2022 DevOps report, especially in the security space. Some of the most notable include the adoption of DevOps security practices and the decreased incidence of burnout on teams that leverage security practices. Nathen and Derek elaborate on how this year's research changed from last year and what remained the same. Nathen Harvey: Nathen works with teams, helping them learn about and apply the findings of our research into high-performing teams. He's been involved in the DevOps community for more than a decade. Derek DeBellis: Derek is a Quantitative User Experience Researcher at Google, where he focuses on survey research, logs analysis, and figuring out ways to measure concepts central to product development. Derek has published on human-AI interaction, the impact of Covid-19's onset on smoking cessation, designing for NLP errors, and the role of UX in ensuring privacy.
Cool things of the week Try out Cloud Spanner databases at no cost with new free trial instances blog Chipotle Is Testing More Artificial Intelligence Solutions To Improve Operations article Gyfted uses Google Cloud AI/ML tools to match tech workers with the best jobs blog Interview 2022 Accelerate State of DevOps Report blog DevOps site 2022 State of the DevOps Report Report site DORA site DORA Community site SLSA site Security Software Development Framework site Westrum organizational culture site Google finds culture, not tech, is the biggest predictor of DevOps security outcomes article GCP Podcast Episode 205: DevOps with Nathen Harvey and Jez Humble podcast GCP Podcast Episode 284: State of DevOps Report 2021 with Nathen Harvey and Dustin Smith podcast GCP Podcast Episode 290: Resiliency at Shopify with Camilo Lopez and Tai Dickerson podcast What's something cool you're working on? Steph is working on talks for DevFest Nantes and a Google Cloud dev conference in London. She'll be talking about subsea fiber optics and Google Cloud networking products. Chloe is a Noogler, so she's been working on learning as much as she can! She is excited to make her podcast debut this week! Hosts Stephanie Wong and Chloe Condon

On Cloud
It's all here: inside the 2021 “Accelerate State of DevOps” report

On Cloud

Play Episode Listen Later Feb 9, 2022 26:53


The 2021 “Accelerate State of DevOps” report from Google is out! In this episode, Mike Kavis talks with Google's Nathen Harvey and Deloitte's Manoj Mishra about the report's most compelling findings. Among them: DevOps and SRE are complementary; successful SRE equals knowing your customer; documentation is critical, but it should be organic; and there's a newly-announced “Reliability” metric. The group also discusses customer satisfaction and the new “Release Management” role in DevOps/SRE.

Google Cloud Platform Podcast
2021 Year End Wrap Up

Google Cloud Platform Podcast

Play Episode Listen Later Dec 15, 2021 43:16


We're finishing out 2021 with a celebration of our favorite episodes and topics from the year! From new tools for cost optimization in GKE and advances in AI to tips for easing feelings of imposter syndrome, Carter Morgan, Stephanie Wong, and Mark Mirchandani share memorable moments from 2021 and look forward to future episodes. Carter Morgan: Carter Morgan is a Developer Advocate for Google Cloud, where he creates and hosts content on Google's YouTube channel, co-hosts several Google Cloud podcasts, and designs courses like the Udacity course "Scalable Microservices with Kubernetes" he co-created with Kelsey Hightower. Carter is an international standup comedian whose approach of creating unique moments with the audience in front of him has seen him perform all over the world, including Paris, London, the Melbourne International Comedy Festival with Joe White, and, in 2019, the Edinburgh Fringe Festival. Previously, he was a programmer for the USAF and Microsoft. Stephanie Wong: Stephanie Wong is a Developer Advocate focusing on online content across all Google Cloud products. She's a host of the GCP Podcast and the Where the Internet Lives podcast, along with many GCP YouTube video series. She is the winner of a 2021 Webby Award for her content about data centers. Previously she was a Customer Engineer at Google and at Oracle. Outside of her tech life she is a former pageant queen and hip-hop dancer and has an unhealthy obsession with dogs. Mark Mirchandani: Mark Mirchandani is a developer advocate for Google Cloud, occasional host of the Google Cloud Platform podcast, and helps create content for users. Cool things of the week: Anthos Multi-Cloud v2 is generally available docs; Machine learning, Google Kubernetes Engine, and more: 10 free training offers to take advantage of before 2022 blog; The past, present, and future of Kubernetes with Eric Brewer blog; GCP Podcast Episode 124: VP of Infrastructure Eric Brewer podcast. Our favorite episodes of 2021: Mark's favorites: GCP Podcast Episode 252: GKE Cost Optimization with Kaslin Fields and Anthony Bushong podcast; GCP Podcast Episode 267: Cloud Firestore for Users who are new to Firestore podcast; GKE Essentials videos; Beyond Your Bill videos. Stephanie's favorites: GCP Podcast Episode 270: Traditional vs. Service Networking with Ryan Przybyl podcast; GCP Podcast Episode 271: The Future of Service Networking with Ryan Przybyl podcast; GCP Podcast Episode 279: MLB with Perry Pierce and JoAnn Brereton podcast. Carter's favorites: GCP Podcast Episode 284: State of DevOps Report 2021 with Nathen Harvey and Dustin Smith podcast; GCP Podcast Episode 287: Imposter Syndrome with Carter Morgan podcast. Most popular episodes of 2021: GCP Podcast Episode 264: SRE III with Steve McGhee and Yuri Grinshtey podcast; GCP Podcast Episode 258: The Power of Serverless with Aparna Sinha and Philip Beevers podcast; GCP Podcast Episode 253: Data Governance with Jessi Ashdown and Uri Gilad podcast; GCP Podcast Episode 263: SAP + Apigee: The Power of APIs with Benjamin Schuler and Dave Feuer podcast; GCP Podcast Episode 271: The Future of Service Networking with Ryan Przybyl podcast. Sound effects attribution: "Dun Dun Duuun" by Divenorth of Freesound.org; "Cash Register" by Kiddpark of Freesound.org; "Jingles and Pings" by BristolStories of HDInteractive.com; "Time - Inception Theme" composed by Hans Zimmer (super-low-budget MIDI version). Hosts: Stephanie Wong, Carter Morgan, and Mark Mirchandani

Tech Lead Journal
#68 - 2021 Accelerate State of DevOps Report - Nathen Harvey

Tech Lead Journal

Play Episode Listen Later Dec 13, 2021 47:55


“Many organizations think in order to be safe, they have to be slow. But the data shows us that the best performers are getting both. And in fact, as speed increases, so too does stability." Nathen Harvey is the co-author of the 2021 Accelerate State of DevOps Report and a Developer Advocate at Google. In this episode, we discussed the latest release of the State of DevOps Report in depth. Nathen started by describing what the report is all about and how it got started, and explained the five key metrics the report suggests for measuring software delivery and operational performance. Nathen then explained how the report categorizes different performers based on their performance against the key metrics, and how the elite performers outperform the others in terms of speed, stability, and reliability. Next, we dived into several new key findings from the 2021 report relating to documentation, secure software supply chains, and burnout. Towards the end, Nathen gave great tips on how we can use the findings from the report to get started and improve our software delivery and operational performance, which ultimately improves our organizational performance. Listen out for: Career Journey - [00:05:28] State of DevOps Report - [00:09:32] The Five Key Metrics - [00:13:55] Speed, Safety, and Reliability - [00:19:58] Performers Categories - [00:23:26] 2021 New Key Findings - [00:28:01] New Finding: Documentation - [00:30:44] New Finding: Secure Software Supply Chain - [00:34:58] New Finding: Burnout - [00:37:22] How to Start Improving - [00:39:36] 3 Tech Lead Wisdom - [00:43:55] Nathen Harvey's Bio: Nathen Harvey, Developer Relations Engineer at Google, has built a career on helping teams realize their potential while aligning technology to business outcomes. Nathen has had the privilege of working with some of the best teams and open source communities, helping them apply the principles and practices of DevOps and SRE. He is part of the Google Cloud DORA research team and a co-author of the 2021 Accelerate State of DevOps Report. Nathen was an editor for 97 Things Every Cloud Engineer Should Know, published by O'Reilly in 2020. Follow Nathen: Twitter - @nathenharvey; LinkedIn - https://linkedin.com/in/nathen; GitHub - https://github.com/nathenharvey. Our sponsor: Are you looking for new cool swag? Tech Lead Journal now offers swag that you can purchase online. It is printed on demand based on your preference and delivered safely to you anywhere in the world where shipping is available. Check out all the cool swag at https://techleadjournal.dev/shop. Like this episode? Subscribe on your favorite podcast app and submit your feedback. Follow @techleadjournal on LinkedIn, Twitter, and Instagram. Pledge your support by becoming a patron. For more info about the episode (including quotes and transcript), visit techleadjournal.dev/episodes/68.

Google Cloud Platform Podcast
State of DevOps Report 2021 with Nathen Harvey and Dustin Smith

Google Cloud Platform Podcast

Play Episode Listen Later Nov 10, 2021 45:40


This week, Stephanie Wong and Carter Morgan are talking about the recently released State of DevOps Report. Guests Dustin Smith and Nathen Harvey tell us all about DORA, the research group working to study DevOps, and the findings of their years-long study aimed at improving workplace environments, fostering sustainable increased productivity, and ensuring quality output across industries. During their years of research, the DORA team has developed ways to measure team results and workplace culture. Our guests tell us about the five measures they use, including deployment frequency and reliability. The shared responsibility and collaboration of teams at a company to optimize these five metrics is what makes good DevOps performance. Through a real-life example, we hear how the coordination of goals and incentives across departments can improve results on the DevOps metrics, thus improving the speed and stability of finished products. Once businesses identify problems, they need realistic expectations of the time and energy required to solve them. Learning from each change made and growing during the process is an important part of optimization, and our guests talk about the best practices their research has identified for facilitating smoother transitions. High-quality documentation is a vital part of optimizing DevOps, and this year's report examined internal documentation for the first time. Nathen describes what makes good documentation, like clear ownership of the documents and docs that are regularly updated for easy sharing and scaling of up-to-date material across the company. Dustin elaborates, explaining other factors that make quality, reliable documents. Later, we talk SRE and how companies can measure and optimize Site Reliability Engineering. A supportive team culture and ensuring a secure product and supply chain are some important factors in optimal SRE, the DORA study found. Our guests offer advice for companies looking to get started with DevOps practices. Nathen Harvey: Nathen Harvey is a developer relations engineer at Google who has built a career on helping teams realize their potential while aligning technology to business outcomes. Nathen has had the privilege of working with some of the best teams and open source communities, helping them apply the principles and practices of DevOps and SRE. Dustin Smith: Dustin Smith is a UX Research Manager and the DORA research lead. He studies the factors that influence a team's ability to deliver software quickly and reliably. Cool things of the week: Email is 50 years old, and still where it's @ blog; Make the most of hybrid work with Google Workspace blog; We analyzed 80 million ransomware samples - here's what we learned blog. Interview: DevOps site; DORA site; SRE site; 2021 Accelerate State of DevOps report addresses burnout, team performance report

Screaming in the Cloud
Driving State-of-the-Art DevOps with Nathen Harvey

Screaming in the Cloud

Play Episode Listen Later May 20, 2021 33:42


About Nathen
Nathen Harvey, Cloud Developer Advocate at Google, helps the community understand and apply DevOps and SRE practices in the cloud. Nathen formerly led the Chef community, co-hosted the Food Fight Show, and managed operations and infrastructure for a diverse range of web applications.

Links:
cloud.google.com/devops: https://cloud.google.com/devops
97 Things Every Cloud Engineer Should Know: https://shop.aer.io/oreilly/p/97-things-every/9781492076735-9149
Twitter: https://twitter.com/nathenharvey

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by Thinkst. This is going to take a minute to explain, so bear with me. I linked against an early version of their tool, canarytokens.org, in the very early days of my newsletter. What it does is relatively simple and straightforward: it embeds credentials, files, that sort of thing in various parts of your environment, wherever you want; it gives you fake AWS API credentials, for example. And the only thing these things do is alert you whenever someone attempts to use them. It's an awesome approach. I've used something similar for years. Check them out. But wait, there's more. They also have an enterprise option that you should be very much aware of: canary.tools. What it does is provide an enterprise approach to drive these things throughout your entire environment. You can get a physical device that hangs out on your network and impersonates whatever you want to. When it gets Nmap scanned, or someone attempts to log into it, or access files on it, you get instant alerts. It's awesome. If you don't do something like this, you're likely to find out that you've been breached the hard way. It's one of those few things that I look at and say, "Wow, that is an amazing idea. I love it." That's canarytokens.org and canary.tools. The first one is free; the second one is enterprise-y. Take a look. I'm a big fan of this. More from them in the coming weeks.

Corey: This episode is sponsored in part by our friends at Lumigo. If you've built anything with serverless, you know that if there's one thing that can be said universally about these applications, it's that they turn every outage into a murder mystery. Lumigo helps make sense of all of the various functions that wind up tying together to build applications. It offers one-click distributed tracing so you can effortlessly find and fix issues in your serverless and microservices environment. You've created more problems for yourself; make one of them go away. To learn more, visit lumigo.io.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by Nathen Harvey, a cloud developer advocate at a small startup called Google. Nathen, thank you for joining me.

Nathen: Hey, Corey. It's really great to be here.

Corey: We'll get to the Google bits in a little while, but first I want to start at the beginning, with your origin story.
It turns out you've been at a lot of places, and the first thing in your history that I really recognized was way back at the end of 2009, when you were the web operations manager at Custom Ink. They're a t-shirt, and other apparel, company that I've been using for three years now for the charity t-shirt drive here, as well as other sundry things. Longtime listeners of the show might remember we had Ken Collins on to talk about Ruby in Lambda and other horrifying things, before it was cool.

Nathen: Yes, indeed, I was at Custom Ink. And you talk about them being a t-shirt company, and maybe I'm still a shill for Custom Ink, but I really look at them as an experience company. You've recognized that yourself. They help people create group experiences and really drive what it means to connect with other humans, and how you can do that through custom apparel. To me, that's what Custom Ink has always been about. They're not selling t-shirts; they're selling an experience.

Corey: In my case, I view them as a t-shirt company because, let's be fair here, I do charity t-shirt drives, and they've always been extremely supportive of, well, there's really no other way to put this, my ridiculous nonsense. The year I ran linked campaigns, the 'AMI has three syllables' shirt was on sale, and then for the Amazonians there was an 'ah-mi is how it's pronounced instead' version that was $10 more, because there's a price to being wrong. All proceeds, of course, went to benefit the charity of the year. That was a fun thing. I talked to a number of other folks about this, and they looked at me very strangely, and Custom Ink didn't even blink.

Nathen: Right, right. Absolutely. Absolutely.

Corey: And yes, they sell lots of other apparel, but for whatever reason, when it comes to sending out complicated multiple options of things that each need to hit minimum order quantities to print during a fundraiser, and the fact that I don't have to deal with the money because they send it over directly, it's just easier. Back when I was a single person doing this stuff, I didn't have to worry about it. Now that I've grown and my needs have multiplied, I still like doing business with them. Great folks.

Nathen: Absolutely. And that's exactly what I mean: they've sold you on that experience. That's why you continue to do business with them. It's not just the t-shirts; it's the whole package that goes along with it.

Corey: And then in 2012, the world didn't end, but yours kind of did, because you stopped working at Custom Ink and went to another company called Chef. You were there for a little over six years. You started off as a community director and then became the VP of Community Development. I think you did an amazing job, but first tell me about that, then I'll give my hot take.

Nathen: All right, great. I'm always up for the hot takes. So listen, Chef was an amazing community of people. Oh, it was also a company. While I was at Custom Ink, actually, we were using Chef, and I fell in love with the community. I was doing a lot of community support: participating with co-hosts on a podcast called the Food Fight Show back in the day (it was all about Chef), running meetups, and so forth.
And at one point I decided that what I should do was stop being on call and start supporting this community full time. That's exactly what I did. I went to Chef and, yes, spent just about six years there, and it was a really incredible time. Lots of hugs to be given, and just a great community in the DevOps space.

Corey: I took a somewhat, I guess, agreeing or disagreeing position. I was on the Puppet side of the configuration management debate, and it was challenging. And then I was one of the very early developers behind SaltStack, because clearly the problem with all of these things was that no one had written it correctly, and we were going to fix that. And it turns out no, no, the problem was customers the whole time. But that's a separate debate. So, I was never in the Chef ecosystem; that was the one system I never really touched in anger. And it's easy to turn this into a, "Oh, you folks were the competition," despite the fact that I've never actually worked directly for either of those companies. But it was never like that, because our real enemies were people configuring things by hand: for one, that's unnecessary toil, don't do it; and there was also just such an uplifting sense of community. Some of the best people I knew were in the Chef ecosystem, in the Chef orbit. For a while, on some level, and this is something I'd love to get your thoughts on, it seemed that a failure mode Chef exhibited was hiring directly from its community. If someone was a big fan of Chef, start a stopwatch: they'd be working there before the month was out.

Nathen: I think that Chef, the company, definitely pulled a lot of community members into the organization. And frankly, when the company started, that was really great, because it was an early startup. As the company grew, it was still wonderful, of course, to pull in people from the community to help drive the future direction and how our customers were using it. But like you said, there's a challenge when you start pulling too many of your most vocal supporters out of the community and putting them into the company, sometimes in roles where they don't have the opportunity to be as vocal, as big a champion for the product and the services.

Corey: I think at some level, again, it helps to have people who are passionate about the product working there, but on the other hand, it felt like over time it weakened the community in some respects, just because everyone who worked there eventually found themselves in a scenario of, well, I work here, it's what we do, and now I have to say nice things. It winds up weakening the grassroots story.

Nathen: Mmm. There's definitely some truth to that, but I think there's also some truth to it just being the evolution of a community. You went from a community in the early days with a lot of contributors to one where, gratefully so, the proportion of consumers of Chef versus contributors to Chef changed, and you had a lot more customers using the product. So, I don't disagree with you, but I do think it's part of the natural evolution of community as well.

Corey: And all things must end. Of course, Chef got acquired, I believe, after you left. At that point, you were gone; they were rudderless, and what else were they to do? And you went to Google.
And that is always an interesting story, because Google's community interaction before the time you wound up there and after look radically different. I don't know that you were necessarily the proximate cause, but I'm going to hang that around your neck, because it's all been a positive change since then.

Nathen: Yeah. Well, thank you. It is definitely not something I should wear or carry alone, but going to Google was an interesting choice for me, and I recognize that. Honestly, Corey, one of the things that drove me to Google was a good friend of mine, Seth Vargo. To tie the complete throughline here: Seth and I worked together at Custom Ink, we worked together at Chef, he left Chef and went to Hashi, and then went to Google. And the day after I knew he was going to Google, I called him up and said, "Seth, come on. Google's so big. Why? And how? I don't understand the move."

Corey: I asked him many of the same questions back in episode three of this show. He was a very early guest, when I was scared speechless having conversations. It's improved since then, a couple hundred episodes in. But yeah, very friendly, very open, very warm.

Nathen: Yeah. And, you know...

Corey: "Why are you at Google?" was sort of the next follow-on question in that era.

Nathen: [laugh]. Yes, indeed. And I do think that Google, and specifically Google Cloud, has really taken to heart the idea that there's a lot we can learn from each other. I don't just mean from each other within Google, although of course we can learn a lot there; we can also learn a lot from our community and from our customer base. How are they using Google Cloud? How are they using technology to drive their business forward? These are all things we can learn. It turns out not every company is Google, and that's a good thing. Not every company should be Google or Google-sized, and they certainly don't have Google's customers. I think it's really important to recognize that when we work with a customer, they're the experts in their customers, in their systems, and so forth.

Corey: A lot has changed with Google's approach to, well, basically everything. It turns out that when you're a company that is, what, 26 years old now, 27, something like that, starting from humble beginnings and then becoming a trillion-dollar entity, things change. Culture changes, your community changes, what you do changes, and that isn't necessarily fully appreciated or fully understood in some corners. But then 2018 hit. You went to Google; what did you do then? Because it's such a large company that it's very difficult to know what any individual is up to there, and the primary way I engage in the DevRel community space, specifically via aggressively shitposting on Twitter, isn't really your means of interacting with the community. So, from that particular point of view, it's, "Oh, yeah, he went to Google, and no one ever heard from him again." What is it you say it is you do there?

Nathen: Yeah. So, for sure. As a cloud advocate, I really focus on two areas: DevOps, and I recognize that's a terrible word, because when I say it we all think of different things, and then SRE practices, or Site Reliability Engineering.
Specifically, what I work on is how we bring the principles and practices of DevOps and SRE to our customer base and to the community at large. How do we drive the state of the art? How do we approach these topics? That's really what I've been focused on since joining Google. Frankly, I was focused on that at Chef as well, maybe without the SRE bend so much, but certainly at Google SRE comes in. For the past decade, for me, it's been about DevOps and how we use technology to align the humans and work towards the business outcomes we're driving for.

Corey: And business outcomes become an interesting story in the world of cloud, because for a cloud service provider it distills down to: we would like people to use our cloud, more of it, in perpetuity. It is not a complicated business model, if I can be direct, because business models inherently are not. "Whatever it is your company does, we would like you to do it here." And that turns into a bunch of differentiated services across the spectrum, in some cases hilariously so, when it turns into basically pick an English word and there's a 50/50 shot it's part of a service name somewhere. But a lot of it distills down to baseline distinct primitives. You're talking about the DevOps aspect of it, and we talk about, is it culture? Is it tools? No, it's a means to sell conferences, and books, and things like that. But what is it in the context of a cloud service provider? Specifically Google, because let's be clear here, DevOps for other providers is apparently Azure DevOps, that's right, it's a service name, and DevOps Guru on the AWS side, because everything is terrible.

Nathen: Absolutely. Look, I used to snark that the only DevOps tool was the manager of DevOps. But the truth is that DevOps is tooling, and it is culture, and to separate the two is really a fool's errand. Your tooling amplifies your culture, and your culture amplifies your tooling. Together, this is how we make progress. Now, when it comes to Google, what do we mean when we say DevOps? Well, one of the good things is that shortly after I joined Google Cloud, Google Cloud acquired DORA, the DevOps Research and Assessment organization.

Corey: Jez Humble and Dr. Nicole Forsgren. And then, for all intents and purposes, they googled it. By which I mean, relatively shortly thereafter we never really heard from DORA again. In 2020, the "State of DevOps Report" didn't exist, which was what they were famous for producing. And it was, "Oh, yep. That's a Google acquisition, all right." Is that what happened? Did I miss some nuance there?

Nathen: Yeah. Let's talk about that. So first, you're right: it was Dr. Nicole Forsgren who founded DORA. When the acquisition happened, she came along to Google Cloud, and Jez Humble came along through that acquisition as well. And frankly, what happened in 2020? Well, Corey, I don't know if you noticed, but there was a lot happening in 2020, much of it not very good. On a global scale, 2020 was not a great year for us...

Corey: It was a rebuilding year.

Nathen: Oh, all right, fair enough. Fair enough. [laugh]. A rebuilding year. But here's what happened with DORA, quite frankly: we, Google Cloud, continue to invest in that research program.
And really, in a sense, 2020 was a rebuilding year, in that our focus was on how we help our customers and our community apply the lessons of DORA. One of the things we've done is release much of the research under cloud.google.com/devops, including, right there, a DevOps Quick Check where, as a team, you can assess whether you're a low, medium, high, or elite performer, using the metrics and the research program from DORA. And beyond that assessment, you can use the research to identify which capabilities your team should invest in improving. Those might be technical capabilities, things like continuous integration; process or measurement capabilities; or, in fact, cultural capabilities. All of these capabilities come together to help you improve your overall software delivery and operations performance. So in 2020, the big thing we did was release and continue to update this Quick Check, release the research, and make it fully available. We also spent some time internally on the program, which is not super interesting to talk about on the podcast. But the other thing we did in 2020 with the DORA research program was update the ROI research, the return-on-investment research. This is something that maybe your listeners don't care about, but their managers, CIOs, CTOs, and CFOs might: how do we get money back on this transformation thing? The research paper digs into exactly that: how do we measure it, what returns can we expect, and so forth. That was released in 2020.

Corey: I have a whole bunch of angry thoughts about a lot of takes in that space, but this is neither the time nor the place for me to rant incoherently for an hour and a half. But yeah, I get that it was a year that was off, and now you're doing it again, apparently, in 2021. The one thing I never really saw historically, because I don't know if I'm playing in the wrong environments, and I'm certainly not the target [laugh] audience now, if I ever really was, is the release of the survey where people can go to fill in these questions. I would be interested to know where that is now, and how you have been socializing it in the past. In other words, where are you finding these people?

Nathen: Yeah, for sure. The place to find the survey right now is cloud.google.com/devops; you'll find a button on the page that says something like, "Take the survey," or, "Take the 2021 survey." In the past, DORA has used a number of different ways to get out information about the survey and when it's open: primarily Twitter, but also partners who help share that the survey is open. I would absolutely recommend that you go and check out the survey, because I'll tell you what's really interesting, Corey. Over the years, I've talked to a bunch of people who have taken the survey and read the State of DevOps Report that comes out each year, and some of the consistent feedback I've heard is that simply taking the survey and considering the questions it asks gives immediate insight into how their team can improve. What capabilities are they lacking?
Or what capabilities are they doing really well with and they don’t need to make investments on? They can immediately see that just by answering and carefully considering the questions that are part of the survey.

Corey: This episode is sponsored by ExtraHop. ExtraHop provides threat detection and response for the Enterprise (not the starship). On-prem security doesn’t translate well to cloud or multi-cloud environments, and that’s not even counting IoT. ExtraHop automatically discovers everything inside the perimeter, including your cloud workloads and IoT devices, detects these threats up to 35 percent faster, and helps you act immediately. Ask for a free trial of detection and response for AWS today at extrahop.com/trial.

Corey: Very often, in some cases looking at things like maturity models and the like, the actual report is less valuable than the exercise of filling it out and going through the process. I mean, compliance reports, audit frameworks, et cetera, often lead to the same outcomes. The question is, are you taking it seriously, or are you one of those folks who is filling out a survey because do this and you’ll be entered to win a $25 gift card somewhere? Probably Blockbuster, because it no longer exists. I get those in my email constantly: “Yeah, give half an hour of your time in return for some paltry chance to win something.”

No, I have a job to do. And I worry, at that level of that approach, who are you actually getting that’s going to sit down and fill this thing out? That said, the State of DevOps Reports have been, for a long time, sort of the gold standard in this space, and I would encourage people listening to this to absolutely take the time to fill that out: cloud.google.com/devops.

I’m looking forward to seeing what comes out of it. And I love it because of the casual shade you can use to throw at other companies, too. Like, “Are you an elite team?” With the implicit baked-in sentiment being, no, you’re not, but I want to hear you say it.

Nathen: Yeah, one of the things that really sets DORA apart, also, I think, is just the—well, two of the things, I guess I would say. One is the length of time that the research program has been running. It’s going on seven years now that this research program has been running, and so given that, you have tens of thousands of IT professionals that have taken the survey and provided insights into sort of what’s the state of our industry today, and where are we heading. But it’s also an academically rigorous survey. The survey and the research itself has always been, and continues to be, completely platform and program agnostic. This is not a survey about Google Cloud.

This is not a survey where we’re trying to understand exactly what products on Google Cloud you should use in order to be an elite performer. No. That’s not what this is about. It is about, truly, capabilities that your team needs in order to improve their software delivery and operations performance. And I think that’s really, really important.

Dr. Nicole Forsgren, who founded DORA, didn’t come up with all of these ideas: “Hey, I think that you get better by doing this.” No. Instead, she researched all of these ideas. She got this input from across organizations of all sizes, organizations in every industry, and that, I think, really sets it apart.

And our ability to really stay committed to that academic rigor, and the platform-agnostic approach to capturing and investigating these capabilities, I think is so important to this research.
And again, this is why you should participate in the survey, because you truly are going to help us move the state of the art of our industry.

Corey: No, historically, there’s been a challenge where the mantle of thought leadership and Google have intersected, because there’s a common trope—historical—and I think that it is no longer accurately true. It’s an easy cheap shot, but I don’t think it holds water like it once did. Where, “Oh, Googler. It’s another word for condescending.” And there is an element of “Oh, this is how DevOps should be; this is how we’re moving things forward.” How do you distance it from being, Google says you should do it like this?

Nathen: Yeah. This comes up a lot. And frankly, I get in conversations with customers asking, “How does Google do this? How does Google do that?” And my answer always is, “You know, I can tell you how Google does something, and that might be interesting, but the fact is, it’s not much more than that: interesting. Because what really matters is, how are you going to do this? How are you going to improve your outcomes, whether that’s delivering faster, delivering more reliably, or running more reliable services? You’re the experts. As I mentioned earlier, you’re the experts in your teams, in your technology, and your customers. So, I’m here to learn right along with you. How are you going to do this? How are you going to improve?” Knowing how Google does it, eh, it’s interesting, but it’s not the path that you will follow.

Corey: I think that’s one of those statements that can’t ever be outright stated on a marketing website somewhere; it’s one of those shifts that you have to live. And I think that Google’s done a pretty decent job of that. The condescending-Googler jokes are dated at this point, and it’s not because there was ever an ad campaign about, we’re not condescending anymore. It was a very subtle shift in the way that Google spoke to its customers, spoke about itself. I no longer feel the need to stand up in a blinding white rage in the Q&A portion of conference talks given by Google employees.

A lot has changed, and it’s not one thing that I can point to; it’s a bunch of different things that all add up to dramatically shifted credibility models. Realistically, I feel like that is a microcosm of a DevOps transformation. It’s not a tool; it’s not a single person being hired; it’s not, we’re taking an agile class for three days for all of engineering, and now things will be better. It’s a whole bunch of sustained work with a whole bunch of thought and effort put into making it an actual transformation, which is such a toxically overloaded term, I dare not use it.

Nathen: Indeed. And there’s no maturity model that shows, are you there yet? And it is something that you don’t flip on or flip off like a switch, right? It doesn’t happen overnight. It takes iteration and iterative change across the entire organization.

And just like every change that you have across an organization, there are places where it’s going better than other places. And how do you learn from that? I think that’s really, really important.
And to recognize and to bring some of that humility to the table is so important.

Corey: So, what’s interesting about folks that I talk to on this show—well, there are many interesting things, but one of the interesting things is that they have a higher rate than the general population of having, at one point in their careers, written a book of some form, and you are, of course, no exception. You and Emily Freeman recently co-authored a book entitled 97 Things Every Cloud Engineer Should Know. And it’s interesting because it only has one nine in the title. Okay, that is at least an attempt at being available. I know it’s available wherever most books are sold. Tell me more.

Nathen: Yeah, so first, let’s start with the 97. Why 97? Corey, I don’t know if you know this or not, but 100% is the wrong reliability target for just about everything. So, 97. That feels achievable.

Corey: It also feels like three people said they would do it and then backed out at the last minute, but that’s my cynicism speaking.

Nathen: Well, for better or worse, O’Reilly has a whole 97 Things series, and this is part of it. So, it is, in fact, 97 things. The other thing that I think is really important about the book: you mentioned that Emily and I wrote it, and the beauty is, for a long time, I’ve wanted to have written a book, and I have never wanted to be writing a book.

Corey: That is what every author has ever said. It’s, no one wants to write a book; they want to have written one. And then you get a couple of beers into people and ask them, “So, I’m debating writing a book. Should I?” The response is, “No. Absolutely not. No.” And at some point, when you calm them down again and they stop screaming, they tell you the horrifying stories, and you realize, “Oh, wow, I really never want to write a book.”

Nathen: [laugh]. Yes. Well, the beauty of 97 Things, and this book in particular, or the whole series, really, is its subtitle: Collective Wisdom from the Experts. So, in fact, we had over 80 different contributors sharing things that other cloud engineers should know. And I think this is also really, really important because having 80-plus contributors to this book gave us not 97 things that Emily and Nathen think every cloud engineer should know, but instead a wide variety of experience levels, a wide variety of perspectives, and so I think that is the thing that makes the book really powerful.

It also means that those 80-some folks that contributed to the book had to write a very short article. So, of course, with 80 authors and 97 things, the book is not—it doesn’t weigh 27 pounds, right? It’s less than 300 pages long, where you get these 97 tidbits. But really, the hope and the intent behind the book is to give you an idea about what you should explore deeper and, just as importantly, who are some people that you can, maybe, reach out to and talk to about a particular topic, a particular thing that a cloud engineer should know. Here are 80 people that are here, helping you and really cheering you on as you take this journey into cloud engineering.

Corey: I think there’s something to be said for having the stories of, this is what we do, this is how we do it. But the lessons-learned stories, those are the great ones, and it’s harder to get people on stage to talk about that without it turning into, “And that’s how we snagged victory from the jaws of defeat.” No one ever gets on stage and says, and that’s why the entire project was a boondoggle and four years later, we’re still struggling to recover.
Especially, you know, publicly traded companies tend not to say those things. But it’s true.

You wind up with people getting on stage and talking instead about these high-level amazing things that they’ve done, and the project went flawlessly, and you turn to the person next to you and say, “Yeah, I wish I could work in a place like that.” And they say, “Yeah, me too.” And you check, and they work at the same place as the presenter. Because it’s conference-ware; it’s never a real story. I’m hoping that these stories go a bit more in-depth into the nitty-gritty of what worked, what didn’t work, and it’s not always ‘author as hero protagonist.’

Nathen: Oh, you will definitely find that in this book. These are true stories. These are stories of pain, of heartache, of victory and success, and learnings along the way. Absolutely. And frankly, in the DevOps space, we do an okay job of talking openly about our failures.

We often talk about things that we tried that went wrong, or epic failures in our systems, and then how we recovered from them. And yes, oftentimes those stories have a great sort of storybook ending to them, but there’s a lot of truth in a lot of those stories as well because we all know that no organization is uniformly good at everything. That may be the stories that they want to share most, but, you know, there’s some truth in those stories that hopefully we can find. And certainly, in this book, you will find the good, the bad, the ugly, the learnings, and all of the lessons there.

Corey: Where can people find it if they want to buy it?

Nathen: Oh, you know, you can find it wherever you buy books. There are, of course, ebooks, O’Reilly’s website, you know, with the—

Corey: Wherever fine books are pirated. Yes, yes.

Nathen: That’s a good place to go for books, yeah. For sure.

Corey: And we will, of course, throw a link to the book in the [show notes 00:29:12]. Thank you so much for taking the time to speak with me. If people want to learn more about the rest of what you’re up to, how you’re thinking about it, what wise wisdom you have for the rest of us, where can they find you, other than the book?

Nathen: Yeah, a great place to reach out to me is on Twitter. I am at @nathenharvey. But I should warn you, my father misspelled my name. So, it’s N-A-T-H-E-N-H-A-R-V-E-Y. So, you can find me on Twitter; reach out to me there.

Corey: And we will, of course, include links to all of that in the [show notes 00:29:43] as well. Thank you so much for speaking to me today. I really appreciate it.

Nathen: Thank you, Corey. It’s been a pleasure.

Corey: Nathen Harvey, cloud developer advocate at Google. I’m Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment telling me that I’m completely wrong, and that I can instantly get DevOps in my environment if I only purchase whatever crap it is your company sells.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
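For readers who want a concrete feel for the DORA assessment discussed in the interview above, here is a minimal sketch of bucketing a team by the four key metrics. The cutoffs below are simplified approximations loosely based on thresholds published in past State of DevOps reports; the real Quick Check scores survey responses and derives its tiers from cluster analysis, not hard cutoffs like these.

```python
# A minimal, illustrative sketch of a DORA-style four-keys tier check.
# Thresholds are simplified assumptions, not the Quick Check's actual scoring.

from dataclasses import dataclass

@dataclass
class FourKeys:
    deploys_per_week: float        # deployment frequency
    lead_time_hours: float         # commit to running in production
    restore_time_hours: float      # time to restore service after an incident
    change_failure_rate: float     # fraction of changes causing failures, 0.0-1.0

def performance_tier(m: FourKeys) -> str:
    """Bucket a team into an illustrative low/medium/high/elite tier."""
    if (m.deploys_per_week >= 7 and m.lead_time_hours <= 24
            and m.restore_time_hours <= 1 and m.change_failure_rate <= 0.15):
        return "elite"
    if (m.deploys_per_week >= 1 and m.lead_time_hours <= 24 * 7
            and m.restore_time_hours <= 24 and m.change_failure_rate <= 0.15):
        return "high"
    if (m.deploys_per_week >= 0.25 and m.lead_time_hours <= 24 * 30
            and m.restore_time_hours <= 24 * 7 and m.change_failure_rate <= 0.30):
        return "medium"
    return "low"

print(performance_tier(FourKeys(10, 4, 0.5, 0.05)))      # -> "elite"
print(performance_tier(FourKeys(0.1, 24 * 45, 72, 0.5))) # -> "low"
```

The real value, as Nathen notes, is less the label than the capabilities the research points you toward improving; treat the tier as a conversation starter, not a grade.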

Page it to the Limit
97 Things With Nathen Harvey and Emily Freeman

Page it to the Limit

Play Episode Listen Later Feb 1, 2021 30:04


Emily Freeman and Nathen Harvey join Mandi to talk about their book *97 Things Every Cloud Engineer Should Know* and some HugOps.

emily freeman nathen harvey
Google Cloud Platform Podcast
DevOps with Nathen Harvey and Jez Humble

Google Cloud Platform Podcast

Play Episode Listen Later Nov 26, 2019 34:34


Happy Thanksgiving! This week, Aja and Brian are talking DevOps with Nathen Harvey and Jez Humble. Our guests thoroughly explain what DevOps is and why it’s important. DevOps purposely has no official definition but can be thought of as a community of practice that aims to make large-scale systems reliable and secure. It’s also a way to get developers and operations to work together to focus on the needs of the customer. Nathen later tells us all about DevOpsDays, a series of locally organized conferences occurring in cities around the world. The main goal is to bring a cross-functional group of people together to talk about how they can improve IT, DevOps, business strategy, and consider cultural changes the organization might benefit from. DevOpsDays supports this by only planning content for half the conference, then turning over the other half to attendees via Open Spaces. At this time, conference-goers are welcome to propose a topic and start a conversation. Jez then describes the Accelerate State of DevOps Report, how it came to be, and why it’s so useful. It includes items like building security into the software, testing continuously, ideal management practices, product development practices, and more. With the help of the DevOps Quick Check, you can discover the places your company could use some help and then refer back to the report for suggestions of improvements in those areas. Nathen Harvey Nathen Harvey helps the community understand and apply DevOps and SRE practices in the cloud. He is part of the global organizing committee for the DevOpsDays conferences and was a technical reviewer for the 2019 Accelerate State of DevOps Report. Jez Humble Jez Humble is co-author of several books on software including Shingo Publication Award winner “Accelerate” and Jolt Award winner “Continuous Delivery”. He has spent his career tinkering with code, infrastructure, and product development in companies of varying sizes across three continents. He works for Google Cloud as a technology advocate and teaches at UC Berkeley. Cool things of the week It’s a wrap: Key announcements from Next ‘19 UK blog Explainable AI site Hand-drawn Graphviz diagrams blog Add one line to plot in XKCD comic sketchy style site Interview DevOps insights from Google site DevOps Quick Check site DevOpsDays site Agile Alliance site Velocity Conference site DevOps Enterprise Summit site Question of the week Why do you need the Cloud SQL Proxy? Where can you find us next? DevOpsDays has events coming up across the globe, including Galway, Warsaw, Berlin, and Tel Aviv. Nathen and Jez will be at Delivery Conf. Aja will be home drinking tea! Brian will also be home drinking tea!
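On the question of the week above: the Cloud SQL Proxy exists so an application can reach a Cloud SQL instance through an IAM-authenticated, TLS-encrypted local tunnel instead of an exposed public IP with hand-managed certificates and allowlists. Here is a minimal sketch assuming a PostgreSQL instance; the connection name, database, and credentials are placeholders, and the proxy invocation shown in the comment is the classic v1 syntax (newer proxy releases use different flags).

```python
# Sketch: connect to Cloud SQL through the proxy's local tunnel.
# First, in another terminal (v1 proxy syntax; placeholder instance name):
#   ./cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:5432
# The proxy handles authentication and encryption to the instance itself.

import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect(
    host="127.0.0.1",   # the proxy listens locally...
    port=5432,          # ...on the port given to -instances above
    dbname="app_db",        # placeholder
    user="app_user",        # placeholder
    password="app_password" # placeholder
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
```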

Achieving DevOps
Nathen Harvey of Google

Achieving DevOps

Play Episode Listen Later Jul 23, 2019 55:43


We have a nice little sit-down with Nathen Harvey, formerly of Chef, and now at a little mom-and-pop startup called Google! Nathen is a heckuva guy and has me rolling around a few times; he brings a ton of enthusiasm and experience to the table. Join us as we talk about the role of configuration management and provisioning tools alongside containers, how to go about getting executive buy-in, and the power of a small success story with something called a DevOps Dojo. We couldn't agree more with Nathen - no more horse manure with DevOps!

The Food Fight Show
Food Fight Show - 126 - The Next Chapter

The Food Fight Show

Play Episode Listen Later Jun 11, 2019 32:46


Listen to your show hosts - Nathen Harvey and Nell Shamrell-Harrington - discuss our favorite memories of Food Fight and our podcasting future.

next chapter food fights fight show nathen harvey nell shamrell harrington
On-Call Nightmares Podcast
Episode 24 - Nathen Harvey - Google

On-Call Nightmares Podcast

Play Episode Listen Later May 23, 2019 38:02


Live from ChefConf 2019, I talk with Nathen Harvey about outages, lunch and a life spent in technology. This was one of my favorite podcast interviews because Nathen is one of my major influences and mentors in what we do in Developer Advocacy and Relations in technology. He's taught me so much over the years and has done his best to check in with me during the tough moments, like another member of the on-call team might do during a rough incident. Nathen Harvey, Cloud Developer Advocate at Google, helps the community understand and apply DevOps and SRE practices in the cloud. Nathen is a co-host of the Food Fight Show, a podcast about Chef and DevOps, and is part of the DevOps Days conferences global organizing committee. Nathen is part of the Google DevRel team and can be found at the following links: https://twitter.com/nathenharvey https://linkedin.com/in/nathen

Community Pulse
Big Company, Little Company (Ep 32)

Community Pulse

Play Episode Listen Later Jan 23, 2019 49:06


In this episode, our hosts discuss the similarities and differences of working in developer relations and advocacy in large organizations compared to small ones (or startups). Joining us in the conversation is Nathen Harvey of Google, Maureen McElaney from IBM, and Matt Asay from Adobe.

Software Defined Talk
Episode 144: GDPR, Observability, & more on the mystery of serverless, still with half-assed research

Software Defined Talk

Play Episode Listen Later Aug 17, 2018 65:43


“That was the problem: I was always Tech Matt.” The title says it all. Sponsored by Datadog This episode is sponsored by Datadog, and this week Datadog wants you to know about Trace Search & Analytics. Trace Search & Analytics allows you to explore, graph, and correlate application performance data using high-cardinality attributes. You can search and filter request traces using key business and application attributes, such as user IDs, host names, or product SKUs, so you can quickly pinpoint where performance issues are originating and who's being affected. Tight integration with data from logs and infrastructure metrics also lets you correlate these specific trace events to the performance of the underlying infrastructure so you can resolve the problem quickly. Sign up for a free trial (https://www.datadog.com/softwaredefinedtalk) today at https://www.datadog.com/softwaredefinedtalk. Relevant to your interests Observations on Observability (https://posts.google.com/bulletin/share/MXKJKfKL/E16dl-/) https://medium.com/@copyconstruct This GDPR madness has to stop (https://twitter.com/cote/status/1029254783700541441), son! AWS Announces General Availability of Amazon Aurora Serverless (https://www.businesswire.com/news/home/20180809005850/en/AWS-Announces-General-Availability-Amazon-Aurora-Serverless) State of the cloud: Amazon Web Services is bigger than its other four major competitors, combined (https://www.geekwire.com/2018/state-cloud-amazon-web-services-bigger-four-major-competitors-combined/) AWS Serverless Application Repository (https://aws.amazon.com/serverless/serverlessrepo/) Taking Tesla Private (https://www.tesla.com/blog/taking-tesla-private) CNCF Serverless Whitepaper v1.0 (https://github.com/cncf/wg-serverless/tree/master/whitepaper) - not too bad so far. Kelsey Hightower on Serverless… (https://twitter.com/kelseyhightower/status/1029483537840263168) but wait… Isn’t this just PaaS? A developer’s view (https://frontside.io/blog/2018/08/09/kubernetes-for-the-kubernewbie/) of getting up and running with Kubernetes. “Progressive delivery,” (https://launchdarkly.com/blog/progressive-delivery-a-history-condensed/) see also James Governor on the term (https://redmonk.com/jgovernor/2018/08/06/towards-progressive-delivery/). Nonsense It’s raining tacos (https://www.youtube.com/watch?v=A3YmHZ9HMPs) AWS icon quiz (https://docs.google.com/forms/d/e/1FAIpQLSdnEEo0o2JgnIt8VOGffhkcYj-C2h9m5_NFzM0Q1AU-P8d0zA/viewform) To fight the scourge of open offices, ROOM sells rooms (https://techcrunch.com/2018/08/15/room-phone-booths/) SoftBank May Invest up to $750 Million in Robotic Pizza Startup Zume (https://www.barrons.com/articles/softbank-may-invest-up-to-750-million-in-robotic-pizza-startup-zume-1533753812) “I took the cool out of Cool Rick.” (https://www.youtube.com/watch?v=S-Y_3Q5YnTc) Conferences, et al. Sep 24th to 27th - SpringOne Platform (https://springoneplatform.io/), in DC/Maryland (crabs!) get $200 off registration with the code S1P200_Cote. Also, check out the Spring One Tour - coming to a city near you (https://springonetour.io/)! DevOps Talks Sydney August 27-28 - John Willis, Nathen Harvey! (http://devopstalks.com/devops.html) DevOpsDays Berlin (https://www.devopsdays.org/events/2018-berlin/welcome/), September 12th to 13th. DevOpsDays Paris (https://www.devopsdays.org/events/2018-paris/welcome/), October 16th. Cloud Expo Asia October 10-11 (https://www.cloudexpoasia.com/cloud-asia-2018).
DevOps Days Singapore October 11-12 (https://www.devopsdays.org/events/2018-singapore/). DevOps Days Newcastle October 24-25 (https://devopsdaysnewy.org/). DevOps Days Wellington November 5-6 (https://www.devopsdays.org/events/2018-wellington/). Devoxx Belgium (https://devoxx.be/), Antwerp, November 12th to 16th. SpringOne Tour (https://springonetour.io/) - all over the earth! Listener Feedback Andy from Netflix sent me a LinkedIn request and got a sticker. SDT news & hype Join us in Slack (http://www.softwaredefinedtalk.com/slack). Subscribe to Software Defined Interviews Podcast (http://www.softwaredefinedinterviews.com/) Chris Donaldson on Automation (http://www.softwaredefinedinterviews.com/72) Matthew Brutsché on Amazon Go and Tech Marketing (http://www.softwaredefinedinterviews.com/71) Buy some t-shirts (https://fsgprints.myshopify.com/collections/software-defined-talk)! DISCOUNT CODE: SDTFSG (40% off) Send your name and address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you a sticker. Brandon built the Quick Concall iPhone App (https://itunes.apple.com/us/app/quick-concall/id1399948033?mt=8) and he wants you to buy it for $0.99. Recommendations Brandon: Westworld (https://www.hbo.com/westworld). Matt: Dune fandom (https://theoutline.com/post/5333/dune-revival-2018-david-lynch). As heard on Song Exploder (http://songexploder.net/jon-hopkins): Jon Hopkins Singularity (https://pitchfork.com/reviews/albums/jon-hopkins-singularity/). The Red Atlas (https://www.amazon.com/gp/product/022638957X/). Coté: Hemingway app (http://www.hemingwayapp.com/). Slate Podcast Plus (http://www.slate.com/plus/home). PodCTL on registries (https://blog.openshift.com/podcast-podctl-45-container-registries/).

Software Defined Talk
Episode 143: Serverless now just means “programming”

Software Defined Talk

Play Episode Listen Later Aug 10, 2018 59:43


After some rumination, Coté thinks that the people backing “serverless” are just wangling to make it mean “doing programming with containers on clouds.” That is, just programming. At some point, it meant an event-based system hosted in public clouds (AWS Lambda). Also, we discuss Cisco buying Duo, potential EBITDA problems from Broadcom buying CA, and robot pizza. Of course, with Coté having just moved to Amsterdam, there’s some Amsterdam talk. Sponsored by Datadog This episode is sponsored by Datadog, and this week Datadog wants you to know about Watchdog. Watchdog automatically detects performance problems in your applications without any manual setup or configuration. By continuously examining application performance data, it identifies anomalies, like a sudden spike in hit rate, that could otherwise have remained invisible. Once an anomaly is detected, Watchdog provides you with all the relevant information you need to get to the root cause faster, such as stack traces, error messages, and related issues from the same timeframe. Sign up for a free trial (https://www.datadog.com/softwaredefinedtalk) today at https://www.datadog.com/softwaredefinedtalk. Relevant to your interests Everyone’s favorite Outlook feature, now in G Suite (https://techcrunch.com/2018/07/30/google-calendar-makes-rescheduling-meetings-easier/). Do we know what “serverless” is yet? Someone named that got some funding (https://techcrunch.com/2018/07/30/serverless-inc-lands-10-m-series-a-to-build-serverless-developers-platform/). Related, Istio 1.0 (https://www.theregister.co.uk/2018/07/31/istio_sets_sail_as_red_hat_renovates_openshift_container_ship/): “It is aiming to be a control plane, similar to the Kubernetes control plane, for configuring a series of proxy servers that get injected between application components. It will actually look at HTTP response codes and if an app component starts throwing more than a number of 500 errors, it can redirect the traffic.” MUST BE THIS HIGH TO RIDE (https://k1k1chan.com/post/590832918/do-not-want)! Follow-up: Brenon at 451 says (https://blogs.the451group.com/techdeals/ma/broadcom-cant-get-there-from-here/) Broadcom is gonna have to sell off some stuff to make its margin targets. The mainframe profits are too high, while distributed is low enough to throw the margins out of whack. So, sell off distributed to Micro Focus? To PE BMC? Or a bad analysis. Austin Regional Clinic is in Apple Health records. Pretty nifty that it sucks them all in...sort of. Robots make your pizza (https://www.barrons.com/articles/softbank-may-invest-up-to-750-million-in-robotic-pizza-startup-zume-1533753812). Featured in that OKR book. For real. AWS: still makes lots of money, market-leader by revenue (https://www.geekwire.com/2018/state-cloud-amazon-web-services-bigger-four-major-competitors-combined/). See also Gartner on the topic (https://www.gartner.com/newsroom/id/3884500): “The worldwide infrastructure as a service (IaaS) market grew 29.5 percent in 2017 to total $23.5 billion, up from $18.2 billion in 2016, according to Gartner, Inc. Amazon was the No. 1 vendor in the IaaS market in 2017, followed by Microsoft, Alibaba, Google and IBM.” Gartner estimates that AWS is ~4 times as big as the next, in 2017. Tibco might be sold off (https://www.bloomberg.com/news/articles/2018-08-03/vista-equity-is-said-to-weigh-sale-of-software-maker-tibco): “Vista took Tibco private in 2014 in a deal valued at about $4.3 billion including debt.
The company, based in Palo Alto, California, makes software that clients use to collect and analyze data in industries from banking to transportation. It currently has about $2.9 billion of debt, according to data compiled by Bloomberg.” Cisco Announces Intent to Acquire Duo Security, $2.35bn (https://duo.com/about/press/releases/cisco-announces-intent-to-acquire-duo-security). What’s this ABN e.dentifier thing (https://nl.wikipedia.org/wiki/E.dentifier)? Apprenda shuts down (https://www.timesunion.com/business/article/Troy-based-Apprenda-stopping-operations-investor-13111235.php). SASSY (https://www.networkworld.com/article/2848762/cloud-computing/hitting-them-where-they-work.html)! Conferences, et al. Sep 24th to 27th - SpringOne Platform (https://springoneplatform.io/), in DC/Maryland (crabs!) get $200 off registration with the code S1P200_Cote. Also, check out the Spring One Tour - coming to a city near you (https://springonetour.io/)! DevOps Talks Sydney August 27-28 - John Willis, Nathen Harvey! (http://devopstalks.com/devops.html) Cloud Expo Asia October 10-11 (https://www.cloudexpoasia.com/cloud-asia-2018) DevOps Days Singapore October 11-12 (https://www.devopsdays.org/events/2018-singapore/) DevOps Days Newcastle October 24-25 (https://devopsdaysnewy.org/) DevOps Days Wellington November 5-6 (https://www.devopsdays.org/events/2018-wellington/) Listener Feedback Lindsay from London got a sticker and tells us: “Really enjoy the podcast, just the right level of humour, sarcasm and facts for a cynical Brit like me.” SDT news & hype Join us in Slack (http://www.softwaredefinedtalk.com/slack). Buy some t-shirts (https://fsgprints.myshopify.com/collections/software-defined-talk)! DISCOUNT CODE: SDTFSG (40% off) Send your name and address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you a sticker. Brandon built the Quick Concall iPhone App (https://itunes.apple.com/us/app/quick-concall/id1399948033?mt=8) and he wants you to buy it for $0.99. Recommendations Brandon: Masters of Doom (https://www.audible.com/pd/Bios-Memoirs/Masters-of-Doom-Audiobook/B008K8BQG6?qid=1533849060&sr=sr_1_3&ref=a_search_c3_lProduct_1_3&pf_rd_p=e81b7c27-6880-467a-b5a7-13cef5d729fe&pf_rd_r=754XS4GQGN71K8XCBNWW&). Matt: Deadpool 2 (https://www.imdb.com/title/tt5463162/). If you liked the first, you’ll like the second. Coté: 1980’s Action Figure tumblr (https://1980sactionfigures.tumblr.com/) - now that I have fast Internet, tumblr is workable. Mask, Cops, sweet Dune figures (https://www.networkworld.com/article/2848762/cloud-computing/hitting-them-where-they-work.html), generic GI Joe figures. Dutch Internet (https://www.ziggo.nl/alles-in-1/max/), son! SHIT DOG!

All Ruby Podcasts by Devchat.tv
RR 348: Continuous Automation - Chef, InSpec, and Habitat with Nathen Harvey and Nell Shamrell-Harrington

All Ruby Podcasts by Devchat.tv

Play Episode Listen Later Feb 6, 2018 61:07


Panel: Dave Kimura Eric Berry David Richards Special Guest: Nathen Harvey and Nell Shamrell-Harrington In this episode, the Ruby Rogues panelists speak with Nathen Harvey and Nell Shamrell-Harrington. Nell is the Senior Software Development Engineer at Chef and the CTO at Operation Code. Nathen is the VP Community at Chef. The topic of discussion is Chef. Chef is a platform that enables teams to collaborate, share, and automate everything. In particular, we dive pretty deep on: What is DevOps? A cultural and professional movement, focused on how we build and operate high-velocity organizations, born from the experiences of its practitioners. Chef Automate - the platform that enables teams to collaborate, share, and automate everything. Cultural and Professional Continuous Automation - Chef, InSpec, Habitat 3 Main Focuses: Infrastructure Automation, Compliance Automation, Application Automation Istanbul, AWS Cloud, etc. AWS Elastic Beanstalk Chef works best at “Massive Scale” Where Chef shines! Tests More on compliance InSpec Things to do at the minimum? Talks about infrastructure issues at Knight Capital Habitat - Application Automation, Build, deploy, run any application, anywhere. If you hate DevOps? Chef Community - Slack The best way to learn about each of these - https://learn.chef.io/#/ and much much more. Links:  https://www.linkedin.com/in/nathen Chef - Infrastructure Automation, Infrastructure as Code - https://www.chef.io/chef/ InSpec - Compliance Automation, testing framework for infrastructure - https://www.inspec.io/ In-browser tutorial - https://www.inspec.io/tutorial https://www.habitat.sh/  Tutorials - https://www.habitat.sh/learn/ https://www.linkedin.com/in/nellshamrell https://blog.chef.io/author/nshamrell/ @NellShamrell @NathenHarvey Picks: David Zat Rana - https://medium.com/personal-growth/how-ernest-hemingway-became-an-overnight-success-3277b482c39c Eric Operation Code  Code Sponsor is Back! Dave Kreg Pocket Jig Chuck AirPods  Nell Blue Pearl Animal Clinic Darkest Hour Nathen DevOpsDays  ChefConf.com The Food Fight Show Podcast
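For a taste of the compliance-as-code idea behind the InSpec discussion in this episode, here is a minimal sketch using Python's pytest-testinfra plugin rather than InSpec's own Ruby DSL; the specific checks (nginx present, root SSH login disabled) are illustrative assumptions, not examples from the show.

```python
# test_compliance.py -- run with: pytest test_compliance.py
# Or point it at a remote machine: pytest --hosts='ssh://user@server' test_compliance.py
# A sketch of compliance-as-code in the spirit of InSpec, using pytest-testinfra.

def test_nginx_installed_and_running(host):
    # The `host` fixture is provided by the pytest-testinfra plugin.
    assert host.package("nginx").is_installed
    assert host.service("nginx").is_running
    assert host.service("nginx").is_enabled

def test_root_ssh_login_disabled(host):
    sshd = host.file("/etc/ssh/sshd_config")
    assert sshd.exists
    assert sshd.contains("PermitRootLogin no")
```

As with InSpec, the point is that the compliance policy becomes a versioned, repeatable test suite rather than a checklist someone runs by hand.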

Ruby Rogues
RR 348: Continuous Automation - Chef, InSpec, and Habitat with Nathen Harvey and Nell Shamrell-Harrington

Ruby Rogues

Play Episode Listen Later Feb 6, 2018 61:07


Panel: Dave Kimura Eric Berry David Richards Special Guest: Nathen Harvey and Nell Shamrell-Harrington In this episode, the Ruby Rogues panelists speak with Nathen Harvey and Nell Shamrell-Harrington. Nell is the Senior Software Development Engineer at Chef and the CTO at Operation Code. Nathen is the VP Community at Chef. The topic of discussion is Chef. Chef is a platform that enables teams to collaborate, share, and automate everything. In particular, we dive pretty deep on: What is DevOps? A cultural and professional movement, focused on how we build and operate high-velocity organizations, born from the experiences of its practitioners. Chef Automate - the platform that enables teams to collaborate, share, and automate everything. Cultural and Professional Continuous Automation - Chef, InSpec, Habitat 3 Main Focuses: Infrastructure Automation, Compliance Automation, Application Automation Istanbul, AWS Cloud, etc. AWS Elastic Beanstalk Chef works best at “Massive Scale” Where Chef shines! Tests More on compliance InSpec Things to do at the minimum? Talks about infrastructure issues at Knight Capital Habitat - Application Automation, Build, deploy, run any application, anywhere. If you hate DevOps? Chef Community - Slack The best way to learn about each of these - https://learn.chef.io/#/ and much much more. Links:  https://www.linkedin.com/in/nathen Chef - Infrastructure Automation, Infrastructure as Code - https://www.chef.io/chef/ InSpec - Compliance Automation, testing framework for infrastructure - https://www.inspec.io/ In-browser tutorial - https://www.inspec.io/tutorial https://www.habitat.sh/  Tutorials - https://www.habitat.sh/learn/ https://www.linkedin.com/in/nellshamrell https://blog.chef.io/author/nshamrell/ @NellShamrell @NathenHarvey Picks: David Zat Rana - https://medium.com/personal-growth/how-ernest-hemingway-became-an-overnight-success-3277b482c39c Eric Operation Code  Code Sponsor is Back! Dave Kreg Pocket Jig Chuck AirPods  Nell Blue Pearl Animal Clinic Darkest Hour Nathen DevOpsDays  ChefConf.com The Food Fight Show Podcast

Devchat.tv Master Feed
RR 348: Continuous Automation - Chef, InSpec, and Habitat with Nathen Harvey and Nell Shamrell-Harrington

Devchat.tv Master Feed

Play Episode Listen Later Feb 6, 2018 61:07


Panel: Dave Kimura Eric Berry David Richards Special Guest: Nathen Harvey and Nell Shamrell-Harrington In this episode, the Ruby Rogues panelists speak with Nathen Harvey and Nell Shamrell-Harrington. Nell is the Senior Software Development Engineer at Chef and the CTO at Operation Code. Nathen is the VP Community at Chef. The topic of discussion is Chef. Chef is a platform that enables teams to collaborate, share, and automate everything. In particular, we dive pretty deep on: What is DevOps? A cultural and professional movement, focused on how we build and operate high-velocity organizations, born from the experiences of its practitioners. Chef Automate - the platform that enables teams to collaborate, share, and automate everything. Cultural and Professional Continuous Automation - Chef, InSpec, Habitat 3 Main Focuses: Infrastructure Automation, Compliance Automation, Application Automation Istanbul, AWS Cloud, etc. AWS Elastic Beanstalk Chef works best at “Massive Scale” Where Chef shines! Tests More on compliance InSpec Things to do at the minimum? Talks about infrastructure issues at Knight Capital Habitat - Application Automation, Build, deploy, run any application, anywhere. If you hate DevOps? Chef Community - Slack The best way to learn about each of these - https://learn.chef.io/#/ and much much more. Links:  https://www.linkedin.com/in/nathen Chef - Infrastructure Automation, Infrastructure as Code - https://www.chef.io/chef/ InSpec - Compliance Automation, testing framework for infrastructure - https://www.inspec.io/ In-browser tutorial - https://www.inspec.io/tutorial https://www.habitat.sh/  Tutorials - https://www.habitat.sh/learn/ https://www.linkedin.com/in/nellshamrell https://blog.chef.io/author/nshamrell/ @NellShamrell @NathenHarvey Picks: David Zat Rana - https://medium.com/personal-growth/how-ernest-hemingway-became-an-overnight-success-3277b482c39c Eric Operation Code  Code Sponsor is Back! Dave Kreg Pocket Jig Chuck AirPods  Nell Blue Pearl Animal Clinic Darkest Hour Nathen DevOpsDays  ChefConf.com The Food Fight Show Podcast

The Web Platform Podcast
96: DevOps & Chef

The Web Platform Podcast

Play Episode Listen Later Jul 7, 2016 52:47


Nathen Harvey (@nathenharvey), VP of Community Development at Chef Software, joins us to discuss modern devops culture, tools, and practices, as well as how Chef Software can help teams automate, scale, and reproduce tasks and environments. Nathen defined devops as how to build high-velocity organizations by reducing build and deployment cycles. Topics include how to manage your infrastructure like code, the devops community, Chef cookbooks and recipes, and improving your devops knowledge and processes as a web developer.   Resources Chef & Habitat http://www.chef.io - Main Website http://learn.chef.io - Tutorials and such for getting started with Chef & DevOps DevOps Kung Fu - A talk from ChefConf 2015 that describes and defines the Principles, Forms, and Application of DevOps.  It's also where I get my definition of DevOps from: https://www.youtube.com/watch?v=_DEToXsgrPc - YouTube video of the talk https://github.com/chef/devops-kungfu - GitHub repository of the talk http://chef.github.io/devops-kungfu/#/ - slides from the talk http://chef.github.io/devops-kungfu/#/15 - definition of DevOps - A cultural and professional movement, focused on how we build and operate high velocity organizations, born from the experiences of its practitioners. Open source Chef maintenance policy - https://github.com/chef/chef-rfc/blob/master/rfc030-maintenance-policy.md https://www.habitat.sh/ - Habitat website which includes an overview of Habitat, online demo, and tutorials Podcasts Food Fight Show - http://foodfightshow.org Arrested DevOps - https://www.arresteddevops.com/ DevOps Cafe - http://devopscafe.org/ The Ship Show - http://theshipshow.com/   Conferences DevOpsDays - http://www.devopsdays.org/ ChefConf - https://chefconf.chef.io/ Surge - https://surge.omniti.com/2016 Velocity - http://conferences.oreilly.com/velocity DevOps Enterprise Summit - http://events.itrevolution.com/ Puppet - https://github.com/puppetlabs/puppet

DevOps Chat
DevOps Chat with Nathen Harvey, Chef on security & compliance scanning

DevOps Chat

Play Episode Listen Later Jun 2, 2016 15:55


DevOps.com editor-in-chief Alan Shimel sits down with Nathen Harvey, VP of Community Development at Chef. Nathen gave us his insight into the importance of security and compliance scanning while code is still on the developer's workstation. Security and compliance are everyone's responsibility, and the earlier in the process they are done, the easier they are. Chef is playing a leading role in this mission, and Nathen tells us a little bit about what they are doing. You can find more at http://www.chef.io/compliance.

Arrested DevOps
Creating DevOps Communities and Events With Andy Burgin, Dustin Collins, and Nathen Harvey

Arrested DevOps

Play Episode Listen Later Oct 14, 2015


Andy Burgin, Dustin Collins, and Nathen Harvey join Matt and Trevor to talk about what it takes to create DevOps events, Meetups, and communities.

events communities devops meetups burgin nathen harvey dustin collins
The Cloudcast
The Cloudcast #208 - Infrastructure as Code

The Cloudcast

Play Episode Listen Later Aug 13, 2015 37:14


Brian talks with Nathen Harvey (@nathenharvey, Community Manager @chef) about how he became a Community Manager, his passion for DevOps, the Food Fight Show podcast, the future of configuration management, and the best first steps to developing the skills to build infrastructure-as-code at your company. Interested in the Tech Reckoning? Our friend John Troyer (@jtroyer) does an outstanding job building communities. He's hosting an awesome event in Half Moon Bay, CA on Sept. 13-14 for IT professionals and leaders that are shaping the future of the industry. You don't want to miss this one! Sign up here! http://signup.techreckoning.com Save $100 on registration by using code "cloudcast" Sign up for the weekly newsletter - http://techreckoning.com/ Links from the show: Chef Homepage Learn Chef Nathen's Blog Nathen on GitHub The Food Fight Show (podcast) Topic 1 - Tell us about your background and how you evolved into doing Community Management - and what does community management mean for a mix of open source and commercial “stuff”? Topic 2 - We listen to Michael Ducy’s Goat Farm podcast, and hear a number of people from Chef speak at various events. It feels like what Chef is focused on is more about hands-on cultural change than technology. Is that a fair assessment of how it’s evolving? Topic 3 - I heard you speak recently at a Triangle DevOps event about Infrastructure-as-Code, which is a big concept, but it’s grounded in actual technology. But that topic always gets wrapped up in DevOps and all these other analogies (Unicorns, Goats, etc.). Does that get old for you, or is it just the nature of working on stuff that’s trying to change 20 years of previous habits and culture? Topic 4 - Let’s talk about config management. There’s this new belief/buzz that maybe Docker eliminates the need for previous config-mgmt systems. Why are we hearing that discussion, and what are the broader realities of config-mgmt (and Infrastructure as Code)? Topic 5 - I feel like we have a big problem brewing, if this Cloud Native apps (Microservices, 12-Factor, etc.) stuff takes off, because a lot of the principles of DevOps are so foreign in today’s Ops teams. What do you recommend to people to get to learning and doing things “the right way” more quickly?
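Since the episode keeps circling back to what configuration management actually buys you, here is a toy sketch of the declare-desired-state-and-converge loop at the heart of tools like Chef. This is illustrative Python, not Chef's API; a real resource would also handle ownership, templates, notifications, and much more.

```python
# A toy illustration of the idempotent "declare desired state, converge"
# model behind config-management tools: safe to run any number of times,
# and it only touches the system when it drifts from the declared state.

import os

def ensure_file(path: str, content: str, mode: int = 0o644) -> bool:
    """Converge `path` to the desired content and mode; return True if changed."""
    changed = False
    current = None
    if os.path.exists(path):
        with open(path) as f:
            current = f.read()
    if current != content:
        with open(path, "w") as f:
            f.write(content)
        changed = True
    if os.stat(path).st_mode & 0o777 != mode:
        os.chmod(path, mode)
        changed = True
    return changed

# Running twice: the first call converges, the second is a no-op.
print(ensure_file("/tmp/motd", "welcome\n"))  # True (changed)
print(ensure_file("/tmp/motd", "welcome\n"))  # False (already converged)
```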

Arrested DevOps
How to Eff Up Devops

Arrested DevOps

Play Episode Listen Later Jul 7, 2014


DevOps 'Thought Leaders' Pete Cheslock, Nathen Harvey, and Randi Harper help us understand all the things you can do wrong when 'doing the DevOps'.

thought leaders devops nathen harvey pete cheslock
All Ruby Podcasts by Devchat.tv
113 RR DevOps with Nathen Harvey

All Ruby Podcasts by Devchat.tv

Play Episode Listen Later Jul 10, 2013 71:33


In this episode, the Rogues talk about DevOps with Nathen Harvey of Chef.

chefs devops rogues nathen harvey
Devchat.tv Master Feed
113 RR DevOps with Nathen Harvey

Devchat.tv Master Feed

Play Episode Listen Later Jul 10, 2013 71:33


In this episode, the Rogues talk about DevOps with Nathen Harvey of Chef.

chefs devops rogues nathen harvey
Ruby Rogues
113 RR DevOps with Nathen Harvey

Ruby Rogues

Play Episode Listen Later Jul 10, 2013 71:33


In this episode, the Rogues talk about DevOps with Nathen Harvey of Chef.

chefs devops rogues nathen harvey