Many engineering leaders are eager to embrace AI, but few have a clear way to measure what's working. In this episode, we dig into how teams can track AI adoption, assess the real impact of AI tools and agents, and make smarter decisions about where to invest. You'll hear from Abi Noda, CEO and Co-founder at DX, and Chris Westerhold, Global Practice Director at Thoughtworks, as they share how leading engineering orgs are turning AI hype into measurable value.

Host: Kimberly Boyd
Guests:
• Abi Noda, CEO & Co-founder at DX
• Chris Westerhold, Global Practice Director at Thoughtworks
In this episode of Engineering Enablement, host Laura Tacho talks with Bruno Passos, Product Lead for Developer Experience at Booking.com, about how the company is rolling out AI tools across a 3,000-person engineering team. Bruno shares how Booking.com set ambitious innovation goals, why cultural change mattered as much as technology, and the education practices that turned hesitant developers into daily users. He also reflects on the early barriers, from low adoption and knowledge gaps to procurement hurdles, and explains the interventions that worked, including learning paths, hackathon-style workshops, Slack communities, and centralized procurement. The result is that Booking.com now sits in the top 25 percent of companies for AI adoption.

Where to find Bruno Passos:
• LinkedIn: https://www.linkedin.com/in/brpassos/
• X: https://x.com/brunopassos

Where to find Laura Tacho:
• LinkedIn: https://www.linkedin.com/in/lauratacho/
• X: https://x.com/rhein_wein
• Website: https://lauratacho.com/
• Laura's course (Measuring Engineering Performance and AI Impact): https://lauratacho.com/developer-productivity-metrics-course

In this episode, we cover:
(00:00) Intro
(01:09) Bruno's role at Booking.com and an overview of the business
(02:19) Booking.com's goals when introducing AI tooling
(03:26) Why Booking.com made such an ambitious innovation ratio goal
(06:46) The beginning of Booking.com's journey with AI
(08:54) Why the initial adoption of Cody was low
(13:17) How education and enablement fueled adoption
(15:48) The importance of a top-down cultural change for AI adoption
(17:38) The ongoing journey of determining the right metrics
(21:44) Measuring the longer-term impact of AI
(27:04) How Booking.com solved internal bottlenecks to testing new tools
(32:10) Booking.com's framework for evaluating new tools
(35:50) The state of adoption at Booking.com and efforts to expand AI use
(37:07) What's still undetermined about AI's impact on PR/MR quality
(39:48) How Booking.com is addressing lagging adoption and monitoring churn
(43:24) How Booking.com's Slack community lowers friction for questions and support
(44:35) Closing thoughts on what's next for Booking.com's AI plan

Referenced:
• Measuring AI code assistants and agents
• DX Core 4 Framework
• Booking.com
• Sourcegraph Search
• Cody | AI coding assistant from Sourcegraph
• Greyson Junggren - DX | LinkedIn
In this episode of Engineering Enablement, DX CTO Laura Tacho and CEO Abi Noda break down how to measure developer productivity in the age of AI using DX's AI Measurement Framework. Drawing on research with industry leaders, vendors, and hundreds of organizations, they explain how to move beyond vendor hype and headlines to make data-driven decisions about AI adoption. They cover why some fundamentals of productivity measurement remain constant, the pitfalls of over-relying on flawed metrics like acceptance rate, and how to track AI's real impact across utilization, quality, and cost. The conversation also explores measuring agentic workflows, expanding the definition of “developer” to include new AI-enabled contributors, and avoiding second-order effects like technical debt and slowed PR throughput. Whether you're rolling out AI coding tools, experimenting with autonomous agents, or just trying to separate signal from noise, this episode offers a practical roadmap for understanding AI's role in your organization—and ensuring it delivers sustainable, long-term gains.

Where to find Laura Tacho:
• X: https://x.com/rhein_wein
• LinkedIn: https://www.linkedin.com/in/lauratacho/
• Website: https://lauratacho.com/

Where to find Abi Noda:
• LinkedIn: https://www.linkedin.com/in/abinoda
• Substack: https://substack.com/@abinoda

In this episode, we cover:
(00:00) Intro
(01:26) The challenge of measuring developer productivity in the AI age
(04:17) Measuring productivity in the AI era — what stays the same and what changes
(07:25) How to use DX's AI Measurement Framework
(13:10) Measuring AI's true impact from adoption rates to long-term quality and maintainability
(16:31) Why acceptance rate is flawed — and DX's approach to tracking AI-authored code
(18:25) Three ways to gather measurement data
(21:55) How Google measures time savings and why self-reported data is misleading
(24:25) How to measure agentic workflows and a case for expanding the definition of developer
(28:50) A case for not overemphasizing AI's role
(30:31) Measuring second-order effects
(32:26) Audience Q&A: applying metrics in practice
(36:45) Wrap up: best practices for rollout and communication

Referenced:
• DX Core 4 Productivity Framework
• Measuring AI code assistants and agents
• AI is making Google engineers 10% more productive, says Sundar Pichai - Business Insider
In this special episode of the Engineering Enablement podcast, recorded live at LeadDev London, DX CTO Laura Tacho explores the growing gap between AI headlines and the reality inside engineering teams—and what leaders can do to close it. Laura shares data from nearly 39,000 developers across 184 companies, highlights the Core 4, introduces the AI Measurement Framework, and offers a practical playbook for using data to improve developer experience, measure AI's true impact, and build better software without compromising long-term performance.

Where to find Laura Tacho:
• X: https://x.com/rhein_wein
• LinkedIn: https://www.linkedin.com/in/lauratacho/
• Website: https://lauratacho.com/

In this episode, we cover:
(00:00) Intro: Laura's keynote from LDX3
(01:44) The problem with asking "how much faster can we go with AI?"
(03:02) How the disappointment gap creates barriers to AI adoption
(06:20) What AI adoption looks like at top-performing organizations
(07:53) What leaders must do to turn AI into meaningful impact
(10:50) Why building better software with AI still depends on fundamentals
(12:03) An overview of the DX Core 4 Framework
(13:22) Why developer experience is the biggest performance lever
(15:12) How Block used Core 4 and DXI to identify 500,000 hours in time savings
(16:08) How to get started with Core 4
(17:32) Measuring AI with the AI Measurement Framework
(21:45) Final takeaways and how to get started with confidence

Referenced:
• LDX3 by LeadDev | The Festival of Software Engineering Leadership | London
• Software engineering with LLMs in 2025: reality check
• SPACE framework, PRs per engineer, AI research
• The AI adoption playbook: Lessons from Microsoft's internal strategy
• DX Core 4 Productivity Framework
• Nicole Forsgren
• Margaret-Anne Storey
• Dropbox.com
• Etsy
• Pfizer
• Drew Houston - Dropbox | LinkedIn
• Block
• Cursor
• Dora.dev
• Sourcegraph
• Booking.com
In this episode of the Engineering Enablement podcast, host Abi Noda is joined by Quentin Anthony, Head of Model Training at Zyphra and a contributor at EleutherAI. Quentin participated in METR's recent study on AI coding tools, which revealed that developers often slowed down when using AI—despite feeling more productive. He and Abi unpack the unexpected results of the study, which tasks AI tools actually help with, and how engineering teams can adopt them more effectively by focusing on task-level fit and developing better digital hygiene.

Where to find Quentin Anthony:
• LinkedIn: https://www.linkedin.com/in/quentin-anthony/
• X: https://x.com/QuentinAnthon15

Where to find Abi Noda:
• LinkedIn: https://www.linkedin.com/in/abinoda

In this episode, we cover:
(00:00) Intro
(01:32) A brief overview of Quentin's background and current work
(02:05) An explanation of METR and the study Quentin participated in
(11:02) Surprising results of the METR study
(12:47) Quentin's takeaways from the study's results
(16:30) How developers can avoid bloated code bases through self-reflection
(19:31) Signs that you're not making progress with a model
(21:25) What is “context rot”?
(23:04) Advice for combating context rot
(25:34) How to make the most of your idle time as a developer
(28:13) Developer hygiene: the case for selectively using AI tools
(33:28) How to interact effectively with new models
(35:28) Why organizations should focus on tasks that AI handles well
(38:01) Where AI fits in the software development lifecycle
(39:40) How to approach testing with models
(40:31) What makes models different
(42:05) Quentin's thoughts on agents

Referenced:
• DX Core 4 Productivity Framework
• Zyphra
• EleutherAI
• METR
• Cursor
• Claude
• LibreChat
• Google Gemini
• Introducing OpenAI o3 and o4-mini
• METR's study on how AI affects developer productivity
• Quentin Anthony on X: "I was one of the 16 devs in this study."
• Context rot from Hacker News
• Tracing the thoughts of a large language model
• Kimi
• Grok 4 | xAI
This week, we discuss Windsurf being acquired (again), how much AI agents really help with coding, and AWS launching Kiro. Plus, Matt attempts to design a new closet.

Watch the YouTube Live Recording of Episode 529 (https://www.youtube.com/live/pgKJWAsku44?si=lGPcDqWYHBlbObMh)

Runner-up Titles
"The backlog will always be with you."
The things I dream about…
Closet debt
It's about the money
They call it business for a reason
The children are the future
The best and brightest are going to be making better advertising
Ghost Developers are Architects
TaskRabbit LLM
Crabs in a pot
You're on a temporary plateau on your way to Product Management

Rundown
Windsurf and Coding Agents
Is Cursor having their Docker moment? (https://www.thecloudcast.net/2025/07/is-cursor-having-their-docker-moment.html)
OpenAI's Windsurf deal is off — and Windsurf's CEO is going to Google (https://www.theverge.com/openai/705999/google-windsurf-ceo-openai)
Windsurf's CEO goes to Google; OpenAI's acquisition falls apart (https://techcrunch.com/2025/07/11/windsurfs-ceo-goes-to-google-openais-acquisition-falls-apart/)
Cognition, maker of the AI coding agent Devin, acquires Windsurf (https://techcrunch.com/2025/07/14/cognition-maker-of-the-ai-coding-agent-devin-acquires-windsurf/)
It's about the research. Ignore the money. (https://www.threads.com/@theinformation/post/DMGueMrvzTn?xmt=AQF0WtC9LAsGMA1nuFM0j9mjJf0eMGdA47zkkL1Z1oN67Q)
How much does AI impact development speed? (https://newsletter.getdx.com/p/how-much-does-ai-impact-development-speed)
Measuring the actual impact of AI coding with Abi Noda, CEO of DX (https://changelog.com/friends/101)
Introducing Kiro (https://kiro.dev/blog/introducing-kiro/?ck_subscriber_id=512840665&utm_source=convertkit&utm_medium=email&utm_campaign=[Last%20Week%20in%20AWS]%20Issue%20#431:%20AWS%20Sovereign%20Cloud,%20Now%20With%20Slightly%20More%20Pretend%20Sovereignty%20-%2018285791)

Relevant to your Interests
How much money has Linda Yaccarino made as X's CEO during her tenure? (https://x.com/i/grok/share/XVRnINH6W2aEbaQ1mhJ6CAD2W)
NLWeb: Tapping MCP for natural language web search (https://www.infoworld.com/article/4019814/nlweb-tapping-mcp-for-natural-language-web-search.html)
Introducing Comet: Browse at the speed of thought (https://www.perplexity.ai/hub/blog/introducing-comet)
AWS reportedly set to launch agentic AI marketplace with Anthropic next week (https://siliconangle.com/2025/07/10/report-aws-set-launch-agentic-ai-marketplace-anthropic-next-week/?utm_source=aitangle.com&utm_medium=newsletter&utm_campaign=xai-s-grok-4-makes-its-grand-debut&_bhlid=72efd0cf0dd44bda8e27006d78fe69b861fa58a3)
New Smart Glasses Block Out All Real-Life Advertising (https://futurism.com/new-smart-glasses-block-advertising)
Exclusive | SpaceX to Invest $2 Billion Into Elon Musk's xAI (https://www.wsj.com/tech/spacex-to-invest-2-billion-into-elon-musks-xai-413934de?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axiosprorata&stream=top)
Figma's $300k Daily AWS Bill Isn't the Scandal You Think It Is (https://www.duckbillgroup.com/blog/figmas-300k-daily-aws-bill-isnt-the-scandal-you-think-it-is/?ck_subscriber_id=512840665&utm_source=convertkit&utm_medium=email&utm_campaign=[Last%20Week%20in%20AWS]%20Issue%20#431:%20AWS%20Sovereign%20Cloud,%20Now%20With%20Slightly%20More%20Pretend%20Sovereignty%20-%2018285791)
DraftGPT: The Brave New World of AI Hits the NBA (https://www.theringer.com/2025/06/24/nba-draft/nba-draft-artificial-intelligence)
Mira Murati's Thinking Machines Lab is worth $12B in seed round (https://techcrunch.com/2025/07/15/mira-muratis-thinking-machines-lab-is-worth-12b-in-seed-round/)
AI Tooling, Evolution and The Promiscuity of Modern Developers (https://redmonk.com/sogrady/2025/07/09/promiscuity-of-modern-developers/)

Sponsor
Subscribe to the Fork Around and Find Out Podcast (https://www.fafo.fm)

Listener Feedback
Wes recommends reMarkable eReader (https://remarkable.com/?utm_source=google&utm_medium=cpc&utm_campaign=shopping&gad_source=1&gad_campaignid=22593422101&gbraid=0AAAAACTQ8CzkVek3GTLjvSLFt4ccN2-t4&gclid=Cj0KCQjwm93DBhD_ARIsADR_DjGUSc3YfqWEvYNrjNW_4DcWZI6s7Oil2eNQ-OHKoJTNo18Y4LzIJ6gaAs8zEALw_wcB)
Jeroen recommends BooX (https://shop.boox.com/products/go7?gad_source=1&gad_campaignid=16914647485&gbraid=0AAAAAC9FI31gPCwkpC9nr-Idt_chAx7Am&gclid=Cj0KCQjwm93DBhD_ARIsADR_DjHLo9qcDLkckjtDu4Aq65rGZLnpdnJDLCbVe9xGBMsrK3mfE_c6eQYaAv8CEALw_wcB)

Conferences
Sydney Wizdom Meet-Up (https://www.wiz.io/events/sydney-wizdom-meet-up-aug-2025), Sydney, August 7. Matt will be there.
SpringOne (https://www.vmware.com/explore/us/springone?utm_source=organic&utm_medium=social&utm_campaign=cote), Las Vegas, August 25th to 28th, 2025. See Coté's pitch (https://www.youtube.com/watch?v=f_xOudsmUmk).
Explore 2025 US (https://www.vmware.com/explore/us?utm_source=organic&utm_medium=social&utm_campaign=cote), Las Vegas, August 25th to 28th, 2025. See Coté's pitch (https://www.youtube.com/shorts/-COoeIJcFN4).
SREDay London (https://sreday.com/2025-london-q3/), Coté speaking, September 18th and 19th.
Civo Navigate London (https://www.civo.com/navigate/london/2025), Coté speaking, September 30th.
Texas Linux Fest (https://2025.texaslinuxfest.org), Austin, October 3rd to 4th. CFP closes August 3rd (https://www.papercall.io/txlf2025).
CF Day EU (https://events.linuxfoundation.org/cloud-foundry-day-europe/), Frankfurt, October 7th, 2025.
AI for the Rest of Us (https://aifortherestofus.live/london-2025), Coté speaking, October 15th to 16th, London.

SDT News & Community
Join our Slack community (https://softwaredefinedtalk.slack.com/join/shared_invite/zt-1hn55iv5d-UTfN7mVX1D9D5ExRt3ZJYQ#/shared-invite/email)
Email the show: questions@softwaredefinedtalk.com (mailto:questions@softwaredefinedtalk.com)
Free stickers: Email your address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com)
Follow us on social media: Twitter (https://twitter.com/softwaredeftalk), Threads (https://www.threads.net/@softwaredefinedtalk), Mastodon (https://hachyderm.io/@softwaredefinedtalk), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com)
Watch us on: Twitch (https://www.twitch.tv/sdtpodcast), YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured), Instagram (https://www.instagram.com/softwaredefinedtalk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk)
Book offer: Use code SDT for $20 off "Digital WTF" by Coté (https://leanpub.com/digitalwtf/c/sdt)
Sponsor the show (https://www.softwaredefinedtalk.com/ads): ads@softwaredefinedtalk.com (mailto:ads@softwaredefinedtalk.com)

Recommendations
Brandon: F1 The Movie (https://www.google.com/url?sa=t&source=web&rct=j&opi=89978449&url=https://tv.apple.com/us/movie/f1-the-movie/umc.cmc.3t6dvnnr87zwd4wmvpdx5came&ved=2ahUKEwizoovRncKOAxXrl2oFHfpUCjAQFnoECGkQAQ&usg=AOvVaw1-tMtproZanE5kpD3oyHMO)
Matt: Dune Prophecy (https://www.imdb.com/title/tt10466872)

Photo Credits
Header (https://unsplash.com/photos/a-walk-in-closet-filled-with-lots-of-clothes-9KVtDmNnFP4)
Abi Noda from DX is back to share some cold, hard data on just how productive AI coding tools are actually making developers. Teaser: the productivity increase isn't as high as we expected. We also discuss Jevons paradox, AI agents as extensions of humans, which tools are winning in the enterprise, how development budgets are changing, and more.
In this episode, Abi Noda talks with Frank Fodera, Director of Engineering for Developer Experience at CarGurus. Frank shares the story behind CarGurus' transition from a monolithic architecture to microservices, and how that journey led to the creation of their internal developer portal, Showroom. He outlines the five pillars of the IDP, how it integrates with infrastructure, and why they chose to build rather than buy. The conversation also explores how CarGurus is approaching AI tool adoption across the engineering team, from experiments and metrics to culture change and leadership buy-in.

Where to find Frank Fodera:
• LinkedIn: https://www.linkedin.com/in/frankfodera/

Where to find Abi Noda:
• LinkedIn: https://www.linkedin.com/in/abinoda

In this episode, we cover:
(00:00) Intro: IDPs (Internal Developer Portals) and AI
(02:07) The IDP journey at CarGurus
(05:53) A breakdown of the people responsible for building the IDP
(07:05) The five pillars of the Showroom IDP
(09:12) How DevX worked with infrastructure
(11:13) The business impact of Showroom
(13:57) The transition from monolith to microservices and struggles along the way
(15:54) The benefits of building a custom IDP
(19:10) How CarGurus drives AI coding tool adoption
(28:48) Getting started with an AI initiative
(31:50) Metrics to track
(34:06) Tips for driving AI adoption

Referenced:
• DX Core 4 Productivity Framework
• Internal Developer Portals: Use Cases and Key Components
• Strangler Fig Pattern - Azure Architecture Center | Microsoft Learn
• Spotify for Backstage
• The AI adoption playbook: Lessons from Microsoft's internal strategy
In this episode, Abi Noda speaks with Gilad Turbahn, Head of Developer Productivity, and Amy Yuan, Director of Engineering at Snowflake, about how their team builds and sustains operational excellence. They break down the practices and principles that guide their work—from creating two-way communication channels to treating engineers as customers. The conversation explores how Snowflake fosters trust, uses feedback loops to shape priorities, and maintains alignment through thoughtful planning. You'll also hear how they engage with teams across the org, convert detractors, and use Customer Advisory Boards to bring voices from across the company into the decision-making process.

Where to find Amy Yuan:
• LinkedIn: https://www.linkedin.com/in/amy-yuan-a8ba783/

Where to find Gilad Turbahn:
• LinkedIn: https://www.linkedin.com/in/giladturbahn/

Where to find Abi Noda:
• LinkedIn: https://www.linkedin.com/in/abinoda

In this episode, we cover:
(00:00) Intro: an overview of operational excellence
(04:13) Obstacles to executing with operational excellence
(05:51) An overview of the Snowflake playbook for operational excellence
(08:25) Who does the work of reaching out to customers
(09:06) The importance of customer engagement
(10:19) How Snowflake does customer engagement
(14:13) The types of feedback received and the two camps (supporters and detractors)
(16:55) How to influence detractors and how detractors actually help
(18:27) Using insiders as messengers
(22:48) An overview of Snowflake's customer advisory board
(26:10) The importance of meeting in person (learnings from Warsaw and Berlin office visits)
(28:08) Managing up
(30:07) How planning is done at Snowflake
(36:25) Setting targets for OKRs, and Snowflake's philosophy on metrics
(39:22) The annual plan and how it's shared

Referenced:
• CTO buy-in, measuring sentiment, and customer focus
• Snowflake
• Benoit Dageville - Snowflake Computing | LinkedIn
• Thierry Cruanes - Snowflake Computing | LinkedIn
In this episode, Abi Noda speaks with DX CTO Laura Tacho about the real obstacles holding back AI adoption in engineering teams. They discuss why technical challenges are rarely the blocker, and how fear, unclear expectations, and inflated hype can stall progress. Laura shares practical strategies for driving adoption, including how to model usage from the top down, build momentum through champions and training programs, and measure impact effectively—starting with establishing a baseline before introducing AI tools.

Where to find Laura Tacho:
• LinkedIn: https://www.linkedin.com/in/lauratacho/
• Website: https://lauratacho.com/

Where to find Abi Noda:
• LinkedIn: https://www.linkedin.com/in/abinoda

In this episode, we cover:
(00:00) Intro: The full spectrum of AI adoption
(03:02) The hype of AI
(04:46) Some statistics around the current state of AI coding tool adoption
(07:27) The real barriers to AI adoption
(09:31) How to drive AI adoption
(15:47) Measuring AI's impact
(19:49) More strategies for driving AI adoption
(23:54) The methods companies are actually using to drive impact
(29:15) Questions from the chat
(39:48) Wrapping up

Referenced:
• DX Core 4 Productivity Framework
• The AI adoption playbook: Lessons from Microsoft's internal strategy
• Microsoft CEO says up to 30% of the company's code was written by AI | TechCrunch
• Viral Shopify CEO Manifesto Says AI Now Mandatory For All Employees
• DORA | Impact of Generative AI in Software Development
• Guide to AI assisted engineering
• Justin Reock - DX | LinkedIn
In this episode, Abi Noda speaks with Derek DeBellis, lead researcher at Google's DORA team, about their latest report on generative AI's impact on software productivity. They dive into how the survey was built, what it reveals about developer time and “flow,” and the surprising gap between individual and team outcomes. Derek also shares practical advice for leaders on measuring AI impact and aligning metrics with organizational goals.

Where to find Derek DeBellis:
• LinkedIn: https://www.linkedin.com/in/derekdebellis/

Where to find Abi Noda:
• LinkedIn: https://www.linkedin.com/in/abinoda

In this episode, we cover:
(00:00) Intro: DORA's new Impact of Gen AI report
(03:24) The methodology used to put together the surveys DORA used for the report
(06:44) An example of how a single word can throw off a question
(07:59) How DORA measures flow
(10:38) The two ways time was measured in the recent survey
(14:30) An overview of experiential surveying
(16:14) Why DORA asks about time
(19:50) Why Derek calls survey results ‘observational data'
(21:49) Interesting findings from the report
(24:17) DORA's definition of productivity
(26:22) Why a 2.1% increase in individual productivity is significant
(30:00) The report's findings on decreased team delivery throughput and stability
(32:40) Tips for measuring AI's impact on productivity
(38:20) Wrap up: understanding the data

Referenced:
• DORA | Impact of Generative AI in Software Development
• The science behind DORA
• Yale Professor Divulges Strategies for a Happy Life
• Incredible! Listening to ‘When I'm 64' makes you forget your age
• Slow Productivity: The Lost Art of Accomplishment without Burnout
• DORA, SPACE, and DevEx: Which framework should you use?
• SPACE framework, PRs per engineer, AI research
In this episode, Abi Noda is joined by Laura Tacho, CTO at DX, engineering leadership coach, and creator of the Core 4 framework. They explore how engineering organizations can avoid common pitfalls when adopting metrics frameworks like SPACE, DORA, and Core 4. Laura shares a practical guide to getting started with Core 4—beginning with controllable input metrics that teams can actually influence. The conversation touches on Goodhart's Law, why focusing too much on output metrics can lead to data distortion, and how leaders can build a culture of continuous improvement rooted in meaningful measurement.

Where to find Laura Tacho:
• LinkedIn: https://www.linkedin.com/in/lauratacho/
• Website: https://lauratacho.com/

Where to find Abi Noda:
• LinkedIn: https://www.linkedin.com/in/abinoda

In this episode, we cover:
(00:00) Intro: Improving systems, not distorting data
(02:20) Goal setting with the new Core 4 framework
(08:01) A quick primer on Goodhart's law
(10:02) Input vs. output metrics—and why targeting outputs is problematic
(13:38) A health analogy demonstrating input vs. output
(17:03) A look at how the key input metrics in Core 4 drive output metrics
(24:08) How to counteract gamification
(28:24) How to get developer buy-in
(30:48) The number of metrics to focus on
(32:44) Helping leadership and teams connect the dots to how input goals drive output
(35:20) Demonstrating business impact
(38:10) Best practices for goal setting

Referenced:
• DX Core 4 Productivity Framework
• Engineering Enablement Podcast
• DORA's software delivery metrics: the four keys
• The SPACE of Developer Productivity: There's more to it than you think
• DevEx: What Actually Drives Productivity
• DORA, SPACE, and DevEx: Which framework should you use?
• Goodhart's law
• Nicole Forsgren - Microsoft | LinkedIn
• Campbell's law
• Introducing Core 4: The best way to measure and improve your product velocity
• DX Core 4: Framework overview, key design principles, and practical applications
• DX Core 4: 2024 benchmarks - by Abi Noda
Brian Houck from Microsoft returns to discuss effective strategies for driving AI adoption among software development teams. Brian shares his insights into why the immense hype around AI often serves as a barrier rather than a facilitator for adoption, citing skepticism and inflated expectations among developers. He highlights the most effective approaches, including leadership advocacy, structured training, and cultivating local champions within teams to demonstrate practical use cases. Brian emphasizes the importance of honest communication about AI's capabilities, avoiding over-promises, and ensuring that teams clearly understand what AI tools are best suited for. Additionally, he discusses common pitfalls, such as placing excessive pressure on individuals through leaderboards and unrealistic mandates, and stresses the importance of framing AI as an assistant rather than a replacement for developer skills. Finally, Brian explores the role of data and metrics in adoption efforts, offering practical advice on how to measure usage effectively and sustainably.

Where to find Brian Houck:
• LinkedIn: https://www.linkedin.com/in/brianhouck/
• Website: https://www.microsoft.com/en-us/research/people/bhouck/

Where to find Abi Noda:
• LinkedIn: https://www.linkedin.com/in/abinoda

In this episode, we cover:
(00:00) Intro: Why AI hype can hinder adoption among teams
(01:47) Key strategies companies use to successfully implement AI
(04:47) Understanding why adopting AI tools is uniquely challenging
(07:09) How clear and consistent leadership communication boosts AI adoption
(10:46) The value of team leaders ("local champions") demonstrating practical AI use
(14:26) Practical advice for identifying and empowering team champions
(16:31) Common mistakes companies make when encouraging AI adoption
(19:21) Simple technical reminders and nudges that encourage AI use
(20:24) Effective ways to track and measure AI usage through dashboards
(23:18) Working with team leaders and infrastructure teams to promote AI tools
(24:20) Understanding when to shift from adoption efforts to sustained use
(25:59) Insights into the real-world productivity impact of AI
(27:52) Discussing how AI affects long-term code maintenance
(29:02) Updates on ongoing research linking sleep quality to productivity

Referenced:
• DX Core 4 Productivity Framework
• Engineering Enablement Podcast
• DORA Metrics
• Dropbox Engineering Blog
• Etsy Engineering Blog
• Pfizer Digital Innovation
• Brown Bag Sessions – A Guide
• IDE Integration and AI Tools
• Developer Productivity Dashboard Examples
Today's guest is Abi Noda, the CEO and founder of DX, one of the leading engineering intelligence platforms. With Abi, we talked about measuring developer experience. We started with the early days of Accelerate and why we feel like most people got the book wrong. Then we moved to the present day and how research focuses on driving great developer experience. And finally, we couldn't avoid talking about AI and why it seems to be a game changer for entrepreneurs, but not so much for teams yet.

01:23 Introduction
02:45 Abi's journey in tech
08:19 The four key metrics
10:41 Metrics' reliability
13:41 Diagnostic metrics
16:06 A metric analogy
18:23 Finding what drives productivity metrics
22:03 What makes a developer experience good?
29:44 The importance of comparison
31:53 Common issues in developer experience
34:55 Are meetings bad?
36:16 AI in the development process

This episode is brought to you by https://sleuth.io
In this episode of the Steering Engineering Podcast, Brent Stewart and Danny Brian dive into the business case for developer experience with expert Abi Noda. They discuss how a high-quality developer experience leads to increased productivity, software quality, and business impact, despite the fact that many organizations underinvest in it. Using insights from Gartner's Developer Experience Assessment Survey, they explore why developer satisfaction remains low and what companies can do to improve. They also examine the role of AI-augmented software engineering tools and whether an “AI divide” is emerging. Tune in to hear whether developer experience is truly worth the investment — and what it takes to get it right.

About the Guest
Abi Noda is a programmer, researcher, and entrepreneur focused on helping organizations improve developer productivity. Abi is the CEO and Co-Founder of DX, a platform for measuring and improving developer experience. In addition to running DX, Abi runs the Engineering Enablement podcast and newsletter, covering the latest research and perspectives on developer productivity. Before DX, he held CTO roles at several companies and was the founder and CEO of Pull Panda, which was acquired by GitHub in 2019.
In this episode, we're joined by author and researcher Gene Kim for a wide-ranging conversation on the evolution of DevOps, developer experience, and the systems thinking behind organizational performance. Gene shares insights from his latest work on socio-technical systems, the role of developer platforms, and how AI is reshaping engineering teams. We also explore the coordination challenges facing modern organizations, the limits of tooling, and the deeper principles that unite DevOps, lean, and platform engineering.

Mentions and links:
• Phoenix Project
• Decoding the DNA of the Toyota Production System
• Wiring the Winning Organization
• ETLS Vegas
• Find Gene on LinkedIn

Discussion points:
(0:00) Introduction
(2:12) The evolving landscape of developer experience
(10:34) Option Value theory, and how GenAI helps developers
(13:45) The aim of developer experience work
(19:59) The significance of layer three changes
(23:23) Framing developer experience
(32:12) GenAI's part in ‘the death of the stubborn developer'
(36:05) GenAI's implications on the workforce
(38:05) Where Gene's work is heading
In this episode, Airbnb Developer Productivity leader Anna Sulkina shares the story of how her team transformed itself and became more impactful within the organization. She starts by describing how the team previously operated, where teams were delivering but felt they needed more clarity and alignment across the org. Then, the conversation digs into the key changes they made, including reorganizing the team, clarifying team roles, defining strategy, and improving their measurement systems.

Mentions and links:
• Follow Anna on LinkedIn
• For a deeper look into how our Engineers and Data Scientists build a world of belonging, check out The Airbnb Tech Blog

Discussion points:
(0:00) Intro
(1:40) Skills that make a great developer productivity leader
(4:36) Challenges in how the team operated previously
(10:49) Changing the platform org's focus and structure
(16:04) Clarifying roles for EMs, PMs, and tech leads
(20:22) How Airbnb defined its infrastructure org's strategy
(28:23) Improvements they've seen to developer experience satisfaction
(32:13) The evolution of Airbnb's developer experience survey
Many teams struggle to use developer productivity data effectively because they don't know how to turn it into decisions about what to do next. We know that data is here to help us improve, but how do you know where to look? And even then, what do you actually do to put the wheels of change in motion? Listen to this conversation with Abi Noda and Laura Tacho (CEO and CTO at DX) about data-driven management and how to take a structured, analytical approach to using data for improvement.Mentions and Links:Measuring developer productivity with the DX Core 4Laura's developer productivity metrics courseDiscussion points:(0:00) Intro(2:07) The challenge we're seeing(6:53) Overview on using data(8:58) Use cases for data - engineering organizations(15:57) Use cases for data - engineering systems teams(21:38) Two types of metrics - Diagnostics and Improvement(38:09) Summary
In this episode, David Betts, leader of Twilio's developer platform team, shares how Twilio leverages developer sentiment data to drive platform engineering initiatives, optimize Kubernetes adoption, and demonstrate ROI for leadership. David details Twilio's journey from traditional metrics to sentiment-driven insights, the innovative tools his teams have built to streamline CI/CD workflows, and the strategies they use to align platform investments with organizational goals.Mentions and links:Find David on LinkedInMeasuring developer productivity with the DX Core 4Ask Your Developer by Jeff Lawson, former CEO of TwilioDiscussion points:(0:00) Introduction(0:49) Twilio's developer platform team(2:03) Twilio's approach to release engineering and CD(4:10) How they use sentiment data and telemetry metrics(7:27) Comparing sentiment data and telemetry metrics(10:25) How to take action on sentiment data(13:16) What resonates with execs(15:44) Proving DX value: sentiment, efficiency, and ROI(19:15) Balancing quarterly and real-time developer feedback
Chris Chandler is a Senior Member of the Technical Staff for Developer Productivity at T-Mobile. Chris has led several major initiatives to improve developer experience, including their internal developer portal, Starter Kits (a patented developer platform that predates Backstage), and Workforce Transformation Bootcamps for onboarding developers faster.Mentions and links:Follow Chris on LinkedInMeasuring developer productivity with the DX Core 4Listen to Decoder with Nilay Patel.Discussion points:(0:47) From developer experience to developer productivity(7:03) Getting executive buy-in for developer productivity initiatives(13:54) What Chris's team is responsible for(17:02) How they've built relationships with other teams(20:57) How they built and got funding for Dev Console and Starter Kits(27:23) Homegrown solution vs Backstage
In this episode, Abi and Laura dive into the 2024 DX Core 4 benchmarks, sharing insights across data from 500+ companies. They discuss what these benchmarks mean for engineering leaders, how to interpret key metrics like the Developer Experience Index, and offer advice on how to best use benchmarking data in your organization. Mentions and Links:DX core 4 benchmarksMeasuring developer productivity with the DX Core 4Developer experience index (DXI)Will Larson's article on the Core 4 and power of benchmarking dataDiscussion points:(0:42) What benchmarks are for(3:44) Overview of the DX Core 4 benchmarks(6:07) PR throughput data (11:05) Key insights related to startups and mobile teams (14:54) Change fail rate data (19:42) How to best use benchmarking data
In this episode, Abi and Laura introduce the DX Core 4, a new framework designed to simplify how organizations measure developer productivity. They discuss the evolution of productivity metrics, comparing Core 4 with frameworks like DORA, SPACE, and DevEx, and emphasize its focus on speed, effectiveness, quality, and impact. They explore why each metric was chosen, the importance of balancing productivity measures with developer experience, and how Core 4 can help engineering leaders align productivity goals with broader business objectives. Mentions and Links:Measuring developer productivity with the DX Core 4Laura's developer productivity metrics courseDiscussion Points:(2:42) Introduction to the DX Core 4(3:42) Identifying the Core 4's target audience and key stakeholders(4:38) Origins and purpose(9:20) Building executive alignment(14:15) Tying metrics to business value through output-oriented measures(24:45) Defining impact(32:42) Choosing between DORA, SPACE, and Core 4 frameworks
In this episode, Brian Houck, Applied Scientist, Developer Productivity at Microsoft, covers SPACE, DORA, and some specific metrics the developer productivity research team is finding useful. The conversation starts by comparing DORA and SPACE. Brian explains why activity metrics were included in the SPACE framework, then dives into one metric in particular: pull request throughput. Brian also describes another metric Microsoft is finding useful, and gives a preview into where his research is heading. Mentions and linksConnect with Brian on LinkedInThe SPACE of Developer Productivity: There's More to It Than You ThinkMeasuring developer productivity with the DX Core 4DevEx in actionDORA, SPACE, and DevEx: Which framework should you use?Discussion points(0:48) SPACE framework's growth and adoption(3:47) Comparing DORA and SPACE(6:30) SPACE misconceptions and common implementation challenges(9:34) Whether PR throughput is useful (15:13) Real-world example of using PR throughput (21:33) Talking about metrics like PR throughput internally (24:39) Where Brian's research is heading
In this episode, Snowflake's Gilad Turbahn, Head of Developer Productivity, and Amy Yuan, Director of Engineering, dive into how they elevated developer productivity to a top company priority. They discuss the pivotal role of Snowflake's CTO, who personally invested over half his time to guide the initiative, and how leadership's hands-on involvement secured buy-in across teams. The conversation also explores the importance of collaboration between engineering and product management, and how measuring user sentiment helped them deliver meaningful, long-lasting improvements.Mentions and linksConnect with Gilad and Amy on LinkedInMeasuring developer productivity with the DX Core 4Discussion Points(0:48) The need for a shift at Snowflake(3:59) Leadership involvement and prioritization of developer productivity(8:56) The partnership between engineering and product managers(20:01) From feature factory to customer outcome-focused development(27:36) Shifting measurement focus to user sentiment and customer outcomes(39:13) Gaining buy-in for sentiment metrics and tying them to business impact(51:11) How Snowflake's CTO and volunteers accelerated developer productivity improvements.
In this episode, Emanuel Mueller Ramos, Head of Developer Experience at Skyscanner, discusses the evolution of his team as they transitioned from focusing on frameworks and middleware to becoming a customer-centric, impact-driven organization. Emanuel details the strategies he used to gain stakeholder buy-in, why it's crucial to rethink traditional productivity metrics, and how they made a cultural shift to prioritize developer happiness and effectiveness. This conversation highlights the steps necessary to build a developer experience function that delivers meaningful impact.Mentions and links:Follow Emanuel on LinkedInMeasuring developer productivity with the DX Core 4Discussion points:(1:14) The beginning of Skyscanner's developer productivity division(3:53) Gaining stakeholder buy-in and refocusing the teams(5:57) Redefining success metrics for developer productivity(8:57) Pitching the developer experience focus to leadership(17:26) Moving from frameworks to feedback loops(20:45) Fostering a customer-centric culture(23:20) Defining the collaboration between platform and developer experience teams(26:41) Choosing the right metrics for developer experience success (31:31) Risks and challenges ahead
Abi Noda, co-founder and CEO at DX, joins the show to talk through data shared from the Stack Overflow 2024 Developer Survey, why devs are really unhappy, and what they're doing at DX to help orgs and teams understand the metrics behind their developers' happiness and productivity.
In this episode we dive into another awesome article from Abi Noda, Using AI to encourage best practices in the code review process. This article covers a recent research paper from Google outlining the performance, pitfalls, and process of their in-house AI code review bot. We talk about the role of AI in code reviews, our personal views on what code review is all about, and get existential on AI taking our jobs (again). Despite the AI title, this one is just as much about code review in general as it is about AI, so if you're sick of AI content, there's still something here for you.
Trying to measure developer effectiveness or productivity isn't a new problem. However, with the rise of fields like platform engineering and a new wave of potential opportunities from generative AI, the issue has come into greater focus in recent years. In this episode of the Technology Podcast, hosts Scott Shaw and Prem Chandrasekaran speak to Abi Noda, CEO of software engineering intelligence platform DX, about measuring developer experience using the DevEx Framework — which Abi developed alongside Nicole Forsgren, Margaret-Anne Storey and Michaela Greiler. Taking in everything from the origins of the DevEx framework in SPACE metrics, to how technologists can better 'sell' the importance of developer experience to business stakeholders, listen for a fresh perspective on a topic that's likely to remain at the top of the industry's agenda for the foreseeable future. Read the DevEx Framework paper: https://queue.acm.org/detail.cfm?id=3595878 Read Abi's article (co-authored with Tim Cochran) on martinfowler.com: https://martinfowler.com/articles/measuring-developer-productivity-humans.html Listen to Abi's Engineering Enablement podcast: https://getdx.com/podcast/
In this week's episode, Abi is joined by industry leaders Idan Gazit from GitHub, Anna Sulkina from Airbnb, and Alix Melchy from Jumio. Together, they discuss the impact of GenAI tools on developer productivity, exploring challenges in measurement and enhancement. They delve into AI's evolving role in engineering, from overcoming friction points to exploring real-world applications and the future of technology. Gain insights into how AI-driven chat assistants are reshaping workflows and the vision for coding.Links: How to measure GenAI adoption and impact
This is Part 2 of our Top 10 Challenges series! In this episode, we're focusing on three common team challenges that eng leaders face: how to increase velocity without losing quality, measure productivity & create meaningful metrics, and work cross-functionally with other teams. We identified these challenges based on conversations with hundreds of eng leaders from podcast episodes, ELC events, and more. For this ep, we've pulled insights from various eng leaders, including Richard Wong @ enrich, Fatemeh Alavizadeh @ Notion, Andrew Fong @ Prodvana, Randall Koutnik @ Jellyfish, Abi Noda @ DX, Barbara Nelson @ InfluxDB, Laura Fay @ L Fay Associates, and Jeremy Henrickson @ Rippling.Join us at ELC Annual 2024!ELC Annual is our 2 day conference bringing together engineering leaders from around the world for a unique experience to help you expand your network and empower your leadership & career growth.Don't miss out on this incredible opportunity to expand your network, gain actionable insights, ignite new ideas, recharge, and accelerate your leadership journey!Secure your ticket at sfelc.com/annual2024And use the exclusive discount code "podcast10" (all lowercase) for a 10% discountSHOW NOTES:Increasing Velocity (Without Losing Quality): Understanding the speed vs. quality dilemma w/ Richard Wong (0:58)Defining velocity & its impact on users' ROI w/ Fatemeh Alavizadeh & Andrew Fong (6:17)Measuring Productivity and Creating Meaningful Metrics: What drives productivity & makes for meaningful metrics w/ Randall Koutnik (11:31)The DevEx framework for improving developer productivity w/ Abi Noda (21:02)Working Cross-Functionally with Other Teams: Why it's important to have cross-functional excellence between eng & product w/ Barbara Nelson & Laura Fay (26:28)Cross-functional communication strategies for addressing misaligned priorities w/ Jeremy Henrickson (37:13)LINKS AND RESOURCESSpeed vs. Quality with Richard WongHow to Create Sustainable Velocity in Your Team with Fatemeh Alavizadeh and Andrew FongBanish Bad Management with Metrics that Don't Suck with Randall KoutnikThe next evolution to measure & improve developer productivity & experience with Abi NodaBridging the Divide: Strategies for Cross-Functional Excellence between Engineering and Product Management with Barbara Nelson and Laura FayAlign & Scale Engineering AND Product with Jeremy HenricksonThis episode wouldn't have been possible without the help of our incredible production team:Patrick Gallagher - Producer & Co-HostJerry Li - Co-HostNoah Olberding - Associate Producer, Audio & Video Editor https://www.linkedin.com/in/noah-olberding/Dan Overheim - Audio Engineer, Dan's also an avid 3D printing enthusiast - https://www.bnd3d.com/Ellie Coggins Angus - Copywriter, Check out her other work at https://elliecoggins.com/about/
In this week's episode, Abi welcomes Jared Wolinsky, Vice President of Platform Engineering at SiriusXM, to delve into the inner workings of platform engineering at SiriusXM. Jared sheds light on their innovative approach to prioritizing projects, emphasizing alignment with overarching business goals. They explore how these strategies boost developer speed and drive technological advancement within the organization.Links: When is the right time to establish a DevProd team report
In this episode, Michelle Swartz, Vice President of Developer Enablement at American Express, shares insights on improving developer experience. She discusses the creation of an onboarding bootcamp and the development of the AmEx Way Library for better knowledge management. Michelle explains how AmEx balances standardization and flexibility with the concept of Paved Roads. She also highlights the importance of measuring success, fostering community, and elevating the company's tech credibility.Mentions and linksGenAI Guide
This week's episode is a recording from a recent event hosted by Abi Noda (CEO of DX) and Laura Tacho (CTO at DX). The episode begins with an overview of the DORA, SPACE, and DevEx frameworks, including where they overlap and common misconceptions about each. Laura and Abi discuss the advantages and drawbacks of each framework, then share advice on how to choose which framework to use.Discussion points:(2:50) DORA, SPACE, DevEx overview(10:35) Choosing which framework to use(13:15) Using DORA(22:42) Using SPACEMentions and Links: Dora.dev
In this week's episode, we welcome Derek DeBellis, lead researcher on Google's DORA team, for a deep dive into the science and methodology behind DORA's research. We explore Derek's background, his role at Google, and how DORA intersects with other research disciplines. Derek takes us through DORA's research process step by step, from defining outcomes and factors to survey design, analysis, and structural equation modeling.Discussion points:(3:00) Derek's transition from Microsoft to the DORA team at Google(4:28) Derek talks about his connection to surveys(6:16) Derek's journey to becoming a quantitative user experience researcher(7:48) Derek simplifies DORA(8:19) DORA - Philosophy vs practice(11:09) Understanding desired outcomes(12:45) Self reported outcomes vs objective outcomes(16:16) Derek and Abi discuss the nuances of literature review(19:57) Derek details survey development(27:55) Pretesting issues(29:30) Designing surveys for other companies(35:02) Derek simplifies model analysis and validation techniques(38:48) Benchmarks: Balancing data limitations with method sensitivityMentions and Links:Derek DeBellis on LinkedInDX's guide to measuring GenAI adoption and impact2023 Accelerate State of DevOps Report
This week we're joined by Sean McIlroy from Slack's Release Engineering team to learn how they've fully automated their deployment process. This conversation covers Slack's original release process, key changes Sean's team has made, and the latest challenges they're working on today. Discussion points:(1:34): The Release Engineering team(2:13): How the monolith has served Slack (3:24): How the deployment process used to work (6:23): The complexity of the deploy itself(7:39): Early ideas for improving the deployment process(9:07): Why anomaly detection is challenging(10:32): What a Z-score is(13:23): Managing noise with Z-scores(16:49): Presenting this information to people that need it(19:54): Taking humans out of the process(23:13): Handling rollbacks(25:27): Not overloading developers with information(28:26): Handling large deploymentsMentions and links:Read Sean's blog post, The Scary Thing About DeploysFollow Sean on LinkedIn
This week's episode is the recording of a live conversation between Abi and Chris Westerhold (Thoughtworks Head of Developer Experience). This conversation is useful for anyone early in their journey with developer portals or platforms: Abi and Chris discuss common approaches to solving these problems, pitfalls to avoid, building vs. buying, and more. Discussion points:(3:09) Why there's an increased interest in developer portals(5:33) Chris' background with dev portals(6:37) Homegrown solutions for developer portals(9:22) How developer portal initiatives begin(11:24) Internal developer portal vs service catalogs and IDPs(16:18) Mistakes companies make with developer portals(21:05) Approaches to solving this problem(24:28) How can developer portals drive value(32:07) Common traps to avoidMentions and LinksFollow Chris on LinkedInWatch the recording of this conversation Watch part 2 of this conversation on the market landscapeLearn about PlatformX, DX's product mentioned in the conversation
On this week's episode, Abi interviews Kent Wills, Director of Engineering Effectiveness at Yelp, who shares insights into the evolution of their developer productivity efforts over the past decade, from tackling challenges with their monolithic architecture to scaling productivity initiatives for over 1,300 developers. Kent also touches on his experience in building a business case for developer productivity.Discussion points:(1:42) Forming the developer productivity team(3:25) Naming the team engineering effectiveness(4:30) Getting leadership buy-in for focusing on this work(7:54) Managing code ownership in Yelp's monolith(12:23) Supporting the design system(16:00) The business case for forming a dedicated team (19:45) How to standardize (23:50) How their approach to standardization might be different in another company(27:08) Demonstrating the value of their work (32:21) Building an insights platform(38:47) How Yelp is using LLMsMentions and LinksConnect with Kent Wills on LinkedInWatch Kent's 2023 talk at ElevateListen to the interview with Peter Seibel ("Let 1,000 flowers bloom")Download the recently published benchmarks on developer productivity team headcount
This week we're joined by Gail Carmichael, Principal Instructional Engineer at Splunk. At Splunk, Gail's team is responsible for improving developer onboarding, which they do through a multi-day learning program. Here, Gail shares how this program works and how they measure developer onboarding. The conversation also covers what instructional engineers are generally, and how Gail demonstrates the impact of her team's work. Discussion points:(1:16) The Engineering Enablement & Engagement Team at Splunk(8:01) What an Instructional Engineer is(14:36) The developer onboarding program at Splunk(16:05) Components of a good onboarding program(21:11) Why having an onboarding program matters(28:17) Measuring onboarding at Shopify (Gail's previous company)(31:39) Measuring developer onboarding at SplunkMentions and LinksConnect with Gail on LinkedInDownload the report on Developer productivity metrics at top tech companies
In this episode we're joined by Adam Rogal, who leads Developer Productivity and Platform at DoorDash. Adam describes DoorDash's journey with their internal developer portal, and gives advice for other teams looking to follow a similar path. Adam also describes how his team delivered value quickly and drove adoption for their developer platform.Discussion points:(1:47) Why DoorDash explored implementing a developer portal(6:59) The initial vision for the developer portal(12:19) Funding ongoing development(16:01) Deciding what to include in the portal(19:15) Coming up with a name for the portal(20:01) Advice for interested beginners(23:55) Putting together a business case(32:32) Getting adoption for the portal(37:27) Driving initial awareness(41:29) Getting feedback from developers(48:33) What Adam would have done differentlyMentions and links:Adam Rogal on LinkedInGet started (API)New testing and monitoring tools
In this episode, Abi has a fascinating conversation with Thoughtworks CTO Rebecca Parsons, Camilla Crispim, and Erik Dörnenburg about the Thoughtworks Tech Radar. The trio begins with an overview of the Tech Radar and its history before delving into the intricate process of creating each report, which involves multiple teams and stakeholders. The conversation concludes with a focus on the evolution of the Tech Radar's design and process and potential future changes. This episode offers Tech Radar fans an exclusive behind-the-scenes look at its history and production.Discussion points:(1:20) An introduction to the Tech Radar(6:06) Common terms used in this episode(6:27) The origin of the Tech Radar(8:50) Problems that the Tech Radar was aiming to solve(12:23) The impact on internal decision making - a tool for driving change(14:30) The team's philosophy behind the Tech Radar(18:33) What sets the Tech Radar apart(21:11) Why maintaining independence is crucial for their audience(25:08) How Tech Radar publishes their reports(29:36) A look into Thoughtworks live meeting sessions(34:51) The Tech Radar's Git repository(42:20) Recent changes and upcoming shiftsMentions and links:Thoughtworks TechRadarRebecca Parsons on LinkedInCamilla Crispim on LinkedInErik Dörnenburg on LinkedInThoughtworks Git repository
This week's guest is Eirini Kalliamvakou, a staff researcher at GitHub focused on AI and developer experience. Eirini sits at the forefront of research into GitHub Copilot. Abi and Eirini discuss recent research on how AI coding assistance impacts developer productivity. They talk about how leaders should build business cases for AI tools. They also preview what's to come with AI tools and implications for how developer productivity is measured.Discussion points:(1:49) Overview of GitHub's research on AI(2:59) The research study on Copilot(4:48) Defining and measuring productivity for this study(7:44) Exact measures and factors studied(8:16) Key findings from the study(9:45) How the study was conducted (11:17) Most surprising findings for the researchers(14:01) The motivation for conducting a follow-up study(15:34) How the follow-up study was conducted(18:42) Findings from the follow-up study(21:13) Is AI just hype? (26:34) How to begin advocating for AI tools(34:44) How to translate data into dollars(37:06) How to roll out AI tools to an organization(38:47) The impact of AI on developer experience(43:24) Implications of AI on how we measure productivityMentions and links:Eirini Kalliamvakou on LinkedInResearch on the impact of Copilot Crossing the Chasm by Geoffrey Moore
“The three core dimensions of developer experience are feedback loops, cognitive load, and flow state." Today's clip is from Tech Lead Journal episode 134 with Margaret-Anne (Peggy) Storey and Abi Noda, the coauthors of the ACM paper “DevEx: What Actually Drives Productivity”. In this clip, they shared their view on the well-known SPACE and DORA metrics, and pointed out the danger of misusing and abusing the DORA metrics. Peggy and Abi then explained the three core dimensions of developer experience from their latest paper, which are feedback loops, cognitive load, and flow state. Listen out for: SPACE & DORA Metrics - [00:00:26] Misuse and Abuse of DORA Metrics - [00:05:43] New Developer Experience Paper - [00:09:20] Developer Experience - [00:11:46] 3 Core Dimensions - [00:15:03] _____ Margaret-Anne Storey's BioMargaret-Anne (Peggy) Storey is a professor of computer science at the University of Victoria and holds a Canada Research Chair in human and social aspects of software engineering. Her research focuses on improving processes, tools, communication, and collaboration in software engineering. She serves as chief scientist at DX and consults with Microsoft to improve developer productivity. Abi Noda's BioAbi Noda is the founder and CEO at DX, where he leads the company's strategic direction and R&D efforts. His work focuses on developing measurement methods to help organizations improve developer experience and productivity. Before joining DX, Noda held engineering leadership roles at various companies and founded Pull Panda, which was acquired by GitHub in 2019. For more information, visit his website at abinoda.com. Follow Margaret: LinkedIn – linkedin.com/in/margaret-anne-storey-8419462/ Twitter – @margaretstorey Follow Abi: LinkedIn – linkedin.com/in/abinoda/ Twitter – @abinoda Newsletter – newsletter.abinoda.com _____ Our Sponsors Are you looking for a new cool swag? Tech Lead Journal now offers you some swags that you can purchase online. 
These swags are printed on-demand based on your preference, and will be delivered safely to you all over the world where shipping is available. Check out all the cool swags available by visiting techleadjournal.dev/shop. And don't forget to show them off once you receive any of those swags. Like this episode? Show notes & transcript: techleadjournal.dev/episodes/134. Follow @techleadjournal on LinkedIn, Twitter, and Instagram. Buy me a coffee or become a patron.
"Developer experience is an approach to thinking about engineering excellence and maximizing engineering performance by increasing the capacity and performance of the individuals and the team as a whole." Today's clip is from Tech Lead Journal episode 112 with Abi Noda, the CEO & co-founder of DX. In this clip, Abi shared what developer experience is, why it is becoming an industry trend, and the different ways it is being implemented in the industry. Abi explained why the traditional metrics normally used to measure developer productivity do not really work and can even create perverse incentives. Abi then touched on two popular research publications widely known in the industry: the DORA report and the SPACE framework. Listen out for: Developer Productivity Industry Trend - [00:00:26] Developer Experience for Developers - [00:02:40] Different Names of Developer Experience - [00:04:42] Traditional Metrics - [00:08:27] DORA & SPACE - [00:12:28] _____ Abi Noda's BioAbi is the founder and CEO of getdx.com, which helps engineering leaders measure and improve developer experience. Abi formerly founded Pull Panda, which was acquired by GitHub. Follow Abi: LinkedIn – linkedin.com/in/abinoda Twitter – @abinoda Website – abinoda.com DX – getdx.com Software Engineering Research – abinoda.substack.com _____ Our Sponsors Are you looking for a new cool swag? Tech Lead Journal now offers you some swags that you can purchase online. These swags are printed on-demand based on your preference, and will be delivered safely to you all over the world where shipping is available. Check out all the cool swags available by visiting techleadjournal.dev/shop. And don't forget to show them off once you receive any of those swags. Like this episode? Show notes & transcript: techleadjournal.dev/episodes/112. Follow @techleadjournal on LinkedIn, Twitter, and Instagram. Buy me a coffee or become a patron.
Christopher Sanson is a product manager at Airbnb who is dedicated to enhancing developer productivity and tooling. Today, we learn more about Airbnb's developer productivity team and how various teams use metrics, both within and outside the organization. From there, we dive even deeper into their measurement journey, highlighting their implementation of DORA metrics and the challenges they overcame throughout the process.Discussion points: (2:43) Who is the developer productivity customer (4:49) The evolution of developer productivity at Airbnb (9:26) Approach before DORA metrics (14:29) Getting buy-in for DORA metrics (17:49) Planning how to deliver new metrics to the organization (21:12) How Airbnb calculates deployment frequency (23:29) Implementing a proof of concept (27:20) Statistical measurement strategies and tactics (31:11) Operationalizing developer productivity metrics (34:26) How Airbnb reviews data (35:41) How Airbnb uses DORA metrics Mentions and links: Christopher Sanson on LinkedIn Christopher's talk at DPE Summit How Top Companies Measure Developer Productivity
In this episode, Abi speaks with Ana Petkovska, who is currently leading the developer experience team at Nexthink. Ana takes us through her journey of leading a DevOps team that underwent multiple transformations. She explains how her team went from being a DevOps team to EngProd and eventually DevEx. Ana elaborates on her team's challenges and the reasons behind the shift in focus. She also shares how she discovered EngProd and used data from companies like Google to convince her company to invest in EngProd. Finally, Ana explains how DevEx came into the picture and changed how her team approaches and measures their work.Discussion points: (00:28) Creating and leading a DevOps team (05:04) Shifting from DevOps to EngProd (07:28) Inspiration from Google (10:05) Building the case for EngProd (13:42) Ratio of engineers to DevEx engineers (15:10) Team mission and charter (16:53) Learning about DevEx (20:05) The difference between EngProd and DevEx (22:32) Nexthink's focus today Mentions and links: Ana Petkovska on LinkedIn Engineering Productivity @Google (Michael Bachman)
In this episode, Abi chats with Grant Jenks, Senior Staff SWE, Engineering Insights @ LinkedIn. They dive into LinkedIn's developer insights platform, iHub, and its backstory. The conversation covers qualitative versus quantitative metrics, sharing concerns about these terms and exploring their correlation. The episode wraps up with technical topics like winsorized means, thoughts on composite scores, and ways AI can benefit developer productivity teams.(1:10) Insights in the productivity space(7:13) LinkedIn's metrics platform, iHub(12:52) Making metrics actionable(15:35) Choosing the right and wrong metrics(19:39) The difficulty of answering simple questions(26:23) Top-down vs. bottom-up approach to metrics(32:12) Winsorized mean and selecting measurements(39:25) Using composite metrics(46:57) Using AI in developer productivity
Find out the best way to measure Developer Experience
This week Adam is joined by Abi Noda, founder and CEO of DX, to talk about DX, AKA DevEx (or the long-form Developer Experience). Since the dawn of software development there has been a push to understand what makes software teams efficient and, more importantly, what it takes to understand developer productivity. That's what Abi has been focused on for the better part of the last 8 years of his career. He started a company called Pull Panda that was acquired by GitHub, spent a few years there working on this problem, then went out on his own to start DX, which helps companies from startups to the Fortune 500 gather real insights that lead to real improvement.