In today's session, we are thrilled to be joined by Chris Hood, a distinguished keynote speaker and author of "Customer Transformations," who brings his expertise on merging customer success with digital strategy. This is a topic that is not just relevant, but crucial to developers, business strategists, and tech enthusiasts interested in customer success, digital strategy, and DevOps. Chris will explore the seven-step strategy outlined in his book, which emphasizes solving customer problems and fostering collaboration between business and developer teams. As part of our discussion, we'll explore the importance of understanding customer needs, the evolving landscape of APIs and security, and how to integrate customer-centric thinking into organizational culture. We also highlight these strategies' real-world implications, citing personal experiences and illustrating the critical impact of latency on app performance. Join us as we uncover the layers of customer journeys, marketing strategies, and ecosystem building, all of which aim to place the customer at the forefront of your development efforts. Whether you're a developer, business strategist, or tech enthusiast, this episode promises a wealth of knowledge to help you elevate your DevOps toolchain. Try out SmartBear's Bugsnag for free, today. No credit card required: https://testguild.me/bugsnag
Today, we're diving deep into the intersection of legal expertise and cybersecurity with our special guest, Jonathan Steele. Jonathan, a family law attorney turned cybersecurity expert and founder of Steele Fortress, brings a unique perspective by leveraging his legal background to address pressing privacy and security concerns. Jonathan illuminates the increasing accountability of companies for data breaches, offering actionable advice on minimizing risks, including data encryption and using machine learning firewalls. We also delve into the complexities of liability for data leaks, the importance of collecting minimal customer data, and the role of AI in both the legal and cybersecurity fields. Jonathan's insights into identifying data leaks and his emphasis on the necessity for lawyers to adapt to AI advancements are not just informative, but also serve as a motivating call to action to stay competitive in this dynamic landscape. Join us as we unravel the layers of cybersecurity with a legal twist and learn how to protect your data effectively. Try out SmartBear's Bugsnag for free, today. No credit card required: https://testguild.me/bugsnag
In this episode, Peter McKee, Vice President of Developer Relations and Community at Sonar, draws on over 25 years of industry experience to take us through the intricate balance between writing quality, maintainable code and coping with the pressures of rapid development. We'll explore the potential and risks of using generative AI (GenAI) code generation tools, emphasizing the critical need for static code analysis to catch vulnerabilities early. Peter also shares his predictions on AI's role in security over the next few years and stresses the importance of a vigilant approach when adopting cloud services. You'll hear valuable insights on tackling technical debt, integrating security practices into everyday coding, and the role of open-source tools in project security. Plus, we'll discuss how tools from Sonar can assist developers in maintaining high-quality, secure code through static analyzers, linters, and IDE integrations. This episode is packed with actionable advice for DevOps teams aiming to prioritize code quality and security without compromising speed, equipping you with the knowledge to enhance your software development lifecycle and protect your projects' integrity. Try out SmartBear's Bugsnag for free, today. No credit card required: https://testguild.me/bugsnag
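To make the "catch vulnerabilities early" idea concrete, here is a minimal sketch of what a static analyzer does (this is an illustration only, not Sonar's implementation or rule set): it inspects the parsed source, never runs it, and flags risky constructs such as `eval` before they reach production.

```python
import ast

# Illustrative static-analysis sketch: walk a Python AST and flag
# risky calls (eval/exec) without ever executing the code.
RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(source: str) -> list[tuple[int, str]]:
    """Return (line_number, function_name) for each risky call found."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

snippet = "user_input = input()\nresult = eval(user_input)\n"
print(find_risky_calls(snippet))  # [(2, 'eval')]
```

Real analyzers apply hundreds of such rules plus data-flow analysis, but the principle is the same: findings surface at review time, not at runtime.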
Today, we have a special guest – Jay Aigner, a seasoned software testing and quality assurance expert. Jay brings a wealth of knowledge from his experience founding and running a top-tier QA agency. In this episode, we delve into topics highly relevant to your daily work in DevOps software testing and quality assurance. We'll discuss the importance of maintaining a paper trail for daily updates, the intricate process of evaluating and selecting automation tools, and the dynamic nature of tool selection. We'll also explore the significance of proofs of concept (POCs), the challenges in integrating automation into software development, and the critical role of communication and alignment within organizations. Jay shares practical insights on balancing manual and automated testing, navigating common pitfalls in CI/CD pipelines, and the evolving landscape of QA, including the impact of AI and future trends. Whether you're dealing with poor releases, bandwidth issues, or need expert advice on tool selection and implementation, this episode is packed with actionable takeaways to help enhance your QA processes. Try out SmartBear's Bugsnag for free, today. No credit card required. https://links.testguild.com/bugsnag
Today, we're focusing on shift-left strategies for sensitive data protection and privacy compliance. We'll also spotlight an AI-driven security solution called Hound Dog AI. The company's founder, Amjad Afanah, will join us. He brings a wealth of knowledge from his extensive background in cybersecurity. In this episode, we'll explore how Hound Dog AI takes a proactive stance in preventing PII leaks and ensuring compliance with regulations like GDPR. Amjad will share insights on the different types of PII leaks, the importance of protecting sensitive data even in development phases, and how their solution seamlessly integrates with major CI pipelines. We'll also discuss how this tool can significantly save the time and costs associated with remediating data leaks. Its high accuracy in detecting vulnerabilities, supported by advanced AI techniques, is a testament to its efficiency. Amjad underlines the importance of educating DevSecOps teams and of preventive controls in data security. Whether you're a security leader, a developer, or handling privacy concerns at your company, this episode is packed with valuable information. Learn how to try out Hound Dog's free scanner to safeguard your code. Try out SmartBear's Bugsnag for free, today. No credit card required. https://links.testguild.com/bugsnag
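To give a feel for what "preventing PII leaks in CI" means in practice, here is a hypothetical, minimal sketch (not Hound Dog AI's actual engine, which uses far more sophisticated AI-based detection): a pattern scanner that flags likely PII, such as email addresses or US SSNs, in source or log lines so a CI step can fail the build before a leak ships.

```python
import re

# Minimal PII-scanning sketch for a CI step. The patterns and the
# two PII categories here are illustrative assumptions, not a real
# product's detection rules.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_lines(lines):
    """Return (line_number, pii_type) findings across the given lines."""
    findings = []
    for lineno, line in enumerate(lines, start=1):
        for pii_type, pattern in PII_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, pii_type))
    return findings

code = [
    'logger.info("user signed up")',
    'logger.info("debug: jane@example.com")',
    'print("ssn on file: 123-45-6789")',
]
print(scan_lines(code))  # [(2, 'email'), (3, 'ssn')]
```

In a pipeline, a non-empty findings list would typically exit non-zero, blocking the merge until the sensitive value is removed.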
Today, we are honored to be in conversation with Eran Grabiner, a seasoned professional in the field of Product Management, currently serving as the Director at SmartBear. With his rich experience, including a stint at the observability startup Aspecto, Eran brings a wealth of knowledge and insights to our discussion. Learn more about Observability Meets AI: https://testguild.me/bugsnagai In this insightful conversation, we'll dive deep into observability, exploring how different developers utilize various tools and types to monitor software behavior, the role AI plays in enhancing these processes, and how the landscape is evolving with the integration of advanced technologies. Eran provides a glimpse into the future of observability, where AI-driven systems could revolutionize data collection and storage, potentially leading to significant cost reductions and efficiency improvements. He also introduces the intriguing concept of an AI observability copilot, a tool that could assist developers in complex tasks like debugging, all while maintaining a conversational interface. However, Eran also underlines the challenges that come with such advancements, such as data exposure and the need for context and long-term memory in AI models. Throughout the episode, we emphasize AI's transformative power in development, its implications for developers' future roles, and the necessary guardrails to ensure data integrity and security. Join us as we delve into these topics, navigating the pivotal shifts in software development and observability with expert insights from Eran Grabiner. Try out SmartBear's Bugsnag for free, today. No credit card required. https://links.testguild.com/bugsnag
Today, we have a special guest, Harpreet Singh, the co-founder and co-CEO of Launchable. Harpreet joins us to discuss an exciting frontier in software testing—using machine learning to predict failures and streamline the testing process. Imagine a tool that can intelligently shift tests left in the development cycle, prioritize the most critical tests based on risk, and notify you via Slack when issues arise—all while continually learning and improving. This is precisely what Harpreet and his team have achieved with Launchable. Their platform integrates seamlessly into CI/CD pipelines and provides engineering teams with valuable insights to speed up the process of identifying and fixing test failures. In this episode, we'll delve into how Launchable aids in catching issues earlier, the profound influence of AI on the testing landscape, and real-life examples like BMW that illustrate the solution's effectiveness. We'll also explore Harpreet's advice on adopting a targeted approach to AI in DevOps and how their solution alleviates cognitive load, saving developers time and effort. Try out SmartBear's Bugsnag for free, today. No credit card required. https://links.testguild.com/bugsnag
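The core idea behind predictive test selection can be sketched very simply (this is an illustration of the general technique, not Launchable's actual model, which learns from much richer signals such as code changes and test metadata): rank tests by historical failure rate, then run the top slice of the suite first so likely failures surface sooner.

```python
# Predictive test selection sketch: prioritize tests with the highest
# historical failure rate. The history data below is invented for
# illustration.
def prioritize(history, budget):
    """history: {test_name: (failures, runs)}; budget: how many tests to pick."""
    def failure_rate(item):
        _name, (failures, runs) = item
        return failures / runs if runs else 0.0
    ranked = sorted(history.items(), key=failure_rate, reverse=True)
    return [name for name, _ in ranked[:budget]]

history = {
    "test_login": (8, 10),    # fails often: run first
    "test_checkout": (1, 10),
    "test_search": (0, 10),
}
print(prioritize(history, 2))  # ['test_login', 'test_checkout']
```

Even this naive ranking shifts failures left in the cycle; a learned model improves on it by weighing which code changed in the current commit.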
In this episode, we dive deep into the transformative power of artificial intelligence with our guest, Ian Harris. An experienced technology professional, Ian unpacks how AI revolutionizes customer service by understanding sentiment and handling frustrations, ultimately reshaping call center operations. He explores the game-changing impact of AI in software development in DevOps, emphasizing its role in enhancing code reviews and pull requests and automating mundane tasks, freeing up human creativity. Joe and Ian also delve into the practical side of AI in business, discussing powerful tools like Google's Gemini models for data processing and OpenRouter for comparing AI model responses. The conversation doesn't shy away from the challenges, including data security concerns and the need to keep pace with rapid advancements in AI from major players like Google, Amazon, and OpenAI. Whether you're keen on understanding AI's role in communication efficiency, or its potential in creating cost-effective, high-quality podcasts, this episode is packed with insights that you won't want to miss! Try out SmartBear's Bugsnag for free, today. No credit card required. https://links.testguild.com/bugsnag
Welcome to this episode of the DevOps Toolchain podcast! Today, host Joe Colantonio and expert entrepreneur Ken Pomella dive deep into the transformative world of cloud-native technologies and AI. Ken shares his extensive wisdom on navigating the rapid pace of technological advancements, mainly focusing on impactful tools like IoT, cloud-native services, and AI. He praises AWS for its pioneering approach and offers practical advice on leveraging AWS Amplify for newcomers to the DevOps space and Bedrock Studio for those interested in AI solutions. Ken also covers the cost-effectiveness of cloud services, the intricacies of transitioning to cloud-native environments, and the immense potential AI holds in revolutionizing these spaces. Whether you're a seasoned developer or just starting out, this episode is packed with invaluable insights on the future of technology in business. So, tune in as we explore how embracing these advanced technologies can propel your projects forward! Try out SmartBear's Bugsnag for free, today. No credit card required. https://links.testguild.com/bugsnag
Welcome to another episode of the TestGuild DevSecOps News Show. Today, we are privileged to have Jamie George, a seasoned professional in the tech industry and the CEO and co-founder of Codacy, join us. He brings with him a wealth of knowledge and experience to explore the significant role of code quality and standards in today's fast-paced tech environment. Jamie will delve into how Codacy helps developers maintain high security and quality standards despite the pressures to ship quickly. We'll also discuss the integration of AI in coding, the challenges of ensuring security compliance, and Codacy's strategic focus on cloud environments and AI augmentation. Additionally, Jamie will explain how Codacy empowers DevOps teams to manage and improve code quality for better and safer software development. Stay tuned as we explore these topics, ensuring you have the tools and perspectives to enhance your DevSecOps efforts. Try out SmartBear's Bugsnag for free, today. No credit card required.
In this DevOps Toolchain episode, we explore the cutting-edge junction where AI meets software testing. Join host Joe Colantonio, Fitz Nowlan, and Todd McNeal as they unravel SmartBear's game-changing Reflect integration with Zephyr Scale. Discover:
In this episode, we're unwrapping the highlights from Laracon AU, with a special focus on Laravel Pulse leading our discussion. Taylor takes the reins to guide us through the origins and functionality of Laravel Pulse, a health monitoring tool for your Laravel applications. We then shift our discussion to Laravel first-party packages. Taylor openly shares insights into his decision-making process, revealing how he selects packages to join the Laravel family and when it's time to bid them farewell. Our conversation doesn't end there, though. We also look at the future of Laravel and examine the strategies used for continually injecting innovation and fresh ideas into the Laravel ecosystem.
Taylor Otwell's Twitter - https://twitter.com/taylorotwell
Matt Stauffer's Twitter - https://twitter.com/stauffermatt
Laravel Twitter - https://twitter.com/laravelphp
Laravel Website - https://laravel.com/
Tighten.co - https://tighten.com/
Laravel Pulse: https://pulse.laravel.com/
Laracon AU - https://laracon.au/
Bugsnag: https://www.bugsnag.com/
Cashier: https://laravel.com/docs/10.x/billing
Docker: https://www.docker.com
Forge - https://forge.laravel.com/
Herd: https://herd.laravel.com/
Horizon: https://laravel.com/docs/10.x/horizon
Inertia - https://inertiajs.com/
Livewire: https://laravel-livewire.com/
Lumen: https://lumen.laravel.com/docs/10.x
Mix: https://laravel-mix.com/
Next.js: https://nextjs.org/
Passport: https://laravel.com/docs/10.x/passport
Pennant: https://laravel.com/docs/10.x/pennant
Sentry: https://sentry.io/for/php/
Tailwind: https://tailwindcss.com/
Telescope: https://laravel.com/docs/10.x/telescope
Tony Messias Twitter: https://twitter.com/tonysmdev
Valet: https://laravel.com/docs/10.x/valet
Vapor - https://vapor.laravel.com/
-----Editing and transcription sponsored by Tighten.
In this jam-packed episode, we dive deep into the world of app development, exploring the essential choices and tools that shape a successful project from start to finish. Join us as we share our preferred tech stacks for launching a brand new app, discuss the intricacies of hosting and deploying Laravel applications, and explore the myriad of options available. Whether you're a seasoned developer or just embarking on your coding journey, consider this episode your roadmap to cultivating a robust and efficient app development process.
Taylor Otwell's Twitter - https://twitter.com/taylorotwell
Matt Stauffer's Twitter - https://twitter.com/stauffermatt
Laravel Twitter - https://twitter.com/laravelphp
Laravel Website - https://laravel.com/
Tighten.co - https://tighten.com/
Laracon AU - https://laracon.au/
Forge - https://forge.laravel.com/
Livewire: https://laravel-livewire.com/
Inertia - https://inertiajs.com/
Tailwind: https://tailwindcss.com/
Blade - https://laravel.com/docs/10.x/blade
Breeze - https://laravel.com/docs/10.x/starter-kits#laravel-breeze
Jetstream: https://jetstream.laravel.com/
Herd: https://herd.laravel.com/
Valet: https://laravel.com/docs/10.x/valet
Docker: https://www.docker.com
DBngin: https://dbngin.com/
Homebrew: https://brew.sh/
Takeout: https://github.com/tighten/takeout
VS Code: https://code.visualstudio.com/
PhpStorm: https://www.jetbrains.com/phpstorm/
Sublime Text: https://www.sublimetext.com/
Sarah Drasner Night Owl theme: https://vscodethemes.com/e/sdras.night-owl/night-owl
Bugsnag: https://www.bugsnag.com/
Sentry: https://sentry.io/for/php/
Pusher: https://pusher.com/docs/beams/reference/server-sdk-php/
Envoyer - https://envoyer.io/
Vapor - https://vapor.laravel.com/
Postmark: https://postmarkapp.com/send-email/php
GitHub Actions: https://github.com/features/actions
Honeybadger: https://docs.honeybadger.io/lib/php/
Flare: https://flareapp.io/
Chipper CI: https://chipperci.com/
Algolia: https://www.algolia.com/
Oh Dear: https://ohdear.app/
Telescope:
https://laravel.com/docs/10.x/telescope
Horizon: https://laravel.com/docs/10.x/horizon
Papertrail: https://www.papertrail.com
-----Editing and transcription sponsored by Tighten.
In this insightful episode, Justin Collier, Sr. Director of Product Management, and Eran Grabiner, Director of Product Management at SmartBear, delve deep into the world of achieving observability goals with BugSnag. Justin and Eran share their passion for enhancing the end-user experience, emphasizing the pivotal role of observability in understanding and alleviating the issues end users face. Listeners will gain insights into the importance of pinpointing specific errors and performance issues swiftly, a process integral to optimizing both user experience and cost. Discover how, with real-time data at their fingertips, developers and testers can focus on their core competencies, cut costs, and find fulfillment in their roles. Join us for a journey that traverses the landscape of observability, testing, productivity, and business perspectives, offering listeners a panoramic view of the intricate dance between technology and user experience. Learn more about SmartBear's developer-focused observability solutions at https://testguild.com/bugsnag
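"Pinpointing specific errors swiftly" rests on one core mechanic of error-monitoring tools like BugSnag: grouping raw error events into distinct issues. The sketch below is an illustration of that general idea only, not BugSnag's actual grouping algorithm; the fingerprint of error class plus top stack frame is an assumption made for the example.

```python
from collections import Counter

# Error-event grouping sketch: fingerprint each event by error class
# and top stack frame, then count occurrences so the noisiest issues
# surface first. Event fields below are invented for illustration.
def fingerprint(event):
    return (event["error_class"], event["top_frame"])

def top_issues(events, n=3):
    """Return the n most frequent issue fingerprints with their counts."""
    counts = Counter(fingerprint(e) for e in events)
    return counts.most_common(n)

events = [
    {"error_class": "TypeError", "top_frame": "checkout.js:42"},
    {"error_class": "TypeError", "top_frame": "checkout.js:42"},
    {"error_class": "NetworkError", "top_frame": "api.js:7"},
]
print(top_issues(events))
# [(('TypeError', 'checkout.js:42'), 2), (('NetworkError', 'api.js:7'), 1)]
```

Grouping is what turns thousands of raw events into a short, ranked list a developer can actually act on, which is where the time and cost savings discussed in the episode come from.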
2023-05-23 Weekly News - Episode 196
Watch the video version on YouTube at https://youtube.com/live/3F5all2U5Pk?feature=share

Hosts:
- Gavin Pickin - Senior Developer at Ortus Solutions
- Dan Card - Senior Developer at Ortus Solutions

Thanks to our Sponsor - Ortus Solutions
The makers of ColdBox, CommandBox, ForgeBox, TestBox and all your favorite box-es out there.

A few ways to say thanks back to Ortus Solutions:
- Like and subscribe to our videos on YouTube.
- Help Ortus reach for the stars - star and fork our repos. Star all of your GitHub Box dependencies from CommandBox with https://www.forgebox.io/view/commandbox-github
- Subscribe to our podcast on your podcast apps and leave us a review.
- Sign up for a free or paid account on CFCasts, which is releasing new content every week.
- BOXLife store: https://www.ortussolutions.com/about-us/shop
- Buy Ortus's books: 102 ColdBox HMVC Quick Tips and Tricks on Gumroad (http://gum.co/coldbox-tips); Learn Modern ColdFusion (CFML) in 100+ Minutes - free online at https://modern-cfml.ortusbooks.com/ or buy an ebook or paper copy at https://www.ortussolutions.com/learn/books/coldfusion-in-100-minutes
- Patreon support: we have 40 patrons: https://www.patreon.com/ortussolutions

News and Announcements

Adobe ColdFusion 2023 released!
We are thrilled to announce the highly anticipated release of Adobe ColdFusion 2023! Packed with cutting-edge features and enhanced performance, this release takes ColdFusion to new heights of innovation. Experience accelerated development, robust security measures, and seamless integration with modern technologies. From rapid application development to scalable enterprise solutions, Adobe ColdFusion empowers developers to build dynamic web applications with ease. Discover the limitless possibilities and stay ahead in the digital era. Upgrade to the latest version now and harness the true potential of ColdFusion.
Elevate your coding experience with Adobe ColdFusion - the ultimate platform for unmatched productivity and success.
Highlights:
- LDAP and SAML integration
- Central Configuration Server
- GraphQL client
- HTML to PDF
- Cloud Services
- JWT integration in CF
What's new: https://helpx.adobe.com/coldfusion/using/whats-new.html
https://coldfusion.adobe.com/2023/05/coldfusion2023-release/

ICYMI - Into the Box - Recap
- Keynote - Day 1 - https://t.co/42DozsZ0G9
- Keynote - Day 2 - https://youtube.com/live/TOhOaNVy0dM
- Sessions
- Hands-on Pre-Conference
- Happy Box Hackathon

New Releases and Updates

Lots of releases - so many that we are still waiting on the blogs and release notes for a lot of them, but ITB came with ColdBox 7, CommandBox 5.9, TestBox 5, CBWIRE 3, TestBox CLI, ColdBox CLI, Quick, qb, CBQ v1 and v2, cbDebugger 3, and ContentBox 6. We will discuss some of them below.

ColdBox 7 Released
ColdBox 7 has been released! Install it via ForgeBox using `coldbox`. Released at ITB 2023!
What's new with ColdBox 7.0.0?
- Engine support
- ColdBox CLI
- WireBox updates
- Transient request cache
- Delegators
- Property observers
- Lazy properties
- New `onInjectorMissingDependency` event
- Population enhancements (including mass assignment protection)
- Hierarchical injectors (for module dependencies)
- Module config object override files
- App mode helpers
- `redirectBack` included as `back`
- `DateTimeHelper` component
- Whoops! upgrades
- More data for development REST exception responses
- JSON pretty printing in LogBox output
- Exception pretty printing in LogBox output
- Combine `canXXX` checks with logging using callback functions
- `event.setRequestTimeout()` - useful for testing
https://coldbox.ortusbooks.com/v/7.x/intro/release-history/whats-new-with-7.0.0

CBWIRE 3.0.0 Released
We are very excited to announce the release of version 3 of CBWIRE, our ColdBox module that makes building modern, reactive apps a breeze. This version brings with it a new component syntax, 19 enhancements and bug fixes, and improved documentation.
Our biggest goal with this release was to improve the developer experience and to provide a low barrier to entry for getting started with CBWIRE.
https://www.ortussolutions.com/blog/cbwire-300-released

TestBox v5.0.0 Released!
We are excited to announce the release of TestBox version 5, which brings a host of new features and improvements for developers. TestBox is a powerful and flexible tool that helps developers write comprehensive BDD/TDD tests for their applications, ensuring code quality and reducing the likelihood of bugs and errors. With TestBox v5, developers can take advantage of new features such as batch code coverage testing, improved reporting capabilities, method spies, and better integration with other tools in the Ortus suite. These new features make TestBox even more versatile and user-friendly, and provide developers with a powerful tool for building high-quality, reliable applications.
https://www.ortussolutions.com/blog/testbox-v500-released

FusionReactor 10 released, May 18
If you're using FusionReactor, note that a new version 10 (10.0.0) was released yesterday, May 18. While it's a new major release number, most of the items listed as new aren't really things that you will "see" as changed in the interface. I don't quite want to call it just "plumbing" -- the folks had their reasons to regard the new and changed features as warranting the major version number increase.
https://www.carehart.org/blog/2023/5/19/fusionreactor_10_0_released/
https://docs.fusion-reactor.com/release-notes/

ColdBox CLI 1.x Released
We are thrilled to announce the release of our new ColdBox CLI tool! This powerful command-line interface is designed to help developers streamline their workflows and simplify their ColdBox development experience. With its intuitive syntax and powerful capabilities, the ColdBox CLI tool allows developers to easily create, test, and deploy ColdBox applications with just a few simple commands.
Whether you are a seasoned ColdBox developer or just getting started with this powerful framework, the ColdBox CLI tool is the perfect addition to your toolkit. This tool used to be embedded in the CommandBox core, but it now has a new home (https://github.com/ColdBox/coldbox-cli) and can have its own life-cycles, including LTS support for our ColdBox Framework.
https://www.ortussolutions.com/blog/coldbox-cli-1x-released

ICYMI - TestBox CLI 1.x Released
We're excited to unveil our latest TestBox CLI tool! This robust command-line interface is specifically crafted to assist developers in streamlining their workflows and enhancing their TestBox BDD/TDD development process. Boasting an intuitive syntax and potent functionalities, the TestBox CLI tool empowers developers to create, test, and generate reports on their ColdFusion (CFML) applications with ease, using only a handful of commands. Whether you're a seasoned ColdFusion (CFML) developer or a newcomer to this potent framework, the TestBox CLI tool is a valuable asset to add to your toolkit. This tool used to be embedded in the CommandBox core, but it now has a new home (https://github.com/ortus-solutions/testbox-cli) and can have its own life-cycles.
https://www.ortussolutions.com/blog/testbox-cli-1x-released

New Ortus-supported ORM extension for Lucee.
Other releases: cbDebugger 3, ContentBox 6

Webinars / Meetups and Workshops

POSTPONED - Adobe - Road to Fortuna Series: ColdFusion 2023 in Docker on Google Cloud Platform
May 23, 2023 - MAYBE IN JUNE
10 AM - 11 AM PT
During this GCP-centric webinar, Mark Takata will explore how to run a containerized ColdFusion 2023 server on Google Cloud Platform's Kubernetes-powered containerization system. He will demonstrate how the powerful new Google Cloud Platform features added to ColdFusion 2023 can help optimize application development, provisioning, and delivery.
This will be the first time ColdFusion 2023 will be shown running in containers publicly, and the session is designed to showcase the ease of working in this popular method of software delivery.
Speaker - Mark Takata - ColdFusion Technical Evangelist, Adobe
https://docker-gcp-coldfusion.meetus.adobeevents.com/

CFCasts Content Updates
https://www.cfcasts.com
Recent releases:
- 2023 ForgeBox Module of the Week series - 1 new video: https://cfcasts.com/series/2023-forgebox-modules-of-the-week
- 2023 VS Code Hint, Tip, and Trick of the Week series - 1 new video: https://cfcasts.com/series/2023-vs-code-hint-tip-and-trick-of-the-week
- Just added: 2019 Into the Box videos
Watch sessions from previous ITB years:
- Into the Box 2022 - https://cfcasts.com/series/itb-2022
- Into the Box 2021 - https://cfcasts.com/series/into-the-box-2021
- Into the Box 2020 - https://cfcasts.com/series/itb-2020
- Into the Box 2019 - https://cfcasts.com/series/into-the-box-2019
Coming soon:
- Into the Box 2023 videos will soon be available for purchase as an EXCLUSIVE PREMIUM package. Subscribers will get access to premium packages after a 6-month exclusive window.
- More ForgeBox and VS Code podcast snippet videos
- ColdBox Elixir from Eric
- Getting Started with Inertia.js from Eric
- 10 Testing Techniques by Dan?
- Feature Testing Deployment with Docker by Dan?

Conferences and Training

ICYMI - Into the Box 2023 - 10th Edition
May 17-19, 2023
The conference will be held in The Woodlands (Houston), Texas. This year we will continue the tradition of training and offering a pre-conference hands-on training day on May 17th and our live mariachi band party! However, we are back to our Spring schedule and beautiful weather in The Woodlands! Also, 2023 will mark our 10-year anniversary, so we might have two live bands and much more!
IN PERSON ONLY
https://intothebox.org
https://itb2023.eventbrite.com/
Can't wait?
Watch videos from the last 4 years on CFCasts:
- Into the Box 2022 - https://cfcasts.com/series/itb-2022
- Into the Box 2021 - https://cfcasts.com/series/into-the-box-2021
- Into the Box 2020 - https://cfcasts.com/series/itb-2020
- Into the Box 2019 - https://cfcasts.com/series/into-the-box-2019

THIS WEEK - VueConf.us
New Orleans, LA - May 24-26, 2023
Jazz. Code. Vue.
Workshop day: May 24
Main conference: May 25-26
https://vueconf.us/

CFCamp - Pre-Conference - Ortus has 4 trainings
June 21st, 2023
Held at the CFCamp venue at the Marriott Hotel Munich Airport in Freising.
- Eric - TestBox: Getting Started with BDD-TDD, Oh My!
- Luis - ColdBox 7 - From Zero to Hero
- Dan - Legacy Code Conversion to the Modern World
- Brad - CommandBox Server Deployment for the Modern Age
https://www.cfcamp.org/pre-conference.html

CFCamp
June 22-23rd, 2023
Marriott Hotel Munich Airport, Freising
Check out all the great sessions: https://www.cfcamp.org/sessions.html
Check out all the great speakers: https://www.cfcamp.org/cfcamp-conference-2023/speakers.html
Register now: https://www.cfcamp.org/

THAT Conference
Howdy. We're a full-stack, tech-obsessed community of fun, code-loving humans who share and learn together. We geek out in Texas and Wisconsin once a year, but we host digital events all the time.
Wisconsin Dells, WI / July 24th-27th, 2023
A four-day summer camp for developers passionate about learning all things mobile, web, cloud, and technology.
https://that.us/events/wi/2023/
Our very own Daniel Garcia is speaking there: https://that.us/activities/R3eAGT1NfIlAOJd2afY7

Adobe CF Summit West
Las Vegas, 2-4th of October.
Get your early bird passes now. Session passes @ $99. Professional passes @ $199.
Only till May 31st, 2023!
Can you spot ME - Gavin? Apparently I'm in 3 of the photos!
Call for Speakers is OPEN
https://cfsummit.adobeevents.com/
https://cfsummit.adobeevents.com/speaker-application/

Ortus Training - ColdBox Zero to Hero
Dates and Venue

More conferences
Need more conferences? This site has a huge list of conferences for almost any language/community: https://confs.tech/

Blogs, Tweets, and Videos of the Week

5/10/23 - Blog - Ben Nadel - Using BugSnag As A Server-Side Logging Service In ColdFusion
I've been on the lookout for a better error logging service; and, over on Facebook, Jay Bronson recommended that I look at BugSnag. They have a free tier, so I signed up to try it out. And, I must say, I'm very pleased with the user interface (UI) and the basic functionality. That said, I could not get the Java SDK (Software Development Kit) working with JavaLoader. As such, I hacked together some ColdFusion code that would do just enough to send data to the BugSnag API. What I have is far from feature complete; but, I thought it might be worth sharing.
https://www.bennadel.com/blog/4462-using-bugsnag-as-a-server-side-logging-service-in-coldfusion.htm

5/11/23 - Blog - Luis Majano - TestBox v5.0.0 Released!
We are excited to announce the release of TestBox version 5, which brings a host of new features and improvements for developers. TestBox is a powerful and flexible tool that helps developers write comprehensive BDD/TDD tests for their applications, ensuring code quality and reducing the likelihood of bugs and errors.
With TestBox v5, developers can take advantage of new features such as batch code coverage testing, improved reporting capabilities, method spies, and better integration with other tools in the Ortus suite. These new features make TestBox even more versatile and user-friendly, and provide developers with a powerful tool for building high-quality, reliable applications.
https://www.ortussolutions.com/blog/testbox-v500-released

5/12/23 - Blog - Brian - Why You Don't Want To Use CFMX_COMPAT Encryption
This is the first of what may be a couple of posts about my presentation from ColdFusion Summit East 2023, which was held in April in Washington, DC. Let's talk about ColdFusion and encryption. Specifically -- about the CFMX_COMPAT algorithm. The encrypt() function was introduced in ColdFusion 4 (ca. November 1998), and CFMX_COMPAT was the only algorithm available. The release of ColdFusion 7 (ca. February 2005) added native support for AES, 3DES, DES, and Blowfish. But CFMX_COMPAT remains the default algorithm used by the encrypt() function.
https://hoyahaxa.blogspot.com/2023/05/why-you-dont-want-to-use-cfmxcompat.html

5/13/23 - Blog - Nolan Erck - Speaking at Into The Box 2023
It's official... next week I'll be speaking at Into The Box in Houston! If you're not already familiar with it, Into The Box is the most modern-leaning conference for CFML! But really, the CFML-specific portion is complemented by a heavy dose of content that is applicable to many other platforms. A quick look at the agenda will show you sessions ranging from web security, to AWS pub/sub mechanisms, to OAuth and more!
https://southofshasta.com/blog/speaking-at-into-the-box-2023/

5/14/23 - Blog - Ben Nadel - Maintaining White Space Using jSoup And ColdFusion
jSoup is a Java library for parsing and manipulating HTML strings. For the last few years, I've been using jSoup to clean up and normalize my blog posts. And now, I'm looking to use jSoup to help me transform and cache GitHub Gists.
At the time of this writing, Gist code is rendered in an HTML table with cells that use white-space: pre as the means of controlling white space output. jSoup doesn't parse the CSS; so, it doesn't understand that it needs to maintain this white space when serializing the document back into HTML. If we want to keep this white space in the resultant document, we have to disable pretty printing.
https://www.bennadel.com/blog/4463-maintaining-white-space-using-jsoup-and-coldfusion.htm

5/16/23 - Blog - Adobe ColdFusion Portal - Introducing the 2023 Release of Adobe ColdFusion
We are thrilled to announce the highly anticipated release of Adobe ColdFusion 2023! Packed with cutting-edge features and enhanced performance, this release takes ColdFusion to new heights of innovation.
https://coldfusion.adobe.com/2023/05/coldfusion2023-release/

5/16/23 - Blog - Luis Majano - Ortus Solutions - ColdBox 7.0.0 Released
Introducing ColdBox 7: Revolutionizing Web Development with Cutting-Edge Features and Unparalleled Performance. We are thrilled to announce the highly anticipated release of ColdBox 7, the latest version of the acclaimed web development HMVC framework for ColdFusion (CFML). ColdBox 7 introduces groundbreaking features and advancements, elevating the development experience to new heights and empowering developers to create exceptional web applications and APIs. Designed to meet the evolving needs of modern web development, ColdBox 7 boasts a range of powerful features that streamline the development process and enhance productivity.
With its robust HMVC architecture and developer-friendly tools, ColdBox 7 enables developers to deliver high-performance, scalable, and maintainable web applications and APIs with ease.
https://www.ortussolutions.com/blog/coldbox-700-released

5/16/23 - Blog - Ben Nadel - Parsing GitHub Gist Embeds Into A Normalized Data Structure Using jSoup In ColdFusion
As I mentioned yesterday, I've been using GitHub Gists to add the syntax highlighting / formatting in my blog post content. This has been working great; but, I've never liked the idea of having to reach out to a 3rd-party system at render time in order to provide my full content experience. As such, I've been considering ways to cache the GitHub Gist data locally (in my system) for both better control and better performance. Unfortunately, GitHub Gists aren't provided in the most user-friendly format. To that end, we can use jSoup in ColdFusion to read in, parse, and normalize the Gist contents.
https://www.bennadel.com/blog/4464-parsing-github-gist-embeds-into-a-normalized-data-structure-using-jsoup-in-coldfusion.htm

5/16/23 - Blog - Nolan Erck - My Into The Box 2023 Schedule
Into The Box 2023 starts tomorrow! After a flight that included several delays, I finally arrived at the hotel a few minutes ago. As per usual, there is a ton of great content this year; deciding which sessions to attend is like the techie equivalent of Sophie's Choice! Here's my best guess as to where you can find me: Wednesday: Async Programming & Scheduling workshop
https://southofshasta.com/blog/my-into-the-box-2023-schedule/

5/17/23 - Blog - Charlie Arehart - ColdFusion 2023 released, May 17 2023: resources and thoughts
ColdFusion 2023 has been released today, May 17 2023.
For more on the many features, see the following several Adobe blog posts and the substantial documentation resources they also released today, about which I offer some additional comment below. I also discuss changes in OS support (saving you having to compare the docs discussing that), as well as the change to CF2023 running on Java 17 (which you could miss, as it's not highlighted by Adobe in any of the announcement resources). I also discuss changes in the licensing document/EULA (again, to save you having to do that comparison), as well as an observation about pricing (it has not changed since CF2021). I also discuss some migration considerations and close by pointing out the Hidden Gems in CF2023 talk that I did, based on the prerelease. I plan to update that in time based on this final release.
https://www.carehart.org/blog/2023/5/17/cf2023_released/

5/18/23 - Blog - Ben Nadel - Using CSS Flexbox To Create A Simple Bar Chart In ColdFusion
I'm a huge fan of CSS Flexbox layouts. They're relatively simple to use and there's not much to remember in terms of syntax. One place that I love using Flexbox is when I need to create a simple bar chart. I don't do much charting in my work, so I never have need to pull in large, robust libraries like D3. But, for simple one-off visualizations, CSS Flexbox is my jam. I thought it might be worth sharing a demo of how I do this in ColdFusion.
https://www.bennadel.com/blog/4466-using-css-flexbox-to-create-a-simple-bar-chart-in-coldfusion.htm

5/18/23 - Blog - Charlie Arehart - FusionReactor 10 released, May 18: resources and thoughts
If you're using FusionReactor, note that a new version 10 (10.0.0) released yesterday, May 18. While it's a new major release number, most of the items listed as new aren't really things that you will "see" as changed in the interface.
I don't quite want to call it just "plumbing" -- the folks had their reason to regard the new and changed features as warranting the major version number increase. For more, read on. Of course, I had just last week blogged on the release of FR 9.2.2, released March 1. I'm not letting as much time pass with this post. :-)
https://www.carehart.org/blog/2023/5/19/fusionreactor_10_0_released/

5/22/23 - Blog - Grant Copley - CBWIRE 3.0.0 Released
We are very excited to announce the release of version 3 of CBWIRE, our ColdBox module that makes building modern, reactive apps a breeze. This version brings with it a new component syntax, 19 enhancements and bug fixes, and improved documentation. Our biggest goal with this release was to improve the developer experience and to provide a low barrier to entry for getting started with CBWIRE.
https://www.ortussolutions.com/blog/cbwire-300-released

CFML Jobs
Several positions available on https://www.getcfmljobs.com/
Listing over 67 ColdFusion positions from 43 companies across 32 locations in 5 countries.
4 new jobs listed this week

Full-Time - ColdFusion Programmer at Tulsa, OK - United States - May 23
https://www.getcfmljobs.com/jobs/index.cfm/united-states/ColdFusion-Programmer-at-Tulsa-OK/11575

Full-Time - ColdFusion Engineer at Remote - United States - May 21
https://www.getcfmljobs.com/jobs/index.cfm/united-states/ColdFusionEngineer-at-Remote/11574

Full-Time - ColdFusion Lead at Pune, Maharashtra - India - May 11
https://www.getcfmljobs.com/jobs/index.cfm/india/ColdFusion-Lead-at-Pune-Maharashtra/11573

Full-Time - ColdFusion Developer at Pune, Maharashtra - India - May 09
https://www.getcfmljobs.com/jobs/index.cfm/india/ColdFusion-Developer-at-Pune-Maharashtra/11571

Other Job Links
There is a jobs channel in the CFML Slack team, and now in the Box team Slack too.

ForgeBox Module of the Week
TestBox
TestBox is a Behavior Driven Development (BDD) and Test Driven Development (TDD) framework for ColdFusion (CFML).
It also includes mocking and stubbing capabilities via its internal MockBox library.

V5 Release Notes
We are excited to announce the release of TestBox version 5, which brings a host of new features and improvements for developers. TestBox is a powerful and flexible tool that helps developers write comprehensive BDD/TDD tests for their applications, ensuring code quality and reducing the likelihood of bugs and errors. With TestBox v5, developers can take advantage of new features such as batch code coverage testing, improved reporting capabilities, method spies, and better integration with other tools in the Ortus suite. These new features make TestBox even more versatile and user-friendly, and provide developers with a powerful tool for building high-quality, reliable applications. You can read more about TestBox in our comprehensive documentation online: https://testbox.ortusbooks.com/
https://www.forgebox.io/view/testbox

VS Code Hint Tips and Tricks of the Week
Visual Studio Code Remote - SSH - Preview
By Microsoft
The Remote - SSH extension lets you use any remote machine with an SSH server as your development environment. This can greatly simplify development and troubleshooting in a wide variety of situations. You can:
Develop on the same operating system you deploy to, or use larger, faster, or more specialized hardware than your local machine.
Quickly swap between different, remote development environments and safely make updates without worrying about impacting your local machine.
Access an existing development environment from multiple machines or locations.
Debug an application running somewhere else such as a customer site or in the cloud.
No source code needs to be on your local machine to gain these benefits since the extension runs commands and other extensions directly on the remote machine.
You can open any folder on the remote machine and work with it just as you would if the folder were on your own machine.
https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh
Works well with: Visual Studio Code Remote - SSH: Editing Configuration Files
https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh-edit

Thank you to all of our Patreon Supporters
These individuals are personally supporting our open source initiatives to ensure the great tooling like CommandBox, ForgeBox, ColdBox, ContentBox, TestBox and all the other boxes keep getting the continuous development they need, and funding the cloud infrastructure that our community relies on, like ForgeBox for our Package Management with CommandBox. You can support us on Patreon here: https://www.patreon.com/ortussolutions
Don't forget, we have Annual Memberships; pay for the year and save 10% - great for businesses.
Bronze Packages and up now get ForgeBox Pro and CFCasts subscriptions as a perk for their Patreon subscription.
All Patreon supporters have a Profile badge on the Community Website.
All Patreon supporters have their own Private Forum access on the Community Website.
All Patreon supporters have their own Private Channel access on BoxTeam Slack.
https://community.ortussolutions.com/

Top Patreons (proficient)
John Wilson - Synaptrix
Tomorrows Guides
Jordan Clark
Gary Knight
Mario Rodrigues
Giancarlo Gomez
David Belanger
Dan Card
Jeffry McGee - Sunstar Media
Dean Maunder
Nolan Erck
Abdul Raheen
And many more Patreons

You can see an up to date list of all sponsors on Ortus Solutions' website:
https://ortussolutions.com/about-us/sponsors
Thanks everyone!!!

★ Support this podcast on Patreon ★
More Than Just Code podcast - iOS and Swift development, news and advice
This week we discuss the new M2 Max, M2 Pro and Mac mini, and MacBook Pros 14 & 16. We follow up on Stable Diffusion, ChatGPT, and updated Apple Design Resources. We also cover augmenting accessibility with localized image names and the 2nd generation HomePod. In our Picks: Improving Console Output, SwiftUI Views Life Cycle, SwiftUI 4 adds tap location, DIY iOS Static Analysis, Gitignore.io, Getting Started with Xcode Cloud, and How to professionally say...
Lots of technical talk in this episode as your nice hosts wade into these two large topics!

Flappy Dragon - Google Play Store

Cameras 0:09:27 - Stephen McGregor
Game Design, Programming
Linear Interpolation - Wikipedia
Super Mario World Camera Logic Review - Shaun Inman, YouTube
Camera Movements for 2D Platformers: How Do I Know Which One to Choose - Sam Hu, Sam J H Hu
Pro Camera 2D - Luís Pedro Fonseca, Unity Asset Store

Bug Tracking and Triage 0:31:52 - Ellen Burns-Johnson
Production, Programming
Parkinson's Law - Wikipedia
Responsible Bug Reporting and Triage - SmartBear
Auction-based serious game for bug tracking - Çağdaş Üsfekes, Eray Tüzün, Murat Yılmaz, Yagup Macit, Paul Clarke, The Institution of Engineering and Technology
Bug Triaging Principles - Bugsnag
Game Development Essentials: Bugtracking (or how we ended up writing our own bu… - Andre Weissflog, The Brain Dump
Production Testing and Bug Tracking - Jamie Fristrom, Game Developer
What makes a good bug report? - Nick Barrett, GamesIndustry.biz
Azure DevOps - Microsoft

Ellen mentioned Agile Development in this episode: Agile Development
We also talked about Bugs in one of our first episodes, "Bananas, from here to eternity."
In this week's episode, we're joined by Kirti Dewan, VP of Marketing at Bugsnag (a SmartBear Company), who has had a career as a marketing leader at different technology companies for almost a decade. Kirti takes us through her journey as a marketing leader transitioning from large, in-house work to fast-paced start-ups, a few years consulting, and back to tech start-ups. She also talks about her roles leading marketing teams and how she has taken all of her cumulative experience to build strong value-based teams that do amazing work, without burnout. So, if you're in a leadership position (or hope to be one day) and are looking to learn more about hiring practices, this is definitely the episode for you! And be sure to sync up with Kirti on LinkedIn (https://www.linkedin.com/in/kirti-dewan-496441/).
In this week's episode, we spoke about the recent outages that we've seen around the industry and how organizations can use defensive programming to prevent these software bugs from crashing applications. Joining us is James Smith, the senior vice president of products for the Bugsnag group at SmartBear.
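The episode blurb mentions defensive programming without showing what it looks like in code. As a hedged illustration (not from the episode; the helper names and the FakeClient stub are invented for this example), here is a small Ruby sketch of the style: validating inputs at the boundary and containing failures from an unreliable dependency so they degrade gracefully instead of crashing the application.

```ruby
# A sketch of defensive programming: guard clauses, safe defaults,
# and containing failures from an unreliable dependency.
# All names here are invented for illustration.

def parse_amount(raw)
  # Validate input at the boundary instead of letting bad data propagate.
  value = Float(raw) rescue nil
  return nil if value.nil? || value.negative?
  value
end

def convert_to_usd(amount, client)
  # Contain third-party failures: a crash in the rate service should
  # degrade gracefully, not take the whole request down.
  rate = begin
    client.usd_rate
  rescue StandardError
    nil
  end
  return nil if rate.nil?
  (amount * rate).round(2)
end

# Usage with a stub in place of a real rate-service client:
FakeClient = Struct.new(:usd_rate)
puts parse_amount("19.99")                      # prints 19.99
puts parse_amount("not a number").inspect       # prints nil
puts convert_to_usd(10.0, FakeClient.new(1.25)) # prints 12.5
```

The point of the pattern is that every caller of these helpers can rely on getting either a valid value or nil, so one flaky dependency or malformed input never turns into an unhandled exception in production.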
Nate Berkopec is the author of the Complete Guide to Rails Performance, the creator of the Rails Performance Workshop, and the co-maintainer of Puma. He talks with Steph about being known as "The Rails Speed Guy," and how he ended up with that title, publishing content, working on workshops, and also contributing to open source projects. (You could say he's kind of a busy guy!)

Speedshop (https://www.speedshop.co/)
Puma (https://github.com/puma/puma/commits/master?author=nateberkopec)
The Rails Performance Workshop (https://www.speedshop.co/rails-performance-workshop.html)
The Complete Guide to Rails Performance (https://www.railsspeed.com/)
How To Use Turbolinks to Make Fast Rails Apps (https://www.speedshop.co/2015/05/27/100-ms-to-glass-with-rails-and-turbolinks.html)
Sidekiq (https://sidekiq.org/)
Follow Nate Berkopec on Twitter (https://twitter.com/nateberkopec)
Visit Nate's Website (https://www.nateberkopec.com/)
Sign up for Nate's Speedshop Ruby Performance Newsletter (https://speedshop.us11.list-manage.com/subscribe?u=1aa0f43522f6d9ef96d1c5d6f&id=840412962b)

Transcript:

STEPH: All right. I'll kick us off with our fancy intro. Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari. And this week, Chris is taking a break. But while he's away, I'm joined by Nate Berkopec, who is the owner of Speedshop, a Ruby on Rails performance consultancy. And, Nate, in addition to running a consultancy, you're the co-maintainer of Puma. You're also an author as you wrote a book called The Complete Guide to Rails Performance. And you run the workshop called The Rails Performance Workshop. So, Nate, I'm sensing a theme here. NATE: Yeah, make code go fast. STEPH: And you've been doing that for quite a while, haven't you? NATE: Yeah. It's pretty much been since 2015, or so I think. It all started when I actually wrote a blog post about Turbolinks that got a lot of pick up.
My hot take at the time was that Turbolinks is actually a good thing. That take has since become uncontroversial, but it was quite controversial in 2015. So I got a lot of pick up on that, and I realized I liked working on performance, and people seem to want to hear about it. So I've been in that groove ever since. STEPH: When you started down the path of really focusing on performance, were you running your own consultancy at that point, or were you working for someone else? NATE: I would say it didn't really kick off until I actually published The Complete Guide to Rails Performance. So after that came out, which was, I think, March of 2016…I hope I'm getting that right. It wasn't until after that point when it was like, oh, I'm the Rails performance guy now. And I started getting emails inbound about that. I didn't really have any time when I was actually working on the CGRP to do that sort of thing. I just made that my full-time job to actually write, and market, and publish that. So it wasn't until after that that I was like, oh, I'm a performance consultant now. This is the lane I've driven myself into. I don't think I really had that as a strategy when I was writing the book. I wasn't like, okay, this is what I'm going to do. I'm going to build some reputation around this, and then that'll help me be a better consultant with this. But that's what ended up happening. STEPH: I see. So it sounds like it really started more as a passion and something that you wanted to share. And it has manifested to this point where you are the speed guy. NATE: Yeah, I think you could say that. I think when I started writing about it, I just knew...I liked it. I liked the work of performance. In a lot of ways, performance is a much more concrete discipline than a lot of other sub-disciplines of programming where I joke my job is number go down. It's very measurable, and it's very clear when you've made a difference. You can say, “Hey, this number was this, and now it's this. 
Look what I did.” And I always loved that concreteness of performance work. It makes it actually a lot more like a real kind of engineering discipline where I think of performance engineering as clarifying requirements and the limitations and then building a project that meets the requirements while staying within those limitations and constraints. And that's often not quite as clear for other disciplines like general feature work. It's kind of hard to say sometimes, like, did you actually make the user's life better by implementing such and such? That's more of a guess. That's more of a less clear relationship. And with performance, nobody's going to wake up ten years from today and wish that their app was slower. So we can argue about the relative importance of performance in an application, but we don't really argue about whether or not we made it faster because we can prove that. STEPH: Yeah. That's one area that working with different teams (as I tend to shift the clients that I'm working with every six months) where we often push hard around feature work to say, “How can we measure this? How can we know that we are delivering something valuable to users?” But as you said, that's really tricky. It's hard to evaluate. And then also, when you add on the fact that if I am leaving that project in six months, then I don't have the same insights to understand how something went for that team. So I can certainly appreciate the satisfaction that comes from knowing that, yes, you are delivering a faster app. And it's very measurable, given the time that you're there, whether it's a short time or if it's a long time that you're with that team. NATE: Yeah, totally. My consulting engagements are often really short. I don't really do a lot of super long-term stuff, and that's usually fine because I can point to stuff and say, “Yep. This thing was at A, and now it's at B. 
And that's what you hired me to do, so now it's done." STEPH: I am curious; given that you have so many different facets where you are running your consultancy, you are also often publishing a lot of content and working on workshops and then also contributing to open source projects. What does a typical week look like for you? NATE: Well, right now is actually a decent example. I have client work two or three days a week. And I'm actually working on a new product right now that I'm calling Sidekiq in Practice, which is a course/workshop about scaling Sidekiq from zero to 1000 jobs per second. And I'll spend the other days of the week working on that. My content is...I always struggle with how much time to spend on blogging specifically because it takes so much time for me to come up with a post and publish that. But the newsletter that I write, which I try to write once a week, I haven't been doing so well with it lately. But I think I got 50 newsletters done in 2020 or something like that. STEPH: Wow. NATE: And so I do okay on the per-week basis. And it's all content I've never published anywhere else. So that actually is like 45 minutes of me sitting down on a Monday and being like rant, [chuckles] slam keyboard and rant and then hit send. And my open source work is mostly 15 minutes a day while I'm drinking morning coffee kind of stuff. So I try to spread myself around and do a lot of different stuff. And a lot of that means, I think, pulling back in terms of thinking how much you need to spend on something, especially with newsletters, email newsletters, it was very easy to overthink that and spend a lot of time revising and whatever. But some newsletter is better than no newsletter. And especially when it comes to content and marketing, I've learned that frequency and regularity is more important than each and every post is the greatest thing that's ever come out since sliced bread.
So trying to build a discipline and a practice around doing that regularly is more important for me. STEPH: I like that, some newsletter is better than no newsletter. I was listening to your chat with Brittany Martin on the Ruby on Rails podcast. And you said something very honest that I appreciated where you said, "Writing is really hard, and writing sucks." And that made me laugh in the moment because even though I do enjoy writing, I still find it very hard to be disciplined, to sit down and make it happen. And then you go into that editor mode where you critique everything, and then you never really get it published because you are constantly fixing it. It sounds like...you've mentioned you set aside about 45 minutes on a Monday, and you crank out some work. How do you work through that inner critic? How do you get past it to the point where then you just publish? NATE: You have to separate the steps. You have to not do editing and first drafting at the same time. And the reason why I say it sucks and it's hard is because I think a lot of people don't do a lot of regular writing, maybe get intimidated when they try to start. And they're like, "Wow, this is really hard. This is not fun." And I'm just trying to say that's everybody's experience and if it doesn't get any better, because it doesn't, [chuckles] there's nothing wrong with you, that's just writing, it's hard. For me, especially with the newsletter, I just have to give myself permission not to edit and to just hit send when I'm done. I try to do some spell checking, and that's it. I just let it go. I'm not going back and reading it through again and making sure that I was very clear and cogent in all my points and that there's a really good flow through that newsletter.
I think it comes with a little bit of confidence in your own ideas and your own experience and knowledge, believing that that's worth sharing and that's worth somebody's time, even if it's not a perfect expression of what's in your head. Like, a 75% expression is good enough, especially in a newsletter format where it's like 500 to 700 words. And it's something that comes once a week. And maybe not everyone's amazing, but some of them are, enough of them are that people stay subscribed. So I think a combination of separating editing and first drafting and just having enough confidence and the basis where you have to say, “It doesn't have to be perfect every single time.” STEPH: Yeah, I think that's something that I learned a while back to apply to my coding process where I had to separate those two steps of where I have to let the creator in me just create and write some code and make it work, and then come back to the editing process, and taking a similar approach with writing. As you may be familiar with thoughtbot, we're big advocates when it comes to sharing content and sharing things that we have learned throughout the week and different projects that we're working on. And often when people join thoughtbot, they're very excited to contribute to the blog. But it is daunting for that first post because you think it has to be this really grand novel. And it has to be something that is really going to appeal to everybody, and it's going to help everyone. And then over time, you learn it's like, oh well, actually it can be this very just small thing that I learned that maybe only helps 20 people, but it still helped those 20 people. And learning to publish more frequently versus going for those grand pieces is more favorable and often more helpful for people. NATE: Yeah, totally. That's something that is difficult for people at first. 
But everything in my experience has led me to believe that frequency and regularity is just as, if not more important than the quality of any individual piece of content that I put out. So that's not to say that...I guess it's weird advice to give because people will take it too far the other way and think that means he's saying quality doesn't matter. No, of course, it does, but I think just everyone's internal biases are just way too tuned towards this thing must be perfect. I've also learned we're just really bad judges internally of what is useful and good for people. Stuff that I think is amazing and really interesting sometimes I'll put that out, and nobody cares. [chuckles] And the other stuff I put out that's just like the 45-minute banging out newsletter, people email me back and say, “This is the most helpful thing anyone's ever read.” So that quality bias also assumes that you know what is good and actually we're not really good at that, knowing every time what our audience needs is actually really difficult. STEPH: That's totally fair. And I have definitely run into that too, where I have something that I'm very proud of and excited to share, and I realize it relates to a very small group of people. But then there's something small that I do every day, and then I just happen to tweet about it or talk about it, and suddenly that's the thing that everybody's really excited about. So yeah, you never know. So share it all. NATE: Yeah. And it's important to listen. I pay attention to what people get interested in from what I put out, and I will do more of that in the future. STEPH: You mentioned earlier that you are working on another workshop focused on Sidekiq. What can you tell me about that? NATE: So it's meant to be a guide to scaling Sidekiq from zero to 1000 requests per second. 
And it's meant to be a missing guide to all the things that happen, like the situations that can crop up operationally when you're working on an application that does a lot of work with Sidekiq. Whereas Mike's Sidekiq wiki or the docs are great about: how do you do this? What does this setting mean? And the basics of getting it just running. Sidekiq in Practice is meant to be the last half of that. How do you get it to run 1,000 jobs per second in a day-to-day application? So it's the collected wisdom and collected battle scars from five years of getting called in to fix people's Sidekiq installations and very much a product of what are the actual problems that people experience, and how do you fix and deal with those? So stuff about memory and managing Sidekiq memory usage, how to think about queues. Like, what should your queue structure be? How many should you have? Like, how do you organize jobs into queues, and how do you deal with problems like some client is dropping 10,000, 20,000 jobs into a queue. And now the other jobs I put in that queue have 20,000 jobs in front of them. And now this other job I've got will take three hours to get through that queue. How do you deal with problems like that? All the stuff that people have come to me over the years and that I've had to help them fix. STEPH: That sounds really great. Because yeah, I find that teams who are often in this space with Sidekiq we just let it run until there's a fire. And then suddenly, we start to care as to how it's processing, and we care about our queue structure and how many workers that we have that are pulling from that queue. So that sounds really helpful. When you're building a workshop, do you often go back to any of those customers and pull more ideas from them, or do you find that you just have enough examples from your collective work with clients that that itself creates a course?
NATE: Usually, pretty much every chapter in the workshop I've probably implemented like three-plus times, so I don't really have to go back to any individual customer. I have had some interesting stuff with my current client, Gusto. And Gusto is going through some background job reorganization right now and actually started to implement a lot of the things that I'm advocating in the workshop actually without talking to me. It was a good validation of hey, we all actually think the same here. And a lot of the solutions that they were implementing were things that I was ready to put down into those workshops. So I'd like to see those solutions implemented and succeed. So I think a lot of the stuff in here has been pretty battle-tested. STEPH: For the Rails Performance Workshop, you started off doing those live and in-person with teams, and then you have since switched to now it is a CLI course, correct? NATE: That's correct. Yep. STEPH: I love that very much. When you've talked about it, it does feel very appropriate in terms of developers and how we like to consume content and learn. So that is really novel and also, it seems like a really nice win for you. So then other people can take this course, but you are no longer the individual that has to deliver it to their team, that they can independently take the course and go through it on their own. Are you thinking about doing the same thing for the Sidekiq course, or what are your plans for that one? NATE: Yeah, it's the exact same structure. So it's going to be delivered via the command line. Although I would say Sidekiq in practice has more text components. So it's going to be a combination of a very short manual or book, and some video, and some hands-on exercises. So, an equal blend between all three of those components. And it's a lot of stuff that I've learned over having to teach; I guess intermediate to advanced programming concepts for the last five years now that people learn at different paces. 
And one of the great things about this kind of format is you can pick it up, drop it off, and move at your own speed. Whereas a lot of times when I would do this in person, I think I would lose people halfway through because they would get stuck on something that I couldn't go back to because we only had four hours of the day. And if you deliver it in a class format, you're one person, and I've got 24 other people in this room. So it's infinitely pausable and replayable, and you can go back, or you can just skip ahead. If you've got a particular problem and you're like, hey, I just want to figure out how to fix such and such; you can do that. You can just come in and do a particular thing and then leave, and that's fine. So it's a good format that way. And I've definitely learned a lot from switching to pre-recorded and pre-prepared stuff rather than trying to do this all live in person. STEPH: That is one of the lessons that I've learned as well from the couple of workshops that I've led is that doing them in person, there's a lot of energy. And I really enjoy that part where I get to see people respond to the content. And then I get a lot of great feedback from people about what type of questions they have, where they are getting stuck. And that part is so important to me that I always love doing them live first. But then you get to the point, as you'd mentioned, where if you have a room full of 20 people and you have two people that are stuck, how do you help them but then still keep the class going forward? And then, if you are trying to tailor this content for a wide audience…so maybe beginners could take the Rails Performance Workshop, or they could also take the Sidekiq course. But you also want the more senior engineers to get something out of it as well. It's a very challenging task to make that content scale for everyone. NATE: Yeah. 
What you said there about getting feedback and learning was definitely something that I got out of doing the Rails Performance Workshop in person like three dozen times: the ability to look over people's shoulders and see where they got stuck. Because people won't email me and say, “Hey, this thing is really confusing.” Or “It doesn't work the way you said it does for me.” But when I'm in the same room with them, I can look over their shoulder and be like, “Hey, you're stuck here.” People will not ask questions. And you can get past that in an in-person environment. Or there are even certain questions people will ask in person but won't take the time to sit down and email me about. So I definitely don't regret doing it in person for so long because I think I learned a lot about how to teach the material, what was important, and what problems people would encounter and stuff like that. So that was useful. And definitely, the Rails Performance Workshop would not be in the place that it is today if I hadn't done that. STEPH: Yeah, helping people feel comfortable asking questions is incredibly hard, and it's something where in the past I've gone so far as to create an anonymous way for people to submit questions. So during class, even if you didn't want to ask a question in front of everybody, you could submit a question to this forum, and I would get notified. I could bring it up, and we could answer it together. And even taking that strategy, I found that people wouldn't ask questions. And I guess it circles back to that inner critic that we have that's also preventing us from sharing knowledge that we have with the world, because we're always judging what we're going to share and what we're going to ask in front of our peers who we respect. So I can certainly relate to being able to look over someone's shoulder and say, “Hey, I think you're stuck. We should talk. 
Let me walk you through this or help you out.” NATE: There are also weird dynamics around in-person questions, not necessarily in a small group setting. But one thing I really picked up on and learned from RailsConf 2021, which was done online, was that in-person question asking requires a certain amount of confidence and bravado. People are worried about looking stupid, and they won't ask things in a public or semi-public setting that they think might make them look dumb. And so then the people that do end up asking questions are sometimes overconfident. They don't even ask a question. They just want to show off how smart they are about a particular issue. This is more of an issue at conferences. But the quality of questions that I got in the Q&A after RailsConf this year (they did it as Discord chats) was way better. The quality of questions and discussion after my RailsConf talk was miles better than I've ever had at a conference before. Like, not even close. So I think experimenting with different formats around interaction is really good and interesting. Because it's clear there's no perfect format for everybody, and experimenting with these different settings and different methods of delivery has been very useful to me. STEPH: Yeah, that makes a ton of sense. And I'm really glad for those opportunities where we're discovering that certain forums will help us get more feedback and questions from people, because then we can incorporate that into future conferences where people can speak up and ask questions, and it's not necessarily just the one who's very confident and enjoys hearing their own voice. For the Rails Performance Workshop, what are some of the general things that you dive into? I'm curious, what is it like to attend that workshop? Although I guess one can't attend it anymore. But what is it like to take that workshop? NATE: Well, you still can attend it in some sense because I do corporate bookings for it. 
So if you want to buy 20 seats, then I can come in and basically do a Q&A every week while everybody takes the workshop. Anyway, I still do that. I have one coming up in July, actually. But my overall approach to performance is to always start with monitoring. So the course starts with goals and monitoring and understanding where you want to go and where you are when it comes to performance. So the first module of the Rails Performance Workshop is actually a group exercise about what our performance requirements are and how we can set them, both high-level and low-level. So what is our goal for page load time? How are we going to measure that? How are we going to use that to back into lower-level metrics? What is our goal for back-end response times? What is our goal for JavaScript bundle sizes? That all flows from a higher-level metric of how fast you want the page to load or how fast you want a route to change in a React app or something. And then where should you even start with where those numbers should be? And then how are you going to measure it? What are the browser events that matter here? What tools are available to help you get that data? Because without measurement, you don't really have a performance practice. You just have people guessing at what stuff is faster and what is not. And I teach performance as a scientific process, as science and engineering. So, in the scientific method, we have hypotheses. We test those hypotheses, and then we learn based on those tests. So that requires us to, A, have a hypothesis, like, I think that doing X makes this faster. And I talk about how you generate hypotheses using profiling, using tools that will show you where all the time goes when you do a particular operation in your software. And then you measure what happens when you make your change. And that's benchmarking. 
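[Editor's note: to make the hypothesis-then-benchmark loop concrete, here is a minimal sketch using Ruby's standard Benchmark module. The two string-building methods are hypothetical examples made up for illustration, not code from the workshop; the hypothesis under test is that mutating one buffer with << beats rebuilding strings with +=.]

```ruby
require "benchmark"

# Hypothetical workload: join a few thousand short strings.
ITEMS = ("a".."z").to_a * 100

def with_plus_equals
  out = ""
  ITEMS.each { |s| out += s } # += allocates a brand-new String each time
  out
end

def with_shovel
  out = +"" # unfrozen buffer
  ITEMS.each { |s| out << s } # << mutates the same String in place
  out
end

# Benchmark both implementations of the same result.
slow = Benchmark.realtime { 100.times { with_plus_equals } }
fast = Benchmark.realtime { 100.times { with_shovel } }

puts format("+=: %.4fs, <<: %.4fs", slow, fast)
```

The point is not the specific numbers, which vary by machine and Ruby version, but that the benchmark, rather than intuition, settles whether the hypothesis held.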
So if you think that getting rid of method X or changing method X will speed up the app, benchmarking tells you whether you actually sped it up or not. And there are all sorts of finer points to making sure that the hypothesis and the experiment are tested in a valid way. I spend a lot of time in the workshop yapping about the differences between development/local environments and production environments and which ones matter. Because the differences that matter are often not the ones we think about. For example, in Rails apps, the asset packaging and asset pipeline work very differently in production than they do in development, and that's one of the primary reasons development is slower than production. So I make sure that we understand how to change those settings to more production-like settings. I talk a lot about data. The other primary difference between development and production is that production has a million users, and development has 10. So when you call things like User.all, that behavior is very different in production than it is locally. So having decent production-like data is another big one that I like to harp on in the workshops. And then the workshop is a process of just going lesson by lesson. It's a lot of video followed up by hands-on exercises. Half of them are pre-baked problems where I'm like, hey, take a look at this Turbolinks app that I've given you and look at it in DevTools, and here's what you should see. And the other half is like, go work on your application, and here are some pull requests I think you should probably go try on your app. So it's a combination of hands-on exercises and videos of the actual experience going through it. STEPH: I love how you start with a smaller application that everyone can look at and then start to learn how performant is this particular application that I'm looking at? 
Versus trying to assess, let's say, their own application, where there may be a number of other variables that they have to consider. That sounds really nice. You'd mentioned one of the first exercises is setting some of those goals and benchmarks that you want to meet in terms of how fast this page should load or how quick a response from the API should be. Do you have a certain set of numbers for those benchmarks, or is it something that is different for each product? NATE: Well, to some extent, Google has suddenly given us numbers to work with. So as of this month, I think, June 2021, Google has started to use what they're calling Core Web Vitals in their ranking of search results. They've always tried to say it's not a huge ranking factor, et cetera, et cetera, but it does exist. It is being used. And that data is based on Chrome user telemetry. So every time you go to a website in Chrome, it measures three metrics and sends those back to Google. And those three metrics are Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS). First Input Delay and Cumulative Layout Shift are more important for your single-page-app kind of stuff. It's hard to screw those up with a Golden Path Rails app that just does Turbolinks or Hotwire or whatever. But Largest Contentful Paint is an easy one to screw up. So Google's line in the sand is 2.5 seconds for Largest Contentful Paint. That's saying that from clicking on your website in a Google search result, it should take at most 2.5 seconds for the page to paint the largest element of that new page. That's often an image or a video or a large H1 tag or something like that. And to get to 2.5 seconds in Largest Contentful Paint, there are things that have to happen along the way. We have to download and execute all the JavaScript. We have to download CSS. We have to send and receive back-end responses. 
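[Editor's note: Google assesses each Core Web Vital against the 75th percentile of real-user field data, so a useful internal check is whether your p75 LCP clears the 2.5-second line. A minimal sketch, with made-up sample values standing in for data from a real-user-monitoring tool:]

```ruby
# Hypothetical LCP samples (in seconds) collected from real user monitoring.
LCP_SAMPLES = [1.2, 1.5, 1.8, 2.1, 2.2, 2.4, 2.6, 3.0]

# Linear-interpolation percentile, a common convention for field data.
def percentile(values, pct)
  sorted = values.sort
  rank = (pct / 100.0) * (sorted.length - 1)
  lower = sorted[rank.floor]
  upper = sorted[rank.ceil]
  lower + (upper - lower) * (rank - rank.floor)
end

p75 = percentile(LCP_SAMPLES, 75)
verdict = p75 <= 2.5 ? "within Google's 2.5s LCP target" : "over the 2.5s LCP target"
puts format("p75 LCP: %.2fs (%s)", p75, verdict)
```

In practice you would feed this from your RUM provider rather than hand-collected samples, and alert when the 75th percentile crosses the threshold.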
In the case of a simple Hotwire app, it's one back-end response. But in the case of a single-page app, you've got to download the document and then maybe make several XHR fetches or whatever. So there's a chain of events that has to happen there. And you have to walk all of that back now from 2.5 seconds in Largest Contentful Paint. So that's the line that I'm seeing getting drawn in the sand right now with Google's Core Web Vitals. And pretty much any meaningful web application performance metric can be walked back from that. STEPH: Okay. That's super helpful. I wasn't aware of the Core Web Vitals and that particular stat that Google is using to rank sites. I was going to ask, and this blends in nicely: when do you start caring about performance? So if you have a new application that you are just starting to get to market, based on the fact that Google is going to start ranking you right away, you do have to care some right out of the gate. But I am curious, when do you start caring more about performance, and are there certain tools and benchmarks that you want to have in place from day one, versus other things where you'll say, “Well, we can wait until we have X number of users or other conditions before we add more profiling”? NATE: I'd say, as an approach, I teach people not to have a performance strategy of monitoring. So if your strategy is to have dashboards and look at them regularly, you're going to lose. Eventually, you're not going to look at that dashboard, or more often, you just won't understand what you're looking at. You just install New Relic or Datadog or whatever, and you don't know how to turn a dashboard into actual action. It also seems to just wear teams out, and when all you have is a dashboard, there's no clear mechanism for turning it into “oh, well, this now has to be something that somebody on our team goes and works on.” Contrast that with bugs: teams usually have very defined processes around bugs. 
So usually, what happens is you'll get an Exception Notification through Sentry or Bugsnag or whatever your preferred Exception Notification service is. That gets read by a developer, and then you turn it into a Jira ticket or a card on a Kanban board or whatever. And then that is where work is done and prioritized. Contrast that with performance; there's often no clear mechanism for turning metrics into stuff that people actually work on. So understanding at your organization how that's going to work, and setting up a process that automatically turns performance issues into actual work that people get done, is important. The way that I generally teach people to do this is to focus, instead of on dashboards and monitoring, on alerts: on automated thresholds that get tripped and then send somebody an email or put something on the Kanban board or whatever. It just has to be something that automatically gets fired. Different tools have different ways of doing this. Datadog has pretty much built their entire product around monitoring and what they call monitors. That's a perfectly fine way to do it, whatever your chosen performance monitoring tool, which I would say is a required thing. I don't think there's really any good excuse in 2021 for not having a performance monitoring tool. There are a million different ways to slice it. You can do it yourself with OpenTelemetry and then like statsD, I don't know, or pay someone else like everyone else does for Datadog or New Relic or AppSignal or whatever. But you've got to have one installed. And then I would say you have to have some sort of automated alerting. Now, that alerting means that you've also decided on thresholds. And that's the hard work that doesn't get done when your strategy is just monitoring. So it's very easy to just install a dashboard and say, “Hey, I have this average page load time dashboard. 
That means I'm paying attention to performance.” But if you don't have a clear answer to what number is good and what number is bad, then that dashboard cannot be turned into real action. So that's why I push alerting so hard: it allows people to ignore performance until an alert fires, and it forces you to make the decision upfront as to what number matters. So that is what I would say: install some kind of performance monitoring. I don't really care what kind. Nowadays, I also think there's probably no excuse not to have Real User Monitoring. There are enough GDPR-compliant Real User Monitoring options now that I think everyone should be using it. In industry terms, Real User Monitoring is just performance monitoring in the browser. It uses the browser's APIs in your users' browsers and sends those metrics back to you or your third-party provider, so you're actually collecting both back-end and front-end performance metrics. And then make decisions around what is bad and what is good. Probably everybody should just start with a page load time monitor or a Largest Contentful Paint monitor. And if you've got a single-page app, probably hook up some stuff around route changes or whatever your app does, because you don't actually have a page load every single time you navigate. You have to instrument whatever those interactions are. So having those up, and then just drawing some lines that say, “Hey, we want our React route changes to always be one second or less.” So I will set an alert that if the 95th percentile is one second or more, I'm going to get alerted. There are a lot of different ways to do that, and everybody will have different needs there. But having a handful of automated monitors is probably the place to start. STEPH: I like how you also focus on, once you have decided those thresholds and have that monitoring in place, how do you make it actionable? 
Because I have certainly been part of teams where we get those alerts, but we don't necessarily, as you just mentioned, prioritize that work to get done until we have a user complaint about it. Or we start actually having pages that are timing out and not loading, and then that work gets bumped up in the priority queue. So I really like the idea that if we agree upon those thresholds and then we get alerted, we treat that alert as if it is a user letting us know that a page is too slow and that they are unable to use our application, so then we can prioritize that work. NATE: And it's not all that dissimilar to bugs, really. And I think most teams have processes around correctness issues. And so, all that my strategy is really advocating for is to make performance fail loudly in the same way that most exceptions do. [chuckles] Once you get to that point, I think a lot of teams have processes around prioritization for bugs versus features and all that. And just getting performance into that conversation at least tends to make that solve itself. STEPH: I'm curious, as you're joining teams and helping them with their performance issues, are there particular buckets or categories of performance issues that are the most common? In terms of, let's say, are 50% of issues SQL-related N+1 issues? What tends to be the breakdown that you see? NATE: So, when it comes to why something is slow in a Ruby application, I teach a method that I call DRM. And that doesn't have anything to do with actual DRM. It's just memorable because it reminds me of things I don't like. DRM stands for Database, Ruby, and Memory, in that order. So the most common issue is the database, the second most common issue is issues with your Ruby code, and the least common issue is memory. Specifically, I'm talking about allocation of objects, creating lots of objects. So probably 80% of your issues are in some way database-related. In Rails, 50% of those are probably N+1s. 
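[Editor's note: to illustrate the N+1 pattern without needing a Rails app, here is a toy sketch in which every call to a fake database stands in for one SQL round trip. In real Rails code the usual fix is eager loading with `includes`; the posts/comments tables here are hypothetical.]

```ruby
# A stand-in for the database: each FakeDB.query call represents one
# SQL round trip, so we can count queries without a real database.
class FakeDB
  class << self
    attr_accessor :queries

    def query(_sql)
      @queries += 1
    end
  end
end

post_ids = (1..10).to_a

# N+1: one query for the posts, then one more query per post.
FakeDB.queries = 0
FakeDB.query("SELECT * FROM posts")
post_ids.each { |id| FakeDB.query("SELECT * FROM comments WHERE post_id = #{id}") }
n_plus_one_count = FakeDB.queries # 1 + 10 = 11 queries

# Eager loading (roughly what Post.includes(:comments) does in Rails):
# two queries total, no matter how many posts there are.
FakeDB.queries = 0
FakeDB.query("SELECT * FROM posts")
FakeDB.query("SELECT * FROM comments WHERE post_id IN (#{post_ids.join(', ')})")
eager_count = FakeDB.queries # 2 queries

puts "N+1: #{n_plus_one_count} queries, eager loading: #{eager_count}"
```

The query count of the N+1 version grows with the collection size, while the eager-loaded version stays constant, which is why it dominates database-related slowness.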
And then 30% of database issues are probably what I would call unnecessary SQL. So it's not necessarily an N+1, but it's a SQL query for information that you already had, or that you could get in a more efficient way. So a common example of unnecessary SQL is that people will filter an ActiveRecord collection in ten different ways when they could have just loaded the whole collection and filtered it with Ruby in those ten different ways afterwards. And that works really well if the collection that you're loading is like 10 or 20 records. Turning that into one database query plus a bunch of calls to Enumerable methods is often way faster than doing it as ten separate database queries. Also, that tends to be a more robust approach. This doesn't happen in most companies, but the database is a shared resource. It's a resource that everybody is affected by. So a performance degradation to the database is the worst possible scenario because everything is affected. But if you screw up what's happening in an individual Rails process, then only that Rails process is affected. The blast radius is tiny. It's just that one request. So doing less stuff in the database, while it can seem like, oh, that doesn't feel right, I'm supposed to do a lot of stuff in the database, can actually reduce the blast radius of performance issues, because you're not doing it on this database that everyone has to have access to. There are a lot of areas of gray here, and I talk a lot in my other material about why; there's a lot of nuance here. So the database is the main stuff. Issues in how you write your Ruby code are probably the other one. Usually, that's just what I would call code that goes bump in the night. It's code that you don't know is running but actually is. Profilers are what help us figure that out. So oftentimes, I'll have someone open up a profiler on their controller action for the first time. 
And they're like, wait a minute, I had no idea that such and such was running during this controller action, and actually, we don't need to do that at all. So why is it here? So that's the second most common issue. And then the third issue, which really doesn't come up all that often, is object allocation: the number of objects that get created. Primarily, this is a problem in index actions or other actions that deal with big collections. In Ruby, we often get overly focused on garbage collection, but garbage collection doesn't take any time if you just don't create objects, and object creation itself takes time. So looking at code through the lens of what objects does this code create, and trying to get rid of those object allocations, can often be a pretty productive way to make stuff faster. STEPH: You said a lot of amazing things there. So I'm debating on which one to follow up on. I think the one that stuck out to me the most, where I have felt pain around this, is you mentioned identifying code that goes bump in the night, code that is running but doesn't need to be run. And that is something that I've run into with applications where we have a code path that seems important, but yet I can't prove that it's being executed, and exactly why it's there and what flow it's supporting. And I'm curious, do you have any tips or tricks for how you've helped teams identify that a code path isn't used and is something that we can remove, which itself will help speed up the performance of that particular endpoint? NATE: Well, there's no performance cost to, like, 100 models in an application that never actually get used. There's really no performance downside to code in an app that doesn't ever get run. But instead, what happens is code gets added into callbacks; that's probably the biggest offender. It's like, always do this thing after you do X. But then, two years later, you don't always need to do that thing after you do X. 
So the callbacks always run, but sometimes requirements change, and they don't always need to be run. So usually, it's enough to just pop the profiler on something. And I have people look at it, and they're like, “I don't know why any of this is happening.” It's usually a pretty big eureka moment once we look at a flame graph for the first time and people understand how to read one and what they're looking at. But sometimes there's a bit of a process, especially in a bigger app, where it's like, “Such and such is running, and this was an entire other team that worked on it. I have no idea what this even does.” So on bigger apps, there's going to be more learning that has to get done. You have to learn about parts of the application that maybe you've never learned about before. But profiling helps us not only to see what code is running but also what its relative importance is. Like, okay, maybe this one callback runs, and you don't know what it does, and it's probably unnecessary. But if it only takes 1% of the total time to run this action, that's probably less important than something that takes 20% of the total time. And so profilers help us not just to see all the code that's being run but also to know where the time goes and what time corresponds to what parts of the code. STEPH: Yeah, that's often the code that makes me the most nervous: code that I suspect is being run, or maybe being run, but I don't understand why it's there. Then I have to figure out if it can be removed, perhaps even logging when a call is made to that code to determine whether it's truly in use, or at least supported by a code path that a user is hitting. You have a blog post that I read recently that I really appreciated that talks about essentially gaming benchmarking, where you talk about the importance of having context around benchmarks. 
So if someone says, “I've improved something where it is now 10% faster,” it's like, well, what is that 10% relative to? And if it's a tool that other people are using, what does that mean for them? Or did you improve something that was already very fast, and you made it 10% faster? Was that a really valuable use of your time? NATE: Yeah. You know, something that I read recently that made me think of that again was this Hacker News post that went viral. It was like, how I optimized an AWS EC2 instance to take 1.5 million requests per second on my JSON API. And out of the box, it was like 500,000 requests per second, and then he got it to 1.5 million. And the whole article was presented with relative numbers. So it was like, “I made this change, and things got 33% faster. And if you do the whole thing right, 500,000 to 1.5 million requests per second, my app is three times faster now,” or whatever. And that's true, but it would probably be more accurate to say, “I've taken three-millionths of a second out of every request in my app.” Those are two ways of saying the same thing, because latency and throughput are just related that way. But it's probably more accurate and more useful to say the absolute number; it just doesn't make for great blog posts, so it doesn't tend to get said. The kinds of improvements that were discussed in this article were really, really low-level stuff. It was like, if you turn off...I think it was turn off iptables or something like that. And it's like, that shaves a microsecond off every syscall we make. And that is useful if your performance goal is to serve 1.5 million Hello World responses per second off of an EC2 instance, which is what this person admittedly was doing. But there's a tendency to walk that back to, if I do all the things in this article, my application will be three times faster. And that's just not what the evidence says. It's not what you were shown. 
So there's just a tendency to use relative numbers when absolute numbers would be more useful for giving you the context of, oh, well, this will improve my app, or it won't. We get this a lot in Puma. We get benchmarks that are like, hey, this thing is going to help Puma do 50,000 requests per second instead of 10,000. And another way of saying that is you took a couple of nanoseconds off of the overhead of every single request to Puma. And most Puma applications have a hundred-millisecond response time. So it's like, yeah, I guess it's cool that you took a nanosecond off, and I'm sure it's going to help us have cool benchmarks, but none of our users are going to care. No one that uses Puma is going to care that their requests are one nanosecond faster now. So what did we really gain here? STEPH: Yeah, it makes sense that people would want to share those more...I want to call them sparkly stats, something that catches your attention, but they're not necessarily going to translate for us in the way that we hoped; they're not going to speed up our app 30% or have those same rewards or benefits. Speaking of Puma, how is it being a co-maintainer of Puma? And how do you balance that role with all of your other work? NATE: Actually, it doesn't take all that much of my time. I try to spend about 15 minutes a day on it. And that's really possible because of the philosophy I have around open-source maintenance. I think that open-source projects are fundamentally about collaboration and about sharing our hard-fought extractions and fixes and knowledge together. And it's not about a single super contributor or super maintainer who, out of the goodness of their heart, is releasing all of their incredible work and time into the public domain or into a free software license. Puma is a pretty popular piece of Ruby software, so a lot of people use it. 
And I have things on my back burner of if I ever got 20 hours to work on Puma, here's stuff I would do. But there are a lot of other people that have more time than me to work on Puma. And they're just as smart, and they have other tools they've got in their locker that I don't have. And I realized that it was more important that I actually find ways to recruit and then unblock those people than it was for me to devote as much time as I could to Puma. And so my work on Puma now is really just more like management than anything else. It's more trying to recruit new contributors and trying to give them what they need to help Puma. And contributing to open source is a really fraught experience for a lot of people, especially their first time. And I think we should also be really conscious of that. Like, 95% of software developers have really never contributed to open source in a meaningful way. And that's a huge talent pool of people that could be helping us that aren't. So I'm less concerned about the problems of the 5% that are currently contributing than I am about why there are 95% of us that don't do anything. So that's what gets me excited to work on Puma now, is trying to change that ratio. STEPH: I really like that mindset of where you are there to provide guidance but then essentially help unblock others as they're making contributions to the project but then still be there to have the history and full context and also provide a path forward of a good direction for Puma to head. In regards to encouraging more people to contribute to open source projects, I've often heard people say how challenging that is, where they have an open-source project that they would really love people to contribute to but finding people is really hard or just letting people know that they're interested in contributions. Have you had any strategies that have been successful for you in encouraging people to contribute? NATE: Yeah. 
So the first and easiest thing: we have a contributing.md file. That's something I think more projects should adopt, an actual file in your project that says everything about how to contribute. Like, what kinds of contributions do you want? Different projects want different things. Like, Rails doesn't want refactoring PRs. Don't send a refactoring PR to Rails, because they'll reject it. Puma, I'm happy to accept those. So let people know, “Hey, here's how we work here. Here's the community we're creating, and here's how it works. Here's how to get involved.” I think of it as hanging out the shingle and saying, “Yes, I want your contributions. Here's how to do it.” That alone puts you a step above other projects. The second thing I would say is you need to have contributor-only communication channels. So we have Matrix chat. Matrix is like this successor to IRC. So we have a chat channel, basically, but it's contributors only. I don't enforce that, but I just don't want support requests in there. I don't want people coming in there and being like, “My Puma config doesn't work.” Instead, it's just for people that want to contribute to Puma and want to help out. If you have a question and come in there, anyone can answer it. And then finally, another thing that I've had success with is doing one-on-one stuff. So I have a Calendly invite, which I think is in contributing.md now, where you can book 30 minutes with me anytime about contributing to Puma. And I will get on a Zoom call with you and talk about what your concerns are and where I think you can help. I give my time away that way. The way I see it is, if I do that 20 times and I create one super contributor to Puma, that is worth more than me spending 10 hours on Puma, because that person can contribute 100, 200, 1,000 hours over their lifetime of contributing to Puma. 
So that's actually a much higher-leverage contribution, really, from my perspective. It's actually helping other people contribute more. STEPH: Yeah, that's huge, to offer people to say, “Hey, you can book time with me, and I will walk you through and let you know where you can start making an impactful contribution right away,” or “Here are some areas that I think you'd be interested in, to begin with.” That seems like such a nice onboarding for someone who says, “I'm interested, but I'm nervous,” or “I'm just not sure about where to get started.” Also, I love your complaint department voice for the person whose Puma config doesn't work. That was delightful. [chuckles] NATE: I think it's a little bit part of my open-source philosophy that, especially at a certain scale like Puma is at, we really kind of over-prioritize users. And I'm not really here to do support; I'm here to make the project better. And users don't actually contribute to open-source projects. Users use the thing, and that's great. That's the whole reason we're open-sourcing: so more people use it. But it's important not to prioritize that over people who want to make the project better. And I think a lot of times, people get caught up in this almost clout chasing, getting the GitHub stars they think they need and the users they think they need. But you don't get paid for having users, and the product doesn't get any better either. So I don't prioritize users. I prioritize the quality of the project and getting contributors. And that will create a better project, which will then create more users. So I think it's easy to get sidetracked by people that ask for your time when they're not giving anything back to the project in return. And especially at Puma's scale, we have enough people that want my time, or the time of other maintainers, so that they can contribute to the project. And putting user support requests ahead of that is not good for the project. 
It's not the biggest, long-term value increase we could be making, so I don't prioritize them anymore. STEPH: Yep. That sounds like more the pursuit of sparkly stats and looking for all those GitHub stars or all of those likes. Well, Nate, if you're game, I have two listener questions that I'd like to run by you because I shared with some folks that you are going to be on The Bike Shed today. And they're very excited and have two questions that they'd like me to run by you. How does that sound? NATE: Yeah, all right. STEPH: So the first question is, are there any paradigms or trends in Rails that inherently hurt performance? NATE: Yeah. I get this question a lot, and I will preface it with saying that I'm the performance guy, and I'm not the software design guy. And I get a lot of questions about does such and such software design...how does that impact performance? And usually, there's a way to do anything in a performant way. And I'm just here to help people find the performant way and not to prescribe “You must always do X, Y, or Z,” or “ActiveRecord is bad. Never use it.” That's not my job here. And in my experience, there's a fast way to do almost anything. Now, one thing that I think is dying, I guess, or one approach or one common...I don't know what to call it. One common mistake that is clearly wrong is to not do any form of server-side rendering in a web application. So I am anti-client-side app. But there are ways to do that and to do it quickly. But rendering a basically blank document, which is what most of these applications will do when they're using Rails as a back-end…you'll serve this basically blank document or a document with maybe some chrome in it. And then, the client-side app has to execute, compile JavaScript, make XHR requests, and then render the page. That is just by definition slower than serving somebody a server-side rendered page. Now, I am 100% agnostic on how you want to generate that server-side rendering. 
There are some people that are working on better ways to do that with Rails and client-side apps. Or you could just go the Hotwire/Turbolinks way, which is more progressive enhancement, where the back-end is always just serving the server-side rendered response, and then you do some JavaScript on top of that. So I think five years from now, nobody will be doing this approach of serving blank documents and then booting client-side apps into that. Or at least it will be seen as outdated enough that you should never design a project that way anymore. It's one of those few things where it's like, yeah, just by definition, you're adding more steps into a rendering flow. That means, by necessity, it has to be slower. So I think everybody should be thinking about server-side rendering in their project. Again, I'm totally agnostic on how you want to implement that. With React, whatever front-end flavor of the month you want to go with, there's plenty of ways to do that, but I just think you have to be prioritizing that now. STEPH: All right. Well, I like that five-year projection of where we're headed. I have found that it's often the admin side where people will still bring in a lot of JavaScript rendering, just to touch on a bit of what you're saying, in terms of favoring server-rendered HTML over over-optimizing a space that probably isn't profitable. We do want our admins to have a great experience with our product, but if they're not necessarily our users, then it also doesn't need to be anything that is over the top or fancy or that uses a lot of JavaScript. And instead, we can start simple. And there have been a number of times that I've been on projects where we walked the admin back to be more server-rendered because we got to a point where someone was very excited to make the admin very splashy and quick but then couldn't keep up with the requests because they were having to prioritize the user experience first. 
So it was almost like optimizing the admin, but then it got left out in the cold. So then it's just sort of this poor experience. NATE: Yes. Shopify famously walked back their admin from I think it was Backbone to Turbolinks. And I think that that has now moved back to React is my understanding. But Shopify is a huge company, so they have plenty of time and resources to be able to do that. But I just remember that happening at the time where I was like, oh wow, they just rolled the whole thing back to Turbolinks again. And now, with the consolidation that's gone on in the React world, it's a little bit easier to pipe a server-side rendering into a React app. Whereas with Backbone, it was like no one knew what you were doing. So there was less knowledge about how to server-side render this stuff. Now it doesn't seem to be so much of a problem. But yeah, I mean, Rails is really good at CRUD apps, and admin is like 99% CRUD. And adhering as closely as possible to the Rails Golden Path there in an admin seems to be the most productive way to work on that kind of feature. STEPH: All right. Ready for your second question? NATE: Yes. STEPH: Okay. This one's a bit more in-depth. They also mentioned a particular project name. So I am going to swap it out with a different name. So on project cinnamon roll, we found a really gnarly time-consuming API endpoint that's getting hammered. And on a first pass, we addressed a couple of N+1 issues and tuned the performance, and felt pretty confident that they had addressed the issue. But it was still fairly slow. So then they took some additional incremental steps. So they swapped out to use OJ for serialization that shaved off an additional 10% but was still slow. They also went the route of going straight to Rails cache with a one-minute expiration. So that way, they could avoid mucking with cache busting because they confirmed with the client that data could be slightly stale. And this was great. It worked out well. 
So it dropped their average response time down to less than 70 milliseconds. With all that said, that journey took a few hours over a few days and multiple production deploys. And had they gone straight to the cache, then they would have had a 15-minute fix with a single deploy. So this person's wondering, are there any other examples like that where, rather than taking these incremental, seemingly obvious performance wins, there are situations where you want to be much more direct with your path? NATE: I guess I'd say that profiling can help you to understand and form better hypotheses about what will make things faster and what won't. Because a profiler can't really lie to you about where time goes: either you spent 20% of your time in this method, or you didn't. So I don't spend any time in any of my material talking about what JSON serializer you use. Because really that's actually never...that's really never anybody's bottleneck. It's never a huge proportion of people's total percentage of time. And I know that because I've looked at enough profiles that the issues are usually in other places. So I would say that if the hypotheses that you're generating are not working, it's because you're not generating good enough hypotheses. And profiling is the place to do that. So having profilers running in production, which you can access on production servers as a user, is probably the biggest level up that most teams could make to generating hypotheses because that'll have real production data, real production servers, a real production environment. And it's pretty common now that pretty much every team that I work with either has that already, or we work on implementing it. It's something that I've seen in production at GitHub and Shopify. You can do it yourself with rack-mini-profiler. 
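For reference, gating rack-mini-profiler behind an authorization check is a small amount of configuration. The sketch below follows the gem's documented pattern (`authorization_mode` and `authorize_request`), but treat the exact option names as something to verify against the rack-mini-profiler README for your version; `current_user` and `admin?` are hypothetical helpers standing in for your own app's auth.

```ruby
# config/initializers/mini_profiler.rb -- a sketch, not a drop-in config.
# Only show profiling output to requests that have been explicitly authorized.
Rack::MiniProfiler.config.authorization_mode = :allow_authorized

# Then, in a controller, authorize only admins (current_user and admin? are
# hypothetical helpers from your own application):
class ApplicationController < ActionController::Base
  before_action :authorize_profiling

  private

  def authorize_profiling
    # Marks this request as allowed to see flame graphs and SQL queries.
    Rack::MiniProfiler.authorize_request if current_user&.admin?
  end
end
```

With something like this in place, ordinary visitors never see profiler output, while admins browsing production get per-request timings against real data.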
It's all about setting up the authorization, just making sure that only authorized users get to see every single SQL query generated in the flame graph and all that. But other than that, there's no reason you shouldn't do it. So I would say that if you're not generating the right hypotheses, or if the last hypothesis out of 10 is the one that works, you need better hypotheses, and the best way to do that is better profiling. STEPH: Okay, better profiling. And yeah, it sounds like there's also a bit of experience in there in terms of things that you're used to seeing, that you've noticed could be outliers, in that they're not necessarily the thing that you want to improve. Like you mentioned, spending time on how you're serializing your JSON is not somewhere that you would look. But then there are other areas where you've gained experience that you know would likely be more beneficial to focus on to form that hypothesis. NATE: Yeah, that's a long way of saying experience pays off. I've had six years of doing this every single day. So I'm going to be pretty good at...that's what I get paid for. [laughs] So if I wasn't very good at that, I probably wouldn't be making any money at it. STEPH: [laughs] All right. Well, thanks, Nate, so much for coming on the show today and talking so much about performance. On that note, I think it's a good place for us to wrap up. If people are interested in following along with what you're working on and they want to keep up with your latest and greatest workshops that are coming out, where can they find you on the internet? NATE: speedshop.co is my site. @nateberkopec on Twitter. And speedshop.co has a link to my newsletter, which is where I'm actively thinking every week and publishing stuff, too. So if you want the drip of news and thoughts, that's probably the best place to go. STEPH: Perfect. All right. Well, thank you so much. NATE: No problem. 
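The short-expiration caching approach from the listener question above can be sketched in plain Ruby. The `TtlCache` class below is a hypothetical, illustrative stand-in for `Rails.cache.fetch(key, expires_in: 1.minute)`: data may be up to one expiry window stale, but the expensive work runs at most once per window.

```ruby
# A tiny TTL cache illustrating "cache with a short expiration, accept
# slightly stale data" -- a sketch, not a replacement for Rails.cache.
class TtlCache
  def initialize
    @store = {}
  end

  # Returns the cached value for key if it is still fresh; otherwise
  # recomputes it with the block and records when it was stored.
  def fetch(key, expires_in:)
    entry = @store[key]
    if entry && (Time.now - entry[:at]) < expires_in
      entry[:value]
    else
      value = yield
      @store[key] = { value: value, at: Time.now }
      value
    end
  end
end

cache = TtlCache.new
calls = 0
slow_endpoint = -> { calls += 1; "payload" } # stands in for the gnarly endpoint

cache.fetch(:report, expires_in: 60) { slow_endpoint.call }
cache.fetch(:report, expires_in: 60) { slow_endpoint.call } # served from cache
# calls == 1: the expensive work ran once within the expiry window
```

Because the client confirmed stale data was acceptable, no cache-busting logic is needed; entries simply age out.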
STEPH: The show notes for this episode can be found at bikeshed.fm. CHRIS: This show is produced and edited by Mandy Moore. STEPH: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or a review in iTunes as it helps other people find the show. CHRIS: If you have any feedback for this or any of our other episodes, you can reach us @bikeshed on Twitter. And I'm @christoomey. STEPH: And I'm @SViccari. CHRIS: Or you can email us at hosts@bikeshed.fm. STEPH: Thanks so much for listening to The Bike Shed, and we'll see you next week. Together: Bye. Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Today’s episode is with Kelly Watkins, CEO of Abstract, a platform for structure and transparency in the design process. Having joined Abstract last year, Kelly is one of very few folks from a marketing background to take on the CEO seat. She brings a wealth of experience leading incredibly high-performing marketing teams at Slack, GitHub, and Bugsnag. In today’s conversation, we start by reflecting on her first year as CEO. She shares her alternative to yearly planning, borrowing from famed military strategist John Boyd. Kelly also walks us through Abstract’s most recent product launch and how it crystallized her leadership point of view: constantly optimize for trade-offs rather than clear-cut right and wrong. Next, we switch gears to talk about some of the lessons from her storied marketing career. She unpacks her jobs-to-be-done approach for crafting a product story when there’s loads of competition. She also takes us behind the scenes of developing Slack’s “where work happens” tagline and crossing the chasm from a passionate early-adopter customer base to the ubiquitous product it is today. Today’s conversation is a must-listen for marketing folks, who will surely appreciate the peek behind the curtain. But all sorts of leaders who aim to collaborate more effectively with their orgs will come away with a deeper understanding of marketing’s art and science. You can follow Kelly on Twitter at @_kcwatkins You can email us questions directly at review@firstround.com or follow us on Twitter @ twitter.com/firstround and twitter.com/brettberson To learn more about Kelly’s advice on hiring your first head of marketing, read her Medium article: https://medium.com/hackernoon/how-to-hire-your-first-head-of-marketing-67c43dd2cd73 For more on the jobs-to-be-done framework, check out this article on the Review: https://review.firstround.com/build-products-that-solve-real-problems-with-this-lightweight-jtbd-framework
On this Keepin' it 100, Nelson is joined by Kirti Dewan, Vice President of Marketing at Bugsnag, a full stack app stability monitoring solution. “The takeaway is to just be yourself. I literally wrote the LinkedIn message as though it was a conversation. It's not going to work every single time because people have different circumstances that they need to work with, but it does pay off at some time. People do like to help other people.” Before her role as Vice President of Marketing for Bugsnag, Kirti served in other marketing leadership roles as a consultant for various companies. Kirti shares the secret sauce to becoming a successful consultant: being authentic and cold calling.
Software Engineering Radio - The Podcast for Professional Software Developers
James Smith, CEO and co-founder of Bugsnag discusses “Why it is ok to ship your software with Bugs.”
----------------- *Episode outline* ----------------- [01:18] Kirti’s background [01:45] How Kirti developed work-life balance [06:13] What Kirti learned from her consulting career and how she uses these lessons today [10:35] Good habits that will positively impact your work [12:32] How others respond to Kirti’s views on work-life balance [13:13] 3 of Bugsnag’s best campaigns during the last 12 months and 3 that Kirti is excited about in the future [17:22] How Bugsnag pivoted to online events and how successful they have been [20:10] Kirti’s influences ---------------------- *Kirti’s Inspirations* ---------------------- Forward Thinking Founders Podcast ( https://podcasts.apple.com/us/podcast/forward-thinking-founders/id1454168902 ) -------------------- *Connect with Kirti* -------------------- LinkedIn ( https://www.linkedin.com/in/kirti-dewan-496441/ ) Website ( https://www.bugsnag.com/ )
Jonty's Twitter - https://twitter.com/jontybehr
Codeigniter - https://codeigniter.com
Mettle - https://mettle.io/
Laravel Live UK - https://laravellive.uk/
PaperTrail - https://papertrail.com/
Understand.io - https://understand.io/
Xdebug - http://xdebug.org/
Bugsnag - https://bugsnag.com/
ELK Stack - https://www.elastic.co/what-is/elk-stack
TICK stack - https://www.thoughtworks.com/radar/platforms/tick-stack
Matt's live stream with Derick - https://www.youtube.com/watch?v=iloCjuqMdKU
Laravel discord channel - https://t.co/4fjanVoFIE?amp=1
Send Portal - https://sendportal.io/
Charity Majors and Observability - https://hub.packtpub.com/honeycomb-ceo-charity-majors-discusses-observability-and-dealing-with-the-coming-armageddon-of-complexity-interview/
Transcription sponsored by Larajobs
Editing sponsored by Tighten
Join the 30-DAY CHALLENGE: "You Don't Know JS Yet" In this episode of Ruby Rogues, James Thompson, a Software Architect at Mavenlink, delves into how to address errors in a service-based system and how to prioritize what errors to fix. He goes into how to recognize the errors when they are creeping in and so much more. Panel Dave Kimura John Epperson Matt Smith Luke Stutters Guest James Thompson Sponsors Scout APM | We'll donate $5 to the open source project of your choice when you deploy Scout Rails Remote Conf 2020 Links Bugsnag OpenTelemetry Application Monitoring and Error Tracking Software | Sentry SignalFx smartinez87/exception_notification: Exception Notifier Plugin for Rails Errbit Picks James Thompson: Follow James on Twitter @plainprogrammer, Website The Annotated American Gods Luke Stutters: raggi/async_sinatra Dave Kimura: Rubidium Slim Gemfile for increased application maintainability John Epperson: Sharing puzzles with your friends so you can do puzzles during the current stay-at-home era Matt Smith: Pulumi - Modern Infrastructure as Code Follow Ruby Rogues on Twitter > @rubyrogues
According to James Smith, CEO of Bugsnag, caring about customer experience is what makes for a better software developer. This is especially important when managing technical debt. Today on CTO Connection, James shares his experience and discusses his approach toward technical debt from both the developer and client perspectives. James shares why he believes overcommunication is key and how carving out time can help your team achieve a manageable sprint cadence. Listen for more. [01:26] - Becoming CTO in hard mode [05:43] - Learning to let go [10:17] - Hiring management [13:11] - Process maturation [15:34] - Overcommunication [19:12] - Managing technical debt [26:08] - Carving out time [31:23] - Bug triage [37:23] - Delivering a good experience Special thanks to our global partner – Amazon Web Services (AWS). AWS offers a broad set of global cloud-based products to equip technology leaders to build better and more powerful solutions; reach out to aws-cto-program@amazon.com if you’re interested to learn more about their offerings. CTO Connection is where you can learn from the experiences of successful engineering leaders at fast-growth tech startups. Whether you want to learn more about hiring, motivating, or managing an engineering team, if you're technical and manage engineers, the CTO Connection podcast is a great resource for learning from your peers! If you'd like to receive new episodes as they're published, please subscribe to CTO Connection in Apple Podcasts, Google Podcasts, Spotify, or wherever you get your podcasts. If you enjoyed this episode, please consider leaving a review in Apple Podcasts. It really helps others find the show. This podcast episode was produced by Dante32.
During this episode of Tech Qualified, Tristan Pelligrino and Justin Brown talk with Kirti Dewan, the VP of Marketing at Bugsnag. Kirti chats about her role as a consultant prior to joining Bugsnag and then takes us through some of the major marketing initiatives she’s spearheading at the company. Kirti has a wide range of marketing leadership experience with startups and brings a lot to the table with her role at Bugsnag. Episode Highlights: Kirti discusses a bit about her background both as a consultant and as an employee for various startup tech companies. Kirti talks about Bugsnag and its mission to be an application stability and management platform. Bugsnag helps companies capture errors and then prioritize and fix the errors within their development lifecycle. When Kirti joined the team, it had three employees on the marketing side - now she leads a team of six. When coming on board, Kirti had to focus on a lot of the foundational components of marketing - the marketing function inside a young company tends to need a lot more structure before growing too much. Kirti discusses the responsibilities of a marketing organization now: it's very difficult to be both tactical and strategic at the same time. Documentation was a big focus for the team as it started to build out various functions, especially for events. Kirti discusses the importance of communicating feature launches with their product, especially for those that are very important to the customer base. Kirti built out a lot of foundational elements for their content marketing - including explainer videos and other long-form materials. During the interview, Kirti talked about how she helped change the structure of the company so that content became a focus and a responsibility for everyone. 
Kirti described marketing and marketing initiatives as a “laser pointer” and how it’s important not to chase other projects or try to “boil the ocean.” As a startup, you can’t place your bets all over the place - it’s important to focus on a few channels and go deeper with strategy and tactics. Kirti discusses the development of their ABM strategy, which isn’t a “full blown strategy” - but a strategy that has a focus on segmentation. At Bugsnag, the company divides their content into four different “swim lanes” - areas focused on engineering, product features, thought leadership and sales enablement. The team develops a set of content initiatives for the entire quarter and assigns writers to each of the pieces to ensure a consistent cadence. Bugsnag is fortunate to have excellent writers within their development team and these folks contribute greatly to the highly technical content featured on the organization’s website. In many cases, Kirti uses an audio interview with executives to drive content pieces - this allows team members to just “riff” and produce content at the same time. Key Points: Kirti stated “The responsibilities within a marketing organization are quite immense. So you are down in the weeds. 10 minutes later, you're really high up at 30,000 feet. You're trying to be so strategic. So it's this constant gear switching, context switching that has to happen in the brain…” Kirti discussed the status of marketing teams at SaaS companies and how many marketing teams struggle to get respect with product-driven companies, “...So depending upon the type of company that you are...marketing can be a different junk drawer, different place of respect. And what happened at Bugsnag is it is very difficult, like other SaaS companies in that it is very, very product driven.” Resources Mentioned: Kirti Dewan: LinkedIn Bugsnag: Website Motion: Ultimate Thought Leadership Course for B2B Tech Companies
Technical debt has to be dealt with on a regular basis to have a healthy product and development team. The impacts of technical debt include emotional drain on engineers and slowed development, and it can adversely affect your hiring ability and retention. But really, what is technical debt? Can we measure it? How do we reduce it, and when? James Smith, the CEO of Bugsnag, joins the show to talk about technical debt and all of these questions. Special Guest: James Smith.
In this episode of React Native Radio Josh Justice interviews Yassir Hartani. Yassir writes a blog about all he learns while programming with React Native. They begin by discussing his article about React Native Navigation. Yassir explains why he prefers React Native Navigation and walks Josh through the article. They move on to share tips for getting into React Native development. Yassir shares the differences between React Native development and developing on the web. He explains the differences in base components, syntax, and naming. For those used to developing on the web, he recommends using styled-components. Next, they discuss best practices for upgrading and explain why upgrading in React Native can be painful. They discuss tips for improving user experience, including keyboards, clickable buttons, native feedback, and safe area view. Developer experience tips are next. Yassir recommends building for both iOS and Android and testing on both platforms as well. They also recommend testing on a physical device. The panel shares other testing tips and gives error tracking recommendations. Panelists Josh Justice Guest Yassir Hartani Sponsors G2i Infinite Red CacheFly ____________________________________________________________ "The MaxCoders Guide to Finding Your Dream Developer Job" by Charles Max Wood is now available on Amazon. Get Your Copy Today! ____________________________________________________________ Links An Introduction to React-Native-Navigation Styled Components for React Native React Native Upgrade Helper React Native CLI “upgrade” command KeyboardAvoidingView TouchableNativeFeedback React-native-platform-touchable SafeAreaView https://facebook.github.io/react-native/docs/improvingux Sentry Bugsnag Android keystores Fastlane CircleCI App Center CodePush Detox Travis CI https://www.facebook.com/ReactNativeRadio/ https://twitter.com/R_N_Radio Picks Josh Justice: Big Nerd Ranch Guides PouchDB `pouchdb-react-native` Yassir Hartani: Deep Work 4-Hour Workweek
Andy McLoughlin is a partner at Uncork Capital in San Francisco, focusing on B2B software opportunities. Uncork Capital is a seed-stage venture firm that commits early, helps with the hard stuff, and sticks around. Really. Andy's investments include LaunchDarkly, Coder, Crossbeam, Focal Systems, Human Interest, Pattern (acquired by Workday), Test.ai, Simplify, Fountain, GreatHorn, Bigfinite, and Identify3D, as well as a number of companies still in stealth. Andy was previously a prolific seed investor in many well-known startups: Postmates, Buffer, Intercom, Pipedrive, Hullabalu, RolePoint, Tray.io, Bugsnag, Thread, Calm, Secret Escapes, Apiary (acquired by Oracle), Import.io, Zesty (acquired by Square), Marvel, Cloud66, and more. Prior to becoming an investor, Andy was best known for co-founding Huddle with Alastair Mitchell. Andy had many roles at Huddle over the years - Head of Technology, Head of Product, Head of Marketing, Head of Strategy / Business Development, GM North America - but he was most passionate about building a great team and a great product. Andy stepped back from his executive role in 2015 but remained on the board of directors until the company's acquisition in 2017, when it was eventually sold to private equity for $89m in August 2017. That sum is far less than Huddle's reported peak valuation of up to $300M when it snagged a $51 million Series D funding round in 2014.
In this episode of React Native Radio Charles Max Wood interviews James Smith, the co-founder and CEO of Bugsnag. James gives Bugsnag’s background and explains what makes it different than other bug-finding tools. He shares statistics on how much bugs cost: developers spend on average 17.3 hours per week dealing with bad code, 85 billion dollars in GDP are lost to bad code every year, and most customers leave an app after two crashes, harming your brand. Chuck and James consider when and why customers leave reviews. They consider how reviews help in finding and fixing bugs. They discuss how helpful it would be if they could communicate with unhappy customers to help them find bugs. James explains how Bugsnag can help with this by replicating user interactions to find what steps led to a bug. James explains what to do once all the data has been gathered and the best processes for actually fixing the bugs. This process hinges on establishing ownership and identifying priority bugs. Although QAs and QEs are getting more common, James recommends empowering the engineering team to fix bugs. Chuck and James consider the idea of a bug sheriff, a rotating position who holds the responsibility of determining priorities and ownership. They consider how these processes could lower the number of bugs and teach developers to better handle bugs. James explains that “zero bugs” is an impossible goal because there will always be more bugs; the hope is to stay on top of them so the team can reach new velocity. Performance bugs are considered, and James explains how these can be measured and improved each release. 
Panelists Charles Max Wood Guest James Smith Sponsors Infinite Red G2i CacheFly Links https://stripe.com/reports/developer-coefficient-2018 Buckaroo https://square.github.io/leakcanary/ https://www.bugsnag.com/ https://twitter.com/loopj?lang=en https://www.facebook.com/ReactNativeRadio/ https://twitter.com/R_N_Radio Picks Charles Max Wood: The MaxCoders Guide To Finding Your Dream Developer Job It's A Wonderful Life Mr. Krueger's Christmas James Smith: DroidCon Links Awakening
In this episode of React Native Radio Charles Max Wood interviews James Smith, the co-founder, and CEO of Bugsnag. James gives Bugsnag’s background and explains what makes it different than other bug-finding tools. He shares statistics on how much bugs cost. Developers spend on average 17.3 hrs per week dealing with bad code, 85 billion dollars in GDP dollars are lost to bad code every year and most customers leave an app after two crashes, harming your brand. Chuck and James consider when and why customers leave reviews. They consider how reviews help in finding and fixing bugs. They discuss how helpful it would be if they could communicate with unhappy customers to help them find bugs. James explains how Bugsnag can help with this by replicating user interactions to find what steps led to a bug. James explains what to once all the data has been gathered and the best processes for actually fixing the bugs. This process stems on establishing ownership and identifying priority bugs. Although QAs and QEs are getting more common, James recommends empowering the engineering team to fix bugs. Chuck and James consider the idea of a bug sheriff, a rotating position who holds the responsibility of determining priorities and ownership. They consider how these processes could lower the number of bugs and teach developers to better handle bugs. James explains that “zero bugs” is an impossible goal because there will always be more bugs, the hope is to stay on top of them so the team can reach new velocity. Performance bugs are considered and James explains how these can be measured and improved each release. 
Crash monitoring emerged as a software category over the last decade. Crash monitoring software allows developers to understand when their applications are crashing on client devices. For example, we have an app for Software Engineering Daily that people download on Android or iOS. Users download the app to their smartphone. From "Bugsnag Business with James Smith" on Software Engineering Daily.
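At its core, crash monitoring works by intercepting uncaught exceptions before the process dies and shipping a structured report to a collection service. A minimal sketch of that mechanism in Python, using the standard library's uncaught-exception hook; the `send_report` function and `REPORTS` queue are stand-ins for a real vendor's ingest API, not anything from Bugsnag itself:

```python
import sys
import traceback

REPORTS = []  # stand-in for a network queue to a crash-monitoring backend

def send_report(payload):
    # A real SDK would POST this to the vendor's ingest endpoint;
    # here we just collect reports locally for illustration.
    REPORTS.append(payload)

def crash_handler(exc_type, exc_value, exc_tb):
    send_report({
        "error_class": exc_type.__name__,
        "message": str(exc_value),
        "stacktrace": traceback.format_exception(exc_type, exc_value, exc_tb),
    })
    # Fall through to the default handler so the crash still surfaces locally
    sys.__excepthook__(exc_type, exc_value, exc_tb)

# Install the hook: any uncaught exception now produces a report
sys.excepthook = crash_handler

# Simulate a crash without killing this process by invoking the handler directly
try:
    1 / 0
except ZeroDivisionError:
    crash_handler(*sys.exc_info())
```

Chaining to `sys.__excepthook__` is deliberate: monitoring should observe failures, not swallow them.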
In episode 10 of O11ycast, Charity Majors and Liz Fong-Jones speak with Bugsnag CEO James Smith. They discuss the seemingly impossible ways an organization can measure technical debt and how they can attempt to reduce it.
In episode 10 of O11ycast, Charity Majors and Liz Fong-Jones speak with Bugsnag CEO James Smith. They discuss the seemingly impossible ways an organization can measure technical debt and how they can attempt to reduce it. The post "Ep. #10, Measuring Technical Debt with Bugsnag CEO James Smith" appeared first on Heavybit.
Robby sits down with James Smith, Co-Founder and CEO at Bugsnag, to discuss how to look at technical debt as a business cost, engineering processes in a startup vs. a stable company, and how the Bugsnag engineering team gets things done as a team with offices on two continents. Helpful Links: Bugsnag James Smith on Twitter Pre-Suasion by Robert Cialdini Subscribe to Maintainable on: Apple Podcasts Overcast Or search "Maintainable" wherever you stream your podcasts. Brought to you by the team at Planet Argon.
Hey everyone! In today’s episode, I share the mic with James Smith, Founder of Bugsnag, an automated crash protection platform for web and mobile. Bugsnag allows its customers to make data-driven decisions about building new features and fixing glitches. Tune in to hear how James turned his passion into his profession, how Bugsnag’s Customer Success Team lives up to their motto of “Land and Expand”, and why attending conferences can be a lucrative decision. Click here for show notes and transcript Leave Some Feedback: What should I talk about next? Who should I interview? Please let me know on Twitter or in the comments below. Did you enjoy this episode? If so, leave a short review here. Subscribe to Growth Everywhere on iTunes. Get the non-iTunes RSS feed Connect with Eric Siu: Growth Everywhere Single Grain Twitter @EricSiu
How do you turn a bug into a feature? On this episode, we talk about how we used Bugsnag to do just that. We then get into an interesting discussion around using 3rd party software applications to move software projects more quickly with less risk. We also highlight just how useful the bitcoin-dev mailing list is for learning about how the Bitcoin core team thinks about the protocol, and how their discussions underpin important considerations around scaling and network growth. We also look at a terrible case of UX in VeChain, how Google is adding Ethereum to BigQuery, how $800mm worth of Bitcoin and Bitcoin Cash moved out of a Silk Road wallet, and a very suspicious Bitmex short squeeze. Topics: How great Bugsnag is How to use Bugsnag to find a feature Getting to the roots of a crypto project Bitcoin-dev mailing list What testnet is VeChain alert CertiK AutoScan Engine Google adding Ethereum to BigQuery How $800mm worth of bitcoin and bitcoin cash moved from silk road wallet How a short squeeze works Links: Crypto teams need to communicate better MIT License Bitcoin Dev Mailing List Testnet 3 Reset Blockonomics - https://app-alpha.quantlayer.com/dashboard/alert/0b08c219-e363-42a1-8be2-a1e8ba90e13c?search=mainnet&searchType=default TXN price stayed stable CertiK Autoscan Engine - https://medium.com/certik/certik-autoscan-engine-53-of-the-top-500-tokens-by-market-cap-were-found-to-have-vulnerabilities-80697fc245af - https://ambcrypto.com/binance-to-integrate-certiks-autoscan-to-their-exchange-platform/ Google adding Ethereum to BigQuery Alerts - https://app-alpha.quantlayer.com/dashboard/alert/dda92d48-8342-4359-9115-a2ab8dee49b6?search=crash&searchType=default - https://app-alpha.quantlayer.com/dashboard/alert/34884f60-ea42-35bb-87f3-a3f204e944d5?search=pump&searchType=default&telegramEnabled=false - https://app-alpha.quantlayer.com/dashboard/alert/c2168670-f055-4d37-9ff9-cfe3fb00250a?search=mainnet&searchType=default
Marketing School - Digital Marketing and Online Marketing Tips
In episode #671, Eric and Neil discuss which A/B tests you should run to improve your sales. Tune in to hear what will help you up your conversions. TIME-STAMPED SHOW NOTES: [00:27] Today’s Topic: What Kind of A/B Tests Should You Run to Improve Your Sales [00:40] Add pop-ups to your site! [00:42] You can use tools like Opt-in Monster, Sumo, and HelloBar to do this. [00:52] From there, you just try to drive people to a conversion. [01:08] Headlines and copy affect conversions more than most things. [01:20] Get on the phone with your customers and ask what got them on board. [01:35] You’ll find people will say a lot of the same things. [01:39] Take what worked and incorporate it into your headlines and copy. [01:53] You need to make sure your copy is cohesive throughout your site. [02:00] Urgency is key. Have deadlines. [02:09] Use Deadline Funnel to create a countdown. [02:17] When something seems limited, it makes it more appealing. [02:30] The final email you send out before the deal expires is usually what gets the most conversions. [02:45] Having all your form fields on one page can be overwhelming. Break your checkout process into a couple or a few parts. [03:07] Conversions will go up by at least 9%. [03:22] Test your pricing every now and again. [03:25] BugSnag changed their pricing several times during their first year in business. [03:47] When doing customer development, ask them what prices are deal breakers. [04:03] Play around with button color, text size, and images. These things don’t necessarily affect conversion as much as people say, but it counts as content. [04:25] When people aren’t buying, Neil surveys them to see what didn’t work. [04:35] Do a remarketing campaign addressing these negatives (maybe using video). [04:54] Also run a test using the video. [05:04] Try short-form vs. long-form copy and see what performs better. [05:30] Optimize your ads. [05:40] On Facebook, add video and a lot of text content. 
[05:56] The conversion rate goes through the roof. [06:07] Test out freemium, free trials, etc. [06:37] That’s it for today! [06:41] Go to Singlegrain.com/Giveway for a special marketing tool giveaway! Leave some feedback: What should we talk about next? Please let us know in the comments below. Did you enjoy this episode? If so, please leave a short review. Connect with us: NeilPatel.com Quick Sprout Growth Everywhere Single Grain Twitter @neilpatel Twitter @ericosiu
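The episode's advice to test pricing, copy, and checkout flow implies comparing conversion rates between variants and deciding whether an observed lift (like the 9% figure mentioned) is real or noise. A standard way to do that is a two-proportion z-test; here is a sketch using only Python's standard library (the visitor and conversion numbers below are made up for illustration):

```python
import math

def ab_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF (math.erf is in the stdlib)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 500 conversions out of 10,000 visitors on the
# control vs 575 out of 10,000 on the new pricing page (a ~15% relative lift)
z, p = ab_z_test(500, 10_000, 575, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below 0.05 is the conventional (if arguable) threshold for calling the lift significant; with smaller samples, the same lift can easily fail to clear it.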
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Eric Vishria is a General Partner @ Benchmark, one of the world’s leading VC funds with a portfolio including the likes of Twitter, Uber, Snapchat, eBay, WeWork, Yelp and many more revolutionary companies of the last decade. At Benchmark, Eric has led deals and sits on the board of the likes of Confluent, Amplitude, and Bugsnag. Prior to being in VC, Eric was the Founder & CEO @ Rockmelt, the startup that sought to re-imagine the browser for the way people use the web today; the company was ultimately acquired by Yahoo in 2013. Prior to Rockmelt, Eric held numerous different roles including VP of Products @ HP and VP of Marketing @ Opsware. In Today’s Episode You Will Learn: 1.) How did Eric come to be one of the 5 GPs at Benchmark following operational success with Rockmelt, HP and Opsware? 2.) Why does Eric believe the pendulum has swung too far to the operational route into VC? What are the under-appreciated benefits of career VCs and the perspective they bring? How does Eric expect the pendulum to swing in the coming years? 3.) What makes the best-performing venture partnerships? How does Benchmark think about partner composition and career pre-VC? How does Benchmark structure investment decision-making? Why do they favor advocacy over unanimity? 4.) What does Eric mean when he says we are at the beginning of an infrastructure renaissance? What opportunities does this create in the venture landscape? How does this lead Eric to consider the current state of the consumer landscape? 5.) How does Eric view multi-stage investing? Why do Eric and Benchmark favor stage specificity when it comes to investing? What are the dangers of larger stage funds investing in earlier rounds for optionality? Items Mentioned In Today’s Show: Eric’s Fave Book: Endurance: Shackleton's Incredible Voyage To The Antarctic Eric's Most Recent Investment: Confluent As always you can follow Harry, The Twenty Minute VC and Eric on Twitter here! 
Likewise, you can follow Harry on Snapchat here for mojito madness and all things 20VC. If you are an early stage startup, the right infrastructure and support systems are critical, and that is where First Republic is so good. First Republic’s resources, network, and expertise allow entrepreneurs to customise a solid foundation for their business. Why First Republic? Well, you get to leverage their incredible network of VC firms to prepare you for future fundraising events, you get to count on a single point of contact that will be there for you and your employees, and you get access to exclusive events and networking opportunities. Their clients include the likes of Instacart, eShares and Wish, just to name a few. Check it out by heading over to innovation.firstrepublic.com Segment allows you to collect data from every platform (mobile, web, server, cloud apps) and load it into Segment. Segment then sends the customer data to your tools and destinations where it can be used most effectively; destinations include email, analytics, warehouses, helpdesks and more. With over 200 sources and destinations on the Segment platform that can empower your team, Segment really is the last integration you will ever do, and that is why the world’s best companies use Segment to drive growth and revenue, including Atlassian, New Relic and Crate & Barrel. Simply head over to segment.com to find out more.
The Top Entrepreneurs in Money, Marketing, Business and Life
James Smith. He’s the co-founder and CEO of Bugsnag, the leading crash monitoring platform for web and mobile applications. The company helps companies like Airbnb, Lyft, Cisco, Pandora and Yelp catch and fix errors in their applications. Originally from London, James moved to the Bay Area in 2009, leading the product team as the CTO of Heyzap. In his spare time, he likes hacking on open source software, eating junk food and practicing his American accent. Famous Five: Favorite Book? – Radical Focus What CEO do you follow? – Jeff Bezos Favorite online tool? — eShares How many hours of sleep do you get?— 8 If you could let your 20-year-old self know one thing, what would it be? – “If you don’t ask, you don’t get; apply it to your life” Time Stamped Show Notes: 01:11 – Nathan introduces James to the show 02:00 – If a company has software, Bugsnag detects when the software is broken 02:22 – Bugsnag charges monthly 02:28 – The price varies depending on the company’s needs 02:36 – Price starts at $29 a month to tens of thousands a month depending on the scale of the business 03:17 – Customer cohorts 03:57 – Team size is 35 and will be 45 at the end of the year 04:30 – James and his co-founder quit their jobs in 2012 and started Bugsnag in 2013 04:40 – James was the CTO for Heyzap which was a Y Combinator company in the gaming space 04:59 – Heyzap wasn’t able to solve the problem James had with Bloomberg 05:59 – James invested in Heyzap and learned a lot from his time with them 06:40 – Heyzap was acquired by the German company Fyber 07:06 – James’ experience entering the startup world 08:30 – With Heyzap, James had to decide whether or not he’d buy his shares before the acquisition 09:43 – James’ price was low because he was an early employee of Heyzap 10:41 – James was 29 when he left Heyzap 10:50 – Bugsnag was initially bootstrapped, then raised in 2013 11:08 – Bugsnag went with Matrix Partners 11:32 – Bugsnag raised a total of $9.5M 11:49 – Customer number is around 
4000 companies 12:04 – Bugsnag has a free and premium model 12:14 – There are 60K software engineers who are using Bugsnag 12:21 – One third are organizations and the rest are using it for free 13:00 – First year revenue was $4.5K in ARR 13:27 – Bugsnag has broken $2M ARR already 13:47 – “The expansion revenue is really, really strong” 13:50 – Bugsnag is constantly in net negative churn 14:06 – Logo churn is around 1% 14:40 – Bugsnag started with low deal sizes and grew them slowly 15:05 – People try Bugsnag for free and see its value 15:45 – Healthy net negative churn in the industry is around mid-single-digit to low-double-digit negative churn 16:41 – The best driver of growth for Bugsnag is word of mouth 17:01 – Bugsnag also does conferences and attended 18 conferences last year 17:10 – Sponsorship price per conference can go up to $10K 17:28 – Large companies go to conferences as well 17:35 – Payback period is 12 months, which is the rule of thumb 18:17 – Bugsnag has a typical SaaS gross margin 20:25 – The Famous Five 3 Key Points: If you don’t ask, you won’t receive; therefore, just get out there and ask for what you want. Small deal sizes can grow and expand to large ones once people see your value. Consider owning a part of a company—especially if it’s a company that you truly believe in. Resources Mentioned: The Top Inbox – The site Nathan uses to schedule emails to be sent later, set reminders in inbox, track opens, and follow-up with email sequences GetLatka - Database of all B2B SaaS companies who have been on my show including their revenue, CAC, churn, ARPU and more Klipfolio – Track your business performance across all departments for FREE Hotjar – Nathan uses Hotjar to track what you’re doing on this site. 
He gets a video of each user visit like where they clicked and scrolled to make the site a better experience Acuity Scheduling – Nathan uses Acuity to schedule his podcast interviews and appointments Host Gator – The site Nathan uses to buy his domain names and hosting for the cheapest price possible Audible – Nathan uses Audible when he’s driving from Austin to San Antonio (1.5-hour drive) to listen to audio books Show Notes provided by Mallard Creatives
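The metrics James cites in the interview above — net negative churn, roughly 1% logo churn, and a 12-month payback rule of thumb — all come from a handful of standard SaaS formulas. A quick illustration in Python; the formulas are the conventional ones, but the cohort figures below are invented for the example and are not Bugsnag's actual numbers:

```python
def net_revenue_retention(start_mrr, expansion, contraction, churned):
    """NRR as a percentage. Above 100% means expansion revenue outpaces
    losses -- the 'net negative churn' James describes."""
    return (start_mrr + expansion - contraction - churned) / start_mrr * 100

def logo_churn(customers_start, customers_lost):
    """Share of customers (logos) lost in the period, as a percentage."""
    return customers_lost / customers_start * 100

def payback_months(cac, monthly_gross_profit_per_customer):
    """Months to recoup customer-acquisition cost; ~12 is a common rule of thumb."""
    return cac / monthly_gross_profit_per_customer

# Hypothetical cohort: $100k starting MRR, $12k expansion,
# $2k contraction, $5k churned; 40 of 4,000 logos lost;
# $2,400 CAC against $200/month gross profit per customer.
nrr = net_revenue_retention(100_000, 12_000, 2_000, 5_000)  # ~105%: net negative churn
churn = logo_churn(4_000, 40)                               # ~1% logo churn
payback = payback_months(2_400, 200)                        # 12-month payback
print(round(nrr, 2), round(churn, 2), round(payback, 2))
```

The interplay is the point: even with some logo churn, an NRR above 100% means the existing customer base grows in revenue on its own, which is why James calls expansion revenue "really, really strong".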
Welcome to Founder Chats, the show where I, Josh Pigford, Founder of Baremetrics, hop on a call with my founder pals and get the stories of how they started and grew their businesses. This week I talk with James Smith of BugSnag, where they do application error monitoring. We talk about breaking computers as a kid, using open source as a way to get initial traction, marketing to developers, the role of data in decision making, and a bunch more. Hope you enjoy it! http://founderchats.com https://bugsnag.com https://baremetrics.com
So my wife and I recently took a trip into Nashville to see Amy Schumer perform. And wouldn't you know it: the moment we arrived, Bugsnag began sending me error reports. No laptop, and two hours from home. ...Crap.
Download this episode, in which Ian and Andrey discuss Uberdeck, system administration, pricing strategies, news coverage for startups, Jeff Atwood, Trello, and Snappy. Uberdeck – Andrey’s latest project Alwin Hoogerdijk (collectorz.com) – on email marketing Campaign Monitor – email campaign service DNS Made Easy – DNS service Amazon Route 53 – DNS service Linode – hosting for Uberdeck Trello – project management Coding Horror – Jeff Atwood’s blog Amazon SQS – hosted message queue service EngineHosting – hosting for Snappy Snappy – Ian’s latest product PG Experts – PostgreSQL expert services BugSnag – real-time bug intelligence AsyncHttp – HTTP library for Android James Smith – creator of BugSnag and AsyncHttp for Android. Discourse – Jeff Atwood’s forum software Eric Sink – on the business of software