About Ian

Ian Smith is Field CTO at Chronosphere, where he works across sales, marketing, engineering, and product to deliver better insights and outcomes to observability teams supporting high-scale cloud-native environments. Previously, he worked with observability teams across the software industry in pre-sales roles at New Relic, Wavefront, PagerDuty, and Lightstep.

Links Referenced:

Chronosphere: https://chronosphere.io
Last Tweet in AWS: lasttweetinaws.com

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Every once in a while, I find that something I'm working on aligns perfectly with a person that I wind up basically convincing to appear on this show. Today's promoted guest is Ian Smith, who's Field CTO at Chronosphere. Ian, thank you for joining me.

Ian: Thanks, Corey. Great to be here.

Corey: So, the coincidental aspect of what I'm referring to is that Chronosphere is, despite the name, not something that works on bending time, but rather an observability company. Is that directionally accurate?

Ian: That's true. Although you could argue it probably bends a little bit of engineering time. But we can talk about that later.

Corey: [laugh]. So, observability is one of those areas that I think is suffering from too many definitions, if that makes sense.
And at first, I couldn't make sense of what it was that people actually meant when they said observability. This sort of clarified for me, at least, when I realized that there were an awful lot of, well, let's be direct and call them 'legacy monitoring companies' that just chose to take what they were already doing and define that as, "Oh, this is observability." I don't know that I necessarily agree with that. I know a lot of folks in the industry vehemently disagree.

You've been in a lot of places that have positioned you reasonably well to have opinions on this sort of question. To my understanding, you were at interesting places, such as Lightstep, New Relic, Wavefront, and PagerDuty, which I guess technically might count as observability in a very strange way. How do you view observability, and what is it?

Ian: Yeah. Well, a lot of definitions, as you said. Common ones talk about the three pillars; they talk really about data types. For me, it's about outcomes. I think observability is really this transition from the yesteryear of monitoring, where things were much simpler and you, sort of, knew all of the questions: you were able to define your dashboards, you were able to define your alerts, and that was really the gist of it. And going into this brave new world where there are a lot of unknown things, you're having to ask a lot of, sort of, unique questions, particularly during an incident, and so being able to ask those questions in an ad hoc fashion layers on top of what we've traditionally done with monitoring. So, observability is sort of that more flexible, more dynamic kind of environment that you have to deal with.

Corey: This has always been something that, for me, has been relatively academic. Back when I was running production environments, things tended to be a lot more static, where, "Oh, there's a problem with the database. I will SSH into the database server." Or, "Hmm, we're having a weird problem with the web tier.
Well, there are ten or 20 or 200 web servers. Great, I can aggregate all of their logs to Syslog, and worst case, I can log in and poke around."

Now, with a more ephemeral style of environment, where you have Kubernetes or whatnot scheduling containers into place, you have problems: you can't attach to a running container very easily, and by the time you see an error, that container hasn't existed for three hours. And that becomes a problem. Then you've got the Lambda universe, which is a whole 'nother world of pain, where it becomes very challenging, at least for me, to reason using the old-style approaches about what's actually going on in your environment.

Ian: Yeah, I think there's that, and there's also the added complexity of oftentimes you'll see performance or behavioral changes based on even more narrow pathways, right? One particular user is having a problem and the traffic is spread across many containers. Is it making all of these containers perform badly? Not necessarily, but their user experience is being affected. It's very common in, say, B2B scenarios for you to want to understand the experience of one particular user, or the aggregate experience of users at a particular company, a particular customer, for example.

There's just more complexity. There's more complexity of the infrastructure and just the technical layer that you're talking about, but there's also more complexity in just the way that we're handling use cases and trying to provide value with all of this software to the myriad of customers in different industries that software now serves.

Corey: From where I sit, I tend to have a little bit of trouble disambiguating, I guess, the three baseline data types that I see talked about again and again in observability. You have logs, which I think I can mostly wrap my head around. That seems to be the baseline story of, "Oh, great. Your application puts out logs. Of course, it's in its own unique, beautiful format.
Why wouldn't it be?" In an ideal scenario, they're structured. Things are never ideal, so great. You're basically tailing log files in some cases. Great. I can reason about those.

Metrics always seem to be a little bit of a step beyond that. Okay, I have a whole bunch of log lines that are spitting out every 500 error that my app is throwing—and given my terrible code, it throws a lot—but I can then, ideally, count the number of times that appears, and that winds up incrementing a counter, similar to the way that we used to see with StatsD, for example, and collectd. Is that directionally correct, as far as the way I reason about, well, so far, logs and metrics?

Ian: I think at a really basic level, yes. I think that, as we've been talking about, sort of, greater complexity starts coming in when you have—particularly metrics in today's world of containers—Prometheus—you mentioned StatsD—Prometheus has become sort of the standard for expressing those things, so you get situations where you have incredibly high cardinality, cardinality being the interplay between all the different dimensions. So, you might have my container as a label, but also the type of endpoint running on that container as a label; then maybe I want to track my customer organizations, and maybe I have 5000 of those. I have 3000 containers, and so on and so forth. And you get this massive explosion, almost multiplicatively.

For those in the audience who really live and breathe cardinality, there's probably someone screaming about how it's not truly multiplicative in every sense of the word, but, you know, it's close enough from an approximation standpoint. As you get this massive explosion of data, which obviously has a cost implication, it also has, I think, a really big implication on the core reason why you have metrics in the first place, which you alluded to: so a human being can reason about it, right?
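The multiplicative explosion Ian describes is easy to see in a few lines of Python; the label names and counts below are illustrative, taken from the numbers in the conversation rather than any real system:

```python
# Each label on a metric multiplies the number of potential unique time
# series a backend has to store and query.
containers = [f"container-{i}" for i in range(3000)]   # 3000 containers
endpoints = ["/login", "/checkout", "/search"]         # endpoint label
customers = [f"org-{i}" for i in range(5000)]          # 5000 customer orgs

# A single counter with these three labels can fan out into this many series:
series = len(containers) * len(endpoints) * len(customers)
print(series)  # 45000000

# In practice only label combinations that actually occur create a series,
# which is why the growth is "close enough" to multiplicative, not exact.
```

Forty-five million potential series from one metric is exactly the kind of fan-out that stops being something a human can reason about.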
You don't want to go and look at 5000 log lines; you want to know that, out of those 5000 log lines, 4000 are errors and 1000 are OKs. It's very easy for human beings to reason about that from a numbers perspective. When your metrics start to explode out into thousands, millions of data points and unique time series, more numbers for you to track, then you're sort of losing that original goal of metrics.

Corey: I think I mostly have wrapped my head around the concept. But then that brings us to traces, and that tends to be, I think, one of the hardest things for me to grasp, just because most of the apps I build, for obvious reasons—namely, I'm bad at programming and most of these are proof-of-concept type of things rather than anything that's large-scale running in production—the difference between a trace and logs tends to get very muddled for me. But the idea being that as you have a customer session or a request that talks to different microservices, how do you collate, across different systems, all of the outputs of that request into a single place so you can see timing information and understand the flow that user took through your application? Is that, again, directionally correct? Have I completely missed the plot here? Which is, again, eminently possible. You are the expert.

Ian: No, I think that's sort of the fundamental premise or expected value of tracing, for sure. We have something that's akin to a set of logs; they have a common identifier, a trace ID, that tells us that all of these logs essentially belong to the same request. But importantly, there's relationship information. And this is the difference between tracing and just having logs with a trace ID attached to them.
So, for example, if you have Service A calling Service B and Service C—the relatively simple case—you could use time to try to figure this out. But what if there are things happening in Service B at the same time as there are things happening in Service C and D, and so on and so forth? So, one of the things that tracing brings to the table is it tells you what is currently happening and what called it. So, oh, I know that I'm Service D; I was actually called by Service B, and I'm not just relying on timestamps to try and figure out that connection. So, you have that information, and ultimately, the data model allows you to fully, sort of, reflect what's happening with the request, particularly in complex environments.

And I think this is where, you know, tracing needs to be looked at not as a tool for "just because I'm operating in a modern environment, I'm using some Kubernetes, or I'm using Lambda." It needs to be used in a scenario where you really have trouble grasping, from a conceptual standpoint, what is happening with the request, because you need to actually fully document it. As opposed to, say, I have a few—let's say three—Lambda functions. I maybe have some key metrics about them; I have a little bit of logging. You probably do not need to use tracing to solve, sort of, basic performance problems with those.
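The parent links Ian describes are what let you recover the call graph without timestamp guesswork. A minimal sketch in Python, with a hypothetical span layout rather than any vendor's actual data model:

```python
# Each span carries its own ID plus the ID of the span that called it.
# The parent link, not wall-clock timing, is what recovers the call tree.
spans = [
    {"span_id": "a", "parent_id": None, "service": "A"},  # root of the request
    {"span_id": "b", "parent_id": "a", "service": "B"},
    {"span_id": "c", "parent_id": "a", "service": "C"},
    {"span_id": "d", "parent_id": "b", "service": "D"},   # D knows B called it
]

def children_of(parent_span_id):
    """Services directly called by the span with the given ID."""
    return [s["service"] for s in spans if s["parent_id"] == parent_span_id]

root = next(s for s in spans if s["parent_id"] is None)
print(root["service"], children_of("a"))  # A ['B', 'C']
print("B calls", children_of("b"))        # B calls ['D']
```

Even if B and C ran at overlapping times, the tree is unambiguous, which is the point: concurrency no longer confuses the reconstruction.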
So, you can get yourself into a place where you're over-engineering, you're spending a lot of time with tracing instrumentation and tracing tooling, and I think that's the core of observability: using the right tool, the right data, for the job.

But that's also what makes it really difficult, because you essentially need to have this, you know, huge set of experience or knowledge about the different data, the different tooling, your architecture, and the data you have available to be able to reason about that and make confident decisions, particularly when you're under a time crunch, which everyone is familiar with: a, sort of, you know, PagerDuty-style experience of my phone is going off and I have a customer-facing incident. Where is my problem? What do I need to do? Which dashboard do I need to look at? Which tool do I need to investigate? And that's where I think the observability industry has stopped serving the outcomes of its customers.

Corey: I had a, well, I wouldn't say it's a genius plan, but it was a passing fancy that I've built this online, freely available Twitter client for authoring Twitter threads—because that's what I do instead of having a social life—and it's available at lasttweetinaws.com. I've used that as a testbed for a few things. It's now deployed to roughly 20 AWS regions simultaneously, and this means that I have a bit of a problem as far as how to figure out not even what's wrong or what's broken with this, but who's even using it?

Because I know people are. I see invocations all over the planet that are not me. And sometimes it appears to just be random things crawling the internet—fine, whatever—but then I see people logging in and doing stuff with it. I'd kind of like to log and see who's using it, just so I can get information like: is there anyone I should talk to about what it could be doing differently?
I love getting user experience reports on this stuff.

And I figured, ah, this is a perfect little toy application. It runs in a single Lambda function, so it's not that complicated. I could instrument this with OpenTelemetry, which then, at least according to the instructions on the tin, would let me send different types of data to different observability tools without having to re-instrument this thing every time I want to kick the tires on something else. That was the promise.

And this led to three weeks of pain, because it appears that, for all of the promise it has, OpenTelemetry, particularly in a Lambda environment, is nowhere near ready to carry a workload like this. Am I just foolish on this? Am I stating an unfortunate reality that you've noticed in the OpenTelemetry space? Or, let's be clear here, you do work for a company with opinions on these things: is OpenTelemetry the wrong approach?

Ian: I think OpenTelemetry is absolutely the right approach. To me, the promise of OpenTelemetry for the individual is, "Hey, I can go and instrument this thing, as you said, and I can go and send the data wherever I want." The sort of larger view of that is, "Well, I'm no longer beholden to a vendor,"—including the ones that I've worked for, including the one that I work for now—"for the definition of the data. I am able to control that, I'm able to choose that, I'm able to enhance that, and any effort I put into it, it's mine. I own that."

Whereas previously, if you picked, say, for example, an APM vendor and said, "Oh, I want to have some additional aspects of my information provided, I want to track my customer, or I want to track a particular new metric of how many dollars I'm transacting," that effort was really going to support the value of that individual solution; it's not going to support your outcomes, which are: I want to be able to use this data wherever I want, wherever it's most valuable. So, the core premise of OpenTelemetry, I think, is great.
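That "instrument once, send it anywhere" premise can be sketched with a toy dispatcher. This is not the OpenTelemetry API, just an illustration of why a neutral instrumentation layer decouples your effort from any one backend:

```python
# Toy sketch: application code records telemetry against a neutral API;
# pluggable exporters decide where the data goes. Swapping vendors means
# changing the exporter list, never the instrumentation itself.
class InMemoryExporter:
    def __init__(self):
        self.events = []

    def export(self, event):
        self.events.append(event)

class Telemetry:
    def __init__(self, exporters):
        self.exporters = exporters

    def record(self, name, **attributes):
        event = {"name": name, **attributes}
        for exporter in self.exporters:  # fan out to every configured backend
            exporter.export(event)

vendor_tool, data_lake = InMemoryExporter(), InMemoryExporter()
telemetry = Telemetry([vendor_tool, data_lake])

# The application is instrumented once, against the neutral API:
telemetry.record("checkout", customer="org-42", duration_ms=118)

print(len(vendor_tool.events), len(data_lake.events))  # 1 1
```

The instrumentation call never mentions a vendor, which is the ownership point Ian is making: the `record` call is your IP, and the exporters are swappable plumbing.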
I think it's a massive undertaking to be able to do this for at least three different data types, right? Defining an API across a whole bunch of different languages, across three different data types, and then creating implementations for those.

Because the implementations are the thing that people want, right? You are hoping for the ability to, say, drop in something—maybe one line of code or, preferably, just attach a dependency, let's say in Java-land, at runtime—and be able to have the information flow through and have it complete. And this is the premise of, you know, vendors I've worked with in the past, like New Relic. That was what New Relic was built on: the ability to drop in an agent and get visibility immediately.

So, having that out-of-the-box visibility is obviously a goal of OpenTelemetry where it makes sense—in Go, it's very difficult to attach things at runtime, for example—but then saying, well, beyond whatever is provided—let's say your gRPC connections, databases, all these things—now I want to go and instrument; I want to add some additional value. As you said, maybe you want to track something like: I want to have in my traces the email address, or the Twitter handle, of whoever it is, so I can then go and analyze that stuff later. You want to be able to inject that piece of information or that instrumentation and then decide: well, where is it best utilized? Is it best utilized in some tooling from AWS? Is it best utilized in something that you've built yourself? Is it best utilized in an open-source project?
Is it best utilized in one of the many observability vendors? Or, as is even becoming more common, do I want to shove everything in a data lake and run, sort of, analysis asynchronously, overlaying observability data for essentially business purposes?

All of those things are served by having a very robust open-source standard and a simple-to-implement way of collecting a really good baseline of data, then making it easy for you to enhance that while still owning it—essentially, it's your IP, right? The instrumentation is your IP, whereas in the old world of proprietary agents and proprietary APIs, you were basically building that IP, but it was tied to the other vendor that you were investing in.

Corey: One thing that I was consistently annoyed by in my days of running production infrastructures at places like, you know, large banks, for example: one of the problems I kept running into is this idea that, "Oh, you want to use our tool. Just instrument your applications with our libraries or our instrumentation standards." And it felt like I was constantly doing and redoing a lot of instrumentation for different aspects. It's not that we were replacing one vendor with another; it's that in an observability toolchain, there are remarkably few one-size-fits-all stories. It feels increasingly like everyone's trying to sell me a multifunction printer, which does one thing well and a few other things just well enough to technically say they do them, but badly enough that I get irritated every single time.

And having 15 different instrumentation packages in an application either has security ramifications, for one—see: large bank—and for another, it became this increasingly irritating and obnoxious process where it felt like I was spending more time seeing to the care and feeding of the instrumentation than I was the application itself. That's the goal—that's, I guess, the ideal light at the end of the tunnel for me in what OpenTelemetry is promising.
Instrument once, and then you're just adjusting configuration as far as where to send it.

Ian: That's correct. And, you know, I keep in touch with a lot of companies that I've worked with—organizations that have, in the last two years, really invested heavily in OpenTelemetry—and they're definitely getting to the point now where they're generating the data once, they're using, say, pieces of the OpenTelemetry pipeline, they're extending it themselves, and then they're able to shove that data in a bunch of different places. Maybe they're putting it in a data lake for, as I said, business analysis purposes or forecasting. They may be putting the data into two different systems, even, for incident and analysis purposes. But you're not having that duplication of effort, or, potentially, the performance impact, right, of having two different instrumentation packages lined up with each other.

Corey: There is a recurring theme that I've noticed in the observability space that annoys me to no end. And that is—I don't know if it's coming from investor pressure, from folks never being satisfied with what they have, or what it is—but there are so many startups that I have seen and worked with in varying aspects of the observability space where I think, "This is awesome. I love the thing that they do." And invariably, every time, they start getting more and more features bolted onto them. Hey, you love this whole thing that basically just does a tail -f on a log file, so it streams your logs in the application and you can look for certain patterns? I love this thing. It's great.

Oh, what's this? Now, it's trying to also be the thing that alerts me and wakes me up in the middle of the night. No. That's what PagerDuty does. I want PagerDuty to do that thing, and I want other things—I want you just to be the log analysis thing and the way that I contextualize logs.
And it feels like they keep bolting things on and bolting things on, where everything is more or less trying to evolve into becoming its own version of Datadog. What's up with that?

Ian: Yeah, the, sort of, dreaded platform play. I—[laugh] I was at New Relic when there were essentially two products that they sold. And then by the time I left, I think there were seven different products being sold, which is kind of a crazy, crazy thing when you think about it. And I think Datadog has definitely exceeded that now. And I definitely see many, many vendors in the market—and even open-source solutions—sort of presenting themselves as, like, this integrated experience.

But to your point, even before, about your experience at these banks: it oftentimes becomes sort of a tick-a-box feature approach of, "Hey, I can do this thing, so buy more. And here's a shared navigation panel." But are they really integrated? Like, are you getting real value out of it? One of the things that I do in my role is I get to work with our internal product teams very closely, particularly around new initiatives like tracing functionality, and the constant sort of conversation is, "What is the outcome? What is the value?"

It's not about the feature; it's not about having a list of 19 different features. It's, "What is the user able to do with this?" And so, for example, there are lots of platforms that have metrics, logs, and tracing. The new one-upmanship is saying, "Well, we have events as well. And we have incident response. And we have security. And all these things sort of tie together, so it's one invoice."

And constantly I talk to customers and ask them, "Hey, what are the outcomes that you're getting when you've invested so heavily in one vendor?" And oftentimes, the response is, "Well, I only need to deal with one vendor." Okay, but that's not an outcome. [laugh].
And it's like the business having a single invoice.

Corey: Yeah, that is something that's already attainable today. If you want to just have one vendor with a whole bunch of crappy offerings, that's what AWS is for. They have AmazonBasics versions of everything you might want to use in production. Oh, you want to go ahead and use MongoDB? Well, use AmazonBasics MongoDB—but they call it DocumentDB, because of course they do. And so on, and so forth.

There are a bunch of examples of this, but those companies are still in business and doing very well because people often want the genuine article. If everyone was trying to do just everything to check a box for procurement, great. AWS has already beaten you at that game, it seems.

Ian: I do think that, you know, people are hoping for that greater value and those greater outcomes, so being able to actually provide differentiation in that market, I don't think, is terribly difficult, right? There are still huge gaps in, let's say, root cause analysis during an investigation. There are huge issues with vendors who don't think beyond just the one individual who's looking at a particular dashboard or looking at whatever analysis tool there is. So, getting those things actually tied together—it's not just, "Oh, we have metrics, and logs, and traces together." Even if you say we have metrics and tracing, how do you move between metrics and tracing?
One of the goals in the way that we're developing product at Chronosphere is that if you are alerted to an incident—you as an engineer; it doesn't matter whether you are massively sophisticated, you're a lead architect who has been with the company forever and you know everything, or you're someone who's just come out of onboarding and it's your first time on call—you should not have to think, "Is this a tracing problem, or a metrics problem, or a logging problem?"

And this is one of those things that I mentioned before, of requiring that really heavy level of knowledge and understanding about the observability space, and your data, and your architecture, to be effective. And so, with, particularly, the observability teams and all of the engineers that I speak with on a regular basis, you get this sort of circumstance where—well, I guess, let's talk about a real outcome and a real pain point, because people are like, okay, yeah, this is all fine; it's all coming from a vendor who has a particular agenda. But the thing that constantly resonates is this: for large organizations that are moving fast—you know, big startups, unicorns, or even more traditional enterprises that are trying to undergo, like, a rapid transformation, go really cloud-native, and make sure their engineers are moving quickly—a common question I will talk about with them is, who are the three people in your organization who always get escalated to? And it's usually, you know, between two and five people—

Corey: And you can almost pick those perso—you say that, and you can—at least anyone who's worked in environments or through incidents like this more than a few times—already have thought of specific people in specific companies. And they almost always fall into some very predictable archetypes. But please, continue.

Ian: Yeah. And when people think about these people, they always jump to mind.
And one of the things I ask about is, "Okay, so when you did your last innovation around observability"—it's not necessarily buying a new thing; maybe it was introducing a new data type, or it was some big investment in improving instrumentation—"what changed about their experience?" And oftentimes, the most that can come out is, "Oh, they have access to more data." Okay, that's not great.

It's like, "What changed about their experience? Are they still getting woken up at 3 am? Are they constantly getting pinged all the time?" At one of the vendors that I worked at, when they would go down, there were three engineers in the company who were capable of generating a list of the customers who were actually impacted by the damage. And so, every single incident, one of those three engineers got paged into the incident.

And it became borderline intolerable for them, because nothing changed. And it got worse, you know? The platform got bigger and more complicated, and so there were more incidents, and they were the ones having to generate that list. But from a business level, from an observability-outcomes perspective, if you zoom all the way up, it's like, "Oh, were we able to generate the list of customers?" "Yes."

And this is where I think the observability industry has sort of gotten stuck—you know, at least one of the ways—is that, "Oh, can you do it?" "Yes." "But is it effective?" "No." And by effective, I mean those three engineers become the focal point for an organization.

And when I say three—you know, two to five—it doesn't matter whether you're talking about a team of a hundred or you're talking about a team of a thousand. It's always the same number of people. And as you get bigger and bigger, it becomes more and more of a problem. So, does the tooling actually make a difference to them? And you might ask, "Well, what do you expect from the tooling? What do you expect it to do for them?" Is it that you give them deeper analysis tools? Is it, you know, that you do AIOps?
No. The answer is: how do you take the capabilities that those people have and spread them across a larger population of engineers? And that, I think, is one of those key outcomes of observability that no one, whether it be on the open-source or the vendor side, is really paying a lot of attention to. It's always about, like, "Oh, we can just shove more data in. By the way, we've got petabyte scale, and we can deal with, you know, 2 billion active time series," and all these other sorts of vanity measures. But we've gotten really far away from the outcomes. It's like, "Am I getting a return on investment from my observability tooling?"

And I think tracing is this—as you've said, it can be difficult to reason about, right? And people are not sure. They're feeling, "Well, I'm in a microservices environment; I'm cloud-native; I need tracing because my older APM tools appear to be failing me. I'm just going to go and wriggle my way through implementing OpenTelemetry"—which has significant engineering costs. I'm not saying it's not worth it, but there is a significant engineering cost. "And then I don't know what to expect, so I'm going to go send my data somewhere and see whether we can achieve those outcomes."

And I do a pilot, and my most sophisticated engineers are in the pilot. And they're able to solve the problems. Okay, I'm going to go buy that thing. But I've just transferred my problems. My engineers have gone from solving problems in, maybe, logs—grepping through petabytes' worth of logs—to using some sort of complex proprietary query language to go through tens of petabytes of trace data, but I actually haven't solved any problem.
I've just moved it around, and probably just cost myself a lot, both in terms of engineering time and real dollars spent as well.

Corey: One of the challenges that I'm seeing across the board is that with observability, for certain use cases, once you start to see what it is and its potential for certain applications—certainly not all; I want to hedge that a little bit—it's clear that there is definite and distinct value versus other ways of doing things. The problem is that the value often becomes apparent only after you've already done it and can see what that other side looks like. But let's be honest here: instrumenting an application is going to take some significant level of investment, in many cases. How do you view the return on investment against the very real cost, if only in people's time, of going ahead and instrumenting for observability in complex environments?

Ian: So, I think that you have to look at the fundamentals, right? You have to look at—pretend we knew nothing about tracing. Pretend that we had just invented logging and you needed to start small. It's like, I'm not going to go and log everything about every application that I've ever had. What I need to do is find the points where that logging is going to be the most useful, most impactful, across the broadest audience possible.

And one of the useful things about tracing is that, because it's built primarily for distributed environments, you can look at, for example, the biggest intersection of requests. A lot of people have things like API gateways, or they have parts of a monolith which are still handling a lot of request routing; those tend to be areas to start digging into. And I would say that, just like anyone who's used Prometheus or decided to move away from Prometheus, no one's ever gone and evaluated a Prometheus solution without having some sort of Prometheus data, right?
You don't go, "Hey, I'm going to evaluate a replacement for Prometheus or my StatsD without having any data, and I'm simultaneously going to generate my data and evaluate the solution at the same time." It doesn't make any sense.

With tracing, there are decent open-source projects out there that allow you to visualize individual traces and understand the basic value you should be getting out of this data. So, it's a good starting point to go, "Okay, can I reason about a single request? Can I look at my request end-to-end, even in a relatively small slice of my environment, and can I see the potential for this? And can I think about the things I need to be able to solve with many traces?" Once you start developing these ideas, you have a better sense of where to invest more in instrumentation. "Look, databases never appear to be a problem, so I'm not going to focus on database instrumentation. The real problem is my external dependencies. The Facebook API is the one that everyone loves to use; I need to go instrument that."

And then you start to get more clarity. Tracing has this interesting network effect: you can basically just follow the breadcrumbs. Where is my biggest problem here? Where are my errors coming from? Is there anything else further down the call chain? You can take that exploratory approach rather than doing everything up front.

But it is important to do something before you start trying to evaluate your end state. "End state" is obviously a nebulous term in today's world, but where do I want to be in two years' time? I would like to have a solution. Maybe it's an open-source solution, maybe it's a vendor solution, maybe it's one of those platform solutions we talked about, but how do I get there?
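The "follow the breadcrumbs" exploration Ian describes can be sketched in a few lines. This is a hypothetical toy, not any vendor's or OpenTelemetry's actual data model: given the spans of one trace, walk child links from the root toward the deepest span that reported an error.

```python
def find_error_leaf(spans):
    """Return the deepest errored span, following child links from the root."""
    children = {}
    for span in spans:
        children.setdefault(span["parent"], []).append(span)

    def walk(span):
        # Prefer an errored child further down the call chain, if one exists.
        for child in children.get(span["id"], []):
            if child["error"]:
                return walk(child)
        return span

    roots = [s for s in spans if s["parent"] is None and s["error"]]
    return walk(roots[0]) if roots else None

# Illustrative trace: the gateway and checkout service both errored, but the
# breadcrumbs lead to the external dependency at the bottom of the call chain.
trace = [
    {"id": "a", "parent": None, "name": "api-gateway", "error": True},
    {"id": "b", "parent": "a", "name": "checkout-svc", "error": True},
    {"id": "c", "parent": "b", "name": "facebook-api", "error": True},
    {"id": "d", "parent": "a", "name": "db-query", "error": False},
]
print(find_error_leaf(trace)["name"])  # facebook-api
```

Even a toy like this shows why a single end-to-end trace is enough to start reasoning about where the next round of instrumentation effort should go.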
It's really going to be: I need to take an iterative approach, and I need to be very clear about the value and outcomes.

There's no point in doing a whole bunch of instrumentation effort on things that are just working fine, right? You want to focus your time and attention where the problems are. And you also don't want to burn out singular engineers. The observability team's purpose in life is probably not just to write instrumentation or just deploy OpenTelemetry, because then we get back to the land where engineers themselves know nothing about the monitoring or observability they're doing, and it becomes a checkbox: "I dropped in an agent. Oh, when it comes time to actually deal with an incident, I don't know anything about the data, and the data is insufficient."

So, a level of ownership supported by the observability team is really important. On the return-on-investment side, though, it's not just the instrumentation effort. There's product training, and there are some very hard costs. People oftentimes think, "Well, the bill I pay the vendor is really the only cost I have." But there are things like egress costs, particularly at high data volumes. There are infrastructure costs: a lot of the time there will be elements you need to run in your own environment, and those can be very costly as well. Ultimately, they're icebergs in this overall ROI conversation.

On the other side of it—you know, return and investment—the return is where there's a lot of difficulty in reasoning about, as you said, what the value is going to be if I go through all this effort. Everyone knows the meme or archetype of, "Hey, here are three options; pick two, because there's always going to be a trade-off." Particularly for observability, it's become: I need to pick between performance, data fidelity, or cost. Pick two.
And by data fidelity—particularly in tracing—I'm talking about the ability to not sample, right? If you have edge cases, if you have narrow use cases and ways you need to look at your data, and you heavily sample, you lose data fidelity. But oftentimes, cost is the reason why you do that. And then obviously there's performance as you start to get bigger and bigger datasets. So, there are a lot of different things you need to balance on that return. As you said, oftentimes you don't get to understand the magnitude of those until you've got the full dataset in and you're trying to do this for real. But being prepared and iterative as you go through this effort, and not saying, "Okay, well, I'm just going to buy everything from one vendor because I'm going to assume that's going to solve my problem," is probably the undercurrent there.

Corey: As I take a look across the entire ecosystem, I can't shake the feeling—and my apologies in advance if this is an observation that winds up throwing a stone directly at you folks—

Ian: Oh, please.

Corey: But I see that there's a strong observability community out there that is absolutely aligned with the things I care about and the things I want to do, and then there's a bunch of SaaS vendors. In many cases, yes, they are advancing the state of the art; I am not suggesting for a second that money is making observability worse. But I do think that when the tool you sell is a hammer, every problem starts to look like a nail—or in my case, like my thumb. Do you think that there's a chance that SaaS vendors are in some ways making this entire space worse?

Ian: As we've gone into more cloud-native scenarios, and people are building things specifically to take advantage of cloud, from a complexity standpoint and a scaling standpoint you start to get vertical issues happening.
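The fidelity-versus-cost trade-off Ian describes, where heavy sampling is cheap but loses the edge cases, can be sketched as a toy sampler. The policy here (keep every errored trace, keep 1% of healthy ones) is an illustrative assumption, not any particular vendor's default:

```python
import random

def should_keep(trace, healthy_rate=0.01, rng=random.random):
    """Toy sampling policy: keep all errored traces, a small slice of the rest."""
    if trace["error"]:
        return True               # never sample away the edge cases
    return rng() < healthy_rate   # heavily sample routine, healthy traffic

# At a 1% healthy rate, 10,000 healthy traces shrink to roughly 100 stored
# traces, while every errored trace survives. Cost drops, but any analysis
# that needed the full healthy population has permanently lost fidelity.
```

The `rng` parameter is only there so the policy can be exercised deterministically; in practice the decision point (at the SDK, the collector, or the backend) matters as much as the rate.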
So, you have things like: we're going to charge on a per-container basis; we're going to charge on a per-host basis; we're going to charge based off the number of gigabytes that you send us. These are sort of more horizontal pricing models, and the way the SaaS vendors have delivered this is they've made it pretty opaque, right? Everyone has experiences with, or has heard gripes about, massive overage spikes from observability vendors. I've worked with customers who have accidentally used some features and been billed a quarter million dollars on a monthly basis in accidental overages from a SaaS vendor.

And these are all terrible things. But we've gotten used to this. We've just accepted it, right, because everyone is operating this way. And I really do believe that the move to SaaS was one of those things: "Oh, well, you're throwing us more data, and we're charging you more for it." As a vendor—

Corey: Which sort of erodes your own value proposition that you're bringing to the table. I mean, I don't mean to be sitting over here shaking my fist yelling, "Oh, I could build a better version in a weekend," except that I absolutely know how to build a highly available Rsyslog cluster. I've done it a handful of times already, and the technology is still there. Compare and contrast that with the fact that, at scale, I'm paying 50 cents per gigabyte ingested into CloudWatch Logs, or a multiple of that for a lot of other vendors; it's not that much harder for me to scale that fleet out and pay a much smaller marginal cost.

Ian: And so, I think the reaction that we're starting to see in the market is the rise of, sort of, a secondary class of vendor. And by secondary, I don't mean that they're lesser; I mean that they're specifically trying to address problems of the primary vendors, right? Everyone's aware of vendors who are attempting to reduce—well, let's take the example you gave on logs, right?
There are vendors out there whose express purpose is to reduce the cost of your logging observability. They sit in the middle; they are a middleman, right? Essentially: hey, use our tool, and even though you're going to pay us a whole bunch of money, it's going to generate an overall return that is greater than if you had just continued pumping all of your logs over to your existing vendor. So, that's great.

What we think really needs to happen, and one of the things we're doing at Chronosphere—unfortunate plug—is we're actually building those capabilities into the solution so it's end-to-end. And by end-to-end, I mean a solution where I can ingest my data, preprocess my data, store it, query it, visualize it, all those things, aligned with open-source standards, but where I have control over that data, and I understand what's going on with my cost and my usage. I don't just get a bill at the end of the month going, "Hey, guess what? You've spent an additional $200,000."

Instead, I can know in real time what is happening with my usage. And I can attribute it: it's this team over here, and it's because they added this particular label. And here's a way for you, right now, to address that and cap it, so it doesn't cost you anything and it doesn't have a blast radius of, you know, degraded performance or degraded fidelity of the data.

That, though, is diametrically opposed to the way that most vendors are set up. And unfortunately, the open-source projects tend to take a lot of their cues, at least recently, from what's happening in the vendor space. One way you can think about it is as a sort of speed-of-light problem. Everyone knows that there's basic fundamental latency; everyone knows how fast disk is; everyone knows you can't just make your computations happen magically; there's a cost to running things horizontally.
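The real-time attribution and capping Ian describes can be sketched minimally: attribute each new time series to the team label that produced it, and reject new series once that team's cap is hit. The team names, the `team` label key, and the caps are all assumptions for illustration, not Chronosphere's actual API.

```python
class UsageTracker:
    """Toy per-team series accounting with hard caps, checked at ingest time."""

    def __init__(self, caps):
        self.caps = caps      # e.g. {"payments": 2}
        self.counts = {}      # series admitted so far, per team

    def admit(self, series_labels):
        team = series_labels.get("team", "unknown")
        used = self.counts.get(team, 0)
        if used >= self.caps.get(team, float("inf")):
            return False      # cap hit: reject now instead of a surprise bill
        self.counts[team] = used + 1
        return True

tracker = UsageTracker(caps={"payments": 2})
print(tracker.admit({"team": "payments"}))  # True
print(tracker.admit({"team": "payments"}))  # True
print(tracker.admit({"team": "payments"}))  # False
```

The point of the sketch is the shape of the control: the decision happens at ingest, attributed to a label, rather than showing up as an overage line item a month later.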
But a lot of the way that the vendors have presented efficiency to the market is, "Oh, we're just going to incrementally get faster as AWS gets faster. We're going to incrementally get better as compression gets better." And of course, you can't fit a petabyte worth of data into a kilobyte, unless you're really just doing some sort of weird dictionary stuff, so you're dealing with some fundamental constraints. And the vendors just go, "I'm sorry, you know, we can't violate the speed of light."

But what you can do is start looking at how the data is valuable, and start giving people controls to make it more valuable. So, one of the things that we do with Chronosphere is we allow you to reshape Prometheus metrics. You express Prometheus metrics—let's say it's a business metric about how many transactions you're doing as a business—and you don't need that on a per-container basis, particularly if you're running 100,000 containers globally.

When you go and take a look at that number on a dashboard, or you alert on it, what is it? It's one number, one time series. Maybe you break it out per region. If you have five regions, you don't need 100,000 data points every minute behind that. It's very expensive, it's not very performant, and, as we talked about earlier, it's very hard for a human being to reason about.

So, by giving people the tools to condense that data down and make it more actionable and more valuable, you get performance, you get cost reduction, and you get the value that you ultimately need out of the data. It's one of the reasons why I work at Chronosphere, which I'm hoping is the last observability [laugh] venture I ever work for.

Corey: Yeah, for me, a lot of the data that I see in my logs, which is where a lot of this stuff starts and how I still contextualize these things, is nonsense that I don't care about and will never care about.
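The reshaping Ian describes, collapsing a business metric reported per container down to one series per region, amounts to dropping a label and summing the remaining series. A stdlib-only sketch, with the label names assumed purely for illustration:

```python
from collections import defaultdict

def aggregate(samples, drop_label="container"):
    """Drop one label from each sample's label set and sum values per
    remaining label combination: 100,000 container series become 5 regions."""
    out = defaultdict(float)
    for labels, value in samples:
        kept = tuple(sorted((k, v) for k, v in labels.items() if k != drop_label))
        out[kept] += value
    return dict(out)

samples = [
    ({"region": "us-east", "container": "c1"}, 120.0),
    ({"region": "us-east", "container": "c2"}, 80.0),
    ({"region": "eu-west", "container": "c3"}, 50.0),
]
print(aggregate(samples))
# {(('region', 'us-east'),): 200.0, (('region', 'eu-west'),): 50.0}
```

This is the same idea as a Prometheus recording rule like `sum by (region) (transactions_total)`, just spelled out so the cost arithmetic is visible: the stored cardinality is the number of distinct kept-label combinations, not the number of containers.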
I don't care about load balancer health checks. I don't particularly care about 200 results for the favicon when people visit the site. I care about other things, but just weed out the crap, especially when I'm paying by the pound—or at least by the gigabyte—to get that data into something. It becomes obnoxious and difficult to filter out.

Ian: Yeah. And the vendors just haven't done any of that, because why would they, right? If you went and reduced the amount of log—

Corey: Put engineering effort into something that reduces how much I can charge you? That sounds like lunacy. Yeah.

Ian: Exactly. Their business models are entirely based off it. So, if you went and reduced everyone's logging volume by 30%, and their bills by 30% along with it, it's not going to be a great time if you're a publicly traded company that has built its entire business model on an essentially very SaaS volume-driven—and in my eyes—relatively exploitative pricing and billing model.

Corey: Ian, I want to thank you for taking so much time out of your day to talk to me about this. If people want to learn more, where can they find you? I mean, you are a Field CTO, so clearly you're outstanding in your field. But assuming that people don't want to go to farm country, where's the best place to find you?

Ian: Yeah. Well, it'll be a bunch of different conferences. I'll be at KubeCon this year. But chronosphere.io is the company website.
I've had the opportunity to talk to a lot of different customers, not from a hard-sell perspective but, you know, in conversations like this about what are the real problems you're having and what are the things you sort of wish you could do.

One of my favorite things is to ask people, "If you could wave a magic wand, what would you love to be able to do with your observability solution?" That's, A, a really great part of the job, but oftentimes, B, being able to say, "Well, actually, that thing you want to do? I think I have a way to accomplish that," is a really rewarding part of this particular role.

Corey: And we will, of course, put links to that in the show notes. Thank you so much for being so generous with your time. I appreciate it.

Ian: Thanks, Corey. It's great to be here.

Corey: Ian Smith, Field CTO at Chronosphere, on this promoted guest episode. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment, which is going to be super easy in your case, because it's just one of the things that the omnibus observability platform your company sells offers as part of its full suite of things you've never used.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
It's that time again, for the analysis of the big event of the year. Benjamin and Zac talk about everything announced at the Apple September 2022 event, including iPhone 14, iPhone 14 Pro, Apple Watch Series 8, Apple Watch Ultra, and the new AirPods Pro. Far out. Sponsored by Decluttr: Trade in your iPhone or other device with a 28-day price lock and get an extra 10%* cash back with code 9TO5MAC (*$30 cap). Sponsored by Ladder: Go to Ladder.com/HappyHour today to see if you're instantly approved. Sponsored by New Relic: That next 9:00 p.m. call is just waiting to happen, get New Relic before it does! You can get access to the whole New Relic platform and 100GB of data free, forever – no credit card required. Sign up at NewRelic.com/happyhour. Sponsored by Pillow: Pillow is an all-in-one sleep tracking solution to help you get a better night's sleep. Download it from the App Store today. Follow Zac Hall @apollozac Benjamin Mayo @bzamayo Read More iPhone 14 vs iPhone 14 Pro: Which should you buy? 
AirPods Pro 2 vs AirPods Pro, AirPods 2/3: How the lineup compares iPhone 14 sales: Apple expects 85% of early buyers to opt for a Pro model – report iPhone 14 Pro: An in-depth look at how Dynamic Island works with apps, animations, more Apple Watch Ultra vs Series 8, SE: In-depth comparison of the new lineup iPhone 14 Pro hands-on: New colors, Dynamic Island in action, and more [Videos] Hands-on: Apple Watch Series 8 and Apple Watch Ultra Tim Cook, Jony Ive, and Laurene Powell Jobs talk Steve Jobs's legacy, Tesla design, and much more Laurene Powell Jobs, Jony Ive, Tim Cook, and others team up to launch the ‘Steve Jobs Archive' Tim Cook explains why Apple refuses to adopt RCS: ‘Buy your mom an iPhone' Apple event: The full recap on iPhone 14, Apple Watch Ultra, and much more iPhone 14 Emergency SOS via satellite feature to be powered by Globalstar Listen to more Happy Hour Episodes Subscribe Apple Podcasts Overcast Spotify Listen to more 9to5 Podcasts Apple @ Work Alphabet Scoop Electrek The Buzz Podcast Space Explored Rapid Unscheduled Discussions Enjoy the podcast? Shop Apple at Amazon to support 9to5Mac Happy Hour or shop 9to5Mac Merch!
Apple held its annual event this week at its Cupertino, Calif. headquarters. There were a lot of products announced, including new Apple Watches, new iPhones and new AirPods Pro. Dave and I take some time to talk about all of the products, and colors, of everything the company announced. Brought to you by: Kolide: Is your Service Desk struggling with remote work and a mix of Mac, Windows, and Linux devices? Kolide can help. Learn more here. New Relic: Use the data platform made for the curious! Right now, you can get access to the whole New Relic platform and 100GB of data per month free, forever – no credit card required! Sign up at New Relic.com/dalrymple. Show Notes: iPhone 14 iPhone 14 Pro Apple Watch Series 8 Apple Watch Ultra Apple Watch SE AirPods Pro
Apple will hold its annual event on September 7 in Cupertino, the company announced. There is little doubt we will see a new iPhone at the event, but we could see a few other products too. Regardless of what else is announced, Apple will want the focus to be on iPhone. What happens when you take pictures of your naked toddler for your doctor and those pictures get uploaded to the cloud? One unfortunate father found out. Dave and I are back talking about how useless Siri is, this time with setting timers—now it can't even do that reliably. Brought to you by: Zocdoc: NOW is the time to prioritize your health. Go to Zocdoc.com/DALRYMPLE and download the Zocdoc app to sign-up for FREE and book a top-rated doctor. Many are available as soon as today. LinkedIn Jobs: LinkedIn Jobs helps you find the candidates you want to talk to, faster. Did you know every week, nearly 40 million job seekers visit LinkedIn? Post your job for free at LinkedIn.com/DALRYMPLE. Terms and conditions apply. New Relic: Use the data platform made for the curious! Right now, you can get access to the whole New Relic platform and 100GB of data per month free, forever – no credit card required! Sign up at New Relic.com/dalrymple. Show Notes: Apple event official for Sept 7 A Dad Took Photos of His Naked Toddler for the Doctor. Google Flagged Him as a Criminal Shazam Turns 20 Apple Watch and timers That handy little "Fn" key Apple expands Self Service Repair to Mac notebooks Sennheiser's $350 Momentum 4 ANC Headphones Boast 60-Hour Battery Life Former Apple engineer accused of stealing automotive trade secrets pleads guilty
Originally published on February 12, 2022. Lee Atchison spent seven years at Amazon working in retail, software distribution, and Amazon Web Services. He then moved to New Relic, where he has spent four years scaling the company's internal architecture. From his decade of experience at fast-growing web technology companies, Lee has written the book Architecting for Scale. The post Architecting for Scale with Lee Atchison appeared first on Software Engineering Daily.
Get a $100 60-day credit on your new account at: http://linode.com/wan Save your time and sanity with New Relic at https://www.newrelic.com/wan Save money on your phone plan today at https://www.mintmobile.com/wanshow Timestamps: (Courtesy of NoKi1119 - Timestamps may be off due to change in sponsors) 0:00 Chapters. 1:08 Intro. 1:33 Topic #1: Lenovo sends Framework a cease & desist. 2:53 Legion logo compared to Mercedes-Benz. 4:10 Linus & Luke agree with Lenovo. 6:04 Topic #2: Apple restricts third-party tracking. 7:42 Ads sold on applications, hypocritical Apple. 10:50 Luke setting up an iPhone to check tracking. 11:10 LTTStore's RGB hoodie. 13:56 Lambo edition LTT bottle. Cont. Topic #2: Apple restricts third-party tracking. 15:13 Reading Apple's "transparent" when setting up. 17:28 Topic #3: Discussing Metaverse's look. 19:16 Horizon Worlds, making avatar via Readyplayer. 25:26 Merch Messages #1. 26:32 Important lessons Linus teaches his kids. 29:38 Secret shopper for cell service provider. 30:54 Do writers have partial ownership of channels? 33:24 Favorite daily-driver phone. 37:18 How much damage Denis (& Colton) did to Linus's house. 44:22 Trust Me Bro Limited Lifetime Warranty. Cont. Merch Messages #1. 47:00 Trust me bro shirt was requested on Twitter. 47:51 Sponsors. 48:05 Squarespace site maker. 48:53 XSplit live streaming. 49:42 Secretlab gaming chair. 50:52 Topic #4: DOOM ran on tractor display. 53:12 Is there a more anti-consumer company? 58:46 How John Deere hurts everyone. 1:00:16 LTTLabs is not LTTLab, discussing domain. 1:04:02 Lab32, reason behind the number. 1:06:26 Merch Messages #2. 1:06:34 Camera bag for the backpack, domain Strawpoll. 1:07:58 What games would Linus & Luke install on the tractor. 1:09:44 Steam alternatives. 1:12:30 "All domains I own" segment. 1:15:28 Framework's used hardware market. 1:16:30 Project Farm screwdriver testing. 1:17:28 LTTStore Screwdriver pop-up. 1:19:38 Linus calls about backpack's availability in the pop-up. 
1:21:54 LTX, Linus recommends against early booking. 1:22:54 Merch Messages #3. 1:23:02 More home server, automation & labs. 1:24:45 What Linus plans to leave behind for the new owners. 1:25:37 NCIX's impact on Linus & LTTStore. 1:29:10 Hiring experienced V.S. learners. 1:33:14 Topic #5: Tesla accused of false autopilot advertisement. 1:38:02 Merch Messages #4. 1:38:12 VPS company suggestion ft. Linus's birthday. 1:39:40 Valve Index features & suggestions. 1:40:28 What do Linus & Luke eat? 1:41:58 Coolest case Linus & Luke built in. 1:44:00 Take on Samsung's Z Fold 4. 1:44:30 Labs on repairability or jailbreaks. 1:45:25 Labs on mics, interface, XLR & TRS. 1:48:12 Linus on long time format. 1:49:47 Any cool projects for computer vision? 1:49:56 What business help Linus got. 1:50:29 Budget server rack. 1:50:54 Benefits of gaming to buy consoles. 1:52:18 Floatplane QOL features & motivation. 1:54:32 Why Colton is always fired. 1:56:08 Do LTT employees get discount on merch? 1:58:28 DOS gaming PCs build in the future. 1:58:42 LTT RGB merch idea. 1:58:58 Linux-like challenge with Apple. 1:59:38 Linus on Beat Saber technique. 2:00:06 Selling YouTube videos for revenues. 2:02:52 Recommendations for rack-mount gaming 2:03:22 Linus's Epson projector. 2:04:22 Outro.
Apple seems to want to shove more ads down our throats. Benjamin and Zac discuss the careful line Apple must walk to not lose customers at the expense of infinite revenue growth. Also, iOS 16 beta 6 is out as Apple heads to a weekly beta cycle ahead of an expected iPhone 14 event on September 7. Plus, Zac finally had a chance to try out Stage Manager and put it through its paces. Sponsored by Things: The award-winning to-do app for iPhone, iPad, and Mac. Sponsored by BetterHelp: As a listener, you'll get 10% off your first month by visiting our sponsor at BetterHelp.com/MacHappyHour. Sponsored by New Relic: That next 9:00 p.m. call is just waiting to happen, get New Relic before it does! You can get access to the whole New Relic platform and 100GB of data free, forever – no credit card required. Sign up at NewRelic.com/happyhour. Follow Zac Hall @apollozac Benjamin Mayo @bzamayo Read More Bloomberg: iPhone 14 event slated for September 7, release on September 16 Report: iPhone 14 Pro price increase could be $100, offset by more storage watchOS 9 beta 6 is now available to developers and public testers Apple sets new deadline for corporate employees to return to the office three days a week iOS 16 beta 6 gives users more control over battery percentage in Low Power Mode iOS 16 beta 6 now available as Apple finalizes features ahead of September launch Report: Apple wants to triple its revenue from ads business, likely expanding Search Ads to Maps app Apple September event: iPhone 14, Apple Watch Series 8, iOS 16 release date, and more. Listen to more Happy Hour Episodes Subscribe Apple Podcasts Overcast Spotify Listen to more 9to5 Podcasts Apple @ Work Alphabet Scoop Electrek The Buzz Podcast Space Explored Rapid Unscheduled Discussions Enjoy the podcast? Shop Apple at Amazon to support 9to5Mac Happy Hour or shop 9to5Mac Merch!
Google is whining again about the green bubbles in Apple's Messages and the fact that the two systems don't work well together. Google wants Apple to give up its superior system and adopt a less secure system. Thanks to a listener of last week's show we have some more information about Passkeys and what it will mean for users going forward. Brought to you by: New Relic: Use the data platform made for the curious! Right now, you can get access to the whole New Relic platform and 100GB of data per month free, forever – no credit card required! Sign up at New Relic.com/dalrymple. MasterClass: I highly recommend you check it out. Get unlimited access to EVERY MasterClass, and as a listener of The Dalrymple Report, you get 15% off an annual membership! Go to MASTERCLASS.com/dalrymplenow. That's MASTERCLASS.com/dalrymple for 15% off MasterClass. Show Notes: Do you remember the Columbia Record Club? Issey Miyake and Steve Jobs ‘It's Time for Apple to Fix Texting' Says New Android Website Pushing RCS Messaging Technology A look at Passkey on Mac and on Windows (note the QR-code) Disney+ grows to 152 million subscribers Latest iOS 16 beta 5 brings battery percentage to Home Screen Compare look to the Control Panel percentage. It's all about the notch and lack of room to do it bigger
In this HCI Podcast episode, Dr. Jonathan H. Westover talks with Nellie Wartoft about utilizing social learning in your enterprise learning and development strategy. See the video here: https://youtu.be/1McHZABDvyI. Nellie Wartoft started as a young entrepreneur in her early teen years in Sweden. After winning several trophies in skeet shooting, Nellie turned her attention to educating the elderly on the internet and digital literacy to improve their quality of life. This social enterprise was more than a passion and purpose for her - it sowed the seeds of her journey in Social Learning. At the age of 18, Nellie booked a single ticket to Singapore, to build her career and business in what she refers to as "the most buzzing and happening part of the world". In her first foray into the corporate world at Michael Page, Nellie rose through the ranks and was the youngest executive in the company to be appointed Practice Lead and become a Top Biller at age 23. She's placed over 150 executives into Sales and Marketing roles in Fortune 500 companies within the financial, legal, consulting, technology and media sectors. It was during her time as a recruiter that she spotted a stark skills gap that she was determined to fix. She saw that there was a need to refresh the way business professionals leveled up through actionable insights from industry experts. This led to the launch of Tigerhall in March 2019, a mobile SaaS platform for social learning, revolutionizing how professionals learn from one another. Nellie challenges mass education formats that fail to prepare future leaders for success in the real world. Her vision and execution are taking the corporate world by storm. Tigerhall's clients include HP, New Relic and BNY Mellon. She's raised over $10 million in funding from visionary investors such as Monk's Hill Ventures, Sequoia Capital and the XA Network. 
Fueled by the mission, "where you come from should never get in the way of where you want to go," Nellie continues to lead Tigerhall in Fundraising, Sales and BD while scaling the team across APAC and the U.S. Please consider supporting the podcast on Patreon and leaving a review wherever you listen to your podcasts! Head over to setapp.com/podcast to listen to Ahead of Its Time. Check out BetterHelp.com/HCI to explore plans and options! Go to cardiotabs.com/innovations and use code innovations to get a free Mental Health Pack featuring Cardiotabs Omega-3 Lemon Minis and Curcumin when you sign up for a subscription. Check out Zapier.com/HCI to explore their business automations! Check out the HCI Academy: Courses, Micro-Credentials, and Certificates to Upskill and Reskill for the Future of Work! Check out the LinkedIn Alchemizing Human Capital Newsletter. Check out Dr. Westover's book, The Future Leader. Check out Dr. Westover's book, 'Bluer than Indigo' Leadership. Check out Dr. Westover's book, The Alchemy of Truly Remarkable Leadership. Check out the latest issue of the Human Capital Leadership magazine. Each HCI Podcast episode (Program, ID No. 592296) has been approved for 0.50 HR (General) recertification credit hours toward aPHR™, aPHRi™, PHR®, PHRca®, SPHR®, GPHR®, PHRi™ and SPHRi™ recertification through HR Certification Institute® (HRCI®). Learn more about your ad choices. Visit megaphone.fm/adchoices
This week on the show, Dave and I look at some of the new features of iOS, including some found in Messages. We also look at Mark Gurman's list of new Apple products he expects to be released, and some new features released in Google Maps. Oh, and CompuServe. Brought to you by: New Relic: Use the data platform made for the curious! Right now, you can get access to the whole New Relic platform and 100GB of data per month free, forever – no credit card required! Sign up at New Relic.com/dalrymple. LinkedIn Jobs: LinkedIn Jobs helps you find the candidates you want to talk to, faster. Did you know every week, nearly 40 million job seekers visit LinkedIn? Post your job for free at LinkedIn.com/DALRYMPLE. Terms and conditions apply. Show Notes: MG Siegler's 3 favorite iOS 16 features A burger without Heinz Mark Gurman's list of Apple products in the pipeline The Apple Store Time Machine Three new features coming to Google Maps CompuServe lives!
Save your time and sanity with New Relic at https://www.newrelic.com/wan Make compliance easy with Kolide at: https://l.kolide.co/3aPeTTR Get up to 70% off a 2-year NordPass Premium plan with an extra month free at nordpass.com/WAN with code WAN Timestamps (Courtesy of NoKi1119 - Note: timing may be off due to change in sponsors) 1:13 Intro 1:48 Superchats and bits are ignored, use merch messages 2:22 Topic #1 - OverKill computers, now with financing... 3:53 Reviewing computer projects and configurator 6:40 Project "Unknown", custom request form & pricing 8:16 Comparing to Maingear, cooling, colors & configs 12:50 Maingear's pricing to upgrade SSD 15:40 Linus's system specification, comparing to OverKill 17:02 Linus on the costs of watercooling 19:48 TikTokker compares pricing to expensive companies 22:20 OverKill's response, customers on delays & NDA 25:20 OverKill PC giveaways on Patreon; is it a lottery? 26:46 OverKill sends a cease & desist to a TikTokker 27:48 Marine vet of OverKill goes on TikTok 30:32 A neutral take on the system integrator & website 32:56 Alex's notes on the topic, mentioning Intel 34:02 Topic #2 - Discord & voice chatting is now on Xbox 36:22 Luke on not supporting APIs for third-party apps 38:46 Cross-playing, browser apps are not native apps 40:02 Merch Messages #1 ft Colton 40:16 Biggest ask from the 40 series cards 42:18 Thoughts on LMG Autobench going open-source 45:30 Labs expectation, success and failure 47:02 Products besides the socks to be excited about 48:50 LTTStore Linus sandals idea 49:40 Labs data & sphere of influence 52:28 LTTStore swim trunks, a great photo shoot 55:20 LTTStore mystery cable ties, backpack reviews 59:04 Verification for the backpack 1:00:34 Merch Messages #2 1:00:44 Pick for a competition game for Whale LAN 1:03:56 LMG dedicated workshop for maker content idea 1:05:14 Unbiased benchmark reviews for vacuums idea 1:07:03 Luke's future if he didn't go to NCIX 1:10:50 Linus owes The Spiffing Brit a 
tea-powered PC 1:13:30 Linus not able to watch Backstreet Boys live 1:14:30 Intel ARC for Linux as a good value 1:15:56 Sponsor - Corsair 1:17:03 Sponsor - Zoho Desk 1:17:52 Sponsor - PolyArc ft Linus screwing up the spot 1:18:40 Merch Messages #3 1:19:10 What infuriates Linus 1:20:26 WAN topics that stand out, IRL rocket league 1:22:32 Linus's favorite type of cheese 1:23:48 Sponsor - PolyArc Moss: Book II on Meta Quest 2 1:24:56 Topic #3 - Hive "fights" climate change via e-waste 1:27:58 Topic #4 - FUTO on taking back control of PCs 1:30:35 FUTO's projects, GrapheneOS 1:32:36 Topic #5 - Robot dog with an assault rifle 1:35:26 Discussing modern warfare 1:36:28 Topic #6 - Emails with allegations against Linus 1:38:26 Linus's reply & his relationship history 1:44:42 Merch Messages #4 1:44:50 Plans for AddOns on the LTTStore backpack 1:46:20 Testing desk-mats & tempered glass 1:47:12 Plans to collaborate with GN & Labs 1:48:44 Display cable testing incorporated into Labs 1:50:11 Colton reads a curated message to himself 1:50:56 Favorite game franchise 1:51:48 Outro
Jon Prosser and Sam Kohl return to the Genius Bar...or at least their physical bodies do. Their souls didn't make it this week. Still, the duo discusses new rumors about iPhone 14 and Apple Watch Series 8...plus some recent troubles with Apple's PR department. Support Our Sponsors!
Chris talks about a small toy app he maintains on the side and about working with a project called capybara_table. Steph is getting ready for maternity leave and wonders how you track velocity and know whether you're working quickly enough. They answer a listener's question about where to get started testing a legacy app.

This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website): frictionless error monitoring and performance insight for your app stack.

jnicklas/capybara_table (https://github.com/jnicklas/capybara_table): Capybara selectors and matchers for working with HTML tables.

Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed!

Transcript: CHRIS: Just gotta hold on. Fly this thing straight to the crash site. STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. CHRIS: And I'm Steph Viccari. STEPH: And together, we're here to share a bit of what we've learned along the way. I love that you rolled with that. [laughs] CHRIS: No, actually, it was the only thing I could do. I [laughs] was frozen into action is a weird way to describe it, but there we are. STEPH: I mentioned to you a while back that I've always wanted to do that. Today was the day. It happened. CHRIS: Today was the day. It wasn't even that long ago that you told me. I feel like you could have waited another week or two. I feel like maybe I was too prepared. But yeah, for anyone listening, you may be surprised to find out that I am not, in fact, Steph Viccari. STEPH: And they'll be surprised to find out that I actually am Chris Toomey. This is just a solo monologue. And you've done a great job of two voices [laughs] this whole time and been tricking everybody. CHRIS: It has been a struggle.
But I'm glad to now get the proper recognition for the fact that I have actually [laughs] been both sides of this thing the whole time. STEPH: It's been a very impressive talent in how you've run both sides of the conversation. Well, on that note, [laughs] switching gears just a bit, what's new in your world? CHRIS: What's new in my world? Answering now as Chris Toomey. Let's see; I got two small updates, one a very positive update, one a less positive update. As is the correct order, I'm going to lead with the less positive thing. So I have a small toy app that I maintain on the side. I used to have a bunch of these little purpose-built singular apps, typically Rails app sort of things where I would play with a new technology, but it was some sort of like, oh, it's a tracker. It's a counter. We talked about breakable toys in the past. These were those, for me, serve different purposes, productivity things, or whatever. But at some point, I was like, this is too much work, so I consolidated them all. And I kept like, there was a handful of features that I liked, smashed them all together into one Rails app that I maintain. And that's just like my Rails app. It turns out it's useful to be able to program the internet. So I was like, cool, I'll do that for myself. I have this little app that I maintain. It's got like a journal in it and other things. I think I've talked about the journal in the past. But I don't actually take that good care of it. I haven't added any features in a while. It mostly just does what it's supposed to, but it had...entropy had gotten the better of it. And so, I had a very small feature that I wanted to add. It was actually just a Rake task that should run in the background on a schedule. And if something is out of order, then it should send me an email. Basically, just an update of like, you need to do something. It seemed like such a simple task. And then, oh goodness, the failure modes that I fell into. First, I was on Heroku-18. 
Heroku is currently on their Heroku-22 stack. 18 being the year, so it was like 2018, and then there's a 2020 stack, and then the 2022. That's the current one. So I was two stacks behind, and they were yelling at me about that. So I was like, okay, but whatever. Can I ignore that for a little while? Turns out no, because I couldn't even get the app to boot locally, something about some gems or some I think Webpacker was broken locally. So I was trying to fix things, finally got that to work. But then I couldn't get it to build on CircleCI because Node needed Python, Python 2 specifically, not Python 3, in order to build Node dependencies, particularly LibSass, I want to say, or node-sass. So node-sass needed Python 2, which I believe is end of life-d, to build a CSS authoring tool. And I kind of took a step back at that moment, and I was like, what did we do, everybody? What is going on here? And thankfully, I feel like there was more sort of unification of tools and simplification of the build tool space and whatnot. But I patched it, and I fixed some things, then finally I got it working. But then Memcache wasn't working, and I had to de-provision that and reprovision something. The amount of little...like, each thing that I fixed broke something else. I was like, the only thing I can do at this point is just burn the entire app down and rebuild it. Thankfully, I found a working version of things. But I think at some point, I've got to roll up my sleeves some weekend and do the full Rails, Ruby, everything upgrade, just get back to fresh. But my goodness, it was rough. STEPH: I feel like this is one of those reasons where we've talked in the past about you want to do something, and you keep putting it off. And it's like, if I had just sat down and done it, I could have knocked it out. Like, oh, it only took me like 5-10 minutes. But then there's this where you get excited, and then you want to dive in. 
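For what it's worth, the usual escape hatch from the node-sass/Python 2 trap Chris describes is swapping node-sass for sass (Dart Sass), which ships as pure JavaScript and needs no native build step or Python at all. A sketch of the package.json change (the version number is illustrative):

```json
{
  "devDependencies": {
    "sass": "^1.54.0"
  }
}
```

node-sass is deprecated upstream in favor of Dart Sass, and most build setups generally accept it as a near-drop-in replacement, so this also future-proofs the toolchain.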
And then suddenly, you do spend an hour or however long, and you're just focused on trying to get to the point where you can break ground and start building. I think that's the resistance that we're often fighting when we think about, oh, I'm going to keep delaying this because I don't know how long it's going to take. CHRIS: There's something that I see in certain programming communities, which is sort of a beginner-friendliness or a beginner's mindset or a welcomingness to beginners. I see it, particularly in the Svelte world, where they have a strong focus on being able to pick something up and run with it immediately. The entire tutorial is built as there's the tutorial on the one side, like the text, and then on the right side is an interactive REPL. And you're just playing with the Svelte REPL and poking around. And it's so tangible and immediate. And they're working on a similar thing now for SvelteKit, which is the meta-framework that does server-side rendering and all the fancy stuff. But I love the idea that that is so core to how the Svelte community works. And I'll be honest that other times, I've looked at it, and I've been like, I don't care as much about the first run experience; I care much more about the long-term maintainability of something. But it turns out that I think those two are more coupled than I had initially...like, how easy is it for a beginner to get started is closely related to or is, you know, the flip side of how easy is it for me to maintain that over time, to find the documentation, to not have a weird builder that no one else has ever seen. There's that wonderful XKCD where it's like, what's the saddest thing on the internet? Seeing the question that you have asked by one other person on Stack Overflow and no answers four years ago. It's like, yeah, that's painful. You actually want to be part of the boring, mundane, everybody's getting the same errors, and we have solutions to them. 
So I really appreciate when frameworks and communities care a lot about both that first run experience but also the maintainability, the error messages, the how okay is it for this system to segfault? Because it turns out segfaults print some funny characters to your terminal. And so, like the range from human-friendly error message all the way through to binary character dump, I'm interested in folks that care about that space. But yeah, so that's just a bit of griping. I got through it. I made things work. I appreciate, again, the efforts that people are putting in to make that sad situation that I experienced not as common. But to highlight something that's really great and wonderful that I've been working with, there is a project called capybara_table. capybara_table is the gem name. And it is just this delightful little set of matchers that you can use within Capybara, particularly within a feature spec. So if you have a table, you can now make an assertion that's like, expect the table to have table row. And then you can basically pass it a hash of the column name and the value, but you can pass it any of the columns that you want. And you can pass it...basically, it reads exactly like the user would read it. And then, if there's an error, if it actually doesn't find it, if it misses the assertion, it will actually print out a little ASCII table for you, which is so nice. It's like, here's the table row that I saw. It didn't have what you were looking for, friend, sorry about that. And it's just so expressive. It forces accessibility because it basically looks at the semantic structure of a table. And if your table is not properly semantically structured, if you're not using TDs and TRs, and all that kind of stuff, then it will not find it. And so it's another one of those cases where testing can be a really useful constraint for the usability and accessibility of your application. And so, just in every way, I found this project works so well.
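The real matchers in the capybara_table gem run against a live page, but the core idea Chris is describing — match a row by any subset of its columns, and print an ASCII table when the match misses — can be sketched in plain Ruby (this toy is illustrative only, not the gem's actual implementation):

```ruby
# Toy version of the capybara_table idea: match a table row by a
# subset of its columns, and render an ASCII table for failure output.
def have_table_row?(rows, expected)
  rows.any? { |row| expected.all? { |col, val| row[col] == val } }
end

def ascii_table(rows)
  headers = rows.first.keys
  widths  = headers.map { |h| ([h] + rows.map { |r| r[h].to_s }).map(&:length).max }
  line    = ->(cells) { "| " + cells.each_with_index.map { |c, i| c.to_s.ljust(widths[i]) }.join(" | ") + " |" }
  ([line.call(headers)] + rows.map { |r| line.call(headers.map { |h| r[h] }) }).join("\n")
end

rows = [
  { "Name" => "Ada",   "Role" => "Engineer" },
  { "Name" => "Grace", "Role" => "Admiral"  },
]

puts have_table_row?(rows, "Name" => "Ada")                    # true
puts have_table_row?(rows, "Name" => "Ada", "Role" => "Poet")  # false
puts ascii_table(rows) unless have_table_row?(rows, "Role" => "Poet")
```

In the gem itself, the assertion reads roughly like `expect(page).to have_table_row("Name" => "Ada")` inside a feature spec, and the ASCII dump shows up in the failure message.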
Error messages are great. It forces you into a better way of building applications. It is just a wonderful little tool that I found. STEPH: That's awesome. I've definitely seen other thoughtboters when working in codebases that then they'll add really nice helper methods around that for like checking does this data exist in the table? And so I'm used to seeing that type of approach or taking that type of approach myself. But the ASCII table printout is lovely. That's so...yeah, that's just a nice cherry on top. I will have to lock that one away and use that in the future. CHRIS: Yeah, really, just such a delightful thing. And again, in contrast to the troubles of my weekend, it was very nice to have this one tool that was just like, oh, here's an error, and it's so easy to follow, and yeah. So it's good that there are good things in the world. But speaking of good things, what's new in your world? I hope good things. And I hope you're not about to be like, everything's terrible. But what's up with you? [laughter] STEPH: Everything's on fire. No, I do have some good things. So the good thing is that I'm preparing for...I have maternity leave that's coming up. So I am going to take maternity leave in about four-ish weeks. I know the date, but I'm saying the ish because I don't know when people are listening. [laughs] So I'm taking maternity leave coming up soon. I'm very excited, a little panicked mostly about baby preparedness, because, oh my goodness, it is such an overwhelming world, and what everyone thinks you should or shouldn't have and things that you need to do. So I've been ramping up heavily in that area. And then also planning for when I'm gone and then what that's going to look like for the team, and for clients, and for making sure I've got work wrapped up nicely. So that's a big project. It's just something that's on my mind, something that I am working through and making plans for. 
On the weird side, I ran into something because I'm still in test migration world. That is one of like, this is my mountain. This is my Everest. I am determined to get all of these tests. Thank you to everyone who has listened to me, especially you, listening to me talk about this test migration path I've been on and the journey that it's been. This is the goal that I have in mind that I really want to get done. CHRIS: I know that when you said, "Especially you," you were talking to me, Chris Toomey. But I want to imagine that every listener out there is just like, aww, you're welcome, Steph. So I'm going to pretend for my own sake that that's what you meant by, especially you. It's especially every one of you out there in the audience. STEPH: Yes, I love either version. And good point, because you're right, I'm looking at you. So I can say especially you since you've been on this journey with me, but everybody listening has been on this journey with me. So I've got a number of files left that I'm working through. And one of the funky things that I ran into, well, it's really not funky; it was a little bit more of an educational rabbit hole for me because it's something that I hadn't considered. So, migrating a controller test over from Test::Unit to then RSpec, there are a number of controller tests that issue requests or they call the same controller method multiple times. And at first, I didn't think too much about it. I was like, okay, well, I'm just going to move this over to RSpec, and everything is going to be fine. But based on the way a lot of the information is getting set around logging in a user and then performing an action, and then trying to log in a different user, and then perform another action, that was causing mayhem. Because then the second user was never getting logged in because the first user wasn't getting logged out.
And it was causing enough problems that Joël and I both sat back, and we're like, this should really be a request spec because that way, we're going through the full Rails routing. We're going through more of the sessions that get set, and then we can emulate that full request and response cycle. And that was something that I just hadn't, I guess, I hadn't done before. I've never written a controller spec where then I was making multiple calls. And so it took a little while for me to realize, like, oh, yeah, controller specs are really just unit tests. And they're not going to emulate, give us the full lifecycle that a request spec does. And it's something that I've always known, but I've never actually felt that pain point to then push me over to like, hey, move this to a request spec. So that was kind of a nice reminder to go through to be like, this is why we have controller specs. You can unit test a specific action; it is just hitting that controller method. And then, if you want to do something that simulates more of a user flow, then go ahead and move over to the request spec land. CHRIS: I don't know what the current status is, but am I remembering correctly that the controller specs aren't really a thing anymore and that you're supposed to just use request specs? And then there's feature specs. I feel like I'm conflating...there's like controller requests and feature, but feature maybe doesn't...no, system, that's what I'm thinking of. So request specs, I think, are supposed to be the way that you do controller-like things anymore. And the true controller spec unit level thing doesn't exist anymore. It can still be done but isn't recommended or common. Does that sound true to you, or am I making stuff up? STEPH: No, that sounds true to me. So I think controller specs are something that you can still do and still access. But they are very much at that unit layer focus of a test versus request specs are now more encouraged.
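The session problem Steph hit can be modeled in a few lines of plain Ruby — this is a loose analogy, not Rails' actual mechanics: calling an action directly (controller-spec style) hands it a throwaway session each time, while going through the app (request-spec style) keeps one session across calls the way Rack session middleware does:

```ruby
# Toy "controller" that logs a user in by writing to a session hash.
class SessionsController
  def initialize(session)
    @session = session
  end

  def create(params)
    @session[:user] = params[:email]  # log the user in
  end
end

# Controller-spec style: invoke the action directly with a fresh
# session -- fine for one call, but state never carries over.
def controller_style_login(email)
  session = {}
  SessionsController.new(session).create(email: email)
  session
end

# Request-spec style: the app owns one session for the whole
# exchange, so a second login replaces the first, as in a browser.
class App
  def initialize
    @session = {}
  end

  def post_login(email)
    SessionsController.new(@session).create(email: email)
  end

  def current_user
    @session[:user]
  end
end

app = App.new
app.post_login("ada@example.com")
app.post_login("grace@example.com")
puts app.current_user  # => "grace@example.com" -- the second login wins
```

The names here (SessionsController, App, post_login) are hypothetical scaffolding for the illustration; the point is only that multi-step login flows need the full request/response cycle to behave the way a browser would.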
Request specs have also been around for a while, but they used to be incredibly slow. I think it was more around Rails 5 that then they received a big increase in performance. And so that's when RSpec and Rails were like, hey, we've improved request specs. They test more of the framework. So if you're going to test these actions, we recommend going for request specs, but controller specs are still there. I think for smaller things that you may want to test, like perhaps you want to test that an endpoint returns a particular status that shows that you're not authorized or forbidden, something that's very specific, I think I would still reach for a controller spec in that case. CHRIS: I feel like I have that slight inclination to the unit spec level thing. But I've been caught enough by different things. Like, there was a case where CSRF wasn't working. Like, we made some switch in the application, and suddenly CSRF was broken, and I was like, well, that's bad. And the request spec would have caught it, but the controller spec wouldn't. And there's lots of the middleware stack and all of the before actions. There is so much hidden complexity in there that I think I'm increasingly of the opinion, although I was definitely resistant to it at first, but like, yeah, maybe just go the request spec route and just like, sure. And they'll be a little more costly, but I think it's worth that trade-off because it's the stuff that you're not thinking about that is probably the stuff that you're going to break. It's not the stuff that you're like, definitely, if true, then do that. Like, that's the easier stuff to get right. But it's the sneaky stuff that you want your tests to tell you when you did something wrong. And that's where they're going to sneak in. STEPH: I agree. 
And yeah, by going with the request specs, then you're really leaning into more of an integration test since you are testing more of that request/response lifecycle, and you're not as likely to get caught up on the sneaky stuff that you mentioned. So yeah, overall, it was just one of those nice reminders of I know I use request specs. I know there's a reason that I favor them. But it was one of those like; this is why we lean into request specs. And here's a really good use case of where something had been finagled to work as a controller test but really rightfully lived in more of an integration request spec. MIDROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help you cut your debugging time in half. So why do developers love Airbrake? Well, it has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM enables developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps and includes modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. 
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. So head on over to airbrake.io/try/bikeshed to create your FREE developer account today! STEPH: Changing gears just a bit, I have something that I'd love to chat with you about. It came up while I was having a conversation with another thoughtboter as we were discussing how do you track velocity and know if you're working quickly enough? So since we often change projects about every six months, there's the question of how do I adapt to this team? Or maybe I'm still newish to thoughtbot or to a team; how do I know that I am producing the amount of work that the client or the team expects of me and then also still balancing that and making sure that I'm working at a sustainable pace? And I think that's such a wonderful, thoughtful question. And I have some initial thoughts around it as to how someone could track velocity. I also think there are two layers to this; there could be are we looking to track an individual's velocity, or are we looking to track team velocity? I think there are a couple of different ways to look at this question. But I'm curious, what are your thoughts around tracking velocity? CHRIS: Ooh, interesting. I have never found a formal method that worked in this space, no metric, no analysis, no tool, no technique that really could boil this down and tell a truth, a useful truth about, quote, unquote, "Velocity." I think the question of individual velocity is really interesting. There's the case of an individual who joins a team who's mostly working to try and support others on the team, so doing a lot of pairing, doing a lot of other things. And their individual velocity, the actual output of lines of code, let's say, is very low, but they are helping the overall team move faster. And so I think you'll see some of that. 
There was an episode a while back where we talked about heuristics of a team that's moving reasonably well. And I threw out the like; I don't know, like a pull request a day sort of thing feels like the only arbitrary number that I feel comfortable throwing out there in the world. And ideally, these pull requests are relatively small, individual deployable things. But any other version of it, like, are we thinking lines of code? That doesn't make sense. Is it tickets? Well, it depends on how you size your tickets. And I think it's really hard. And I think it does boil down to it's sort of a feeling. Do we feel like we're moving at a comfortable clip? Do I feel like I'm roughly keeping pace with the rest of the team, especially given seniority and who's been on the team longer? And all of those sorts of things. So I think it's incredibly difficult to ask about an individual. I have, I think, some more pointed thoughts around as a team how we would think about it and communicate about velocity. But I'm interested what came to mind for you when you thought about it, particularly for the individual side or for the team if you want to go in that direction. STEPH: Yeah, most of my initial thoughts were more around the individual because I think that's where this person was coming from because they were more interested in, like, how do I know that I'm producing as much as the team would expect of me? But I think there's also the really interesting element of tracking a team's velocity as well. For the individual, I think it depends a lot on that particular team and their goals and what pace they're moving at. So when I do join a new team, I will look around to see, okay, well, what's the cadence? What's the standard bar for when someone picks up a ticket and then is able to push it through? How much cruft are we working with in the codebase? 
Because then that will change the team's expectations of yes, we know that we have a lot of legacy code that we're working with, and so it does take us longer to get through things. And that is totally fine because we are looking more to optimize our sustainability and improving the code as we go versus just trying to get new features in. I think there's also an important cultural aspect. So some teams may, unfortunately, work a lot of extra hours. And that's something that I won't bend for. I'm still going to stick to my sustainable hours. But that's something that I keep in mind that just if some other people are working a lot of evenings or just working extra hours to keep that in mind that if they do have a higher velocity to not include that in my calculation as heavily. I also really liked how you highlighted that certain individuals often their velocity is unblocking others. So it's less about the specific code or features or tickets that they're producing, but it's how many people can they help? And then they're increasing the velocity of those individuals. And then the other metrics that unfortunately can be gamified, but it's still something to look at is like, how many hours are you spending on a particular feature, the tickets? But I like that phrasing that you used earlier of what's your progress? So if someone comes to daily sync and they mention that they're working on the same thing and we're on like day three, or four, but they haven't given an update around, like, oh, I have this new thing that I'm focused on, or this new area that I'm exploring, that's when I'll start to have alarm bells go off. And I'm like, okay, you've been working on the same thing. I can't quite tell if you've made progress. It sounds like you're still in the depths of the original thing that you were on a couple of days ago. So at that point, I'm going to want to check in to see how you're doing. 
But yeah, I think that's why this question fascinates me so much is because I don't think there's one answer that fits for everybody. There's not a way to tell one person to say, "Hey, this is your output that you should be producing, and this applies to all teams." It's really going to vary from team to team as to what that looks like. I remember there was one team that I joined that initially; I panicked because I noticed that their team was moving at a slower rate in terms of the number of tickets and PRs and stuff that were getting pushed up, reviewed, and then merged. That was moving at a slower pace than I was used to with previous clients. And I just thought, oh, what's going on? What's slowing us down? Like, why aren't we moving faster? And I actually realized it's just because they were working at a really sustainable pace. They showed up to the office. This was back in the day when I used to go to an office, and people showed up at like 9:00 a.m. and then 5:00 o'clock; it was a ghost town, and people were gone. So they were doing really solid, great work, but they were sticking to very sustainable hours. Versus, a previous team that I had been on had more of like a rushed feeling, and so there was more output for it. And that was a really nice reset for me to watch this team and see them do such great work in a sustainable fashion and be like, oh, yeah, not everything has to be a fire, not everything has to be rushed. I think the biggest thing that I'd look at is if velocity is being called into question, so if someone is concerned that someone's not producing enough or if the team is not producing enough, the first place I'm going to look is what's our priorities and see are we prioritizing correctly? Or are people getting pulled into a lot of work that's not supporting the priorities, and then that's why suddenly it feels like we're not producing at the level that we need to? 
I feel like that's the common disconnect between how much work we're getting done versus then what's actually causing people or product managers, or management stress. And so reevaluating to make sure that they're on the same page is where I would look first before then thinking, oh, someone's not working hard enough. CHRIS: Yeah, I definitely resonate with all of that. That was a mini masterclass that you just gave right there in all of those different facets. The one other thing that comes to mind for me is the question is often about velocity or speed or how fast can we go. But I increasingly am of the opinion that it's less about the actual speed. So it's less about like, if you think about it in terms of the average pace, the average number of features that we're going through, I'm more interested in the standard deviation. So some days you pick up a ticket, and it takes you a day; some days you pick up a ticket, and suddenly, seven days later, you're still working on it. And both at the individual level and at the team level, I'm really interested in decreasing that standard deviation and making it so that we are more consistently delivering whatever amount of output it is but very consistently doing that. And that really helps with our ability to estimate overall bodies of work with our ability for others to know and for us to be able to sort of uphold expectations. Versus if randomly someone might pick up a piece of code or might pick up a ticket that happens to hit a landmine in the code, it's like, yeah, we've been meaning to refactor that for a while. And it turns out that thing that you thought would be super easy is really hard because we've been kicking the can on this refactoring of the fundamental data model. Sorry about that. But today's your day; you lose. Those are the sort of things that I see can be really problematic. And then similarly, on an individual side, maybe there's some stuff that you can work on that is super easy for you. 
But then there's other stuff that you kind of hit a wall. And I think the dangerous mode to get into is just going internal and not really communicating about that, and struggling and trying to get there on your own rather than asking for help. And it can be very difficult to ask for help in those sorts of situations. But ideally, if you're focusing on I want to be delivering in that same pace, you probably might need some help in that situation. And I think having a team that really...what you're talking about of like, if I notice someone saying the same thing at daily sync for a couple of days in a row, I will typically reach out in a very friendly, collegial way, hey, do you want someone else to take a look at that with you? Because ideally, we want to unblock those situations. And then if we do have a team that is pretty consistently delivering whatever overall velocity but it's very consistent at that velocity, it's not like 3 one day and then 0, and then 12, and then 2; it's more of like, 6,5,6,5 sort of thing, to pick random numbers out of the air, then I feel so much more able to grow that, to increase that. If the question comes to me of like, hey, we're looking at the budget for the next quarter; do we think we want to hire another developer? I think I can answer that much more accurately at that point and say what do I think that additional individual would be able to do on the team. Versus if development is kind of this sporadic thing all over the place, then it's so much harder to understand what someone new joining that team would be able to do. So it's really the slow is smooth, smooth is fast adage that I've talked about in the past that really captured my mind a while back that just continues to feel true to me. And then yeah, I can work with that so much better than occasional days of wild productivity and then weeks of sadness in the swamp of refactoring. 
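Chris's preference for a low standard deviation over a high average is easy to make concrete with the day-by-day numbers he picks out of the air (a quick plain-Ruby sketch):

```ruby
# Two delivery patterns with broadly similar averages but very
# different consistency; the steadier one is easier to plan around.
def mean(xs)
  xs.sum.to_f / xs.size
end

def stddev(xs)
  m = mean(xs)
  Math.sqrt(xs.sum { |x| (x - m)**2 } / xs.size)
end

spiky  = [3, 0, 12, 2]  # "3 one day and then 0, and then 12, and then 2"
steady = [6, 5, 6, 5]   # "more of like, 6,5,6,5"

puts stddev(spiky).round(2)   # 4.6 -- wild swings day to day
puts stddev(steady).round(2)  # 0.5 -- smooth is fast
```

Same ballpark of total output, but the second series is an order of magnitude more predictable, which is exactly what makes the "do we hire another developer?" question answerable.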
So it's a different way to think about the question, but it is where my mind initially went when I read this question. STEPH: I'm going to start using that description for when I'm refactoring. I'm in the refactoring swamp. That's where I'm spending my time. [laughs] Talking about this particular question is helping me realize that I do think less in terms of like what is my output in the strict terms of tickets, and PRs, and things like that. But I do think more about my progress and how can I constantly show progress, not just to the world but show it to myself. So if there are tickets that then maybe the ticket was scoped too big at first and I've definitely made some really solid progress, maybe I'm able to ship something or at least identified some other work that could be broken out, then I'm going to do that. Because then I want everybody to know, like, hey, this is the progress that was made here. And I may even be able to make myself feel good and move something over to the done column. So there's that aspect of the work that I focus on more heavily. And I feel like that also gives us more opportunities to then iterate on what's the goal? Like, we're not looking to just churn out work. That's not the point. But we really want to focus on meaningful work to get done. So if we're constantly giving an update on this as the progress that I've made in this direction, that gives people more opportunities to then respond to that progress and say, "Oh, actually, I think the work was supposed to do this," or "I have questions about some of the things that you've uncovered." So it's less about just getting something done. But it's still about making sure that we're working on the right thing. CHRIS: Yeah, it doesn't matter how fast we're going if we're going in the wrong direction, so another critical aspect. You can be that person on the team who actually doesn't ship much code at all. 
Just make sure that we don't ship the wrong code, and you will be a critical member of that team. But shifting gears just a little bit, we have another listener question here that I'd love to get into. This one is about testing a legacy app. So reading this question, it starts off with a very nice note to us, Steph. "I want to start by saying thanks for putting out great content week after week." We are very happy to do so. The question continues: "So, a question for you two. I just took over a legacy Rails app. It's about 12 years old, and it's a bit of a mess. There was some testing in place, but it was completely broken and hadn't been touched in over seven years. So I decided to just delete it all. My question is, where do I even start with testing? There are so many callbacks on the models and so many controller hooks that I feel like I somehow need to have a factory for every model in our repo. I need to get testing in place ASAP because that is how I develop. But we are also still on Ruby 2 and Rails 4.0. So we desperately have to upgrade. Thanks in advance for any advice." So Steph, I actually replied in an email to the kind listener who sent this. And so, I definitely have some thoughts, but I'm interested in where you would start with this. STEPH: Legacy code? I wouldn't know anything about working in legacy code. [laughs] This is a fabulous question. And yeah, the response that you provided is incredible. So I'm very excited for you to share the message that you replied with, and I'm going to try not to steal any of those points because they're wonderful. But to add to that list that is soon to come: often, where I start with applications like these, where I need some testing in place because, as this person mentioned, that's how they work, and where, at that point, you're scared to ship anything because you just don't know what's going to break, is to ask: what's your rollback strategy?
So if you don't have any tests in place and you send something out into the world, then what's your plan to be able to roll back to a safe point? Or perhaps it's using feature flags for anything new that you're adding, so that you can quickly turn something on and off. Having a strategy there, I think, will help alleviate some of that stress of I need to immediately add tests. It's like, yes, that's wonderful, but that's going to take time. So until you can actually write those tests, let's figure out a plan to mitigate some of that pain. That's where I would initially start. And then, as for adding the tests, typically, you start with testing as you go. So I would add tests for the code that I'm adding, that I'm working on, because that's where I'm going to have the most context. And I'm going to start very high. So I might have really slow tests, feature-level, integration-level specs that exercise everything, because at that point I'm just trying to document the most crucial user flows. And then once I have some of those in place, then even if they are slow, at least I'm like, okay, I know that the most crucial user flows are protected and are still working with this change that I'm making. And in a recent episode, when we were talking about how to get to know a Rails app, you highlighted a really good way to find those crucial or most common user flows: using something like New Relic and seeing what paths people are actually using. Maybe there's a product manager, or the person you're taking the app over from, who could also help by letting you know the most crucial features that users rely on day to day, and then you can prioritize writing tests for those particular flows. So then, at this point, you've got a rollback strategy, you've highlighted your most crucial user flows, and you've added some really high-level, probably slow, tests.
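Steph's feature-flag suggestion, wrap new code paths in a flag so they can be switched off at runtime instead of rolled back with a deploy, can be sketched in plain Ruby. The `FeatureFlags` class and the `new_checkout` flag below are hypothetical names for illustration; a real app would more likely use a library such as Flipper with a persistent store:

```ruby
# A minimal, hypothetical feature-flag check. The point: when the new
# path misbehaves in production, you flip the flag off rather than
# scrambling to roll back a deploy. Backed by an in-memory hash here
# purely for illustration.
class FeatureFlags
  def initialize(flags = {})
    @flags = flags
  end

  def enabled?(name)
    @flags.fetch(name, false)  # unknown flags default to off
  end

  def disable!(name)
    @flags[name] = false
  end
end

def checkout(flags)
  if flags.enabled?(:new_checkout)
    "new checkout flow"
  else
    "legacy checkout flow"
  end
end

flags = FeatureFlags.new(new_checkout: true)
puts checkout(flags)            # => new checkout flow
flags.disable!(:new_checkout)   # something broke: turn it off, no deploy
puts checkout(flags)            # => legacy checkout flow
```

Combined with error monitoring, this gives you a fast escape hatch while the real test suite is still being built out.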
Something that I've also done in the past and seen others do at thoughtbot when working on a legacy project or just working on a project, it wasn't even legacy, but it just didn't have any test coverage because the team that had built it before hadn't added test coverage. We would often duplicate a lot of the tests as well. So you would have some integration tests that, yes, frankly, were very similar to others, which felt like a bad choice. But there was just some slight variation where a user-provided some different input or clicked on some small different field or something else happened. But we found that it was better to have that duplication in the test coverage with those small variations versus spending too much time in finessing those tests. Because then we could always go back and start to improve those tests as we went. So it really depends. Are you in fire mode, and maybe you need to duplicate some stuff? Or are you in a state where you can be more considerate with your tests, and you don't need to just get something in place right away? Those are some of the initial thoughts I have. I'm very excited for the thoughts that you're about to share. So I'm going to turn it over to you. CHRIS: It's sneaky in this case. You have advanced notice of what I'm about to say. But yeah, this is a super interesting topic and one of those scary places to find yourself in. Very similar to you, the first thing that I recommended was feature specs, starting at that very high level, particularly as the listener wrote in and saying there are a lot of model callbacks and controller callbacks. And before filters and all of this, it's very indirect how this application works. And so, really, it's only when the whole thing is integrated together that you're going to have a reasonable sense of what's going on. 
And so trying to write those high-level feature specs, having a handful of them that give you some confidence when you're deploying that those core workflows are still working as expected. Beyond that, the other things that I talked about one was observability. As an aside, I didn't mention feature flags or anything like that. And I really loved that that was something you highlighted as a different way to get to confidence, so both feature flags and rollbacks. Testing at the end of the day, the goal is to have confidence that we're deploying software that works, and a different way to get that is feature flags and rollbacks. So I really love that you highlighted that. Something that goes really well hand in hand with those is observability. This has been a thing that I've been exploring more and more and just having some tooling that at runtime will tell you if your application is behaving as expected or is not. So these can be APM-type tools, but it can also be things like Sentry or Honeybadger error monitoring, those sorts of things. And in a system like this, I wouldn't be surprised if maybe there was an existing error monitoring tool, but it had just kind of decayed over time and now just has perhaps thousands of different entries in it that have been ignored and whatnot. On more than one occasion, I've declared Sentry bankruptcy working with clients and just saying like, listen; this thing can't tell us any truths anymore. So let's burn it down and restart it. So I would recommend that and having that as a tool such that much as tests are really wonderful before the code gets out there into the wild; it turns out it's only when users start using it that the real stuff happens. And so, having observability, having tooling in place that will tell you when something breaks is equally critical in my mind. One of the other things I said, and this is probably the spiciest take on my list, is questioning the trade-off space that you're in. 
Is this an application that actually has a relatively low defect rate that users use and are quite happy with, and expect that level of performance and correctness, and all of those sorts of things, and so you, frankly, need to be careful with it? Or, is it potentially something that has a handful of bugs and that users are used to a certain lower fidelity experience, let's call it? And can you take advantage of that if that happens to be true? Like, I would be very careful to break something that has never been broken before that there's no expectation of that. But if we can get away with moving fast and breaking things for a little while just to try and get ourselves out of the spot that we're in, I would at least want to consider that trade-off space. Because caution slows you down, it means that your progress is going to be limited. And so, if we're able to reduce the caution filter just a little bit and move a little bit more rapidly, then ideally, we can get out of this place that we're in a little more quickly. Again, I think that's a really subtle one and one that you'd have to get buy-in from product managers and probably be very explicit in the conversations and sort of that trade-off space. But it is something that I would want to explore if I found myself in this sort of situation. The last thing that I highlighted was the fact that the versions of Ruby and Rails that were listed in the question are, I think, both end of life at this point. And so from a security perspective, that is just a giant glaring warning sign in the corner because the day that your app gets hacked, well, that's a bad day. So testing, unfortunately, I think that's the main way that you're going to get by on that as you're going through upgrades. You can deploy a new version of the application and see what happens and see if your observability can get you there. But really, testing is what you want to do. 
So that's where building out that testing is all the more critical: so that you can perform those security upgrades, because they are now truly critical to get done. It becomes more than a nice-to-have, more than something that makes me feel comfortable; it is pretty much a necessity if you want to go through that. And you absolutely need to go through the security upgrades because otherwise, you're going to get hacked. There are just automated scanners out there. They're going to find you. You don't need to be a high-value target to get taken down on the internet these days. So if it hasn't happened yet, it's going to. And that's an easy business case to sell, is, I guess, the way that I would frame it. So those were some of my thoughts. STEPH: You bring up a really good point about needing to focus on the security upgrades. And I'm thinking that through a little bit further in regards to what trade-offs I would make. Would I wait until I have tests in place to then start the upgrades, or would I start the upgrades now and just know I'm going to spend more time manually testing on staging? Or maybe I'm solo on the project. If I have a product manager or someone else who can also help with the testing, I think I would go for that latter approach, where I would start the upgrades today, do more manual testing of those crucial flows, and have that rollback strategy. And as you mentioned, it's a trade-off in terms of, like, how important is it that we don't break anything? CHRIS: I think, similar to the thing that both of us hit on early on: have some feature specs that just exercise the whole application as one connected piece of code. Have that in place for the security upgrade testing. But I agree, I wouldn't want to hold off on that because I think that's probably the scariest part of all of this. But yeah, it is, again, trade-offs. As always, it depends. But I think those are my thoughts. Anything else you want to add, Steph?
STEPH: I think those are fabulous thoughts. I think you covered it all. CHRIS: Sounds good. Well, in that case, should we wrap up? STEPH: Let's wrap up. CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at email@example.com via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeee!!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
This week, Benjamin and Zac break down Apple's continued success at getting TV+ awards, and some new Apple Music initiatives. Plus, Zac gets his hands on the brilliant (completely controversy-unencumbered) new MacBook Air. Sponsored by Ladder: Go to Ladder.com/HappyHour today to see if you're instantly approved. Sponsored by New Relic: That next 9:00 p.m. call is just waiting to happen; get New Relic before it does! You can get access to the whole New Relic platform and 100GB of data free, forever, no credit card required. Sign up at NewRelic.com/happyhour. Follow Zac Hall @apollozac and Benjamin Mayo @bzamayo. Read more: Apple Music Sessions announced: Exclusive live releases in Spatial Audio; Luke Combs concert streaming on Apple Music Live in August; Apple to cash out $50 million for selling MacBooks with faulty keyboard; M2 MacBook Air base model review; How iPhone and Apple Watch are taking on health - 9to5Mac; Apple Arcade adds new 'Leaving Soon' tab with 15 titles to be removed; Apple to cut spending and hiring on select teams in response to economic slowdown. Listen to more Happy Hour episodes. Subscribe: Apple Podcasts, Overcast, Spotify. Listen to more 9to5 podcasts: Apple @ Work, Alphabet Scoop, Electrek, The Buzz Podcast, Space Explored, Rapid Unscheduled Discussions. Enjoy the podcast? Shop Apple at Amazon to support 9to5Mac Happy Hour or shop 9to5Mac Merch!
Natural disaster movies, anyone? It's what Steph's been into, and Chris has THOUGHTS on the drilling in Armageddon. Additionally, a chat around RuboCop RSpec rules happens, and they answer a listener's question, "how do you get acquainted with a new code base?" This episode is brought to you by BuildPulse (https://buildpulse.io/bikeshed). Start your 14-day free trial of BuildPulse today. Greenland (https://www.imdb.com/title/tt7737786/) Geostorm (https://www.imdb.com/title/tt1981128/) San Andreas (https://www.imdb.com/title/tt2126355/) Armageddon (https://www.imdb.com/title/tt0120591/) This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website). Frictionless error monitoring and performance insight for your app stack. Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: AD: Flaky tests take the joy out of programming. You push up some code, wait for the tests to run, and the build fails because of a test that has nothing to do with your change. So you click rebuild, and you wait. Again. And you hope you're lucky enough to get a passing build this time. Flaky tests slow everyone down, break your flow, and make things downright miserable. In a perfect world, tests would only break if there's a legitimate problem that would impact production. They'd fail immediately and consistently, not intermittently. But the world's not perfect, and flaky tests will happen, and you don't have time to fix all of them today. So how do you know where to start? BuildPulse automatically detects and tracks your team's flaky tests. Better still, it pinpoints the ones that are disrupting your team the most. With this list of top offenders, you'll know exactly where to focus your effort for maximum impact on making your builds more stable. In fact, the team at Codecademy was able to identify their flakiest tests with BuildPulse in just a few days. 
By focusing on those tests first, they reduced their flaky builds by more than 68% in less than a month! And you can do the same because BuildPulse integrates with the tools you're already using. It supports all of the major CI systems, including CircleCI, GitHub Actions, Jenkins, and others. And it analyzes test results for all popular test frameworks and programming languages, like RSpec, Jest, Go, pytest, PHPUnit, and more. So stop letting flaky tests slow you down. Start your 14-day free trial of BuildPulse today. To learn more, visit buildpulse.io/bikeshed. That's buildpulse.io/bikeshed. CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: Hey, Chris. So I've been watching more movies lately. So evenings aren't always great; I don't always feel good being around 33 weeks pregnant now. Evenings I can be just kind of exhausted from the day, and I just need to chill and prop my feet up and all that good stuff. And I've been really drawn to natural disaster like end-of-the-world-type movies, and I'm not sure what that says about me. But it's my truth; it's where I'm at. [chuckles] I watched Greenland recently, which I really enjoyed. I feel like they ended it well. I won't share any spoilers, but I feel like they ended it well. And they didn't take an easy shortcut out that I kind of thought that they might do, so that one was enjoyable. Geostorm, I watched that one just last night. San Andreas, I feel like that's one that I also watched recently. So yeah, that's what's new in my world, you know, your typical natural disaster end-of-the-world flicks. That's my new evening hobby. 
CHRIS: I feel like I haven't heard of any of the three that you just listed, which is wild to me because this is a category that I find enthralling. STEPH: Well, definitely start with Greenland. I feel like that one was the better of the three that I just mentioned. I don't know Geostorm or San Andreas which one you would prefer there. I feel like they're probably on par with each other in terms of like you're there for entertainment. We're not there to judge and be hypercritical of a storyline. You're there purely for the visual effects and for the ride. CHRIS: Gotcha. Interesting. So quick question then, since this seems like the category you're interested in, Armageddon or Deep Impact? STEPH: Ooh, I'm going to have to walk through the differences because I always get those mixed up. Armageddon is where they take Bruce Willis up to an asteroid, and they have to drill and drop a nuke, right? CHRIS: They sure do. STEPH: [laughs] And then what's Deep Impact about? I guess the fact that I know Armageddon better means I'm favoring that one. I can't place what...how does Deep Impact go? CHRIS: Deep Impact is just there's an asteroid coming, and it's the story and what the people do. So it's got less...it doesn't have the same pop. I believe Armageddon was a Michael Bay movie. And so it's got that Michael Bay special bit of something on it. But the interesting thing is they came out the same year; I want to say. It's one of those like Burger King and McDonald's being right next door to each other. It's like, what are you doing there? Why are you...like, asteroid devastation movies two of you at the same time, really? But yeah, Armageddon is the correct answer. Deep Impact is like a fine movie, but Armageddon is like, all right, we're going to have a movie about asteroids. Let's really go for it. Blow it out. Why not? STEPH: Yeah, I'm with you. Armageddon definitely sticks out in my memory, so I'd vote that one. 
Also, for your other question that you didn't ask, but you kind of implicitly asked, I'm going to go McDonald's because Burger King fries are trash, and also, McDonald's has better ice cream cones. CHRIS: Okay, so McDonald's fries. Oh no, I was thinking Wendy's, get a frosty from there, and then you make that combination because the frostys are great. STEPH: Oh yeah, that's a good combo. CHRIS: And you need the french fries to go with it, but then it's a third option that I'm introducing. Also, this wasn't a question, but I want to loop back briefly to Armageddon because it's an important piece of cinema. There's a really great...like it's DVD commentary, and it's Ben Affleck talking with Michael Bay about, "Hey, so in the movie, the premise is that the only way to possibly get this done is to train a bunch of oil drillers to be astronauts. Did we consider it all just having some astronauts learn to do oil drilling?" And Michael Bay's response is not safe for radio is how I would describe it. But it's very humorous hearing Ben Affleck describe Michael Bay responding to that. STEPH: I think they addressed that in the movie, though. They mentioned like, we're going to train them, but they're like, no, drilling is such an art and a science. There's no way. We don't have time to teach these astronauts how to drill. So instead, it's easier to teach them to be astronauts. CHRIS: Right. That is what they say in the movie. STEPH: [laughs] Okay. CHRIS: But just spending a minute teasing that one apart is like, being an astronaut is easy. You just sit in the spaceship, and it goes, boom. [laughs] It's like; actually, there's a little bit more to being an astronaut. Yes, drilling is very subtle science and art fusion. But the idea that being an astronaut [laughs] is just like, just push the go-to space button, then you go to space. STEPH: The training montage is definitely better if we get to watch people learn how to be astronauts than if we watch people learn how to drill. 
[laughs] So that might have also played a role. CHRIS: No question, it is the correct cinematic choice. But whether or not it's the true answer...say we were actually faced with this problem, I don't know that this is exactly how it would play out. STEPH: I think we should A/B test it. We'll have one group train to be drill experts and one group train to be astronauts, and we'll send them both up. CHRIS: This is smart. That's the way you got to do it. The one other thing that I'm going to go...you know what really grinds my gears? In the movie Armageddon, they have this robotic vehicle thing, the Armadillo, I believe it's called. I know more than I thought I would remember about this movie. [chuckles] Anyway, continuing on, the Armadillo, the vehicle that they use to do the drilling, has the drill arm on it that extends out and drills down into the asteroid. And it has gears on the end of it. It has three gears specifically. And the first gear is intermeshed with the second gear, which is intermeshed with the third gear, which is intermeshed with the first gear, so imagine which direction the first gear is turning, then imagine the second gear turning, then imagine the third gear turning. They can't. It's a physically impossible object. One tries to turn clockwise, and the other one is trying to go counterclockwise, and they're intermeshed. So the whole thing would just seize up. It just doesn't work. I've looked at it a bunch of times, and I want to just be wrong about this. I want to be like; I don't know what's going on. But I think the gears on the drilling machine just fundamentally at a very simple mechanical level cannot work. And again, if you're going to do it, really go for it, Michael Bay. I kind of like that, and I really hate it at the same time. STEPH: I have never noticed this. I'm intrigued. You know what? Maybe Armageddon will be the movie of choice tonight. [chuckles] Maybe that's what I'm going to watch. 
And I'm going to wait for the armadillo to come out so I can evaluate the gears. And I'm highly amused that this is the thing that grinds your gears are the gears on the armadillo. CHRIS: Yeah. I was a young child at the time, and I remember I actually went to Disney World, and I saw they had the prop vehicle there. And I just kind of looked up at it, and I was like, no, that's not how gears work. I may have been naive and wrong as a child, and now I've just anchored this memory deep within me. In a similar way, so I had a moment while traveling; actually, that reminded me of something that I said on a recent podcast episode where I was talking about names and pronunciation. And I was like, yeah, sometimes people ask me how to pronounce my name. And I can't imagine any variation. That was the thing I was just wrong about because 'Toomay' is a perfectly reasonable pronunciation of my name that I didn't even think... I was just so anchored to the one truth that I know in the world that my name is Toomey. And that's the only possible way anyone could pronounce it. Nope, totally wrong. So maybe the gears in Armageddon actually work really, really well, and maybe I'm just wrong. I'm willing to be wrong on the internet, which I believe is the name of the first episode that we recorded with you formally as a co-host. [chuckles] So yeah. STEPH: Yeah, that sounds true. So you're going to change the intro? It's now going to be like, and I'm Chris 'Toomay'. CHRIS: I might change it each time I come up with a new subtle pronunciation. We'll see. So far, I've got two that I know of. I can't imagine a third, but I was wrong about one. So maybe I'm wrong about two. STEPH: It would be fun to see who pays attention. As someone who deeply values pronouncing someone's name correctly, oh my goodness, that would stress me out to hear someone keep pronouncing their name differently. Or I would be like, okay, they're having fun, and they don't mind how it gets pronounced. 
I can't remember if we've talked about this on air but early on, I pronounced my last name differently for like one of the first episodes that we recorded. So it's 'Vicceri,' but it could also be 'Viccari'. And I've defaulted at times to saying 'Viccari' because people can spell that. It seems more natural. They understand it's V-I-C-C-A-R-I. But if I say 'Vicceri', then people want to add two Rs, or they want a Y. I don't know why it just seems to have a difference. And so then I was like, nope, I said it wrong. I need to say it right. It's 'Vicceri' even if it's more challenging for people. And I think Chad Pytel had just walked in at that moment when I was saying that to you that I had said my name differently. And he's like, "You can't do that." And I'm like, "Well, I did it. It's already out there in the world." [laughs] But also, I'm one of those people that's like, Viccari, 'Vicceri' I will accept either. In a slightly different topic and something that's going on in my world, there was a small win today with a client team that I really appreciated where someone brought up the conversation around the RuboCop RSpec rules and how RuboCop was fussing at them because they had too many lines in their test example. And so they're like, well, they're like, I feel like I'm competing, or I'm working against RuboCop. RuboCop wants me to shorten my test example lines, but yet, I'm not sure what else to do about it. And someone's like, "Well, you could extract more into before blocks and to lets and to helpers or things like that to then shorten the test. They're like, "But that does also work against readability of the test if you do that." So then there was a nice, short conversation around well, then we really need more flexibility. We shouldn't let the RuboCop metrics drive us in this particular decision when we really want to optimize for readability. And so then it was a discussion of okay, well, how much flexibility do we add to it? 
And I was like, "Well, what if we just got rid of it? Because I don't think there's an ideal length for how long your test should be. And I'd rather empower test authors to use all the space that they need to show their test setup and even lean into duplication before they extract things because this codebase has far more dry tests than they do duplication concerns. So I'd rather lean into the duplication at this point." And the others that happened to be in that conversation were like, "Yep, that sounds good." So then that person issued a PR that then removed the check for that particular; how long are the examples? And it was lovely. It was just like a nice, quick win and a wonderful discussion that someone had brought up. CHRIS: Ooh, I like that. That sounds like a great conversation that hit on why do we have this? What are the trade-offs? Let's actually remove it. And it's also nice that you got to that place. I've seen a lot of folks have a lot of opinions in the past in this space. And opinions can be tricky to work around, and just deeply, deeply entrenched opinions is the thing that I find interesting. And I think I'm increasingly in the space of those sort of, thou shalt not type linter rules are not ideal in my mind. I want true correctness checks that really tell some truth about the codebase. Like, we still don't have RuboCop on our project at Sagewell. I think that's true. Yeah, that's true. We have ESLint, but it's very minimal, what we have configured. And they more are in the what we deem to be true correctness checks, although that is a little bit of a blurry line there. But I really liked that idea. We turn on formatters. They just do the thing. We're not allowed to discuss the formatting, with the exception of that time that everybody snuck in and switched my 80-line length to a 120-line length, but I don't care. I'm obviously not still bitter about it. [chuckles] And then we've got a very minimal linting layer on top of that. 
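For reference, the check Steph's team removed corresponds to rubocop-rspec's `RSpec/ExampleLength` cop. A sketch of what that one-line PR likely looked like (the episode doesn't show the actual diff, and a team could alternatively raise the cop's `Max` setting instead of disabling it):

```yaml
# .rubocop.yml
# Stop fussing about long test examples; optimize for readability
# and allow duplication in test setup instead.
RSpec/ExampleLength:
  Enabled: false
```

Disabling outright, rather than raising the limit, matches the conclusion in the conversation: there's no ideal length for a test, so don't let the metric drive the decision.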
But like TypeScript, I care deeply, and I think I've talked in previous episodes where I'm like, dial up the strictness to 14, because TypeScript tends to tell me more truths, I find, even though I have to jump through some hoops to be like, TypeScript, I know that this is fine, but I can't prove it. And TypeScript makes me prove it, which I appreciate about it. I also really liked the way you described RuboCop's feedback: RuboCop was fussing at you. That was great. I like that. I'm going to internalize that. Whenever a linter or type system or anything like that tells me no, I'm going to be like, stop fussing, nope, nope. [chuckles] STEPH: I don't remember saying that, but I'm going to trust you that that's what I said. That's just my true southern self coming through on the mic: fussing, and then go get a biscuit, and it'll just be a delightful day. CHRIS: So if I give RuboCop a biscuit, it will stop fussing at me, potentially? STEPH: No, the biscuit is just for you. You get fussed at; you go get a biscuit. It makes you feel better, and then you deal with the fussing. CHRIS: Sold. STEPH: Fussing and cussing, [laughs] that's most of my work life lately, fussing and cussing. [laughs] CHRIS: And occasional biscuits, I hope. STEPH: And occasional biscuits. You got it. But that's what's new in my world. What's going on in your world? CHRIS: Let's see. In my world, it's a short week so far. So recording on Wednesday; Monday was a holiday. And I was out all last week, and very much enjoyed my vacation. It was lovely. Went over to Europe, hung out there for a bit, some time in Paris, some time in Amsterdam, precious little time on a computer, which is very rare for me. So it was very enjoyable. But yeah, back now, trying to just get back into the swing of things. 
Thankfully, this turned out to be a really great time to step away from the work for a little while because we're still in this calm before the storm but in a good way is how I would describe it. We have a major facet of the Sagewell platform that we are in planning mode for right now. But we need to work through a couple of different considerations, pick a partner vendor, et cetera, that sort of thing. So right now, we're not really in a position to break ground on what we know will be a very large body of work. We're also not taking on anything else too big. We're using this time to shore up a lot of different things. As an example, one of the fun things that we've done in this period of time is we have a lot of webhooks in the app, like a lot of webhooks coming into the app, just due to the fact that we're an integration of a lot of services under the hood. And we have a pattern for how we interact with and process them: we actually persist the webhook data when it comes in, and then we have a background job that processes it. And we've watched our pattern to make sure we're not losing anything, with the ability to verify against our local version and the remote version, a bunch of different things. Because it turns out webhooks are critical to how our app works. And so that's something that we really want to take very seriously and build out how we work with that. I think we have eight different webhook integrations right now; maybe it's more. It's a lot. And with those, we've implemented the same pattern now eight times, I want to say. And in squinting at it from a distance, we're like, it is indeed identically the same pattern in all eight cases, or with the tiniest little variation in one of them. And so we've now accepted like, okay, that's true. So the next one of them that we introduced, we opted to do it in a generic way. So we introduced the abstraction with the next iteration of this thing. And now we're in a position...we're very happy with what we ended up with there. 
It's like the best of all of the other versions of it. And now, the plan will be to slowly migrate each of the existing ones to be no longer a unique special version of webhook processing but use the generic webhook processing pattern that we have in the app. So that's nice. I feel good about how long we waited as well because it's like, we have webhooks. Let's introduce the webhook framework to rule them all within our app. It's like, no, wait until you see. Check and make sure they are, in fact, the same and not just incidental duplication. STEPH: I appreciate that so much. That's awesome. That sounds like a wonderful use of that in-between state that you're in where you still got to make progress but also introduce some refactoring and a new concept. And I also appreciate how long you waited because that's one of those areas where I've just learned, like, just wait. It's not going to hurt you. Just embrace the duplication and then make sure it's the right thing. Because even if you have to go in and update it in a couple of places, okay, sure, that feels a little tedious, but it feels very safe too. If it doesn't feel safe...I could talk myself back and forth on this one. If it doesn't feel safe, that's a different discussion. But if you're going through and you have to update something in a couple of different places, that's quick. And sure, you had to repeat yourself a little bit, but that's fine. Versus if you have two or three of something and you're like, oh, I immediately must extract. That's probably going to cause more pain than it's worth at this point. CHRIS: Yeah, exactly, exactly that. And we did get to that place where we were starting to feel a tiny bit of pain. We had a surprising bit of behavior that when we looked at it, we were like, oh, that's interesting, because of how we implemented the webhook pattern, this is happening. 
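As a sketch only (the class and method names here are hypothetical, not Sagewell's actual code), the persist-then-process pattern Chris describes, persist the raw payload first and then let a background job hand it to a source-specific handler, might look something like this:

```ruby
# Illustrative sketch of a generic webhook pattern: the payload is stored
# before any processing, and a later processing step dispatches it to a
# per-source handler. A real app would back this with ActiveRecord and a
# background job framework; this version is framework-free.
class IncomingWebhook
  attr_reader :source, :payload, :status

  def initialize(source:, payload:)
    @source = source    # e.g. "stripe", "plaid" -- example source names
    @payload = payload  # raw payload, persisted before any processing
    @status = :pending
  end

  # A background job would call this for each pending webhook, passing a
  # map of source name => handler so the generic plumbing is shared across
  # all integrations.
  def process!(handlers)
    handler = handlers.fetch(source) { raise "no handler for #{source}" }
    handler.call(payload)
    @status = :processed
  rescue => e
    @status = :failed
    raise e
  end
end
```

The point of the shape is the one from the conversation: eight integrations share one persistence-and-dispatch skeleton, and only the per-source handler lambdas differ.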
And so then we went to fix it, but we were like, oh, it would actually be really nice to have this fixed across everything. We've had conversations about other refinements, enhancements, et cetera; that we could do in this space. That, again, would be really nice to be able to do holistically across all of the different webhook integration things that we have. And so it feels like we waited the right amount of time. But then we also started to...we're trying to be very responsive to the pressure that the system is pushing back on us. As an aside, the crispy Brussels snack hour and the crispy Brussels work lunch continue to be utterly fantastic ways in which we work. For anyone that is unfamiliar or hasn't listened to episodes where I rambled about those nonsense phrases that I just said, they're basically just structured time where the engineering team at Sagewell looks at and discusses higher-level architecture, refactoring, developer experience, those sort of things that don't really belong on the core product board. So we have a separate place to organize them, to gather them. And then also, we have a session where we vote on them, decide which ones feel important to take on but try and make sure we're being intentional about how much of that work we're taking on relative to how much of core product work and try and keep sort of a good ratio in between the two. And thus far, that's been really fantastic and continues to be, I think, really effective. And also the sort of thing that just keeps the developer team really happy. So it's like, I'm happy to work in this system because we know we have a way to change it and improve it where there's pain. STEPH: I like the idea of this being a game show where it's like refactor island, and everybody gets together and gets to vote which refactor stays or gets booted off the island. 
I'm also going to go back and qualify something I said a moment ago, where if something feels safe in terms of duplication, where it starts to feel unsafe is if there's like an area that you forgot to update because you didn't realize it's duplicated in several areas and then that causes you pain. Then that's one of those areas where I'll start to say, "Okay, let's rethink the duplication and look to dry this up." CHRIS: Yep, indeed. It's definitely like a correction early on in my career and an overcorrection back and trying to find that happy medium place. But as an aside, just throwing this out there, so webhooks are an interesting space. I wish they were a more commoditized offering across platforms. Every vendor that we're integrating with that does webhooks does it slightly differently. It's like, "Oh, do you folks have retries?" They're like, "No." It's like, oh, what do you mean no? I would love it if you had retries because, I don't know, we might have some reason to not receive one of them. And there's polling, and there are lots of different variations. But the one thing that I'm surprised by is webhook signing; I don't feel like people take it seriously enough. It is a case where it's not a huge security vulnerability in your app. But I was reading someone who is a security analyst at one point. And they were describing sort of, I've done tons of in-the-code audits of security practices, and here are the things that I see. And so it's the normal like OWASP Top 10, Cross-Site Request Forgery, and SQL injection, and all that kind of stuff. But one of the other ones he highlighted is so often he finds webhooks that are not verified in any way. So it's just like anyone can post data into the system. And if you post it in the right shape, the system's going to do some stuff. And there's no way for the external system to enforce that you properly validate and verify a webhook coming in, verify that payload. 
It's an extra thing where you do the checksum math and whatnot and take the signature header. I've seen some where they just don't provide it. And it's like, what do you mean you don't provide it? You must provide it, please. So it's either have an API key so that we have some way to verify that you are who you say you are or add a signature, and then we'll calculate it. And it's a little bit of a dance, and everybody does it differently, but whatever. But the cases where they just don't have it, I'm like, I'm sorry, what now? You're going to say whom? But yeah, then it's our job to definitely implement that. So this is just a notice out there to anyone that's listening. If you've got a bunch of webhook handling code in your app, maybe spot-check that you're actually verifying the payloads because it's possible that you're not. And that's a weird, very open hole in the side of your application. STEPH: That's a really great point. I have not worked with webhooks recently. And in the past, I can't recall if that's something that I've really looked at closely. So I'm glad you shared that. CHRIS: It's such an easy thing to skip. Like, it's one of those things that there's no way to enforce it. And so, I'd be interested in a survey that can't be done because this is all proprietary data. But what percentage of webhook integrations are unverified? Is it 50%? Is it 10%? Is it 100%? It's definitely not 100. But it's somewhere in there that I find interesting. It's not a terribly exploitable vulnerability because you have to have deep knowledge of the system. In order to take advantage of it, you need to know what endpoint to hit and what shape of data to send because otherwise, you're probably just going to cause an error or get a bunch of 404s. But like, it's, I don't know, it's discoverable. And yeah, it's an interesting one. So I will hop off my webhook soapbox now, but that's a thought. 
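To make that spot-check concrete: most providers that do sign webhooks send an HMAC of the raw request body computed with a shared secret. The header name and digest vary by vendor, so treat this as a generic sketch rather than any particular provider's scheme:

```ruby
require "openssl"

# Verify a webhook payload against the signature header the provider sent.
# The provider computes HMAC-SHA256(secret, raw_body); we recompute it and
# compare in constant time so an attacker can't learn the correct signature
# byte by byte from response timing.
def valid_webhook_signature?(raw_body, header_signature, secret)
  expected = OpenSSL::HMAC.hexdigest("SHA256", secret, raw_body)
  return false unless header_signature.bytesize == expected.bytesize

  # Constant-time comparison: XOR every byte pair and check that all are zero.
  header_signature.bytes.zip(expected.bytes).map { |a, b| a ^ b }.sum.zero?
end
```

In a Rails controller this would run before any processing, with the raw request body (e.g. `request.raw_post`) rather than re-serialized params, and an invalid signature should short-circuit to a 401 with no side effects.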
MIDROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers, that can actually help you cut your debugging time in half. So why do developers love Airbrake? Well, it has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM enables developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps and includes modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. So head on over to airbrake.io/try/bikeshed to create your FREE developer account today! CHRIS: But now that I'm off my soapbox, I believe we have a topic that was suggested. Do you want to provide a little bit of context here, Steph? STEPH: Yeah, I'd love to. So this came up when I was having a conversation with another thoughtboter. And given that we change projects fairly frequently, on the Boost team, we typically change projects around every six months. 
They asked a really thoughtful question, which was "How do you get acquainted with a new codebase? So given that you're changing projects so often, what are some of the tips and tricks for ways that you've learned to then quickly get up to speed with a new codebase?" Because, frankly, one of the thoughtbot superpowers is that we are really good at onboarding each other and then also getting up to speed with a new team, and their processes, and their codebase. So I have a couple of ideas, and then I'd love to hear some of your thoughts as well. So I'll dive in with a couple. So the first one, this one's frankly my favorite. Like day one, if there's a team where I'm joining and they have someone that can walk me through the application from the users' perspective, maybe it's someone that's in sales, or maybe it's someone on the product team, maybe it's a recording that they've already done for other people, but that's my first and favorite way to get to know an application. I really want to know what do users experience as they're going through this app? That will help me focus on the more critical areas of the application based on usage. So if that's available, that's fabulous. I'm also going to tailor a lot of this more to like a Rails app since that's typically the type of project that I'm onboarding to. So the other types of questions that I like to find answers to are just like, what's my top-level structure? Like to look through the app and see how are things organized? Chris, you've mentioned in a previous episode where you have your client structure that then highlights all the third-party clients that you're working with. Are we using engines in the app? Is there anything that seems a bit more unique to that application that I'm going to want to brush up on or look into? What's the test coverage like? Do they have something that's already highlighting how much test coverage they have? 
If not, is there something that then I can run locally that will then show me that test coverage? I also really like to look at the routes file. That's one of my other favorite places because that also is very similar to getting an overview of the product. I get to see more from the user perspective. What are the common resources that people are going to, and what are the domain topics that I'm working with in this new application? I've got a couple more, but I'm going to pause there and see how you get acquainted with a new app. CHRIS: Well, unsurprisingly, I agree with all of those. We're still searching for that dare to disagree beyond Pop-Tarts and IPAs situation. To reiterate or to emphasize some of the points you made, the sales demo thing? I absolutely love that one because, yes, absolutely. What's the most customer-centric point of view that I can have? Can I then login to a staging version of the site so I can poke around and hopefully not break anything or move real money or anything like that? But understanding why is this thing, not in code, but in actual practical, observable, intractable software? Beyond that, your point about the routes, absolutely, that's one of my go-to's, although the routes there often is so much in the routes, and it's like some of those may actually be unused. So a corollary to the routes where available if there's an APM tool like Scout, or New Relic, or something like that, taking a look at that and seeing what are the heavily trafficked endpoints within this app? I like to think about it as the entry points into this codebase. So the routes file enumerates all of them, but some of them matter, and some of them don't. And so, an APM tool can actually tell you which are the ones that are seeing a ton of traffic. That's a really interesting question for me. Similarly, if we're on Heroku, I might look is there a scheduler? And if so, what are the tasks that are running in the background? That's another entry point into the app. 
And so I like to think about it from that idea of entry points. If it's not on Heroku, and then there's some other system, like, I've used Cronic. I think it's Cronic, Whenever the Cron thing. Whenever, that's what it is, the Whenever gem that allows you to implement that, but it's in a file within the codebase, which as an aside, I really love that that's committed and expressive in the code. Then that's another interesting one to see. If it's more exotic than that, I may have to chase it down or ask someone, but I'll try and find what are all of the entry points and which are the ones that matter the most? I can drill down from there and see, okay, what code then supports these entry points into the application? I want to give an answer that also includes something like, oh, I do fancy static analysis in the codebase, and I do a churn versus complexity graph, and I start to...but I never do that, if we're being honest. The thing that I do is after that initial cursory scan of the landscape, I try and work on something that is relatively through the layers of the app, so not like, oh, I'll fix the text in a button. But like, give me something weird and ideally, let me pair with someone and then try and move through the layers of the app. So okay, here's our UI. We're rendering in this way. The controllers are integrated in this way, et cetera. This is our database. Try and get through all the layers if possible to try and get as holistic of a view of how the application works. The other thing that I think is really interesting about what you just said is you're like, I'm going to give some answers that are somewhat specific to a Rails app. And that totally makes sense to me because I know how to answer this in the context of a Rails app because those organizational patterns are so useful that I can hop into different Rails apps. 
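For context on the Whenever gem mentioned above: its whole point is that the cron schedule lives in a committed Ruby file, config/schedule.rb, which `whenever --update-crontab` translates into crontab entries. A minimal sketch (the task and class names here are made up for illustration):

```ruby
# config/schedule.rb -- the Whenever gem's DSL. Because it's committed to
# the repo, it's another discoverable list of entry points into the app.
every 1.day, at: "4:30 am" do
  rake "reports:nightly"        # hypothetical rake task
end

every :hour do
  runner "WebhookSweeper.run"   # hypothetical class; runs inside the Rails app
end
```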
And I've certainly seen ones that I'm like, this is odd and unfamiliar to me, but most of them are so much more discoverable because of that consistency. Whereas I have worked on a number of React apps, and every single one I come into, I'm like, okay, wait, what are we doing? How are we doing state management? What's the routing like? Are we server-side rendering, are we not? And it is a thing that...I see that community really moving in the direction of finding the meta frameworks that stitch the pieces together and provide more organizational structure and answer more of the questions out of the box. But it continues to be something that I absolutely love about Rails is that Rails answers so many of the questions for me. New people joining the team are like, oh, it's a Rails app, cool. I know how to Rails, and we get to run with that. And so that's more of a pitch for Rails than an answer to the question, but it is a thing that I felt in answering this question. [laughs] But yeah, those are some thoughts. But interested, it sounds like you had some more as well. I would love to hear what else was in your mind when you were thinking about this. STEPH: I do. And I want to highlight you said some really wonderful things. One that really stuck out to me that I had not considered is using Scout APM to look at heavily-trafficked endpoints. I have that on my list in regards as something that I want to know what's my error tracking, observability. Like, if I break something or if you give me a bug ticket to work on, what am I going to use? How am I going to understand what's going wrong? But I hadn't thought of it in terms of seeing which endpoints are heavily used. So I really liked that one. I also liked how you highlighted that you wish you'd do something fancy around doing a churn versus complexity kind of graph because I thought of that too. I was like, oh, that would be such a nice answer. But the truth is I also don't do that. I think it's all those things. 
I think it would be fun if that were easy to do with new applications. But I agree; I typically just dive in like, hey, give me a ticket. Let me go from there. I might do some simple command-line checking. So, for example, if I want to look through app/models, let's find out which model is the largest. I may look for that to see do we have a God object or something like that? So I may look there. I just want to know how long are some of these files? But I also don't use a particular tool for that churn versus complexity. CHRIS: I think you hit the nail on the head with like, I wish that were easier or more in our toolset. But here on The Bike Shed, we tell the truth. And that is aspirational code flexing that we do not yet have. But I agree, that would be a really nice way to explore exactly what you're describing of, like, who are the God models? I'll definitely do that check, but not some of the more subtle and sophisticated show me the change over time of all these...like nah, that's not what I'm doing, much as I would like to be able to answer that way. STEPH: But it also feels like one of those areas like, it would be nice, but I would be intrigued to see how much I'd use that. That might be a nice anecdote to have. But I find the diving into the codebase to be more fruitful because I guess it depends on what I'm really looking at. Am I looking to see how complicated of a codebase this is? Because then I need to give more of a high-level review to someone to say how long I think it's going to take for me to work on a particular feature, or, before I'm joining a team, like, who do I think are good teammates that would then enjoy working on this application? That feels like a very different question to me versus the I'm already part of the team. I'm here. We're going to have complexity and churn. So I can just learn some of that over time. I don't have to know that upfront. 
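The simple command-line check Steph describes, finding the largest files under app/models, can be a shell one-liner (`wc -l app/models/*.rb | sort -rn | head`) or a small Ruby one-off. This is a sketch, not a thoughtbot tool:

```ruby
# List the biggest Ruby files matching a glob: a quick God-object smell test.
# Returns [line_count, path] pairs, largest first.
def largest_files(glob, limit: 5)
  Dir.glob(glob)
     .map { |path| [File.foreach(path).count, path] }
     .sort_by { |lines, _| -lines }
     .first(limit)
end

# Example: largest_files("app/models/**/*.rb").each { |n, f| puts "#{n}\t#{f}" }
```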
Although it may be nice to just know at a high level, say like, okay, if I pick up a ticket, and then I look at that churn and complexity, to be like, okay, my ticket falls right smack-dab in the middle of that. So it's going to be a fun first week. That could be a fun fact. But otherwise, I'm not sure. I mean, yeah, I'd be intrigued to see how much it helps me. One other place that I do browse is the Gemfile. I'm just always curious, what do people have in their tool bag? I want to see are there any gems that have been pulled in that are helping the team process some deprecated behavior? So something that's been pulled out of Rails but then pulled into a separate gem. So then that way, they don't have to upgrade just yet, or they can upgrade but then still keep some of that existing old deprecated behavior. That kind of stuff is interesting to me. And also, you called it earlier: pairing. That's my other favorite way. I want to hear how people talk about the codebase, how they navigate. What are they frustrated by? What brings them joy? All of that is really helpful too. I think that covers all the ways that I immediately will go to when getting acquainted with a new codebase. CHRIS: I think that covers most of what I have in mind, although the question is framed in an interesting way that I think really speaks to the consultant mindset. How do I get acquainted with a new codebase? But if you take the question and flip it around sort of 180 degrees, I think the question can be reframed as how does an organization help people onboard into a codebase? And so everything we just described are like, here's what I do, here's how I would go about it, and pairing starts to get to collaboration. I think we've talked in a number of episodes about our thoughts on onboarding and being intentional with that, pairing people up. A lot of things we described it's like, it's ideal actually if the organization is pushing this. 
And you and I both worked as consultants for long enough that we're really in the mindset of like, all right, let's assume I'm just showing up. There's no one else there. They give me a laptop and no documentation and no other humans I'm allowed to talk to. How do I figure this out and get the next feature out to production? And ideally, it's something slightly better than that that we experience, but we're ready for whatever it is. Versus, most people are working within the context of an organization for a longer period of time. And most organizations should be thinking about it from the perspective of how do I help the new hires come into this codebase and become effective as quickly as possible? And so I think a lot of what we said can just be flipped around and said from the other way, like, pair them up, put them on a feature early, give them a walkthrough of the codebase, give them a sales-centric demo. Yeah, I feel equally about those things when said from the other side, but I do want to emphasize that this shouldn't be you're out there in the middle of the jungle with only a machete, and you got to figure out this codebase. Ideally, the organization is actually like, no, no, we'll help you. It's ours, so we know it. We can help you find the weird stuff. STEPH: That's a really nice distinction, though, because you're right; I hadn't really thought about this. I was thinking about this from more of the perspective of you're out in the jungle with a machete, minus we did mention pairing in there [laughs] and maybe a demo. I was approaching it more from you're isolated or more solo and then getting accustomed to the codebase versus if you have more people to lean on. But then that also makes me think of all the other processes that I didn't mention that I would include in that onboarding that you're speaking of, of like, how does this team work in terms of where do I push my code? What hooks are going to run? And then what do I wait for? 
How many people need to review my code? There are all those process-y questions that I think would ideally be included on the onboarding. But that has happened before, I mean, where we've joined projects, and it's been like, okay, good luck. Let us know if you need anything. And so then you do need those machete skills to then start hacking away. [laughs] CHRIS: We've been burned before. STEPH: They come in handy. [laughs] So when you are in that situation, and there's a comet that's coming to destroy earth, and there's a Rails application that is preventing this big doomsday, the question is, do you take astronauts and train them to be Rails experts, or do you take Rails developers and train them to be astronauts? I think that's the big question. CHRIS: What would Michael Bay do? STEPH: On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at firstname.lastname@example.org via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeeee!!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
“Women are one aspect of diversifying, but if you bring in more perspectives in a more diverse group, I think that is exactly when the creative magic happens.” – Amanda Rabideau In today's episode, we welcome Amanda Rabideau. She is the Founder, CEO, & Fractional CMO at Arch Collective. She has over fifteen years of experience growing businesses using effective marketing, strategy, and scaling practices. Throughout her career, Amanda has worked with large enterprises like Dell, Microsoft, CoreLogic, Cloudstaff, New Relic, and OraMetrix (acquired by Dentsply Sirona in 2018). Amanda's devotion to empowering entrepreneurs led to the creation of Arch Collective, a successful business that handles all marketing strategy and execution for B2B (business-to-business) tech startups. She believes that operating with talented, freelance marketers is the future of work, as well as a valuable advantage provided by Fractional CMOs, allowing post-Series A startups to deploy new capital in the safest, most cost-efficient way. She shares that as a female leader, the expectations are different than for men, and it's important for leaders to reframe the conversation when talking to women in order to create a more diverse perspective. [00:01 - 06:21] Who is Amanda Rabideau? 
Zack introduces his guest, Amanda Rabideau
Amanda believes that having a fractional CMO is advantageous to post-Series A funded startups
They help companies grow revenue and acquire new customers
Video is one of the key components of Amanda's marketing strategy
It allows startups to demonstrate their growth to investors
[06:22 - 22:58] Video Marketing Is a Great Mode for Content Marketing
Videos are a better way to attract attention and engage people
People are not going to read long blogs
Content marketing is important for businesses of all stages and sizes
Consistency is key for standing out in a saturated environment
She discusses how to connect with people
Think about who they are as a person and what their interests are
When marketing to people, it's important to remember that you are communicating with humans
It's important to have a personal brand that is aligned with the company's mission
Employees want to work for companies that have a strong digital footprint and are led by leaders who share the same values
[22:59 - 33:48] The Importance of a Diverse Leadership Team That Targets a Specific Audience
Amanda shares that women face different expectations than men in the corporate world
It's up to men to reframe the conversation and look at the situation differently
Leadership is important, but diversifying a leadership team with different perspectives is even more important
The importance of targeting a specific audience when starting a business
Being clear on your target audience and developing a persona to represent that audience
Listening to customers and understanding their needs and frustrations
[33:49 - 39:49] Closing Segment
Amanda's legacy is that of an entrepreneur who is successful in her field and helps others do the same
Setting an example for her children
Helping other entrepreneurs grow their businesses
Connect with Amanda (links below)
Join us for Tactical Friday! 
Head over to https://www.myvoicechallenge.com/discovermyvoice (myvoicechallenge.com) to find out how you can discover your voice, claim your independence, and build that thriving business that you've always wanted! Key Quotes: “One of the things I talk about with startup founders, especially, is the importance of being very clear on their target audience.” - Amanda Rabideau “I want to leave a legacy of, ‘Hey kids, whatever your dreams are, whatever it is that you enjoy doing, life is short. So go after it and pursue that.’” - Amanda Rabideau Connect with Amanda Learn more about Amanda through her website: https://www.arch-collective.com/,...
Creating demand for your product is more than just selling a product. You want customers to be the biggest advocates of your product. Having the right integrations and partnerships at every level not only creates a reliable service but also creates fans who will take your products and run with them. Listen in as Akhil Kapoor, Vice President of Technology and Cloud Partnerships at New Relic, Inc., discusses his team's tried-and-true strategies for building the perfect ecosystem that will have customers coming back for more. Join us as we discuss:
Hyperscalers
What it takes to be successful with cloud providers
Solution development
How to go to market
Here are some additional episodes featuring other ecosystem leaders that might interest you:
#121 Aligning Ecosystem Strategy with Your Customer as the North Star with Lara Caimi, Chief Partner Officer, ServiceNow
#122 There's No Easy Button For Partnering with Nicole Napiltonia, VP of Alliances and OEM Sales at Barracuda
#106 The Secrets to Managing Alliances Like Microsoft with David Totten, Chief Technology Officer, US Partner Ecosystem at Microsoft
#97 Why Quality Always Beats Quantity in Software Ecosystems with Tom Roberts, Senior Vice President at the Global Partner Organization at SAP
Links & Resources Learn more about how WorkSpan helps customers accelerate their ecosystem flywheel through Co-selling, Co-innovating, Co-investing, and Co-marketing. Subscribe to the Ecosystem Aces Podcast on Apple Podcast, Spotify, Stitcher, Google Podcast. Join the WorkSpan Community to engage with other partner ecosystem leaders on best practices, news, events, jobs, and other tips to advance your career in partnering. Find insightful articles on how to lead and get the most out of your partner ecosystem on the WorkSpan blog. 
Download the Best Practices Guide for Ecosystem Business Management. Download the Ultimate Guide for Partner Incentives and Market Development Funds. To contact the host, Chip Rodgers, with topic ideas, to suggest a guest, or to join the conversation about modern partnering, reach him on Twitter or LinkedIn, or send an email to: email@example.com This episode of Ecosystem Aces is sponsored by WorkSpan. WorkSpan is the #1 ecosystem business management platform. We give CROs a digital platform to turbocharge indirect revenue with their partner teams at higher win rates and lower costs. We connect your partners on a live network with cross-company business applications to build, market, and sell together. We power the top 10 business ecosystems in the technology and communications industry today, managing over $50 billion in joint pipeline.
Buyers of the 13-inch M2 MacBook Pro have discovered that the base model's SSD speeds are slower than the M1 counterpart. Benjamin and Zac weigh in on whether this is a big deal or not. Plus, the story around iPadOS 16 and home hubs gets even more complicated. There are also juicy new rumors surrounding the upcoming HomePod and Apple TV updates, Apple has apparently delayed plans to introduce its own in-house modems in the iPhone, and more. Sponsored by MindNode: MindNode is the most delightful mind mapping and outlining app for your Mac, iPad, iPhone & Apple Watch. Featured as Apple's "Editor's Choice" & "App of the Day", MindNode is a free download on the App Store & Mac App Store. Sponsored by New Relic: That next 9:00 p.m. call is just waiting to happen, get New Relic before it does! You can get access to the whole New Relic platform and 100GB of data free, forever – no credit card required. Sign up at NewRelic.com/happyhour. Sponsored by Kolide: Got Slack? Got Macs? Get Kolide: Device security that fixes challenging problems by messaging your users on Slack. Try Kolide Today! Sponsored by LinkedIn Jobs: LinkedIn Jobs helps you find the candidates you want to talk to, faster. Post your job for free at LinkedIn.com/HAPPYHOUR. Follow Zac Hall @apollozac Benjamin Mayo @bzamayo Read More Apple increases iPhone 13 prices in Japan ahead of iPhone 14 launch this fall Should anyone actually buy the new M2 MacBook Pro? 
iFixit teardown shows M2 MacBook Pro is just a recycled laptop with a new chip inside Apple 5G chip problem in iPhone 15 is likely legal, not technical iOS 16 Home app: Hands-on with the overhauled HomeKit experience [Video] Apple says iPad can still be a home hub in iOS 16, as long as the new HomeKit features don't Matter to you Kuo: Apple in-house 5G modem faces development woes, will use Qualcomm chips for iPhone 15 Apple TV 4K (2022): Rumors, release date, features, Siri Remote, more Apple Watch Series 8 rumored to feature new Low Power Mode A new full-size HomePod could be a hit, as long as Apple doesn't overthink it US Supreme Court puts Apple vs Qualcomm battle to rest (for now) Entry-level M2 MacBook Pro has a slower SSD than M1 model AirPods Pro 2: Design, features, release date, price, more Listen to more Happy Hour Episodes Subscribe Apple Podcasts Overcast Spotify Listen to more 9to5 Podcasts Apple @ Work Alphabet Scoop Electrek The Buzz Podcast Space Explored Rapid Unscheduled Discussions Enjoy the podcast? Shop Apple at Amazon to support 9to5Mac Happy Hour or shop 9to5Mac Merch!
Aaron, Brian and Brandon Whichard (@bwhichard, Co-Host of @SoftwareDefTalk) talk about all the big stories, trends and transactions in the cloud in the first half of 2022.
SHOW: 630
CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw
CHECK OUT OUR NEW PODCAST - "CLOUDCAST BASICS"
SHOW SPONSORS:
New Relic (homepage) - Services down? New Relic offers full stack visibility with 16 different monitoring products in a single platform.
CloudZero - Cloud Cost Intelligence for Engineering Teams
SHOW NOTES:
Software Defined Talk - go subscribe to Brandon's podcast!
Topic 1 - VMware WTF!? Would VMware be a different company if it had never been acquired by EMC/Dell? Nearly every successful enterprise software company becomes legacy at some point; was this always VMware's destiny? If VMware had been an independent company, would the rise of containers (Docker) have been any different? Should some other company buy VMware, and if so, who?
Topic 2 - I really liked the recent analysis on the Clouded Judgement Substack about subscription vs. consumption in an economic downturn. (link) If SaaS is the future, does the consumption model really matter as long as the value is there?
Topic 3 - Is there any aspect of the crypto industry that hasn't been proven to be a sham this year? Decentralized (no). DAOs for governance (no). Controlled by the community (no). Secure (no). Not linked to fiat currencies (no). Good technology (no). A16z propaganda (no).
Topic 4 - Are passwords finally dying? Companies like strongDM/Teleport are pushing certificate-based authentication, and FIDO is gaining adoption in Windows and macOS. Will there be a time when we don't need passwords?
Topic 5 - We always talk about skills and keeping up with the industry waves on the show. SaaS is less about building skills and more about operating. How do you get and keep up to speed with SaaS, and what's the future? Is the Cloud Architect going the way of the Infrastructure Admin?
Topic 6 - Did the big 3 cloud providers all fire their marketing teams? 
I feel like there hasn't been one big announcement from any of them this year.
FEEDBACK?
Email: show at the cloudcast dot net
Twitter: @thecloudcastnet
In this episode, we talk about how to be a successful solopreneur with Jen Yip, founder of Lunch Money. Jen talks about the impetus for creating her popular personal finance app, Lunch Money, how to balance building something for yourself with making it good for general consumption, and how the lines between personal life and work life can become even more blurred as a solopreneur. Show Links DevDiscuss (sponsor) DevNews (sponsor) Cockroach Labs (sponsor) New Relic (sponsor) Porkbun (sponsor) Stellar (sponsor) Bright Data (sponsor) Lunch Money Neopets CSS University of Waterloo YC Fellowship 500 Startups The biggest mistakes I've made with Lunch Money (so far)
iOS 16 beta 2 includes new features such as iCloud backup over cellular data, AirPods gain Spatial Audio personalization, Stephen needs Passkey ASAP, Apple's new USB-C chargers, and more on the AppleInsider show. Contact our hosts Send tips on Signal: +1 863-703-0668 @stephenrobles on Twitter @Hillitech on Twitter Sponsored by: New Relic: Get access to the whole New Relic platform and 100GB of data free, forever – no credit card required! Sign up at: newrelic.com/appleinsider Incogni: The first 100 people to use promo code APPLEINSIDER or go to the link incogni.com/appleinsider will get 20% off Incogni! SuperBeets: Get up to 45% OFF and free shipping when you visit: superbeets.com/appleinsider Truebill: Save $100s a year by tracking and cancelling your subscriptions with Truebill! Visit truebill.com/appleinsider to get started. Support the show on Patreon or Apple Podcasts to get ad-free episodes every week, access to our private Discord channel, and early release of the show! 
We would also appreciate a 5-star rating and review in Apple Podcasts. Links from the show iCloud backup over LTE and more - what's new in iOS 16 beta 2 Hands on with new AirPods features in iOS 16, including Personalized Spatial Audio How iOS 16 helps you track, manage, and monitor your medication What macOS Ventura features Intel Macs won't get, and what's coming later for Apple Silicon Apple issues updates to Pages, Numbers, Keynote Everything new coming to Apple TV in tvOS 16 Hands on with Apple's new dual-output USB-C chargers HyperJuice 100W GaN USB-C Anker USB C Charger (Nano II 65W) Anker USB C Charger 40W Satechi 108W Pro USB-C PD Desktop Charger Apple's TikTok filter lets you recreate Harry Styles' AirPods spot Twitter's new 2500-word limit won't fix the attention spans it has broken 'The iOS App Icon Book' review: A mesmerizing tribute to beautiful iPhone icons Wilson Wilson - Tweet Swimmer stuck in frigid Columbia River uses Apple Watch to call for help Man recovers iPhone lost at the bottom of a river for 10 months More AppleInsider podcasts Tune in to our HomeKit Insider podcast covering the latest news, products, apps and everything HomeKit related. Subscribe in Apple Podcasts, Overcast, or just search for HomeKit Insider wherever you get your podcasts. Subscribe and listen to our AppleInsider Daily podcast for the latest Apple news Monday through Friday. You can find it on Apple Podcasts, Overcast, or anywhere you listen to podcasts. Podcast artwork from Basic Apple Guy. Download the free wallpaper pack here. Those interested in sponsoring the show can reach out to us at: firstname.lastname@example.org ★ Support this podcast on Patreon ★
What are some of the common mistakes that you have seen with Apache Kafka® record production and consumption? Nikoleta Verbeck (Principal Solutions Architect at Professional Services, Confluent) has a role that specifically tasks her with performance tuning as well as troubleshooting Kafka installations of all kinds. Based on her field experience, she put together a comprehensive list of common issues, with recommendations for building, maintaining, and improving Kafka systems, that is applicable across use cases. Kris and Nikoleta begin by discussing the fact that it is common for those migrating to Kafka from other message brokers to implement too many producers, rather than one per service. Kafka is thread safe, and one producer instance can talk to multiple topics, unlike with traditional message brokers, where you may tend to use a client per topic. Monitoring is an unabashed good in any Kafka system. Nikoleta notes that it is better to monitor your installation as thoroughly as possible from the start, even if you don't think you will ultimately require so much detail, because it will pay off in the long run. A major advantage of monitoring is that it lets you predict your potential resource growth in a more orderly fashion, as well as helps you use your current resources more efficiently. Nikoleta mentions the many dashboards her team has built out to accommodate leading monitoring platforms such as Prometheus, Grafana, New Relic, Datadog, and Splunk. They also discuss a number of useful Kafka features that are optional, so people tend to be unaware of them. Compression is the first of these, and Nikoleta absolutely recommends that you enable it. Another is producer callbacks, which you can use to catch exceptions. A third is setting a `ConsumerRebalanceListener`, which notifies you about rebalancing events, letting you prepare for any issues that may result from them. 
Other topics covered in the episode are batching and the `linger.ms` Kafka producer setting, how to figure out your units of scale, and the metrics tool Trogdor.
EPISODE LINKS
5 Common Pitfalls when Using Apache Kafka
Kafka Internals course
linger.ms producer configs
Fault Injection - Trogdor
From Apache Kafka to Performance in Confluent Cloud
Kafka Compression
Interface ConsumerRebalanceListener
Watch the video version of this podcast
Nikoleta Verbeck's Twitter
Kris Jenkins' Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more on Confluent Developer
Use PODCAST100 to get $100 of free Confluent Cloud usage (details)
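The producer-side advice from the episode (one shared producer per service, compression enabled, batching tuned via `linger.ms`, and a delivery callback to catch per-record exceptions) can be sketched in Python. This is a minimal illustration, not a tested setup: the config keys are standard Kafka producer settings as exposed by the confluent-kafka client, while the broker address and topic name are placeholders, and the commented-out `Producer` calls show where a real client would plug in.

```python
# Sketch of the episode's producer recommendations (confluent-kafka style
# config keys; broker address and topic name are placeholders).
producer_conf = {
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "compression.type": "lz4",  # enable compression, as recommended
    "linger.ms": 20,            # wait up to 20 ms so records batch together
    "batch.size": 65536,        # allow larger batches (bytes)
}

# A delivery callback lets you catch exceptions per record instead of
# letting failures pass silently.
delivered, failed = [], []

def on_delivery(err, msg):
    """Record each delivery report as a success or a failure."""
    if err is not None:
        failed.append(err)
    else:
        delivered.append(msg)

# With confluent-kafka, you would create one shared producer per service
# (not one per topic -- Kafka producers are thread safe):
#   from confluent_kafka import Producer
#   producer = Producer(producer_conf)
#   producer.produce("my-topic", b"value", callback=on_delivery)
#   producer.flush()

# Simulate a successful delivery report to show the callback's shape:
on_delivery(None, {"topic": "my-topic", "offset": 0})
print(len(delivered), len(failed))  # -> 1 0
```

A consumer-side `ConsumerRebalanceListener` (or the `on_assign`/`on_revoke` callbacks in confluent-kafka) would follow the same pattern: register a hook so rebalance events are observed rather than silently absorbed.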
Kent Bennett (Venture Capitalist @bessemervp) talks about the 2022 State of Cloud report, the evolution of the SaaS business model, new monetization models, and the Great Resignation.
SHOW: 628
CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw
CHECK OUT OUR NEW PODCAST - "CLOUDCAST BASICS"
SHOW SPONSORS:
CloudZero - Cloud Cost Intelligence for Engineering Teams
New Relic (homepage) - Services down? New Relic offers full stack visibility with 16 different monitoring products in a single platform.
SHOW NOTES:
2022 State of the Cloud (Bessemer Venture Partners)
Kent Bennett (Bio - BVP)
Topic 1 - Welcome to the show. Let's talk a little bit about your background, and where you focus your attention these days at Bessemer.
Topic 2 - The media headlines for cloud aren't great these days. What are the headlines from your State of Cloud 2022?
Topic 3 - Help us understand how much money has come into the VC system over the last couple of years and where it's being targeted.
Topic 4 - Let's talk about the $1 to $18 model in SaaS. How does that happen, and what separates the great companies ($18) from the merely good ones ($7)?
Topic 5 - You talk about "First Act" and indirect models, focused on all the companies that deliver pieces of a monetization model. How long do we expect that approach to last, given the wave of consolidation we're seeing?
Topic 6 - A lot has been discussed about the "Great Resignation," which has impacted both tech and non-tech workers. How do you see technology companies having an impact or influence on the global workforce?
FEEDBACK?
Email: show at the cloudcast dot net
Twitter: @thecloudcastnet
Make compliance easy with Kolide at: https://www.kolide.com/WAN Save your time and sanity with New Relic at https://www.newrelic.com/wan Try FreshBooks free, for 30 days, no credit card required at https://www.freshbooks.com/wan Timestamps: (Courtesy of Bip Bop) 1:18 intro 1:50 Lab 2, yes, he complained, don't dox or harass people, Linus will take care of it his way. 10:15 Amazon turn over 15:50 Burn out 16:20 LMG vs FAANG Co. 17:51 mobile game dev 19:08 LMG financial commitments 20:35 return to mobile game dev 21:58 sponsor segue 26:10 crypto winter 34:14 crypto GPUs aspect 36:30 scrapyard wars remote control, How famous Linus isn't according to Linus 38:36 crypto 40:00 Linus known in oil patches 42:20 crypto discussion 47:22 product launch, Merch Messages 50:38 screwdriver silver and limited black 56:57 find the Linus in your life 1:00:10 Linus badminton & teachers 1:17:38 screwdriver Merch message question 1:18:16 Kaleidescape 1:28:56 absolute control of any tech company, pick one 1:30:08 most life impactful animated movie 1:33:26 favorite laptop 1:35:27 Daily reading 1:35:46 Lab in hindsight 1:36:49 any Steam Deck video plans? 1:39:07 Roku peeps 1:39:18 Framework transparency 1:39:37 Hydravion French for Floatplane Roku 1:40:13 TY all 1:40:22 outro 1:40:27 hydravienne clarification 1:40:55 outro 1:41:08 B4 we go
Alessya Visnjic (CEO, WhyLabs) talks about MLOps, the concept of ML observability, and why AI models can fail. Alessya talks about the differences between data health and model health and why post-production analysis of ML is so important.
SHOW: 626
CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw
CHECK OUT OUR NEW PODCAST - "CLOUDCAST BASICS"
SHOW SPONSORS:
New Relic (homepage) - Services down? New Relic offers full stack visibility with 16 different monitoring products in a single platform.
CloudZero - Cloud Cost Intelligence for Engineering Teams
SHOW NOTES:
WhyLabs (homepage)
TechCrunch Article
https://mlops.community
Topic 1 - Welcome, Alessya! You are what is known in the AI/ML space as a veteran. For those who aren't familiar with your previous work, how about a quick introduction and background?
Topic 2 - Give everyone a background in MLOps, as it is still an emerging market. We are seeing an emerging trend of trust in the data used to train models. How did we get to this problem? Is this a transparency and observability problem once in production?
Topic 3 - How is model health different from data health? Post-deployment behavior of models can actually be a factor, with things like data drift over time.
Topic 4 - What does a typical toolchain look like? Under the covers, is this a logging platform that provides visibility into model behavior to ensure accuracy over time? I would think every model is different; how do you "standardize/rationalize" the data to detect anomalies and incorrect results?
Topic 5 - Every new category of tools has leading use cases. Where are you seeing the most traction today, and how can you best help practitioners?
Topic 6 - How can folks get started if they are interested?
FEEDBACK?
Email: show at the cloudcast dot net
Twitter: @thecloudcastnet
What's that feature called? Stage Manager is only available on M1-based iPads. The Talk Show Live from WWDC featured John Gruber talking with Craig Federighi and Greg Joswiak. Our thanks to New Relic. DoorDash, GitHub, Epic Games, and more than fourteen thousand other companies use New Relic to debug and improve their software. Get access to the whole New Relic platform and 100GB of data free, forever – no credit card required. Sign up at newrelic.com/rebound. If you want to help out the show and get some great bonus content, consider becoming a Rebound Prime member! Just go to prime.reboundcast.com to check it out! You can now also support the show by buying our NEW shirt featuring our catchphrase, TECHNOLOGY! Are we right?! (Prime members, check your email for a special deal on the shirt.)
In this episode, we talk about how to create successful mobile games with Bria Sullivan, CTO and founder of Honey B Games. Bria talks about her diverse tech background, deciding to dive into game development after years in web development, and how she still feels like a newbie when it comes to game development, even with the massive success she has seen. Show Links DevDiscuss (sponsor) DevNews (sponsor) Cockroach Labs (sponsor) New Relic (sponsor) Porkbun (sponsor) Stellar (sponsor) Bright Data (sponsor) Honey B Games Boba Story Cal Poly 500 Startups Unity Java C# Pixar in a Box: The art of storytelling
Our hosts go in-depth on numerous features across iOS 16 and macOS Ventura, discuss Stage Manager in iPadOS 16, and what desktop-class apps could mean on M1 iPads, all on the new AppleInsider podcast. Contact our hosts Send tips on Signal: +1 863-703-0668 @stephenrobles on Twitter @Hillitech on Twitter Sponsored by: Backbone: Order your Backbone by June 30 and get FREE access to over 350 console games and perks! Shop now at: playbackbone.com/appleinsider Incogni: The first 100 people to use promo code APPLEINSIDER or go to the link incogni.com/appleinsider will get 20% off Incogni! New Relic: Get access to the whole New Relic platform and 100GB of data free, forever – no credit card required! Sign up at: newrelic.com/appleinsider Kolide: Send your employees automated Slack messages with security and privacy recommendations! Get a FREE Kolide Gift Bundle after trial activation when you visit: kolide.com/appleinsider Support the show on Patreon or Apple Podcasts to get ad-free episodes every week, access to our private Discord channel, and early release of the show! We would also appreciate a 5-star rating and review in Apple Podcasts. Links from the show Hands on with customizable Lock Screens in iOS 16 First iOS 16 beta hints at always-on display support on the iPhone 14 Pro New iMessage edit and unsend features have 15-minute time limit How to lift subjects from photos in iOS 16 and macOS Ventura Apple is financing all the lending for the Apple Pay Later service Hands on with Stage Manager and external monitors with iPadOS 16 Hands on: Using the iPhone as a webcam with iOS 16 and macOS Ventura The five best features for Apple users that were announced at WWDC 2022 Apple simplifies System Settings for macOS Ventura, moves many items More AppleInsider podcasts Tune in to our HomeKit Insider podcast covering the latest news, products, apps and everything HomeKit related. 
Subscribe in Apple Podcasts, Overcast, or just search for HomeKit Insider wherever you get your podcasts. Subscribe and listen to our AppleInsider Daily podcast for the latest Apple news Monday through Friday. You can find it on Apple Podcasts, Overcast, or anywhere you listen to podcasts. Podcast artwork from Basic Apple Guy. Download the free wallpaper pack here. Those interested in sponsoring the show can reach out to us at: email@example.com ★ Support this podcast on Patreon ★
Apple held WWDC this week and to kick things off the company introduced new versions of its operating systems. Dave and I talk about the major new features in iOS, macOS and watchOS, as well as the newly introduced M2 computers. Brought to you by: LinkedIn Jobs: LinkedIn Jobs helps you find the candidates you want to talk to, faster. Did you know every week, nearly 40 million job seekers visit LinkedIn? Post your job for free at LinkedIn.com/DALRYMPLE. Terms and conditions apply. MasterClass: I highly recommend you check it out. Get unlimited access to EVERY MasterClass, and as a listener of The Dalrymple Report, you get 15% off an annual membership! Go to MASTERCLASS.com/dalrymple now. That's MASTERCLASS.com/dalrymple for 15% off MasterClass. New Relic: That next nine p.m. call is just waiting to happen. Get New Relic before it does! And you can get access to the whole New Relic platform and 100GB of data free, forever – no credit card required! Sign up at NewRelic.com/dalrymple. Show Notes: iOS 16 macOS Ventura watchOS 9 MacBook Air M2
Miles Ward (@milesward, CTO @SADA) talks about managing the transition to public cloud, customer vs. provider responsibilities, and how to best leverage cloud provider innovations.
SHOW: 624
CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw
CHECK OUT OUR NEW PODCAST - "CLOUDCAST BASICS"
SHOW SPONSORS:
CloudZero - Cloud Cost Intelligence for Engineering Teams
New Relic (homepage) - Services down? New Relic offers full stack visibility with 16 different monitoring products in a single platform.
SHOW NOTES:
SADA (homepage)
Miles Ward (SADA CTO, bio)
Topic 1 - Welcome to the show. Dude, you've done some things. Tell us a little bit about your background at a few of these clouds that people have heard of.
Topic 2 - Let's talk about SADA. We're really curious about the perspective of Google Cloud vs. AWS.
Topic 3 - Based on the revenue numbers, companies are using public cloud services a lot these days. The clouds seem great, because they take a lot off people's plates. Do you think companies realize who is actually responsible for their cloud deployments?
Topic 4 - Having built a number of best-practices, well-architected designs, how do the cloud providers think about responsibility (let's take security as an example), and how do they expect customers to take responsibility?
Topic 5 - What are the most common mistakes companies make in this shared responsibility model? What are the biggest gray areas?
Topic 6 - Let's come back to Google Cloud. What are some of the cool things you're able to uniquely do in Google Cloud that really add business value for your clients?
FEEDBACK?
Email: show at the cloudcast dot net
Twitter: @thecloudcastnet
In this episode, we talk about some of the fundamentals of machine learning and AI with Oscar Beijbom, co-founder of Nyckel. Show Links DevDiscuss (sponsor) DevNews (sponsor) Cockroach Labs (sponsor) New Relic (sponsor) Porkbun (sponsor) Stellar (sponsor) Bright Data (sponsor) Nyckel BASIC Hövding CoralNet DeepMind Python PyTorch Determined AI MLOps fast.ai Vertex AI Jupyter Notebook